Behind the Pretty Frames: PRAGMATA

Introduction

The moment Capcom showed their new IP years ago, i kinda fell in love with it before even learning anything about the gameplay—which is a whole different thing to be fair!

The idea of having a little, kinda weak character co-operating with a larger, kinda powerful-yet-limited one, and both making their way together in a strange world, is something i can relate to very quickly. In case you never visited my about page, one of my all time favorites —and one of the few games i didn't mind spending my money on, buying its collector's edition— is the masterpiece The Last Guardian. This game has a very strange magic; coming from Ico, it tops everything that was in Ico and/or Shadow of the Colossus. These 3 games are very underrated, regardless of how much credit they get!

Now fast forward…

And fast forward a few more years, and past some delays…

The demo released on Steam. The moment i got the email notification from Steam, i thought it was the full game releasing; shortly after, i knew that it is just a demo, but anyway, it is better than nothing. So, i installed it on Steam Deck, and went through my first playthrough of the demo. And i was hooked from the first hack execution!

Fast forward again, and a few weeks ago the final trailer for the game dropped. By playing the demo and watching the new trailer, you can tell that for the past 5 years or so, the team has been cooking so hard! You can easily spot the huge boost in overall quality, especially the hair tech and characters, cloth & hard surface rendering,… i wouldn't be surprised if the team did an overhaul of Diana's model, or her head at least!

Now tell me, how come it is a Capcom game, a new IP, it has unique gameplay, a character bond matching some of my all time favorite games, it is built on one of the best modern game engines, a 15-minute demo hooked me with the mechanics & sold it to me pretty well, it has a very beautiful visual identity with some technical advancements & challenges, and above all it has a cute little android girl,…and you don't expect me to celebrate that game in my own way…a frame digging for my own joy, and a Behind the Pretty Frames article for knowledge share!

Configs

While i used to play the past few articles on one of my all time favorite cards, the RTX 3080, unfortunately that is not the case anymore; due to some personal reasons beyond the scope here, i had to move entirely to a GeForce RTX 5070 Ti (which i don't like, but i had to), a Ryzen 9 9950X and 64GB RAM.

i pushed pretty much everything in the graphics settings to the maximum available, played at 4K 3840*2160 (UHD-1), no HDR, and shut off pretty much every visual-joy-ruining feature; that includes, but is not limited to, Upscaling, Dynamic Resolution, Frame Generation and Reflex.

Once more, i'm still on my old habit: i take multiple captures —despite the fact that taking GPU captures from recent D3D12 shipping-config titles is quite a horrible experience— for multiple areas, to cover, figure out and understand topics better, as one capture could hold clearer insight about something than another capture. Also i take multiple captures of the same area/view, just in case.

Also, there are Gameplay as well as Cinematic captures, and as you might've noticed from earlier articles, i'm usually more biased toward referring to captures from cinematic sequences rather than captures from actual gameplay, not for anything but because game engines usually push the bar during runtime cinematics. But i've got both anyway, and still both are at runtime, both in the same engine, and both are using pretty similar render queues; a Cinematic shot would just have some extra flavors enabled.

If something was not runtime rendering, and it was a pre-rendered video, i’ll leave a note about that.

Behind the Frame

This game is based on RE Engine, and there are some similarities between it and the Resident Evil remakes (2, 3, Resistance, Village) breakdown. If you did not visit that one yet, i highly recommend you do so; the frame structure has some similarities to the one we study here, and it is a great way to see the evolution of the RE Engine and how it changed in the past 4 or 5 years. Also, the Resident Evil article includes a lot of tiny details that will not be mentioned here, either because they are not relevant to PRAGMATA or because they are explained there much better and there is no point in duplicating them here. A good example: while writing the Resident Evil article, i broke down the inner workings of every type of AO method that can be used or toggled in the graphical settings, such as HBAO+, SSAO, CACAO.

General Note[s]

  • You can click on all images to open them in full resolution.
  • i decided to go with JPG for most of the images, in order to reduce the loading & uploading time and quota, but if an image doesn't look good or lacks details, let me know, i do have the source PNG for all of them.
  • There are some targets where i played a little with their color values or gamma to make them acceptable, as you know how images can look from the GPU's point of view due to the image formats, something it can see, but we can't!
  • Most GIFs you can click to open a YouTube video in full internal resolution (4K).
  • For the very tiny image resources, if you click on the images you open the original files in new tabs, which could be as small as 1*1 pixels. But if you right click on an image and "Open image in new tab", you will open the upscaled detailed version.

D3D12

By now you’re pretty familiar with the fact that the backbone of RE Engine on PC is D3D12, and the API has been well utilized in the past few games (i looked only in Resident Evil series). Starting from hardware raytracing, mesh shaders, a ton of compute, possibility of shading rates, indirect drawing, bindless,… and many more D3D12 specific API commands that are not commonly used.

D3D12Core 1.618.1.0
NVIDIA Driver 595.97
HLSL Shader Model 6.8

Compute

There is always an on-going movement of moving things to compute, or finding innovative ways to utilize the GPU to the maximum by moving stuff to compute. Or even some new techniques that happen to be invented and find their way to compute right away. And RE Engine is not a stranger to that. Last time i checked an RE Engine game it had good compute utilization, and the game we're checking here today is taking this even further, into some more uncharted territories for the engine. The engine has improved; things have vanished from compute due to being redundant or not needed anymore, things such as SSR, but there are other things that came to the table since last time, like Ray Tracing, Denoising, Meshlets, Instance Culling, and Hair physics. The compute queue is quite busy in PRAGMATA pretty much all the time, and there isn't a single step in the graphics pipeline that is not benefiting from compute beforehand in one way or another.

Below is a list of the core compute utilizations that exist in PRAGMATA!

Compute Dispatches Queue (not in a specific order)

  • Fast Clear/Copy
  • Skinning
  • Culling
    • Instance Culling (SDF AABB tests)
    • Meshlet Culling
    • Occlusion Culling
    • HiZ
  • Cloth Simulation
  • Mesh/Vertex Deformation (including Blendshapes)
  • Calculate Normals
  • Strand Hair Physics
  • Light Probes
  • Meshlet sorting
  • GPU Particles
  • Instancing (IDI & Draw Indirect)
  • Light Culling
  • Histogram Adjustment
  • SSS
  • Compress Depth Normal
  • Shadow Occlusion / Shadow Miss / Shadow Caster Culling
  • SSAO
  • Shrink Shadow Map
  • Volumetric Fog
  • Ray Tracing (Ray payload, Hits, BVH, Denoising)
  • Strand Rendering
  • Motion Blur
  • Radial Blur
  • DOF
  • CAS
  • GUI Composite

Blue Noise

In the past few years, Blue Noise became the sweet little thing that can be added to any technique to help it deliver. It is no secret that Ray Tracing and Denoising are the hottest topics nowadays to benefit from it. A few years ago, i would barely notice a blue noise while investigating any game, but things are changing. Below are some uses of Blue Noise in PRAGMATA.

Things that benefit from Blue Noise

  • Shadows
  • Volumetric Fog
  • Shadow Occlusion
  • SSS
  • Denoising
  • Ray Tracing
  • Post-Processing

..and possibly other things that i forgot to take notes of!
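Since the pattern is pretty much the same everywhere it shows up, here is a minimal sketch of how such blue-noise jittering typically looks in HLSL. To be clear, all the names and the golden-ratio animation here are my assumptions for illustration, not PRAGMATA's actual bindings or shader code.

```hlsl
// Hypothetical sketch: a typical blue-noise-driven jitter for a screen-space effect.
// BlueNoiseTex / FrameIndex are assumed names, not PRAGMATA's actual bindings.
Texture2D<float> BlueNoiseTex : register(t0); // small tiling texture, e.g. 128x128

cbuffer FrameCB : register(b0)
{
    uint FrameIndex;
};

static const float GOLDEN_RATIO = 0.61803398875f;

// Fetch an animated blue-noise value for this pixel; tiling keeps the per-pixel
// error high-frequency, and the golden-ratio offset decorrelates it across frames.
float AnimatedBlueNoise(uint2 pixel)
{
    float n = BlueNoiseTex.Load(int3(pixel & 127u, 0));
    return frac(n + GOLDEN_RATIO * (FrameIndex & 63u));
}

// Example use: dither a shadow comparison bias to hide banding.
float JitteredBias(uint2 pixel, float bias)
{
    return bias * (0.5f + AnimatedBlueNoise(pixel)); // jitter in [0.5, 1.5) * bias
}
```

The point of the golden-ratio offset is that the noise stays blue spatially within a frame while decorrelating across frames, which is exactly what TAA and denoisers want to average away.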

PRAGMATA Vertex

In the past i might have criticized RE Engine for having more variations of vertex descriptions than needed, but in fact, with this iteration and while looking into the version used for PRAGMATA, i've noticed that there are quite a few fewer; a lot fewer than what i observed in the past in the Resident Evil remakes + Village.

Below is pretty much everything i was able to spot while looking into multiple frames of multiple natures; there may be one or two that slipped, or that i intentionally skipped, such as the ones that were made of a single component.

PRAGMATA’s Vertex Description – Skinned Character Meshes (Girl Legs, Cloth, Suit, Suit Attachments,…etc.) — Most of Skinned

PRAGMATA’s Vertex Description – Environment mesh (Probe, Wall, Regular Particles…etc.) — Most of the Static Scene

PRAGMATA’s Vertex Description – Special Particles (Polygon 3D Cutout particle, Billboard 3D Particle,…etc.)

PRAGMATA’s Vertex Description – Ribbon Particles (Triangle Strips)

PRAGMATA’s Vertex Description – Quad Effect (Lens Flare,…etc.)

PRAGMATA’s Vertex Description – UI Element 1

PRAGMATA’s Vertex Description – UI Element 2

Frame

Compute Stuff Kicked

Many compute dispatches kickstart here, things such as Cloth Simulation, Skinning (Pre-Transform Skinning), Hair Strand Simulation, Culling, Fast Copies, Update Global Distance Fields, Calculate Normals, GPU Particle Emitters, and Light Probes into a 3D Texture.

Compute generated “Global SDF Texture” – 128*64*520 of R16_FLOAT

At this point, everything is 99% in the form of arrays or buffers; pure data, and not much visually friendly stuff to see or share here.

This is not everything; there is another batch of compute stuff that kicks in shortly after the next step.

UI Early Prepare [Not Always]

This step as you’ll explore it below, takes place only during most moments of Gameplay, but it is of course absent during Cinematics..

There is no doubt that the UI in PRAGMATA is one of the best in the past few years. If you played the game/demo you probably have noticed that. The UI pipeline is quite different and full of tricks that you'll see one by one while we're going through the frame. But the fact that game engines by nature treat the UI as a final touch, the final stone to put in place to complete the frame, doesn't mean it always has to wait until the end of the frame. Nope! Not in PRAGMATA!

PRAGMATA enjoys a unique visual identity for some of its UI elements, from the regular HUD to the 3D worldspace UI elements/widgets. One of the very unique visual effects i've noticed in the UI is some sort of "Layers Ghosting Effect" for some core HUD compound elements. Such an effect would be very costly if these compound & heavily packed UI elements got drawn multiple times to simulate the layers ghosting, hence the game takes quite a different approach. And this is the core reason behind this very early UI step in the frame's lifetime.

The idea is simple, and i believe it is hand-selected based on a flag or something. Consider this final frame

It may not be very clear, but the core elements of the experience in the HUD, such as the weapons selector and health bar (well, it is always the 3 bottom HUD elements anyway), have some ghosting. It may not be very noticeable at first, but once you see it, you can't unsee it. It looks interesting in motion. These elements (health bar & weapons selector) are made of many sub-elements such as texts and images.

Here is a closer look at these elements; hopefully you can see the layers ghosting now, at least in the left element, due to the intensity of that part of the frame.

And while those entire widgets need to be drawn multiple times in the final frame to deliver the ghosting effect, each gets drawn early (during this step) one single time in the frame and rendered indirectly into a render target,

They are done, of course, a tiny piece by a tiny piece,

Imagine if each of them gets ghosted 3 times; this means each would need to be re-drawn entirely from scratch 3 times if we were not going through this early phase.

And then, when the time comes to add the UI by the end of the frame and after post-processing, each of these 3 widgets gets re-used as a single image on a quad, instead of re-drawing it entirely, step by step, every time it is needed.

In this gif from the end of the frame, each of the bottom 3 widgets is drawn 3 times to simulate the layers ghosting effect..

So you can consider this step a clever optimization for the UI system used in PRAGMATA!
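To make the trick more concrete, here is a hypothetical sketch of the stamping side of it: since the widget was already rendered once into a texture during this early step, the ghosting later costs a handful of texture samples on a quad instead of re-drawing every sub-element multiple times. Names, offsets and blend math below are all illustrative assumptions.

```hlsl
// Hypothetical sketch of the "layers ghosting" idea: the widget was already
// rendered once into WidgetTex, so ghosting costs N texture samples on a quad
// instead of N full re-draws of every text/image sub-element.
Texture2D    WidgetTex   : register(t0);
SamplerState LinearClamp : register(s0);

cbuffer GhostCB : register(b0)
{
    float2 GhostOffset; // UV offset per ghost layer (assumed)
    float  GhostFade;   // alpha falloff per layer (assumed)
};

float4 PSGhostedWidget(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float4 result = 0;
    // 3 stamps of the same pre-rendered widget, each more offset and more faded.
    [unroll]
    for (int i = 2; i >= 0; --i)
    {
        float4 layer = WidgetTex.SampleLevel(LinearClamp, uv + GhostOffset * i, 0);
        layer.a *= pow(GhostFade, i);
        result = lerp(result, layer, layer.a); // simple "over" blend
    }
    return result;
}
```

PRAGMATA actually re-draws each quad 3 times, as shown in the gif above; folding the 3 stamps into one shader like this is just one way to express the same cost savings.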

And for these 3 core elements to be rendered to their dedicated rendertargets, there are quite a few common UI atlases used (always the same atlases, so far, during the course of the demo)

For that purpose the below structs are used.

UI Early Prepare Structs

Here is another frame, with more details in each of the widgets (progressing in the game, adding stuff to your HUD…makes sense!)

And in action..

And when get composed to the final HUD by the end of the frame..

Hope it is all clear now, let’s move on.

Compute Stuff Kicked

Some more compute stuff kicks here, such as Instance Culling, Meshlet Culling, Visibility Buffer as well as Light Culling.

Just like the previous batch of compute stuff that kicked earlier, everything here is pretty much data/buffers or arrays, and nothing very visual to see/share.

HiZ [Compute]

Hierarchical Z-Buffer. A sequence of compute dispatches to generate the depth visibility buffer (depth mips) that can be used for occlusion.

Depth Visibility Buffer – 1920*1080 – R32_UINT
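As a reference for what such a reduction chain usually does, here is a minimal sketch of a single HiZ mip step; the engine's exact setup clearly differs (note the R32_UINT visibility buffer above), so treat this purely as the textbook shape of the technique.

```hlsl
// A minimal sketch of one HiZ mip-reduction step: each destination texel keeps
// the conservative extreme of a 2x2 quad of the source mip. min() here assumes
// reverse-Z (smaller = farther); with regular Z it would be max().
Texture2D<float>   SrcDepthMip : register(t0);
RWTexture2D<float> DstDepthMip : register(u0);

[numthreads(8, 8, 1)]
void CSHiZReduce(uint2 id : SV_DispatchThreadID)
{
    uint2 src = id * 2;
    float d0 = SrcDepthMip[src + uint2(0, 0)];
    float d1 = SrcDepthMip[src + uint2(1, 0)];
    float d2 = SrcDepthMip[src + uint2(0, 1)];
    float d3 = SrcDepthMip[src + uint2(1, 1)];
    // Conservative reduction: occlusion tests against this mip can never
    // wrongly cull a visible object.
    DstDepthMip[id] = min(min(d0, d1), min(d2, d3));
}
```

Occlusion tests then compare an object's screen-space bounds against the mip whose texels cover them; because each texel stores the conservative extreme of its footprint, a failed test can safely cull.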

Computes (probably culling)

Z-Prepass

Z pass… nothing fancy. Sometimes a skinned shader is used, other times a "static" mesh shader is used.

Deferred G-buffer

Draw the gbuffer piece by piece. For this given frame

Inputs given are things such as the Depth, Material ID or Visibility Buffer (in addition to meshes & their textures).

Output of the G-Buffer are as follow:

1.Modified G-Buffer
2.Emissive (RGB)
3.Color (RGB) + Metallic (A)
4.Normals (RG) + Roughness (B) + Misc (A)
5.VelocityXY (RG) + AO (B) + SSS Mask (A)

The AO rendertarget above looks like it is only showing the Blue channel content, and that is correct; i intended to put it that way, as with SNORM the RG components would make it look weird. But in reality, that render target looks as follows

And yes, hair is not there yet!

Here are some examples of G-Buffer in other frames

Because of the limit of 8 entries per row in the table above, i did not have a chance to include the alpha channels, which hold Metallic, Roughness and other Masks. Here they are below for the previous 5 example frames
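And to put the layout above in one place, here is how the MRT signature could look as a pixel-shader output struct. This is my reading of the captures, not confirmed engine code, and the encodings and "Misc" semantics are guesses.

```hlsl
// A sketch of the observed G-Buffer layout as an MRT output signature.
struct GBufferOut
{
    float4 Modified : SV_Target0; // the "Modified G-Buffer"
    float4 Emissive : SV_Target1; // Emissive RGB
    float4 Albedo   : SV_Target2; // Color (RGB) + Metallic (A)
    float4 Normal   : SV_Target3; // Normals (RG) + Roughness (B) + Misc (A)
    float4 VelMisc  : SV_Target4; // VelocityXY (RG) + AO (B) + SSS Mask (A)
};

GBufferOut PSGBufferStub(float4 pos : SV_Position)
{
    GBufferOut o = (GBufferOut)0; // a real material shader fills all of these
    return o;
}
```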

IBL/Skybox [Not Always]

Sometimes (and there were very few times during the course of the demo) there is a skybox drawcall at the very end of the G-Buffer, always the last thing if it ever exists, and it contributes to the emissive rendertarget.

To see the impact of this step, we have to step back one step, to see how the Emissive target looked before

Histogram Adjustment [Compute] [Not always]

Another RE Engine signature. This is for Auto Exposure purposes; the RE Engine seems to be using Matrix or Multizone Metering. In that method, exposure gets prioritized for the "defined" most important parts of the frame, and for that purpose a mask is used. And because of the dark mood applied to the majority of the game, you would notice that the mask is quite similar in the majority of frames (if not exactly the same one), and it would seem like a "Vignette".

I'll leave some good reads about that method in the readings section below; they explain it (and other auto exposure methods) in detail, and explain why it works perfectly for some games. Anyway, for that purpose, the params passed to the shader are as follows
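Here is also a minimal sketch of the masked-metering idea feeding a luminance histogram, under my own assumptions (bin mapping, weights and names are illustrative, not the engine's):

```hlsl
// A sketch of masked (multizone) metering feeding an exposure histogram.
Texture2D<float3> HdrColor  : register(t0);
Texture2D<float>  MeterMask : register(t1); // the "vignette"-like priority mask
RWBuffer<uint>    Histogram : register(u0); // 64 bins, cleared before this pass

cbuffer ExposureCB : register(b0)
{
    float MinLogLum;      // e.g. -10
    float InvLogLumRange; // 1 / (maxLogLum - minLogLum)
};

[numthreads(8, 8, 1)]
void CSMeterHistogram(uint2 id : SV_DispatchThreadID)
{
    float lum = dot(HdrColor[id], float3(0.2126f, 0.7152f, 0.0722f));
    float t   = saturate((log2(max(lum, 1e-6f)) - MinLogLum) * InvLogLumRange);
    uint  bin = (uint)(t * 63.0f);
    // Pixels the mask marks as important count more toward the final exposure.
    uint weight = (uint)(MeterMask[id] * 255.0f + 1.0f);
    InterlockedAdd(Histogram[bin], weight);
}
```

A later reduction pass would then walk the histogram, pick a target average luminance, and smoothly adapt exposure toward it over a few frames.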

Histogram Adjustment Structs

SSAO [Compute]

TODO::

Shadowmap

Renders shadows into an R32_TYPELESS texture of size 2048*2048*32 (an array of 32 layers).

Here are a few layers of a shadowmap for a given frame.

Here is another example from a Cinematic frame; during cinematics there is always some fun stuff happening behind the camera, and shadowmaps can still capture those! i won't point to any, i'll leave it for you to observe!

Shadowmap Shrinking [Compute]

A compute dispatch to shrink the shadowmap to 1/2 (1024*1024*32)

The shrunk shadowmap is used in other effects such as volumetrics, while the actual full-resolution shadowmap is used for shadows.
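A half-res shrink like this is usually a trivial per-slice reduction; here is a sketch assuming a conservative min-of-2x2 filter (an average would be just as plausible, i did not verify which one the engine picks):

```hlsl
// A minimal half-res shadowmap shrink sketch over the 32-slice array.
Texture2DArray<float>   SrcShadow : register(t0); // 2048x2048x32
RWTexture2DArray<float> DstShadow : register(u0); // 1024x1024x32

[numthreads(8, 8, 1)]
void CSShrinkShadow(uint3 id : SV_DispatchThreadID) // z = array slice
{
    uint2 s = id.xy * 2;
    // Keep the nearest occluder of each 2x2 quad (conservative for shadowing).
    float d = min(min(SrcShadow[uint3(s + uint2(0, 0), id.z)],
                      SrcShadow[uint3(s + uint2(1, 0), id.z)]),
                  min(SrcShadow[uint3(s + uint2(0, 1), id.z)],
                      SrcShadow[uint3(s + uint2(1, 1), id.z)]));
    DstShadow[id] = d;
}
```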

Worth noting that there are some Shadow Occlusion Planes 3D textures, 64*64*33 of the format R32_UINT, that are utilized during the shadow passes. One of them would look like this

Just one layer of the array

Lighting

Deferred lighting goes as follows. In this example below, all are inputs, except the last row of the table.
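To ground what such a pass consumes from the G-Buffer we just filled, here is a deliberately tiny deferred-lighting sketch: one directional light, diffuse only, with an assumed octahedral normal encoding. It is illustrative, not RE Engine's actual lighting code (which obviously does full PBR, many lights, probes, RT, and so on):

```hlsl
// A tiny deferred-lighting sketch over the observed layout: one directional
// light, diffuse only. The octahedral decode is an assumption; the engine's
// real 2-channel normal encoding is unknown to me.
Texture2D<float4>   AlbedoMet : register(t0); // Color RGB + Metallic (A)
Texture2D<float4>   NormalRgh : register(t1); // Normals (RG) + Roughness (B)
Texture2D<float4>   Emissive  : register(t2);
RWTexture2D<float4> Lit       : register(u0);

cbuffer LightCB : register(b0)
{
    float3 LightDirWS; float _pad0; // direction the light travels
    float3 LightColor; float _pad1;
};

float3 DecodeOctNormal(float2 e)
{
    e = e * 2.0f - 1.0f;
    float3 n = float3(e, 1.0f - abs(e.x) - abs(e.y));
    if (n.z < 0.0f)
        n.xy = (1.0f - abs(n.yx)) * float2(e.x >= 0 ? 1 : -1, e.y >= 0 ? 1 : -1);
    return normalize(n);
}

[numthreads(8, 8, 1)]
void CSDeferredDirectional(uint2 id : SV_DispatchThreadID)
{
    float4 am = AlbedoMet[id];
    float3 n  = DecodeOctNormal(NormalRgh[id].xy);
    float  nl = saturate(dot(n, -LightDirWS));
    // Metals get no diffuse; the real pass adds GGX specular, probes, RT, etc.
    float3 diffuse = am.rgb * (1.0f - am.a) * nl * LightColor;
    Lit[id] = float4(diffuse + Emissive[id].rgb, 1.0f);
}
```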

Ray Tracing

Here are a couple of TLASes to sum up the Ray Tracing optimizations in BVHs.

A closer look at Diana's hair, because first, it is visually interesting, and second, the hair rendering is done entirely ray-traced in compute, as far as i can tell.

Diana's hair is made of two geometries (two layers); the bottom one is 7742 prims, while the outer and larger one is 62908 prims.

BVHs are well managed…

Instances & hierarchies…

And rays & traversal heatmaps…

Well, to a certain degree… Yes they are!

Also, here are some of the Ray Tracing outputs during the entire process (the entire thing takes place over the course of multiple dispatches of course; some outputs become inputs for a subsequent dispatch/es, and so on)

Ray Tracing Structs

Hair (Strand Based) [Compute]

An indirect dispatch kicks in to draw hair. It is given a buffer of vertices that was filled by an earlier compute dispatch that does the hair strand simulation, in addition to Depth, an Atmospheric Transmittance Texture, a Volumetric Fog Texture, a Volumetric Particles Texture,…and many other data in the form of bound structs/CBs.

Unfortunately many of these data inputs to the dispatch mentioned earlier are not in a state that allows them to be very visual here.

To summarize the entire Strand Hair thing…

It is worth noting that the output is made of 5 layers (3840*2160*5), with two of them usually occupied.

There are some details that may not be very visible in that previous image, so below i boosted the range a little to make them visible. The 1st of the two images below is basically the same as the image above this paragraph, just boosted.

Here is the status of the current hot resources (Frame, Depth & AO) once the results of this dispatch get composed back into the frame, in a draw command after the dispatch.

Hair Structs

Early Particles

Early Particles is the fancy name i gave to this step, as in the case of Cinematics there are two steps of particle rendering, spread apart with other effects in-between them. The other step, clearly below, is called Late Particles. But this separation only exists in the case of a Cinematic frame. In the case of Gameplay, there is only one step, which is this Early Particles step. So you can consider it just "Particles" when looking into a Gameplay frame, or "Early Particles" if we're looking into a Cinematic frame.

A sequence of small passes made of sequences of DrawInstanced and DrawIndexedInstanced, each working on a unique type of pipeline that delivers a uniquely featured particle effect. The unique types of particles include the following (not in any specific order, though in most cases they go in that same order; some of them may be absent in some frames, while others would be present in Cinematics and not Gameplay)

Particles Renderers

  • Billboard3DTextureBlendLiFgRiDefault
  • RibbonRiDefault
  • Billboard3DNoSoft
  • Ribbon
  • RibbonDefault
  • RibbonRiNoSoft
  • EmissiveOnly
  • EmissiveOnlyCpuLighting
  • Billboard3DCutoutRiDefault
  • Billboard3DRiPdDefault
  • Billboard3DCutoutNoSoft
  • Billboard3DCutoutAcDefault
  • PolygonCutoutNoSoft
  • Polygon
  • PolygonCutoutDefault

During all those passes (regardless of how many or how few there are; it depends on the frame), drawing takes place directly on the lit frame, in addition to a full-resolution Effects Mask that will be needed later in the frame's lifetime.

It is worth noting that depth at this point is not getting impacted by any particle drawing, even though there were some so-called "Polygon" particles. So you can consider it in a read-only state.

Here is this example in action (Click gifs to open in full internal resolution on YouTube)

For the purpose of drawing, a wide range of fancy FX-Artist-Authored textures are used to do all the magic. For that given frame above, the variations of textures below are used across all the steps, so one texture could be used, for example, with both a Ribbon and a Billboard.

Early Particles Structs (open at your own cost)

It is worth mentioning that, among all of these steps, these two are GPU particles

Also during early particles, things such as the effects on the eyes take place. For example, Diana's fancy eye lens effects always happen during this phase, usually under 3 different categories of effects:
– Emissive Only
– Emissive Only CPU Lighting
– Billboard 3D Cutout Ri Default

Lens Flare Image [Not Always]

As long as there are some lens flares present in the frame, this step will take place in order to generate the Lens Flare Image at 1/2 of the target resolution. If no lens flares are present, the Lens Flare Image is still kept, but as an empty solid black texture.

There is no difference between a Cinematic and a Gameplay frame; both do the exact same thing during this step, which is stamping or drawing —DrawIndexedInstanced— the flares one by one into the Lens Flare Image. The Lens Flare Image prepared during this step is to be composited later, in a step called Lens Flare Composite, or "Fake" Lens Flare as the engine internally refers to it.

So given that final 3840*2160 frame during gameplay

This is what the generated 1920*1080 Lens Flare Image looked like

And in action (Click gif to open in full internal resolution on YouTube)

And in order to produce that Lens Flare Image for that frame, the collection of magical textures below is well utilized

And during all those draw commands (around ~145 for that given sample frame), a single struct keeps coming to the table (unfold below)

Lens Flare Image (Lens Flare Prepare) Structs

Here is another frame example that spends quite a longer time in the Lens Flare Image creation (more complex), and it uses pretty much the exact same set of helper textures.

Late Particles

While Early Particles get drawn right away into the frame (in addition to a Mask), the Late Particles get drawn entirely into their own image, so you can consider this step as producing something called the Late Particles Image. The generated particles image is not to be added to the frame yet; that is something to be done later, with the Motion Blur and DOF if they exist.

As mentioned earlier, Late Particles seem to only take place in the case of Cinematics, or for some very unique effects that go through a "Blend" mode into the final frame (like a Photoshop or GIMP Blend Layer, basically). Here is one example of that, as the previous Gameplay frame used for showing Early Particles doesn't have a Late Particles phase.

And of course, it is not something that happens at once; in action, it goes as follows..

The data and/or structs used to help in this phase are pretty much similar to the ones used in the Early Particles phase.

And in order to make the separation between Early Particles and Late Particles clearer, here is how they both participate in that frame

Post Processing

Anti-Aliasing (FXAA+TAA)

The RE Engine signature that has been used & has proven its quality for quite some time now: the mix of FXAA+TAA. Using velocity and counting on the previous frame, in addition to the Effects Mask (from the Early Particles step), AA gets to work. One thing here: the Effects Mask is needed for responsive AA, so tiny particles don't just get washed out.

If you’re questioning the format for the Velocity, it is because the Velocity XY are stored in the RG components of one of the gbuffer images that holds Velocity X (R), Velocity Y (G), Ambient Occlusion (B), and SSS Mask (A).
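For reference, here is a minimal sketch of the classic velocity-reprojection + neighborhood-clamp TAA resolve, with the Effects Mask used to keep particles responsive; all names, the UV-space velocity assumption, and the exact blend weights are my own illustration, not the engine's shader:

```hlsl
// A minimal TAA resolve sketch: reproject history with velocity, clamp it to
// the local color neighborhood, and let the Effects Mask raise the blend
// factor so small/fast particles stay responsive instead of ghosting.
Texture2D<float3>   CurrColor   : register(t0);
Texture2D<float3>   History     : register(t1);
Texture2D<float2>   VelocityXY  : register(t2); // RG of the packed gbuffer target
Texture2D<float>    EffectsMask : register(t3);
SamplerState        LinearClamp : register(s0);
RWTexture2D<float3> Output      : register(u0);

cbuffer TaaCB : register(b0) { float2 InvRes; };

[numthreads(8, 8, 1)]
void CSTaaResolve(uint2 id : SV_DispatchThreadID)
{
    float2 uv   = (id + 0.5f) * InvRes;
    float3 curr = CurrColor[id];

    // 3x3 neighborhood min/max for history clamping.
    float3 cmin = curr, cmax = curr;
    [unroll] for (int y = -1; y <= 1; ++y)
    [unroll] for (int x = -1; x <= 1; ++x)
    {
        float3 c = CurrColor[int2(id) + int2(x, y)];
        cmin = min(cmin, c);
        cmax = max(cmax, c);
    }

    float3 hist = History.SampleLevel(LinearClamp, uv - VelocityXY[id], 0);
    hist = clamp(hist, cmin, cmax);

    // More current-frame weight where the effects mask says "responsive".
    float blend = lerp(0.1f, 0.9f, EffectsMask[id]);
    Output[id] = lerp(hist, curr, blend);
}
```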

Anti-Aliasing Structs

Motion Blur [Compute] [Not always]

Motion blur is done pretty much exactly the same way RE Engine has been doing it for years; it was explained in detail earlier. But below are the core steps and a few notes about them. Feel free to refer to the Resident Evil breakdown's Motion Blur section for any further details or notes.

1.Tile Max H

Generate the PreComputedVelocity from the given packed velocity texture (VelocityXY + AO + SSS), and use the output, in addition to Depth, to generate the TileMaxH (the famous very tall image: full render-resolution height, but the width of a tile).

2.Tile Max

Continue with the TileMax, vertically this time, in order to end up with the full Tile Max texture

3.Neighbor Max

NeighborMax from TileMax

4.Composite

Composite to the frame using the PreComputedVelocity
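To make the tile machinery above more concrete, here is a sketch of the first step (TileMax H) under my own assumptions about tile size and layout; the vertical pass then repeats the same reduction over columns, and NeighborMax simply takes the max over each tile's 3x3 neighborhood:

```hlsl
// A minimal sketch of the TileMax-H step: per row segment of a tile, keep the
// dominant (longest) velocity. Tile size and names are assumptions.
Texture2D<float2>   PreVelocity : register(t0); // unpacked VelocityXY
RWTexture2D<float2> TileMaxH    : register(u0); // width = tile count, full height

cbuffer MbCB : register(b0) { uint TileSize; }; // e.g. 20 px

[numthreads(8, 8, 1)]
void CSTileMaxH(uint2 id : SV_DispatchThreadID) // x = tile index, y = pixel row
{
    float2 vmax = 0;
    float  lmax = 0;
    for (uint i = 0; i < TileSize; ++i)
    {
        float2 v = PreVelocity[uint2(id.x * TileSize + i, id.y)];
        float  l = dot(v, v);
        if (l > lmax) { lmax = l; vmax = v; }
    }
    TileMaxH[id] = vmax;
}
```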

Motion Blur Structs

DOF [Compute] [Not always]

Depth of Field as well is done here pretty much the exact same way RE Engine has been doing it for the past several years, and the method was explained in detail in the Resident Evil article. Below we'll go through it once more, but it is 1:1 with what has been in the engine since the Resident Evil remakes.

1.Tile Max Height

Use Depth to generate the Tile Max Height; you can think about this as a template for the tile size.

2.Tile Max

Use the previous output to generate the TileMax rendertarget (the first step in having complete Near & Far tile masks)

3.Neighbor

Generate the Neighbor rendertarget, which is basically the complete Near & Far fields (mask), packed in R11G11B10_FLOAT (credits for this cool trick go to Crysis/Crytek/CryEngine)

4.CoC

Using the Depth + HDR Image + Neighbor (generated in the previous step), we finally calculate the CoC, and at the same time downscale the HDR image to 1/2, so we have a version that is the same size as the CoC.

5.DOF Image

Now use the outputs of the previous dispatch (CoC + Downscaled HDR) + the Neighbor, so we can calculate a CompressDOF and a DOF A. This is a good decision, i mean separating into two rendertargets; check the formats and you'll know what i mean. Also keep in mind that in this case the CompressedDOF is fully white (it shows red below due to the R32_UINT).

6.Compress DOF Image

Apply the CompressDOF to the Downscaled HDR, so we can get the output of the first pass of the DOF…a fully blurry image!

7.Compose

Put it all together at once. Also, during this step, things like the Late Particles that were prepared but never composed get handled; they get composed here too, with the DOF.
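Out of the whole chain, the CoC step (step 4) is the one worth a small worked sketch, as it is plain thin-lens math; the parameters and the linearization below are illustrative assumptions, not the engine's constants:

```hlsl
// A minimal CoC sketch from the thin-lens model.
Texture2D<float>   Depth : register(t0);
RWTexture2D<float> CoC   : register(u0); // half-res, like the real pass

cbuffer DofCB : register(b0)
{
    float NearZ, FarZ;   // projection planes (assumed available)
    float FocusDistance; // meters
    float FocalLength;   // meters, e.g. 0.05 for a 50mm lens
    float Aperture;      // f-stop number
    float MaxCoC;        // normalization for the stored CoC
};

float LinearizeDepth(float d) // assuming a standard (non-reversed) projection
{
    return NearZ * FarZ / (FarZ - d * (FarZ - NearZ));
}

[numthreads(8, 8, 1)]
void CSComputeCoC(uint2 id : SV_DispatchThreadID)
{
    float z = LinearizeDepth(Depth[id * 2]); // half-res: every other pixel
    // Thin-lens circle of confusion; the sign separates near (-) from far (+).
    float coc = (FocalLength * FocalLength * (z - FocusDistance)) /
                (Aperture * z * (FocusDistance - FocalLength));
    CoC[id] = clamp(coc / MaxCoC, -1.0f, 1.0f);
}
```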

DOF Structs

MB & DOF Note

Motion Blur & Depth of Field are clearly mostly compute based, and that makes them look confusing in timelines most of the time. In some frames, Motion Blur seems to come first, then Depth of Field, while in other frames it is the opposite order. But this is clearly just a visual thing in the frame timeline, due to the fact that our tooling is still not really super compute friendly; in general, both effects go into compute at about the same time.

Lens Flare Composite (Fake Lens Flare)

Remember the Lens Flare Image that was prepared a few steps earlier? Now is the time to composite it into the frame. One interesting observation: as mentioned earlier, the step that prepares the Lens Flare Image may be bypassed in many frames, but as a matter of fact, the compositing step of the Lens Flare Image (this step) takes place all the time, regardless of whether the Lens Flare Image has any content or is totally empty & skipped!

Fake Lens Flare (Lens Flare Composite) Structs

Lens Effect [Not Always]

1.Downscale

Scale the frame to 1/4 of the target resolution, in this case 960*540, as this target will be the base for many things in the following step

2.Blur Textures

The goal of this step is to simulate the effect of a lens (Polynomial Lens) that eventually enhances the frame with some bloom & luminance. This is done by scaling the frame down & up, between 1/4 at max (960×540) and 1/64 at lowest (60×33), and blurring the frame the same way you would do bloom, basically. This is done in order to generate multiple blur textures at multiple sizes.

So basically: scale down, then blur Horizontally, then Vertically, then repeat, until reaching the lowest allowed resolution. Like that:

2 Blurs, 4 Times

These are the final outputs from this stage that we will need later

3.Streak Textures

Take the 1/4 blurred version from the Blur Textures generated earlier, and use it around a kernel to generate multiple streaks in multiple directions. The reason to do that is so we can eventually get some sort of Convolution Bloom, but it seems all generated on the fly, with no kernel image or pattern given. That jittering goes as follows:

All steps (despite the gif seeming repetitive, it is longer than you think!)

These are the final outputs from this stage that we will need later
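A single streak pass of this kind is usually just a weighted march along one direction; here is a hedged sketch of what "generated on the fly, no kernel image" can look like (the direction count, tap count and decay are my guesses, not the engine's values):

```hlsl
// A sketch of one streak pass: sample the 1/4-res blurred frame repeatedly
// along a direction with decaying weights, giving a star-like streak without
// any authored kernel texture. Dispatched once per direction; the results are
// summed later in the Luminance Texture step.
Texture2D<float3>   BlurQuarter : register(t0);
SamplerState        LinearClamp : register(s0);
RWTexture2D<float3> Streak      : register(u0);

cbuffer StreakCB : register(b0)
{
    float2 Dir;    // streak direction, in UV units per tap
    float  Decay;  // e.g. 0.9
    float2 InvRes;
};

[numthreads(8, 8, 1)]
void CSStreak(uint2 id : SV_DispatchThreadID)
{
    float2 uv   = (id + 0.5f) * InvRes;
    float3 sum  = 0;
    float  w    = 1;
    float  wsum = 0;
    for (int i = 0; i < 16; ++i)
    {
        sum  += w * BlurQuarter.SampleLevel(LinearClamp, uv + Dir * i, 0);
        wsum += w;
        w    *= Decay;
    }
    Streak[id] = sum / wsum;
}
```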

4.Luminance Texture

Compose Blurs + Streaks from the previous two steps into one texture, called the Luminance Texture

5.Final Compose

Compose the Luminance Texture (Blurs + Streaks) into the final frame

Lens Effects Structs

Soft Bloom [Not Always]

Soft bloom takes place in simple distinctive steps, similar to many that we've explored in the past (100% not like Detroit!).

This step is flagged as [Not Always] even though there is a presence of bloom during the entire game, as there are two variations of the Bloom & Blur pipelines. During regular Gameplay, this Soft Bloom will kick in as long as there is a need for it. But during Cinematics, the "Lens Effect" step discussed above is enough to simulate bloom as part of the lens, and it kinda looks like a much higher quality Bloom (Convolution Bloom).

1. Bloom Image

The goal of this step is to produce a Bloom Image at 1/2 of the target resolution, by scaling the frame in two directions. It is a 4-step bloom (scale down 4 times, then scale back up 4 times).

i. Scale Down

Taking the frame from 3840*2160 down to as low as 120*67

ii. Scale Up

Taking it back up again from 120*67 up to the target Bloom Image resolution —1/2 of the internal resolution— of 1920*1080

2. Composite Bloom Image

Composite Bloom to the frame
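For reference, here is a minimal sketch of one link of such a 4-down/4-up chain; the same two kernels just get re-dispatched per mip. The filter taps are an assumption, not the engine's actual kernel:

```hlsl
// One link of a down/up bloom chain, re-dispatched per mip level.
Texture2D<float4>   Src         : register(t0);
Texture2D<float4>   Prev        : register(t1); // this mip's own downsampled color
SamplerState        LinearClamp : register(s0);
RWTexture2D<float4> Dst         : register(u0);

cbuffer BloomCB : register(b0) { float2 SrcInvRes; float2 DstInvRes; };

// Downsample: a cheap 4-tap box around the destination pixel's footprint.
[numthreads(8, 8, 1)]
void CSBloomDown(uint2 id : SV_DispatchThreadID)
{
    float2 uv = (id + 0.5f) * DstInvRes;
    float4 c = 0;
    c += Src.SampleLevel(LinearClamp, uv + SrcInvRes * float2(-0.5f, -0.5f), 0);
    c += Src.SampleLevel(LinearClamp, uv + SrcInvRes * float2( 0.5f, -0.5f), 0);
    c += Src.SampleLevel(LinearClamp, uv + SrcInvRes * float2(-0.5f,  0.5f), 0);
    c += Src.SampleLevel(LinearClamp, uv + SrcInvRes * float2( 0.5f,  0.5f), 0);
    Dst[id] = c * 0.25f;
}

// Upsample: bilinear fetch of the smaller mip, accumulated onto the larger one.
[numthreads(8, 8, 1)]
void CSBloomUp(uint2 id : SV_DispatchThreadID)
{
    float2 uv = (id + 0.5f) * DstInvRes;
    Dst[id] = float4(Prev[id].rgb + Src.SampleLevel(LinearClamp, uv, 0).rgb, 1);
}
```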

For the purpose of Bloom-ing, the structs below are used. Some of them are for scaling up only, while others are for scaling down & up; some are used only during composite, and others during all steps.

Soft Bloom Structs

And here is another Soft Bloom sample frame

Radial Blur (Prepare) [Compute]

In this step, the result for Radial Blur is computed, to be consumed in the "Sub-Post-Processing" step below.

Nothing fancy to show in this step, it is all numbers.

Radial Blur Structs

If the struct[s] look familiar, even the name, that is because it is the exact same implementation & shader[s] from Resident Evil.
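For completeness, here is what a radial blur of this family typically boils down to; a hedged sketch, with the center/strength parameterization assumed:

```hlsl
// A minimal radial blur sketch: march each pixel's UV toward a center point
// and average the taps; strength naturally scales with distance from center.
Texture2D<float4>   Src         : register(t0);
SamplerState        LinearClamp : register(s0);
RWTexture2D<float4> Dst         : register(u0);

cbuffer RadialCB : register(b0)
{
    float2 Center;   // usually (0.5, 0.5) in UV
    float  Strength; // how far the taps march toward the center
    float2 InvRes;
};

[numthreads(8, 8, 1)]
void CSRadialBlur(uint2 id : SV_DispatchThreadID)
{
    float2 uv   = (id + 0.5f) * InvRes;
    float2 step = (Center - uv) * Strength / 8.0f;
    float4 sum  = 0;
    [unroll]
    for (int i = 0; i < 8; ++i)
        sum += Src.SampleLevel(LinearClamp, uv + step * i, 0);
    Dst[id] = sum / 8.0f;
}
```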

Sub-Post-Processing

Multiple effects take place in one step using one shader. The engine refers to this stage as "LDR PostProcess With Tonemap", but i still prefer to call this step sub-post-processing, as it is kinda a sub-pass. These are not necessarily all taking place; it depends on the frame and the visual intention, but all in all, this shader invocation includes things such as the ones below. And it is worth mentioning that this entire step is pretty much exactly carried over from the Resident Evil version of the engine; the only difference now may be that the small structs that used to be in the shader are all merged into larger structs. Instead of having around 10 small structs, as used to be the case with Resident Evil, now there are around 2 main large structs.

While all previous post-processing took place in dedicated passes (most of the time; things are mixed in some cases) using different shaders and different buffers of params, here at this stage a whole lot of post-processors are applied all at once in a single shader. Because of that, it is hard to see the output of each post-processor before another one gets applied, so we can only have an overall view of what goes into the pass & what comes out of it.

  • Tonemapping
  • Film Grain
  • Lens Distortion
  • Color Levels
  • Color Correction
  • Haze Filter
  • Gradation
  • Aberration
  • Refraction
  • Radial Blur
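To illustrate why a single uber shader makes the individual effects hard to peek at, here is a tiny sketch chaining just three of the listed effects (aberration, the pre-computed radial blur, film grain) plus a stand-in tonemapper; everything about it (names, order, formulas) is illustrative, not engine code:

```hlsl
// A tiny "uber post" sketch: several effects back to back in one invocation,
// so no intermediate result ever hits memory where a capture could show it.
Texture2D<float4>   Src        : register(t0);
Texture2D<float4>   RadialBlur : register(t1); // prepared earlier in compute
SamplerState        LinearClamp : register(s0);
RWTexture2D<float4> Dst        : register(u0);

cbuffer PostCB : register(b0)
{
    float  GrainAmount;
    float  AberrationAmount;
    float  RadialBlurWeight;
    float  Time;
    float2 InvRes;
};

float Hash(float2 p) { return frac(sin(dot(p, float2(12.9898f, 78.233f))) * 43758.5453f); }

[numthreads(8, 8, 1)]
void CSUberPost(uint2 id : SV_DispatchThreadID)
{
    float2 uv  = (id + 0.5f) * InvRes;
    float2 toC = uv - 0.5f;

    // Chromatic aberration: shift R/B slightly along the radial direction.
    float3 c;
    c.r = Src.SampleLevel(LinearClamp, uv + toC * AberrationAmount, 0).r;
    c.g = Src.SampleLevel(LinearClamp, uv, 0).g;
    c.b = Src.SampleLevel(LinearClamp, uv - toC * AberrationAmount, 0).b;

    // Radial blur: blend in the pre-computed result from the earlier dispatch.
    c = lerp(c, RadialBlur[id].rgb, RadialBlurWeight);

    // Film grain (animated hash noise), then a stand-in Reinhard-style tonemap.
    c += (Hash(uv * 1000.0f + Time) - 0.5f) * GrainAmount;
    c  = c / (1.0f + c);

    Dst[id] = float4(saturate(c), 1.0f);
}
```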

And here is the impact of that full stack of post-processors all together combined on a given frame..

And here are some more examples to satisfy your eyes. In some frames i got the feeling that it in fact looked much prettier & less exaggerated without post-processing..

Sub-Post-Processing Structs

Sharpen (AMD’s CAS)

Yet another famous RE Engine signature: the AMD implementation that seems to be very favored by the RE Engine team, as i've been noticing it in pretty much every RE Engine based game. It is yet another compute dispatch that runs on the most recent version of the SrcImage (yes, still the exact same resource name from previous RE Engine games, but iirc this goes back to the CAS shader itself, not an RE choice) that just came out of the previous step; but this time the dispatch is the "Contrast Adaptive Sharpening", or CAS for short. The impact of CAS is very subtle, but can still be noticed in either Gameplay or Cinematic frames.
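CAS itself is public (AMD's FidelityFX repo has the real shader), so here is just a condensed sketch of its core idea: measure local contrast on the cross neighborhood and scale a negative-lobe kernel accordingly.

```hlsl
// A condensed sketch of the FidelityFX CAS idea; see AMD's repo for the real,
// far more careful shader (this skips the packed math, gamma handling, etc.).
Texture2D<float3>   Src : register(t0);
RWTexture2D<float3> Dst : register(u0);

cbuffer CasCB : register(b0) { float Sharpness; }; // 0..1

[numthreads(8, 8, 1)]
void CSSharpen(uint2 id : SV_DispatchThreadID)
{
    float3 c = Src[id];
    float3 n = Src[int2(id) + int2( 0, -1)];
    float3 s = Src[int2(id) + int2( 0,  1)];
    float3 e = Src[int2(id) + int2( 1,  0)];
    float3 w = Src[int2(id) + int2(-1,  0)];

    float3 mn = min(c, min(min(n, s), min(e, w)));
    float3 mx = max(c, max(max(n, s), max(e, w)));

    // Less sharpening where local contrast is already high (the "adaptive" bit).
    float3 amp = sqrt(saturate(min(mn, 2.0f - mx) / max(mx, 1e-5f)));
    float3 wgt = amp * lerp(-0.125f, -0.2f, Sharpness); // negative lobe weight
    Dst[id] = saturate((c + (n + s + e + w) * wgt) / (1.0f + 4.0f * wgt));
}
```

The "adaptive" part is the amp term: pixels already at a local extreme get less sharpening, which is what keeps the halos subtle compared to a naive unsharp mask.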

And because why not, here are some variations of frames between gameplay & cinematics.

i won't blame you if you see it negating some of the fancy work that has been done in the frame so far. To a degree, yes it does, but not always. We talked about that issue earlier in the Resident Evil breakdown; give that short section a read if you're interested.

CAS Structs

Color Grading (LUT)

Color grading with a lookup table (LUT), and a format change of the output to match the LUT's given format
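This is the classic 3D-LUT fetch; a minimal sketch, with the LUT size and binding names assumed:

```hlsl
// A minimal 3D-LUT color grading sketch.
Texture3D<float3>   GradeLut    : register(t0); // e.g. 32x32x32
Texture2D<float3>   Src         : register(t1);
SamplerState        LinearClamp : register(s0);
RWTexture2D<float3> Dst         : register(u0);

static const float LUT_SIZE = 32.0f;

[numthreads(8, 8, 1)]
void CSColorGrade(uint2 id : SV_DispatchThreadID)
{
    float3 c = saturate(Src[id]);
    // Rescale so the lookup lands on texel centers of the LUT lattice.
    float3 uvw = c * ((LUT_SIZE - 1.0f) / LUT_SIZE) + 0.5f / LUT_SIZE;
    Dst[id] = GradeLut.SampleLevel(LinearClamp, uvw, 0);
}
```

The half-texel rescale matters: without it, pure black and pure white would sample past the first/last LUT entries and the grade would subtly clip at the extremes.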

Here are some more frame variations..

UI Prepare [Not Always]

While this sounds like Post-Processing, in fact i personally see that what happens during this step is not part of post-processing, as it is not by any means a contributor to the final frame itself, but rather a contributor to the UI system (more about this below).

1.Format Change

A not-always step, and it is not a Cinematic thing; it is mostly for Gameplay purposes, and it all comes back to "what" we want to show on the final frame, as you'll see in the next step. Take the output HDR Image from the previous step, and change its format once again, but this time to the 64-bit floating-point format R16G16B16A16_FLOAT, in order to be ready for the next step, the Blur Filter, which requires such floating-point precision.

A couple more..

2.Blur Filter

This step is tied to the previous step; as mentioned, the format change was done to serve here. So if the previous step was absent, it is because there will not be any application of the Blur Filter during the frame.

Also worth noting that this Blur Filter step is pretty much a Gameplay-only thing, if it ever exists, and it is 100% absent in Cinematics, as it is not needed there anyway.

The main goal of this step is to produce a 1/2 resolution, separably blurred version of the frame —1920*1080 in my case— you can call it the Blur Filter Image, that will be used with the "very impressive" 3D UI rendering tech, as parts of the 3D UI widgets would, most of the time, need to mask the frame behind them (we will see that shortly below). So basically the rules are: it does not happen during Cinematics, and it only happens during Gameplay if there will be some UI elements benefitting from the Blur Filter output Image, so not all Gameplay moments would have this step either. This is why i was hesitant for quite some time about leaving this as part of Post-Processing, and later on decided to remove it from under the Post-Processing umbrella and put it under its own UI Prepare section.

The Blur Filter always runs in a two-step fashion (aka separable)

i.Horizontal
ii.Vertical
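And since "separable" keeps coming up, here is the textbook two-pass shape of it, with illustrative weights; the win is that a radius-k Gaussian costs 2×(2k+1) taps per pixel instead of (2k+1)² for the equivalent 2D kernel:

```hlsl
// A minimal separable Gaussian sketch: the same 1D kernel dispatched twice,
// once with Axis = (1,0) (pass i, Horizontal), once with (0,1) (pass ii, Vertical).
Texture2D<float4>   Src : register(t0);
RWTexture2D<float4> Dst : register(u0);

cbuffer BlurCB : register(b0) { int2 Axis; };

static const float W[4] = { 0.383f, 0.242f, 0.061f, 0.006f }; // ~Gaussian, sums to ~1

[numthreads(8, 8, 1)]
void CSSeparableBlur(uint2 id : SV_DispatchThreadID)
{
    float4 sum = Src[id] * W[0];
    [unroll]
    for (int i = 1; i < 4; ++i)
    {
        sum += Src[int2(id) + Axis * i] * W[i];
        sum += Src[int2(id) - Axis * i] * W[i];
    }
    Dst[id] = sum;
}
```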

Putting it all together, plus a few more Gameplay examples..

Blur Filter Structs

UI [Not Always]

The UI in this game looks really interesting; it has some tricks here & there to deliver its unique visual identity, and it all started at the beginning of the frame, where we've seen 3 full widgets rendered into 3 images, to be used later right away as quads, so each can be duplicated to simulate the layers ghosting effect. That was the very first part of the UI, but done very early.

The rest of the UI is done here, in two distinctive steps: at first we get everything drawn to the so-called "UI Image", and then that UI Image gets composed into the final frame in order to finalize & present.

UI is not an always-present step, but it is there most of the time, at least during Gameplay; and even during Cinematic frames where there are subtitles, the UI will be processed (with minimal effort, of course).

UI Draw (Prepare UI Image)

The main goal of this step is to output the full resolution UI Image (+ its Mask), by doing a long sequence of mostly regular UI draw commands.

The main input for this step is the Depth in full resolution (as parts of the UI are drawn in 3D as worldspace 3D widgets), in addition to the 1/2 resolution Blur Filter image made in the step right before the UI.

Drawing item by item into the UI Image and its alpha channel. The UI Mask stored in the alpha channel is clearly needed for UI Compositing later.

If you did not notice, you can observe things that have been prepared earlier, such as the Blur Filter Image of the frame in 1/2 res (it gets masked behind some elements, most notably the Hacking Window) and the 3 bottom UI Widget Images for Health & Weapons (each gets re-drawn multiple times to deliver the Layers Ghosting Effect).

In order to do that, you might have noticed that there is some form of smart masking taking place; sometimes it is in worldspace, and other times it is in screenspace. During all draw calls, if the item in process needs a mask, the mask is prepared for it in full rendering target resolution. So not everything gets that treatment, and for that given frame example, there are around 11 masks used (or better said, re-used).

If it is not very clear what the masks are actually doing in the UI, here are a few examples of the impact of a mask during UI element draw commands. Heads up: i picked the 2 largest masks in the list to showcase; others may seem large, but during draw their impact was not very easy to spot.

Another example, just a couple of commands later

And one more example; hope that makes it clear.

For all this to happen, a great collection of pre-authored image resources —in addition to the full widget-to-image resources generated at the beginning of the frame— are used.

It is worth noting that there is an attachment that keeps getting updated frequently, every few frames (where it is needed); it is in full resolution, 3840*2160 of R8_UNORM, and it seems to be used to draw the 3D widgets in worldspace. It always gets written to, and it never gets read from, transitioned, or barriered.

And to put this step all together..

UI Composite [Compute]

A compute dispatch to compose the UI Image generated in the previous step into the final frame!
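Which, in spirit, is probably as simple as an alpha blend over the frame; a minimal sketch, assuming the UI mask lives in the alpha channel as described above:

```hlsl
// A minimal sketch of the final UI compose over the post-processed frame.
Texture2D<float4>   UiImage : register(t0); // RGB = UI color, A = UI mask
RWTexture2D<float4> Frame   : register(u0);

[numthreads(8, 8, 1)]
void CSUiComposite(uint2 id : SV_DispatchThreadID)
{
    float4 ui = UiImage[id];
    float4 bg = Frame[id];
    Frame[id] = float4(lerp(bg.rgb, ui.rgb, ui.a), bg.a);
}
```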

UI Structs

Present

Final image presented in R10G10B10A2_UNORM in the final target resolution of 3840*2160.

Life of a Frame [Rendering Graph]

Over the years we tried multiple formats to summarize the journey we took through the frame, and it seemed the Miro graphs were the most successful ones. Given the fact that i ran out of free Miro graphs, i had to improvise, so i decided to share Detroit's graph with all other games, and make it a single graph that contains any Behind the Pretty Frames ever made or to be made. So while the graph below shows PRAGMATA, if you zoom & scroll, you'll see Detroit (and other future games)

And here is an image of that graph, just in case the Miro service gets discontinued someday in the future or something! But navigating the graph above is the intended way; don't check the image below unless you can't open the graph.

TODO::upload Miro graph image

Extra Stuff for Future Investigation

Something like Volumetric Fog (Volumetric Effects) is one of the things that interests me, but unfortunately there wasn't much of it in the demo, and for the very visible occurrences, every time i took captures there, the capture was corrupted.

Deeper digging into Ray Tracing details.

Engine General Observations

Shader compilation

Every time i launched the game, i was faced with the shader compilation screen, which takes some time to recompile shaders. Not sure why the game does that; i would understand it if the game got updated through Steam or i did a driver update, but that was never the case. i close the game, do some digging or writing, come back to open the game after a few minutes or a few hours, and it recompiles shaders again! i hope this issue gets resolved, it is a little annoying.

Graphical settings

Graphics Settings are one more annoying thing, as they are 100% synced over Steam cloud between devices. While cloud saving is a great feature for any game nowadays, it always has its caveats that developers may miss; one of the most important is to not sync all user preferences, as this causes issues when they are hardware dependent. My use case was exactly that: as mentioned earlier, once i got the email notification about the demo, i installed it on Steam Deck, and did my first playthrough on Steam Deck. A day later, i went to play on PC, and as soon as the game was installed —and Steam did the cloud sync automatically— launching the game on my PC, the entire quality was matching the Steam Deck, and this included the display resolution. And that is a little odd; a gamer who is not aware of what is going on behind the scenes would have thought that this is the default or recommended quality for the game! In my case, i had to change the settings one by one. Jumping back & forth between the Steam Deck and my PC to re-play the demo every time i got a chance was a little annoying.

A …Thought

While the game looks great and it appeals to me, it was always the case that not every game i liked ends up cashing out. Not to mention the multiple elephants in the room, from the fact that it releases in 2026 after being delayed a few times, to it being a brand new IP with a brand new gameplay mechanic, where it is always quite vague and risky to test new waters in the gaming industry. Also, while the demo played nice on my PC and even on Steam Deck, looking into the longer trailers every time a new one comes out, i sense something off with the performance of the game, especially in combat areas or open exterior environments. If the marketing videos are not butter smooth @ 60 fps, then what about the game itself?! i could be wrong —and i wish from all my heart that i'm wrong— and it is just me with my eye issues, and the videos are 100% fine, but i've observed such a thing with other games in the past!
All these things combined give me a little bad internal feeling about how the game will be doing, and i hope i'll be 700% wrong and it performs super great. We are in very critical need of the success of such games; games that are not afraid of innovating, telling new stories and hooking us with new characters, and most importantly knocking on new doors and pushing the gaming industry forward.

Epilogue

This is the first time ever in the series to break down a game before it technically releases! Yes, it is an official demo, but the final game won't be very different, i believe. If you reached this line, you've probably worked in the industry at some level, from indie to AAA, and we all know that by the time a demo goes live a few months before release, there is probably a full game build ready, if not locked & passed through platform certification already, with the team maybe working on the day-0 patch or other future DLCs. So the chances of core changes in the renderer are low, in my opinion.
Every time a game interests me enough to look under its hood, it takes time: either i'm still behind on my Steam backlog, or i don't have enough time for a side quest, or i'm waiting for a proper sale because my gaming budget is already draining,…etc. It was never the case that i had a chance to look under the hood of a game in its first week; now i had the chance to do that pre-release, thanks to the Sketchbook demo Capcom put out early.

i wish good luck to the game; i'm sure it will do pretty great, it is a promising & amazing game, and the demo showed super great potential in terms of interesting gameplay, characters and story. Good luck to the game, good luck to the team behind PRAGMATA, good luck to the RE Engine team, good luck to all of Capcom,..and of course, good luck to Hugh & Diana on their journey!

Yeah, baby. We did it!

-m


Related Readings & Videos

Behind the Pretty Frames: Resident Evil (2, 3, Resistance, and Village)
Pragmata – On <3 Fandom <3
Pragmata Story – Capcom homepage
Pragmata Characters – Capcom homepage
Pragmata Gameplay – Capcom homepage
HLSL Shader Model 6.8
DXGI_FORMAT enumeration (dxgiformat.h)
Wikipedia – Separable filter
Automatic Exposure Using a Luminance Histogram