

This action-adventure battle royale takes place across colorful mountaintops, lush forests, and sprawling ancient cities, all rendered in striking detail. To capture these beautiful environments while maintaining the performance and frame rate necessary to support 60-player multiplayer, 24 Entertainment worked closely with NVIDIA and Unity.

In partnership with NVIDIA, 24 Entertainment was granted early access to Deep Learning Super Sampling (DLSS), the latest rendering technology engineered to run real-time worlds at high frame rates and resolutions. Leveraging artificial intelligence, DLSS augments graphics performance and overall artistic quality without compromise.

To maintain its strong performance, Naraka: Bladepoint renders at a lower resolution, avoiding the cost of full-resolution pixel shading calculations. Behind the scenes, DLSS makes use of a neural network to generate high-resolution images, preserving the artistic detail that gamers experience. Not only does this produce high-quality results, but by leveraging artificial intelligence to fill in the missing information, the game renders almost twice as fast, which is crucial for this level of competitive multiplayer gaming.

Rendering at a lower resolution, however, makes aliasing a central concern. If you aim to achieve 4K results, you can simply increase the rendering resolution to alleviate aliasing; but stepping up to 8K means four times as many pixels, so rendering will be roughly four times slower, and you will likely run into issues with 8K texture bandwidth (memory usage). Another method to solve for aliasing is Multisample Anti-aliasing (commonly known as MSAA), which is supported by GPU hardware. MSAA checks multiple samples at different sub-pixel positions, instead of only checking the center sample of the pixel, and the triangle fragment color can be adjusted based on the number of samples in a pixel covered by primitives, which makes the edges smoother. Roughly speaking, if a triangle covers three of a pixel's four samples under 4x MSAA, it contributes about three quarters of that pixel's final color.

Temporal Anti-aliasing (TAA) is another method, one that accumulates samples across multiple frames. It adds a different jitter to each frame in order to change the sampling position; jitter here means that the sampling position within the pixel is slightly adjusted, so that samples can be accumulated across frames instead of attempting to solve the undersampling problem all at once. If the historical color of each pixel can be identified in the current frame, that information can be carried forward: with the help of motion vectors, TAA blends the color results between frames.

That's why 24 Entertainment turned to DLSS, which not only reduced the overall rendering resolution to rectify these issues, but also obtained high-quality results with the desired smooth edges.

Here, the recommended sample pattern comprises Halton sequences: low-discrepancy sequences that look random but cover the sample space more evenly.
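
To make that concrete, here is a minimal sketch of a (2, 3) Halton sampler in C#. The HaltonSequence class, its Get signature, and the shift into the -0.5 to 0.5 range are illustrative assumptions, not the project's actual m_HaltonSampler.

    using UnityEngine;

    // Illustrative only: a minimal (2, 3) Halton sampler that returns
    // jitter offsets centered on zero, in the -0.5 to 0.5 range.
    public static class HaltonSequence
    {
        // Radical inverse of 'index' in the given base, in [0, 1).
        static float RadicalInverse(int index, int radix)
        {
            float result = 0f;
            float fraction = 1f / radix;
            while (index > 0)
            {
                result += (index % radix) * fraction;
                index /= radix;
                fraction /= radix;
            }
            return result;
        }

        // One 2D sample: base 2 on x, base 3 on y, shifted to be centered on zero.
        public static Vector2 Get(int index)
        {
            return new Vector2(
                RadicalInverse(index + 1, 2) - 0.5f,
                RadicalInverse(index + 1, 3) - 0.5f);
        }
    }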

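As a quick check of the sketch above, calling HaltonSequence.Get for indices 0 through 3 gives roughly (0.00, -0.17), (-0.25, 0.17), (0.25, -0.39), and (-0.38, -0.06), all comfortably inside the expected -0.5 to 0.5 range.
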
In practice, applying the jitter offset can be rather intuitive. Consider these steps to do it effectively:

Step 1: Generate samples from the Halton sequences of a specific camera, according to different settings. The output jitter should be between -0.5 and 0.5.

    Vector2 temporalJitter = m_HaltonSampler.Get(m_TemporalJitterIndex, samplesCount);

Then multiply the jitter by 2 and divide by the scaled resolution to convert the jitter to screen space in pixels. These two values were later used to modify the projection matrix and globally affect the rendering result:

    m_TemporalJitter = new Vector4(temporalJitter.x, temporalJitter.y,
        temporalJitter.x * 2f / UpSamplingTools.GetRTScalePixels(cameraData.pixelWidth),
        temporalJitter.y * 2f / UpSamplingTools.GetRTScalePixels(cameraData.pixelHeight));

Step 2: Apply the screen-space jitter to the projection matrix.

    projectionMatrix = GL.GetGPUProjectionMatrix(projectionMatrix, true);
    projectionMatrix.m02 += m_TemporalJitter.z;
    projectionMatrix.m12 += m_TemporalJitter.w;

Step 3: Set the View Projection matrix to the global property UNITY_MATRIX_VP.

    var jitteredVP = projectionMatrix * cameraData.viewMatrix;

This will result in the shader working smoothly without any modification, as the vertex shader calls the same function to convert the world position onscreen.
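
For illustration, here is a brief sketch of how the jittered matrix might then be bound for shaders. The helper name, the CommandBuffer parameter, and the "unity_MatrixVP" property string (which UNITY_MATRIX_VP typically resolves to in SRP shaders) are assumptions rather than the project's actual setup.

    using UnityEngine;
    using UnityEngine.Rendering;

    static class JitteredMatrixUtil
    {
        // Illustrative only: combine the jittered projection with the view matrix
        // and expose the result globally. The "unity_MatrixVP" name is an assumption.
        public static void SetJitteredViewProjection(
            CommandBuffer cmd, Matrix4x4 jitteredProjection, Matrix4x4 view)
        {
            Matrix4x4 jitteredVP = jitteredProjection * view;
            cmd.SetGlobalMatrix("unity_MatrixVP", jitteredVP);
        }
    }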

To ensure compatible inputs for DLSS, some slight modifications to the pipeline were made. After all, the low-resolution input demands rendering objects at scale: you must create the render target with the correctly scaled size, and the viewport should be set accordingly from the gbuffer pass through the forward pass.

    pixelRect.width = Mathf.CeilToInt(cameraData.pixelWidth * viewportScale);
    pixelRect.height = Mathf.CeilToInt(cameraData.pixelHeight * viewportScale);
    commandBuffer.SetViewport(pixelRect); // RenderScale Supported

After rendering is complete, the DLSS arguments should be filled with the low-resolution image parameters, including the size of the source, the size of the destination, the input color render target, and the depth target.

    m_DLSSArguments.InputColor = sourceHandle.rt;
    m_DLSSArguments.InputDepth = depthHandle.rt;

Thanks to our close technical partnership with NVIDIA, we integrated DLSS and all the features described above into Unity and HDRP – and we can't wait to push things even further!
