Comparing AMD’s FidelityFX Super Resolution With Nvidia’s DLSS
AMD recently introduced its FidelityFX Super Resolution (FSR) upscaling technology, which comes on the heels of the release of its RDNA 2 GPU architecture. AMD added support for ray tracing in RDNA 2, but one side effect of ray tracing is a significant slowdown in pixel processing (I’ve seen frame rates up to four times slower with ray tracing enabled). With FSR, AMD introduced an improved spatial image upscaler. Spatial image processing techniques use neighboring pixels from the original rendering to estimate the interpolated pixels for the higher-resolution image sent to the display; they do not use motion vectors or data from previous frames. In addition to compensating for ray tracing slowdowns, the technique can also be used to enhance image quality and improve frame rates on slower graphics cards. Technology like this that can extend the life of a GPU is especially important during GPU shortages.
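To make the spatial idea concrete, the toy sketch below interpolates each output pixel from its four nearest neighbors in a single grayscale frame. This is plain bilinear filtering, far simpler than FSR's edge-adaptive pass, and the function name and image representation are my own for illustration:

```python
def bilinear_upscale(img, scale):
    """Upscale a grayscale image (list of rows of floats) by interpolating
    each output pixel from its four nearest source neighbors.
    Like any spatial upscaler, it uses only the current frame:
    no motion vectors, no history."""
    h, w = len(img), len(img[0])
    out_h, out_w = int(h * scale), int(w * scale)
    out = []
    for y in range(out_h):
        sy = min(y / scale, h - 1)          # source coordinate for this row
        y0 = int(sy)
        y1 = min(y0 + 1, h - 1)
        fy = sy - y0                        # vertical blend fraction
        row = []
        for x in range(out_w):
            sx = min(x / scale, w - 1)      # source coordinate for this column
            x0 = int(sx)
            x1 = min(x0 + 1, w - 1)
            fx = sx - x0                    # horizontal blend fraction
            top = img[y0][x0] * (1 - fx) + img[y0][x1] * fx
            bot = img[y1][x0] * (1 - fx) + img[y1][x1] * fx
            row.append(top * (1 - fy) + bot * fy)
        out.append(row)
    return out
```

Bilinear filtering tends to soften edges, which is exactly the weakness FSR's edge-reconstruction pass is designed to address.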
Nvidia dealt with a similar problem when it introduced ray tracing in its RTX graphics cards back in 2018. Ray tracing improves the realism of images, but the extra processing burden slows down frame rates. Nvidia addressed the slowdown in 2019 by rendering a lower-resolution image and then reconstructing the higher-resolution image for display using its Deep Learning Super Sampling (DLSS) technology. Nvidia introduced Tensor Cores in its Turing architecture, which tap into the power of a deep learning neural network for DLSS to boost frame rates and generate sharp images. When ray tracing and DLSS are combined, RTX cards can render ray-traced games at roughly the same frame rates but with enhanced visual quality. Nvidia’s DLSS uses temporal information (data from previous frames and motion vectors) along with machine learning to recreate extra pixels. Nvidia trains a generalized neural network offline on a supercomputer using game content; in the games themselves, the Tensor Cores on RTX GPUs accelerate that network to reconstruct native-quality images in real time.
Image Upscaling with Super Resolution
The term “Super Resolution” has been used to describe many different upscaling technologies, and each company has a different take on it. The origins of super resolution include analog techniques such as noise reduction and spatial-frequency filtering to increase the resolution of images. Adobe uses super resolution to refer to the process of improving the quality of a photo by boosting its apparent resolution with machine learning. In Photoshop and Lightroom, super resolution can increase image detail and size up to four times the original while maintaining sharp edges and image detail. There is also research into a number of machine-learning upscaling techniques, including denoising diffusion probabilistic models.
There are both single-frame and multiple-frame variants of super resolution. AMD’s FSR uses a single frame (spatial) from the low-resolution render as input, while Nvidia’s DLSS uses multi-frame temporal upscaling with pixel reconstruction supported by neural networks, multiple input sources, and an AI model to reconstruct the image.
How FSR and DLSS Are Different
AMD’s FSR uses what the company calls ”a cutting-edge algorithm” to detect and recreate high-resolution edges from the lower-resolution source image and does not use temporal information. Producing high-resolution edges is a critical element in making the lower-resolution frame appear as a “super resolution” image. From AMD’s website, FSR first performs “An upscaling pass called EASU (Edge-Adaptive Spatial Upsampling) that also performs edge reconstruction. In this pass the input frame is analyzed and the main part of the algorithm detects gradient reversals – essentially looking at how neighboring gradients differ – from a set of input pixels. The intensity of the gradient reversals defines the weights to apply to the reconstructed pixels at display resolution.” A second sharpening pass called “RCAS” (Robust Contrast-Adaptive Sharpening) then extracts more pixel detail in the upscaled image. RCAS appears to fine-tune the image for sharper edges, extracting pixel detail by adjusting pixel colors based on localized contrast.
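AMD publishes the actual EASU and RCAS shaders through GPUOpen; the toy sketch below only illustrates the general idea of contrast-adaptive sharpening, where the sharpening strength is scaled down in areas that already have high local contrast to avoid ringing. The function name and the exact weighting formula are my own simplifications, not AMD's code:

```python
def contrast_adaptive_sharpen(img, strength=0.5):
    """Sharpen a grayscale image (list of rows of floats in [0, 1]).
    For each interior pixel, local contrast is measured from the
    min/max of its 4-neighborhood; where contrast is already high,
    the sharpening amplitude is reduced toward zero."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            c = img[y][x]
            n = [img[y - 1][x], img[y + 1][x], img[y][x - 1], img[y][x + 1]]
            mn, mx = min(n + [c]), max(n + [c])
            # Adaptive amplitude: near 1 where the neighborhood has
            # headroom, near 0 where it already spans the full range.
            amp = min(mn / max(mx, 1e-6), (1.0 - mx) / max(1.0 - mn, 1e-6))
            amp = max(amp, 0.0) ** 0.5
            w_neg = -amp * strength / 4.0     # negative lobe on neighbors
            total = 1.0 + 4.0 * w_neg         # kernel normalization
            out[y][x] = (c + w_neg * sum(n)) / total if total > 0 else c
    return out
```

The normalization keeps flat regions unchanged while pixels that stand out from their neighbors are pushed further from them, which is what produces the perceived extra edge detail.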
FSR provides better image quality than traditional anti-aliasing techniques, but it does require integration into the rendering pipeline. Adding FSR to the rendering pipeline is a reasonably straightforward process, and AMD has made the technique available through its GPUOpen initiative. FSR can also be used on many different GPUs, including Nvidia GPUs, and on Microsoft and Sony game consoles. AMD’s goal was to release an open-source upscaling technique that didn’t require specialized hardware. FSR has four quality settings, up to Ultra Quality, which can be largely indistinguishable from native resolution.
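For concreteness, FSR 1.0's four quality modes correspond to fixed per-axis scale factors in AMD's GPUOpen documentation: Ultra Quality renders at 1.3x below display resolution, Quality at 1.5x, Balanced at 1.7x, and Performance at 2.0x. A small helper (the naming here is my own) shows how the internal render resolution falls out of the mode:

```python
# Per-axis scale factors for FSR 1.0's quality modes,
# as documented by AMD on GPUOpen.
FSR_SCALE = {
    "Ultra Quality": 1.3,
    "Quality": 1.5,
    "Balanced": 1.7,
    "Performance": 2.0,
}

def render_resolution(display_w, display_h, mode):
    """Return the internal render resolution for a given display
    resolution and FSR quality mode."""
    s = FSR_SCALE[mode]
    return round(display_w / s), round(display_h / s)
```

At a 4K target, for example, Performance mode renders internally at 1920×1080 and then upscales, which is where the large frame-rate gains come from.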
Nvidia would point out that FSR’s lack of motion vectors can create visual artifacts with objects in motion, since those objects cannot be tracked from frame to frame. And without temporal data, FSR cannot look forward or backward across frames to resolve the image, which can result in missing parts of the image or incorrect images altogether.
DLSS is a more advanced image reconstruction algorithm that uses AI rendering technology to reconstruct native-quality images while rendering only one quarter to one half of the pixels. There is a generalized offline model that is generated on a supercomputer and a real-time component that executes during gameplay. The DLSS implementation has improved greatly since its initial release with the Turing architecture GPUs.
From NVIDIA’s developer website: “A special type of AI network, called a convolutional autoencoder, takes the low resolution current frame, motion vectors, and the high resolution previous frame, to determine on a pixel-by-pixel basis how to generate a higher resolution current frame. By looking at motion vectors and the prior high resolution output, DLSS can track objects from frame to frame, delivering stability in motion, and reducing flickering and popping artifacts. This process is known as ‘temporal feedback,’ as it uses history to inform the future. With access to prior frames and motion vectors, DLSS can track each pixel, and take multiple samples of the same pixel across frames (known as temporal supersampling), delivering better detail and edge quality than traditional upscaling. By training against a large dataset of 16K-resolution images, the DLSS algorithm learns to predict high resolution frames with greater accuracy. And through continual training on NVIDIA’s supercomputers, DLSS can learn how to deal with new classes of content — from fire to smoke to particle effects — at a rate that engineers hand-coding non-AI algorithms simply could not keep up with.”
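The "temporal feedback" loop described above can be sketched in simplified form. This hypothetical one-dimensional example only illustrates reprojection along motion vectors plus history blending; in actual DLSS the fixed blend below is replaced by a learned convolutional autoencoder that decides per pixel how to combine the inputs:

```python
def temporal_accumulate(prev, current, motion, alpha=0.1):
    """Blend a reprojected previous frame with the current upscaled frame.
    prev, current: 1-D pixel rows (lists of floats); motion[x] is how many
    pixels the content at x moved since last frame. A small alpha means
    history dominates, giving stability and effectively accumulating
    multiple samples of the same surface point across frames."""
    n = len(current)
    out = []
    for x in range(n):
        # Reproject: fetch history from where this pixel's content
        # was located in the previous frame.
        src = x - motion[x]
        # Fall back to the current frame when history is off-screen
        # (a stand-in for the rejection logic a real pipeline needs).
        hist = prev[src] if 0 <= src < n else current[x]
        out.append((1 - alpha) * hist + alpha * current[x])
    return out
```

Even this crude version shows why motion vectors matter: without them, history would be fetched from the wrong location for anything that moves, producing the ghosting artifacts temporal methods must guard against.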
While a spatial scaler like AMD’s FSR is a simpler task for game developers to implement, and not as deep an integration as DLSS, it should not be overlooked that NVIDIA DLSS is already integrated into Unreal Engine 4 and 5 as a plugin and is now natively integrated into Unity 2021.2. It is also supported by many major private game engines. With these engine-level integrations, adding DLSS to a game is fast and easy.
NVIDIA DLSS is now offered in over 60 titles, with a general plugin available to all developers for Unreal Engine 4, Unreal Engine 5, and Unity 2021.2, plus support in numerous first-party engines. FSR is offered in 12 games at the time of this writing, is available as a feature patch for Unreal Engine 4.26, and is available now in a special preview beta branch of Unity 2021.2 HDRP. DLSS is available in a number of popular gaming franchises such as Battlefield, Call of Duty, Cyberpunk, Death Stranding, Doom, Final Fantasy, Fortnite, LEGO, Minecraft, Rainbow Six, Red Dead Redemption, Rust, Tomb Raider, Watch Dogs, Wolfenstein and many more. Nvidia has also just announced support for Linux and for Arm processors.
In general, both FSR and DLSS provide the largest benefit at higher screen resolutions, especially with 4K (3840×2160) target displays. However, DLSS potentially offers image quality comparable to native at all resolutions, including 1080p.
Nvidia and AMD have approached the super resolution challenge with different techniques. Each has benefits, but each has limitations. DLSS only runs effectively on Nvidia RTX graphics cards, and NVIDIA’s approach uses more advanced graphics techniques like motion vectors and temporal data, resulting in higher image quality. Nvidia also has a two-year lead on AMD in bringing super resolution to games.
AMD’s FidelityFX Super Resolution runs on most recent graphics cards, including Nvidia cards and even Sony and Microsoft game consoles. FSR was released to developers through the GPUOpen project. AMD doesn’t have neural processors in RDNA 2, so a neural-net approach doesn’t work for it. Instead, AMD has built a two-pass spatial approach that uses standard APIs to run on any hardware. It also trails Nvidia in supported game titles but may be able to catch up over time.
At present there’s no mainstream game that supports both technologies to allow a direct visual comparison, but each has a different objective and a different application, so a direct comparison is not really all that relevant. Nvidia is already on its second generation of DLSS, while this is just version 1.0 of FSR. Because AMD open sourced FSR, future improvements, like temporal techniques, could come from either AMD or game developers. Which technique is available for you to use is largely out of your control – it will depend on the game and your graphics card. But there is a choice now even if you don’t have an Nvidia GeForce RTX card. And if you are the owner of a GeForce RTX graphics card, you have the option of either technique!
Tirias Research tracks and consults for companies throughout the electronics ecosystem from semiconductors to systems and sensors to the cloud. Members of the Tirias Research team have consulted for AMD, Intel, Nvidia, and other companies throughout the PC and server ecosystems.