
Nvidia promises a brighter future for DLSS - New techniques will deliver greater detail

The improvements to DLSS seen in Control are just the beginning

In our raytracing/RTX analysis of Remedy's Control, we noted that the game's DLSS support delivered final images that were much clearer than those produced by the game's in-engine scaling options. Even so, our analysis showed that Nvidia's AI performance-boosting technique was far from perfect. 

When Turing was first announced, DLSS was easily the most promising addition to Nvidia's RTX series of graphics cards. Raytracing would take time to develop, but DLSS promised gamers a performance boost large enough to perhaps make 4K (and higher) resolution gaming viable. However, DLSS failed to meet the expectations that Nvidia set for the feature. 

Moving forward, Nvidia plans to improve DLSS by implementing new processing techniques that can fill in lost image detail, enhancing the quality of DLSS-rendered frames while continuing to offer a performance benefit. Nvidia sees plenty of room for its DLSS technology to advance, and the company plans to explore those opportunities. 

 

    One of the core challenges of super resolution is preserving details in the image while also maintaining temporal stability from frame to frame. The sharper an image, the more likely you’ll see noise, shimmering, or temporal artifacts in motion.

During our research, we found that certain temporal artifacts can be used to infer details in an image. Imagine an artifact we’d normally classify as a “bug” actually being used to fill in lost image details. With this insight, we started working on a new AI research model that used these artifacts to recreate details that would otherwise be lost from the final frame.
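Nvidia doesn't describe the model's internals, but the observation it builds on, that sub-pixel motion between frames effectively supplies extra samples of the scene, is the same one behind classic temporal supersampling. As a minimal, non-AI illustration of the idea (all names here are hypothetical; this is not Nvidia's code), jittered low-resolution frames of a static scene can be scattered onto a higher-resolution grid:

```python
import numpy as np

def accumulate_high_res(lowres_frames, jitters, scale):
    """Scatter sub-pixel-jittered low-res samples onto a high-res grid.

    lowres_frames: list of (h, w) grayscale frames, each rendered with a
                   different sub-pixel camera offset (static scene assumed).
    jitters:       list of (dx, dy) offsets in low-res pixel units.
    scale:         integer upscale factor, e.g. 2.
    """
    h, w = lowres_frames[0].shape
    hi = np.zeros((h * scale, w * scale))
    weight = np.zeros_like(hi)
    ys, xs = np.mgrid[0:h, 0:w]
    for frame, (dx, dy) in zip(lowres_frames, jitters):
        # Each jittered sample lands on a different high-res cell, so
        # several frames together recover detail no single frame contains.
        hx = np.clip(np.round((xs + dx) * scale).astype(int), 0, w * scale - 1)
        hy = np.clip(np.round((ys + dy) * scale).astype(int), 0, h * scale - 1)
        np.add.at(hi, (hy, hx), frame)
        np.add.at(weight, (hy, hx), 1.0)
    return hi / np.maximum(weight, 1e-6)
```

Real-time techniques additionally have to reproject samples across camera and object motion and reject stale history; mishandling that step is where the shimmering and other temporal artifacts mentioned above originate.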

This AI research model has made tremendous progress and produces very high image quality. However, we have work to do to optimize the model's performance before bringing it to a shipping game.

Leveraging this AI research, we developed a new image processing algorithm that approximated our AI research model and fit within our performance budget. This image processing approach to DLSS is integrated into Control, and it delivers up to 75% faster frame rates.
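Nvidia hasn't published the algorithm itself, so the sketch below is only a generic example of the same family of cheap, hand-tuned filters: a spline upscale followed by an unsharp mask. The function name and parameters are invented for illustration and are not the DLSS filter.

```python
import numpy as np
from scipy import ndimage

def upscale_and_sharpen(lowres, scale=1.5, amount=0.6, sigma=1.0):
    """Illustrative hand-tuned upscaler: cubic upsample plus unsharp mask."""
    upscaled = ndimage.zoom(lowres, scale, order=3)     # cubic spline upsample
    blurred = ndimage.gaussian_filter(upscaled, sigma)  # low-pass copy
    # Adding back the high-frequency difference restores edge contrast
    # that plain upsampling smears away.
    return upscaled + amount * (upscaled - blurred)

# e.g. a 720p luminance channel upscaled toward 1080p:
# out = upscale_and_sharpen(np.random.rand(720, 1280))  # -> (1080, 1920)
```

A fixed filter like this costs a small, predictable slice of GPU time per frame, which is how an approximation can fit a real-time performance budget where full network inference currently cannot.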

Let’s look at an example video, below. The left side uses Control’s in-engine scaling. The right side shows DLSS. Both sides are rendering at 720p, and outputting at 1080p. Notice how DLSS brings out more detail and improves temporal stability, reducing flickering and shimmering.

While the image processing algorithm is a good solution for Control, the approximation falls short in handling certain types of motion. Let’s look at an example of native 1080p vs. 1080p DLSS in Control. Notice how the flames on the right are not as well defined as in native resolution.

Clearly, there’s opportunity for further advancement.


Nvidia knows that DLSS cannot currently match the image quality of native resolution rendering. More often than not, we have found that the technique falls short when rendering certain types of content. Knowing this, Nvidia is exploring new techniques that will allow DLSS to continue delivering increased game performance while smoothing over the technology's shortcomings. 

The video below showcases a new AI research model from Nvidia, one that uses techniques which Nvidia plans to optimise and bring to future games. 

 

    There are many other examples of how deep learning is used to create super resolution images and video, create new frames of video, or transfer an artist’s style from one image to the next. Before Turing, none of this was possible in real-time. With Turing’s Tensor Cores, 110 teraflops of dedicated horsepower can be applied for real-time deep learning.
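To put that figure in context, a quick back-of-the-envelope calculation (with an assumed 60 FPS target, not a number from Nvidia) shows how much Tensor Core work is available per frame:

```python
# Back-of-the-envelope per-frame budget for real-time inference.
# Both figures are illustrative assumptions, not measured numbers.
tensor_tflops = 110          # Turing Tensor Core throughput cited above
target_fps = 60              # hypothetical frame-rate target
frame_time_ms = 1000 / target_fps            # ~16.7 ms per frame
budget_tflop = tensor_tflops / target_fps    # ~1.8 TFLOP of tensor math/frame
print(f"{frame_time_ms:.1f} ms/frame, ~{budget_tflop:.2f} TFLOP per frame")
```

Any model that needs more math than this per frame, or that cannot overlap with the rest of the rendering workload, will cut into the frame rate, which is why the research model needs further optimisation before it can ship in a game.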

Let’s look at an example of our image processing algorithm vs. our AI research model. The video below shows a cropped Unreal Engine 4 scene of a forest fire with moving flames and embers. Notice how the image processing algorithm blurs the movement of flickering flames and discards most flying embers. In contrast, you’ll notice that our AI research model captures the fine details of these moving objects.

With further optimization, we believe AI will clean up the remaining artifacts in the image processing algorithm while keeping FPS high.

More Innovation To Come

The increasing computing demands of next-generation, ray-traced content require clever approaches, such as super resolution, to deliver great frame rates. The new DLSS techniques available in Control are our best yet. We’re also continuing to invest heavily in AI super resolution to deliver the next level of image quality.

Our next step is optimizing our AI research model to run at higher FPS. Turing’s 110 Tensor teraflops are ready and waiting for this next round of innovation. When it arrives, we’ll deploy the latest enhancements to gamers via our Game Ready Drivers.

It is clear that Nvidia sees a future in its DLSS technology, hoping that it will become a major differentiating factor between GeForce gaming products and competing offerings from the likes of AMD and Intel. Hopefully, in time, DLSS will become the image/performance-enhancing feature that was originally promised to gamers at Turing's launch. 

You can join the discussion on Nvidia's plans for DLSS on the OC3D Forums.


Most Recent Comments

02-09-2019, 19:03:18

mazty
The issue Nvidia have is that the advantage of deep learning comes from the algo being calculated elsewhere, offline and NOT in real-time, ready to be dropped into much lighter hardware.

If DLSS was designed correctly, it should be the case that patches significantly improve performance on the same GPU, but if it's hardware constrained, then Nvidia have merely given the customer another sales reason as to why they need to upgrade their GPU every ~12 months.

02-09-2019, 20:52:50

tgrech
These GPUs are those much lighter pieces of hardware; the models are trained on the DGX SaturnV supercomputer.

03-09-2019, 04:39:03

Dicehunter
Would be cool if Nvidia released a behind-the-scenes video showing us how the deep learning is done on their supercomputers and then how it's ported over to consumer-grade hardware.

03-09-2019, 05:22:51

WYP
Quote:
Originally Posted by Dicehunter View Post
Would be cool if Nvidia released a behind-the-scenes video showing us how the deep learning is done on their supercomputers and then how it's ported over to consumer-grade hardware.
They wouldn't do that. Could give away too many industry secrets. Nvidia would be wise to not seed any of its methodologies to the competition.

03-09-2019, 05:43:31

Dicehunter
Quote:
Originally Posted by WYP View Post
They wouldn't do that. Could give away too many industry secrets. Nvidia would be wise to not seed any of its methodologies to the competition.
Well I don't mean in-depth stuff, but a cool little walkthrough like -

"This is where the bulk of our deep learning is done... here is where it gets ported to consumer hardware"

Stuff like that. Obviously not showing us code and techniques, just a small tour of the facility.
