Nvidia RTX 2080 and RTX 2080 Ti Review

GDDR6, RT and Tensor Cores


Long before Turing was announced, we knew that Nvidia's next-generation graphics cards would make use of GDDR6 memory. Today's gaming workloads are becoming increasingly bandwidth-starved, especially as gamers reach for higher resolutions and faster refresh rates, creating the need for additional memory bandwidth. 

In recent hardware generations, manufacturers have looked to solutions like HBM and GDDR5X to supply the memory bandwidth required for modern games, though both solutions have problems. First off, HBM (and HBM2) is difficult to implement, attaching memory silicon directly to the GPU package using interposers or other complex structures, making HBM-powered graphics cards difficult to produce. GDDR5X, on the other hand, works a lot like GDDR5, though the bandwidth boost it offers is minimal when compared to GDDR6, with most GDDR5X graphics cards using 10Gbps memory, a relatively small increase over the commonly used 8Gbps GDDR5.   

With Turing, Nvidia has moved to GDDR6, offering higher levels of power efficiency than GDDR5 while also providing a substantial performance uplift. The 14Gbps memory used on Nvidia's Turing RTX series graphics cards offers a 40% uplift over the 10Gbps GDDR5X used by the GTX 1080, and a 75% improvement over the 8Gbps GDDR5 used by the GTX 1070. On top of Turing's use of GDDR6, Nvidia has also brought improved bandwidth-saving measures to the architecture, allowing the RTX 2080 Ti to offer a 50% boost in effective memory bandwidth over its predecessor, the GTX 1080 Ti.

The raw bandwidth increase is solely down to GDDR6's higher data rate, as a simple calculation demonstrates. The GTX 1080 Ti delivers 484 GB/s of bandwidth at a 5,505MHz memory clock, or 484/5505 = 0.0879 GB/s per MHz. Multiply that figure by 7,000MHz, the memory clock of the RTX 2080 Ti, and you get 616 GB/s, the RTX 2080 Ti's bandwidth. In other words, there isn't any extra witchcraft enabling GDDR6 to stand tall over its predecessor, merely a hefty increase in clock speed.
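The same figures can be reached from first principles: peak memory bandwidth is simply the per-pin data rate multiplied by the bus width in bytes. A minimal Python sketch, assuming the 352-bit memory bus that both cards share (the bus width is a known spec, not stated above):

```python
def bandwidth_gbs(data_rate_gbps, bus_width_bits):
    """Peak memory bandwidth in GB/s: per-pin data rate times bus width in bytes."""
    return data_rate_gbps * bus_width_bits / 8

gtx_1080_ti = bandwidth_gbs(11, 352)  # 11Gbps GDDR5X -> 484.0 GB/s
rtx_2080_ti = bandwidth_gbs(14, 352)  # 14Gbps GDDR6  -> 616.0 GB/s
print(gtx_1080_ti, rtx_2080_ti)
```

Note that this raw uplift is around 27%; the 50% figure Nvidia quotes additionally counts Turing's bandwidth-saving measures.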

Nvidia RTX 2080 and RTX 2080 Ti Preview  
Tensor Cores

Nvidia's Tensor cores are designed to accelerate matrix multiplication, maths that is commonly used by deep learning algorithms and other AI-focused compute scenarios. 
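To make that concrete, here is a plain-Python sketch of the fused multiply-accumulate operation, D = A×B + C, that Tensor cores perform on small matrix tiles. The tile size and precision are Nvidia's implementation details and are not modelled here; this only illustrates the maths being accelerated:

```python
def matmul_accumulate(A, B, C):
    """D = A @ B + C, the fused multiply-accumulate at the heart of a Tensor core."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) + C[i][j]
             for j in range(n)] for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[1, 0], [0, 1]]
print(matmul_accumulate(A, B, C))  # [[20, 22], [43, 51]]
```

A GPU shader core can do this too, of course; the point of a Tensor core is to complete an entire small-tile multiply-accumulate per clock rather than one multiply at a time.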

Some of you will be wondering why Nvidia has decided to bring this enterprise-grade feature into the gaming space, but the answer is simple. What can you do with AI and Deep Learning? Almost anything!

Right now, Nvidia uses their Tensor cores for Deep Learning Super Sampling (DLSS) in games, which allows Nvidia to offer similar levels of image quality to a native resolution presentation with TAA while delivering a significant performance uplift. This gives DLSS users a performance uplift that is estimated to be in the region of 35-40%, acting as a kind of "free performance upgrade" for games that support the Deep Learning algorithm. 
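As a quick sanity check on what a 35-40% uplift means in practice, the arithmetic below projects frame rates from a hypothetical 60FPS baseline (the baseline is illustrative, not a measured result):

```python
def projected_fps(base_fps, uplift_pct):
    """Frame rate after applying a percentage performance uplift."""
    return base_fps * (1 + uplift_pct / 100)

print(projected_fps(60, 35))  # 81.0 FPS at the low end of Nvidia's estimate
print(projected_fps(60, 40))  # 84.0 FPS at the high end
```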

Nvidia has stated that they plan to create other technologies that can utilise their Tensor cores. 

RT Cores

Nvidia's RT (Ray Tracing) cores are perhaps the most heavily advertised portion of the Turing architecture, acting as the industry's first form of hardware-accelerated Ray Tracing, at least in the GPU market. 

In the slide below, imagine if both the shading and RT Core workflows were in a single line, creating a long list of work for any graphics processor. The work of Nvidia's RT cores is twofold. First, they add additional parallelisation to Turing's workflow, allowing other shading calculations to be conducted concurrently, and secondly, they accelerate the Ray Tracing workload directly to complete the task at a faster rate. 
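A toy timing model illustrates the concurrency point. With hypothetical, illustrative frame times, overlapping ray tracing with shading means a frame costs only as much as the slower of the two tasks, rather than their sum:

```python
def frame_time_serial(shade_ms, rt_ms):
    """Without dedicated RT hardware: shading and ray tracing run back to back."""
    return shade_ms + rt_ms

def frame_time_concurrent(shade_ms, rt_ms):
    """With RT cores: ray tracing overlaps shading, so the slower task dominates."""
    return max(shade_ms, rt_ms)

print(frame_time_serial(10, 8))      # 18ms per frame
print(frame_time_concurrent(10, 8))  # 10ms per frame
```

Real workloads have dependencies between the two stages, so perfect overlap is an upper bound, but the model captures why parallelisation matters on top of the raw acceleration.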

When compared to Nvidia's GTX 1080 Ti, the RTX 2080 Ti is said to be roughly 10 times faster in Ray Tracing workloads, with the ability to create 10 Giga Rays per second. 
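To put 10 Giga Rays per second into perspective, a back-of-the-envelope calculation gives the per-pixel ray budget at 4K and 60FPS (the resolution and frame rate are chosen purely for illustration):

```python
rays_per_second = 10e9   # Nvidia's quoted figure for the RTX 2080 Ti
pixels = 3840 * 2160     # 4K resolution
fps = 60

rays_per_pixel = rays_per_second / (pixels * fps)
print(round(rays_per_pixel, 1))  # ~20.1 rays per pixel per frame
```

Around 20 rays per pixel is far short of the thousands per pixel used in offline film rendering, which is why hybrid rendering and denoising remain necessary.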

This kind of performance is transformative for the world of Real-Time Ray Tracing, making the task feasible within the gaming market for the first time. 

As transformative as Nvidia's RT cores are, they do not offer the performance required to fully Ray Trace games, but they do allow a form of hybrid rendering. This enables developers to merge traditional rasterisation and Ray Tracing to deliver higher levels of graphical fidelity than ever before, while completing the process in real-time on Turing hardware. 

This marks the graphics industry's first baby steps into the world of real-time Ray Tracing, something which will become increasingly relevant in the years to come. 

How it all comes together

When everything comes together, Nvidia's concurrent workflow system will allow more computational work to be completed than ever before, further parallelising GPU workflow. 

Soon, Nvidia's Tensor cores will be used to increase the clarity of games with DLSS, reducing the computational power required to render high-resolution images and offering the industry's first AI-driven performance boost. With Deep Learning, Nvidia hopes to fake high-resolution images convincingly enough that gamers will never notice the difference, something which we will test at a later date.  

With Turing, Nvidia has packed more computational power into a single graphics card than ever before, while also diversifying the compute infrastructure of graphics cards to enable new features, forging a path into the realms of Deep Learning and real-time Ray Tracing. Normally there is a performance price to pay for any spectacular increase in image quality, and that's what we're here to test.


Most Recent Comments

19-09-2018, 08:59:41

First!! I have my Brew and Bourbons ready!

19-09-2018, 09:00:11

Grr I have no F keys on my board. *F5 pretend*

19-09-2018, 09:17:14

So at 1440P the 2080 Ti is a 10-20FPS gain over a 1080 Ti, About what I expected to be honest, Definitely not worth 1100 quid though.

19-09-2018, 09:18:39

Originally Posted by Dicehunter
So at 1440P it's a 10-20FPS gain over a 1080 Ti, About what I expected to be honest, Definitely not worth 1100 quid though.
You said it dude, you said it

19-09-2018, 09:25:15

Originally Posted by Dicehunter
So at 1440P the 2080 Ti is a 10-20FPS gain over a 1080 Ti, About what I expected to be honest, Definitely not worth 1100 quid though.
Yep. I'm confident I'm going to skip this generation and wait for the next. Quite happy with the 1080 Ti's performance. Besides G-Sync kinda makes those extra 20 fps irrelevant anyway
