Nvidia RTX 2080 and RTX 2080 Ti Preview
Published: 14th September 2018 | Source: Nvidia
Nvidia's Pascal architecture has been with us since mid-2016, giving Nvidia over two years to design and build a successor that promises consumers an enormous leap in graphics performance.
What wasn't expected was Turing, a graphics core that combines all of Nvidia's graphics IP into a single package, pairing traditional shader compute with the company's AI-centric Tensor cores and dedicated ray tracing acceleration hardware. With Turing, Nvidia promises to make all of these forms of compute relevant for gaming graphics, taking the GPU industry in a brave new direction.
With Turing, it is almost easier to list what hasn't changed since Pascal. Nvidia has implemented a new core design, moved to GDDR6 memory, added support for new display standards like VirtualLink, implemented an enhanced video encoder/decoder and has even dropped a "Native HDR Display Pipeline" into the mix. That list doesn't even mention the ray tracing and AI technologies that Nvidia has built into Turing; that's how much of a technological leap Turing represents over Pascal.
Today, we will go as deep into Nvidia's Turing architecture as we can, though please note that Nvidia provided us with this information with less than 24 hours' notice, preventing us from covering every aspect of the Turing graphics architecture. Over the next few pages, we will discuss some of the most important hardware changes within Turing, and how these features will impact both modern and future games.