
Microsoft reveals their DirectX Raytracing (DXR) for DirectX 12

Enhancing DirectX 12 with potentially game-changing technology

At GDC 2018, Microsoft are making waves by unveiling a raytracing extension for their DirectX 12 API, a mechanism to bring the "holy grail" of graphics technology to the forefront and to focus future software and hardware on taking advantage of the technique.

Raytracing is what is currently used to render high-end animated movies, with offline server farms taking considerable amounts of time to create single frames for the latest Disney/Pixar films, leaving the technique widely considered out of reach for real-time game rendering.

Today's 3D games use rasterisation to create images, a technique that offers considerable advantages when it comes to raw speed. Simplifying the situation a little, rasterisation is not capable of simulating light correctly, whereas raytracing is designed to accurately model light travelling from the eye/viewport outwards, allowing reflections, shadows, refraction and other advanced optical effects to be showcased.
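To illustrate the "from the eye outwards" idea, here is a minimal Python sketch (not DXR code, and nothing to do with Microsoft's actual API) of the core operation a raytracer performs for every pixel: fire a ray from the camera and test what it hits, in this case a single sphere.

```python
import math

def ray_sphere_hit(origin, direction, centre, radius):
    """Return the distance along the ray to the first sphere hit, or None.

    `direction` is assumed to be a unit vector, so the quadratic's
    'a' coefficient is 1 and can be dropped.
    """
    # Vector from the sphere centre back to the ray origin
    oc = [o - c for o, c in zip(origin, centre)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0  # nearest of the two intersections
    return t if t > 0 else None

# A primary ray fired from the eye straight down the z axis towards a
# sphere of radius 1 centred five units away.
hit = ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
print(hit)  # 4.0 -- the ray strikes the near surface of the sphere
```

A real raytracer repeats this test against an entire scene for millions of rays per frame, then spawns further rays at each hit point for reflections and shadows, which is exactly why the technique is so expensive compared to rasterisation.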

Microsoft's DirectX Raytracing isn't meant to go all-in on the technology, but to act as a standard API which will allow both hardware and software to transition towards a real-time ray-traced future. DXR is designed to use software and hardware acceleration to introduce ray-traced elements, rather than replacing rasterisation in one fell swoop.

DXR is designed to work on today's GPU hardware, offering a fallback layer which will allow developers to get started on raytraced content in DirectX 12 today, though it is worth noting that this method will be slow to render until newer hardware arrives with hardware-level acceleration. Nvidia's Volta graphics cards are said to deliver both hardware and software support for DirectX Raytracing, with older cards only providing software support.

Both AMD and Nvidia graphics cards will support DXR, though at this time it is unknown whether AMD/Radeon hardware offers any form of hardware acceleration. It is likely that raytracing won't be worthwhile in a computational sense for a while, though this API paves the way towards a raytraced future. Our theory is that Nvidia could be using Volta's Tensor cores to accelerate raytracing, as they have showcased AI-powered raytracing optimisation before with OptiX. Does this mean that Tensor cores are coming to gaming GPUs outside of the Titan V?

(A ray-traced example image from SEED, an in-development engine from EA)


With DXR, Microsoft is offering a vital stepping stone towards real-time ray tracing, a step which game developers seem more than willing to take. So far a total of five engines have confirmed their plans to support DXR: the Frostbite Engine (EA/DICE), the SEED engine (EA, in development), 3DMark (Futuremark), Unreal Engine 4 (Epic Games) and Unity (Unity Technologies). Microsoft says that they have other partners that they cannot disclose at this time.

The question today is how far developers will go to support this feature, as DXR, like DirectX 12 itself, is limited to Windows 10 and the Xbox One. Even so, this will offer something new for hardware manufacturers to strive towards, opening up a new avenue for AMD and Nvidia to compete with each other. It's now only a matter of time before we see a shipping game with raytraced elements.

You can join the discussion on DirectX 12's new Raytracing extension on the OC3D Forums.  


Most Recent Comments

19-03-2018, 16:31:10

NeverBackDown
Wonder how performance will be even though it's only a tiny fraction of real Ray tracing as a whole

19-03-2018, 17:18:49

WYP
Quote:
Originally Posted by NeverBackDown View Post
Wonder how performance will be even though it's only a tiny fraction of real Ray tracing as a whole
It is likely that it won't be great for another hardware generation, until both sides have some form of acceleration. Or at least that is when we can start seeing these features used more widely.

Looking forward to seeing this in some games.

19-03-2018, 17:42:28

NeverBackDown
Yeah that's why I'm anxious about it. If only a $3000 GPU can handle it imagine how everyone else will feel

19-03-2018, 17:56:35

WYP
Quote:
Originally Posted by NeverBackDown View Post
Yeah that's why I'm anxious about it. If only a $3000 GPU can handle it imagine how everyone else will feel
It's not that only a Titan V can handle it, it's just that it can lessen the workload with specific features. If used sparingly it could be possible on today's gaming hardware.

Right now the prevailing theory is that Nvidia is using their Tensor cores for acceleration, which could mean that Tensor cores are coming to gaming GPUs. Harkens back to the Turing rumours, where I and others speculated that AI features/Tensor cores could be coming to Nvidia's gaming hardware.

I wonder if AMD could accelerate this with Rapid Packed Math with FP16 compute, as that would deliver a 2x performance boost right there.

19-03-2018, 18:27:51

TheF34RChannel
Tensor cores would presumably increase the price of a gaming part, wouldn't they?
