Nvidia brings Integer Scaling to its drivers, but only for Turing

Sharper upscales for retro games and pixel art titles

Some games are designed to look blocky, be it an indie pixel art title or an emulated classic running on a modern gaming PC. Sadly, these classic-style graphics are often misrepresented on today's high-resolution screens, where resolution scaling adds unnecessary blur and softens sharp pixel edges, a problem that has only become more common in the 4K era.

Earlier this year, Intel stated that it would support a feature called Integer Scaling, a method which allows classic games and pixel art graphics to be scaled to higher resolutions without losing any of the image's sharpness. With integer scaling, a single pixel is scaled to a block of 2x2 pixels, 3x3 pixels or more, allowing games to look as they should without added blur or other upscaling artefacts. Unfortunately for Intel, revealing these plans showed its hand to the competition, giving rivals a chance to offer the same feature before Intel has a chance to hit the GPU market.
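As a rough illustration of how integer scaling works (a minimal sketch of the general technique, not Intel's or Nvidia's actual implementation), each source pixel is simply duplicated into an NxN block of identical pixels:

# A minimal sketch of integer (nearest-neighbour) scaling, assuming a frame
# is stored as a 2D list of pixel values. Each source pixel becomes an NxN
# block of identical pixels, so edges stay perfectly sharp and no blur is added.
def integer_scale(frame, factor):
    scaled = []
    for row in frame:
        # Repeat each pixel 'factor' times horizontally...
        wide_row = [pixel for pixel in row for _ in range(factor)]
        # ...then repeat the widened row 'factor' times vertically.
        scaled.extend([list(wide_row) for _ in range(factor)])
    return scaled

# Example: a 2x2 checkerboard scaled 3x becomes a 6x6 image with hard edges.
original = [[0, 255],
            [255, 0]]
for row in integer_scale(original, 3):
    print(row)

Because every output pixel is an exact copy of a source pixel, no interpolation takes place and pixel edges stay razor sharp, which is exactly what retro and pixel art titles want.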

Today, Nvidia has revealed "Turing Integer Scaling", a seemingly Turing-specific feature that brings integer scaling to Nvidia's drivers, preserving pixel edge details and the ultra-sharp presentation of classic and pixel art games. The images above showcase the benefits of Nvidia's new scaling feature, which allows games like FTL: Faster Than Light to look as the developer intended on a modern, high-resolution display.

Users with Nvidia's new GeForce 436.02 driver and a Turing or newer graphics card can enable Integer Scaling as a beta feature within the Nvidia Control Panel, delivering clearer, sharper images in supported games. The driver is now available to download.
   


You can join the discussion on Nvidia adding support for GPU Integer Scaling to their drivers on the OC3D Forums.


Most Recent Comments

20-08-2019, 09:46:30

Bartacus
Wow, Nvidia finally managed to make 8-bit Nintendo games from 20 years ago look sharp. DLSS FTW, LOL!

20-08-2019, 10:32:45

RobM
Quote:
Originally Posted by Bartacus View Post
Wow, Nvidia finally managed to make 8-bit Nintendo games from 20 years ago look sharp. DLSS FTW, LOL!

It is as though Nvidia are so desperate to have something good they throw anything in.

20-08-2019, 10:53:33

WYP
Quote:
Originally Posted by RobM View Post

It is as though Nvidia are so desperate to have something good they throw anything in.
It's because Intel said it was coming to their drivers with their 11th Gen graphics. It's a niche feature, but a welcome one.

Basically, Intel said it would happen to please Reddit, Nvidia saw that and then rushed it out to say "first". Intel knows the competition is coming, and they need to do what they can to appear to have the best software stack.

If you look at the rest of the announcement, Nvidia's other features are basically answering what AMD brought to the table with AMD anti-lag and Radeon Sharpening.

20-08-2019, 13:23:17

NeverBackDown
Funny thing is Intel can do it on basic graphics while Nvidia needs the full power of their AI suite to power this.

20-08-2019, 13:38:26

tgrech
I don't think this uses any AI tech (it's just a simple nearest-neighbour algorithm rather than bi-linear interpolation). Its Turing exclusivity will likely be because Turing is the first architecture to add dedicated integer units, and with them a concurrent integer datapath alongside the floating-point one (int32 performance on Pascal was about 1/3rd the rate of fp32 at best). In Nvidia's original Turing architecture deep dive article, they state in relation to integer instructions:
Quote:
In previous generations, executing these instructions would have blocked floating-point instructions from issuing.
(So in a use case like this, older Nvidia architectures would have to constantly context switch after each frame, which incurred delays and some cache flushes. It's also possible they could use the fused multiply-add matrix instructions on the Tensor cores for an optimised nearest-neighbour algorithm, but that wouldn't be a requirement and would break GTX Turing compatibility.)
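For readers wondering about the difference tgrech describes, here is a rough one-dimensional sketch (an illustrative assumption, not Nvidia's driver code) contrasting nearest-neighbour sampling, which only needs integer index maths and keeps a hard edge intact, with linear interpolation, which blends neighbouring pixels in floating point and smears the edge:

# Hedged 1D illustration: nearest-neighbour keeps a black-to-white edge as a
# clean step, while linear interpolation spreads it across several pixels.
def nearest_neighbour(src, out_len):
    scale = len(src) / out_len
    # Each output pixel copies the closest source pixel (integer index maths only).
    return [src[int(i * scale)] for i in range(out_len)]

def linear_interp(src, out_len):
    scale = (len(src) - 1) / (out_len - 1)
    out = []
    for i in range(out_len):
        pos = i * scale
        left = int(pos)
        right = min(left + 1, len(src) - 1)
        frac = pos - left
        # Blend the two neighbouring source pixels in floating point.
        out.append(src[left] * (1 - frac) + src[right] * frac)
    return out

edge = [0, 0, 255, 255]              # a hard black-to-white edge
print(nearest_neighbour(edge, 8))    # edge stays a clean step
print(linear_interp(edge, 8))        # edge is smeared across several pixels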
