Styx: Shards of Darkness PC Performance Review

VRAM Usage

In all of our testing, Styx: Shards of Darkness was unable to consume more than 3.6GB of VRAM, making this game ideal for gamers who use 4GB GPUs. At 1080p and 1440p the game never consumed more than 3GB of VRAM, which is again great news for those using older GPUs with sub-4GB frame buffers.

At Average settings the game also uses less than 2GB of VRAM at both 1080p and 1440p, which is ideal for those using GPUs with 2GB frame buffers, a class of hardware that remains very popular amongst gamers today.

While some high-end GPU users would like to see a higher resolution texture pack to make use of larger frame buffers and increase texture detail, it is great to see that Cyanide Studio has worked to make this game suitable for older GPUs with smaller frame buffers.
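
For readers who want to check VRAM consumption on their own systems, the snippet below is a minimal sketch of how dedicated memory usage might be logged during a play session on an Nvidia card. It assumes nvidia-smi is installed and on the PATH, polls the first GPU once per second, and is purely illustrative; it is not the methodology used for this review, and AMD users would need a different tool such as their driver's performance overlay.

# Minimal sketch: poll nvidia-smi once per second and record peak VRAM usage.
# Assumes nvidia-smi is on the PATH; press Ctrl+C to stop and print the peak figure.
import subprocess
import time

peak_mib = 0
try:
    while True:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=memory.used", "--format=csv,noheader,nounits"],
            text=True,
        )
        used_mib = int(out.strip().splitlines()[0])  # first GPU only
        peak_mib = max(peak_mib, used_mib)
        print(f"current: {used_mib} MiB, peak: {peak_mib} MiB")
        time.sleep(1)
except KeyboardInterrupt:
    print(f"Peak VRAM usage this session: {peak_mib} MiB (~{peak_mib / 1024:.1f} GB)")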


Most Recent Comments

15-03-2017, 14:05:55

Lynx
Detailed review, much appreciated by a fan of the game.
Any reason why we no longer bench with a 970? It's still the most popular GPU on the Steam Hardware Survey.
Would be interesting to see how it degrades over time (Nvidia special) compared to 10 series GPUs.

15-03-2017, 18:55:20

WYP
Quote:
Originally Posted by Lynx View Post
Detailed review, much appreciated by a fan of the game.
Any reason why we no longer bench with a 970? It's still the most popular GPU on the Steam Hardware Survey.
Would be interesting to see how it degrades over time (Nvidia special) compared to 10 series GPUs.
We have never had a GTX 970 in our game testing line-up, as most of the GPUs I test were not provided by third parties (the GTX 1060 and RX 480 GPUs we use being the exceptions).

As much as I would love to test a GTX 970, the problem is where does the testing stop? Every new GPU adds more work, and if I added a GTX 970 I would also need an R9 390 for balance.

This is the big problem when it comes to covering this stuff, as my time is limited and my job is to cover both new games and news content on the website.

When it comes to covering how Nvidia/AMD GPUs age in new games, that is why we still use the R9 Fury X/GTX 980 Ti and GTX 960/R9 380 GPUs in our tests, as these cards provide some great insight into how older hardware runs in modern games, especially modern DX12 titles.

16-03-2017, 15:43:18

AngryGoldfish
Man, I'm sick of games that run so poorly on AMD. In this case it's to a ridiculous degree. People moan about games not looking as good as they should considering how powerful the GPUs they demand are. What I care about is parity and reasonable performance from both vendors. This kind of favouritism is incredibly destructive, even though demanding games in general help further the PC industry.

06-05-2017, 18:47:37

Colts
Quote:
Originally Posted by AngryGoldfish View Post
Man, I'm sick of games that run so poorly on AMD. In this case it's to a ridiculous degree. People moan about games not looking as good as they should considering how powerful the GPUs they demand are. What I care about is parity and reasonable performance from both vendors. This kind of favouritism is incredibly destructive, even though demanding games in general help further the PC industry.
It's not favouritism, nor is it the developers' fault that AMD has fallen behind in the GPU market these past few years. This is solely on AMD. When an RX 580 with 6.2 TFLOPS competes with a GTX 1060 with 3.8 TFLOPS, you have an issue; the fact is Nvidia is more efficient at getting the best performance from each TFLOP. This is all on AMD and has nothing to do with game developers or Nvidia. If AMD's Vega were to take the crown for best GPU it would need to be about 18 TFLOPS to compete with Nvidia at 12 TFLOPS, which also tells you that Vega at 12.5 TFLOPS, the high-end card, is only going to compete with the GTX 1080 and nothing above it. AMD fanboys need to face this reality: AMD just isn't as good as Nvidia at making GPUs, but we need both GPU vendors, and that is a fact. Just quit being so blind.

08-05-2017, 09:10:34

AngryGoldfish
Quote:
Originally Posted by Colts View Post
It's not favouritism, nor is it the developers' fault that AMD has fallen behind in the GPU market these past few years. This is solely on AMD. When an RX 580 with 6.2 TFLOPS competes with a GTX 1060 with 3.8 TFLOPS, you have an issue; the fact is Nvidia is more efficient at getting the best performance from each TFLOP. This is all on AMD and has nothing to do with game developers or Nvidia. If AMD's Vega were to take the crown for best GPU it would need to be about 18 TFLOPS to compete with Nvidia at 12 TFLOPS, which also tells you that Vega at 12.5 TFLOPS, the high-end card, is only going to compete with the GTX 1080 and nothing above it. AMD fanboys need to face this reality: AMD just isn't as good as Nvidia at making GPUs, but we need both GPU vendors, and that is a fact. Just quit being so blind.
I'm no engineer or developer, but you just confirmed what I was saying. When a 3.8 TFLOP GPU can beat a 6.2 TFLOP GPU, would you not say that is in part due to the way games are developed? In the same way a 7600K at 100% load can beat a 6c/12t CPU at 40% load, the software infrastructure is not currently suited to such hardware. How is that AMD's fault? I've been disappointed by AMD's choices, but I don't blame them entirely. So I heavily disagree with you: this is not solely on AMD.
