ASUS nVidia GTX590 Review
Introduction and Technical Specifications
Published: 24th March 2011 | Source: ASUS | Price: £579.99

Introduction
Putting two GPUs onto a single circuit board is a fairly recent trend for the highest models in a graphics card series. Although it had been tried before with varying success, it was the GTX295 and the ATI HD4870X2 that really proved it was both possible and beneficial to run SLI/CrossFire on a single card.
Of course the primary benefit is that you can buy two and go for a quad-GPU setup, but there are also energy savings compared to running two single cards.
Heat, however, is the largest hurdle to overcome. It has been dealt with either by reducing the performance of the GPUs used, or by resorting to a cooler that sounds like a 747 taking off.
Today we're looking at the latest attempt from nVidia, the GTX590. It comes at the perfect time, as we've only just reviewed the AMD HD6990 and so we can really get to grips with which of the two is the king of single-card performance.
Without further ado let's get to it.
Technical Specifications
Product Name | GeForce® GTX 590
GPU | Dual NVIDIA® GeForce® GTX 500 series
Engine Clock Speed | 607 MHz
Unified Shaders | 1024 (512 per GPU)
Shader Clock | 1215 MHz
Memory Clock Speed | 3414 MHz (effective)
Memory | 3072MB GDDR5 (1536MB per GPU)
Memory Interface | 768-bit (384-bit per GPU)
Display Outputs | Triple DL-DVI-I, Mini-DisplayPort
HDCP | Yes
Cooling | Active (dual-slot, with fan)
DirectX® Version | DirectX® 11 with Shader Model 5.0
Other Hardware Features | 8-channel digital surround sound, HDMI 1.4a compatible, HD audio bitstream capable, hardware-accelerated Blu-ray 3D ready, Quad NVIDIA® SLI™ ready, NVIDIA® 3D Vision™ Surround ready
Software Features | nView® Multi-Display, Hardware Video Decode Acceleration Technology, NVIDIA® CUDA™ technology, OpenGL® 4.1
Windows 7 Capability | Windows® 7 with DirectCompute support
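
As a quick sanity check on the memory figures above, here is a minimal sketch of the implied bandwidth. It assumes the 3414 MHz value is the effective (quad-pumped) GDDR5 data rate and that the 768-bit interface is simply two independent 384-bit buses, one per GPU; both are reasonable readings of the table rather than anything confirmed by ASUS.

```python
# Rough memory bandwidth estimate for the GTX 590 spec table above.
# Assumptions: 3414 MHz is the effective GDDR5 data rate, and each GPU
# drives its own 384-bit bus (the 768-bit figure being 2 x 384-bit).

effective_data_rate_mhz = 3414      # effective memory clock from the table
bus_width_bits_per_gpu = 384        # per-GPU memory interface
gpus = 2

bytes_per_transfer = bus_width_bits_per_gpu / 8                      # 48 bytes
per_gpu_gbs = effective_data_rate_mhz * 1e6 * bytes_per_transfer / 1e9

print(f"Per GPU : {per_gpu_gbs:.1f} GB/s")         # ~163.9 GB/s
print(f"Per card: {per_gpu_gbs * gpus:.1f} GB/s")  # ~327.7 GB/s (nominal)
```

The per-card total is nominal only, since each GPU can address just its own 1536MB pool.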
Most Recent Comments
I'm a bit disappointed in the new Nvidia card.
@ sources95
Your 580s will eat this as a light snack.
It does seem that the 6990 is better value for money, but I'm sure when other manufacturers bring out custom coolers and such, that will make a difference to the heat and noise issue, making the 6990 an even better buy, depending on the price of course.
And I thought £5 was the lowest note; 600 of them is £3k.
The heat of the 4870x2 literally heated my room to uncomfortable levels. Can't be dealing with that again from a 6990, and the power just isn't there in the 590 judging from this review.
I'll read a few more later for comparison's sake.
I remember getting the 295 in the £4**'s and the 4870x2 in the high £3**'s.
Prices are now obscene.
Either way, the 6990's average OC headroom is about 10% (830-910MHz ish) while the GTX590's OC headroom is around 32% (607-801MHz). Result: horrendously quick, and I'm flabbergasted :|
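
For anyone wanting to check the headroom figures quoted in that comment, the arithmetic is simple percentage gain over the stock clock; the sketch below just reuses the commenter's clock ranges rather than any measured values.

```python
# Overclocking headroom as quoted in the comment above:
# HD 6990: 830 MHz stock to ~910 MHz; GTX 590: 607 MHz stock to ~801 MHz.

def headroom_pct(stock_mhz: float, oc_mhz: float) -> float:
    """Overclock headroom as a percentage of the stock clock."""
    return (oc_mhz - stock_mhz) / stock_mhz * 100

print(f"HD 6990: {headroom_pct(830, 910):.1f}%")  # ~9.6%, the 'roughly 10%' quoted
print(f"GTX 590: {headroom_pct(607, 801):.1f}%")  # ~32.0%, matching the 32% quoted
```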
http://www.youtube.com/watch?v=sRo-1VFMcbc
What driver were you using? As there appear to be some problems! Looks like the drivers supplied with the card are faulty; Nv released some new drivers this morning (just for this card).
EDIT: just noticed the GPU-Z screenshot, you were using the bad drivers. You're lucky it didn't blow on you; as well as that vid I posted, it's happened to other reviewers as well (TPU for one).
Whoa - £600 @ Aria. For an extra £60 I could get THREE MSI GTX560Ti Twin Frozr II OC editions and have them in tri-SLI.
The GTX560, like the 460, only supports dual SLI.
EDIT: I cannot justify paying that amount of money to play ONLY about a dozen "worth-playing" games that would benefit from such a card!
Tom, now that you've tested both GTX 590 and AMD HD6990, which one would you recommend in a WATERCOOLED setup?
I think nvidia did listen to its users for once. The card may be beaten by a 6990 at very high resolutions, however the card runs a lot cooler. There isn't a vacuum cleaner in your PC going round. Secondly, if you look at all the different benchmarks out there it seems nvidia aimed at the 1920x1080 resolution. This is the most used resolution, and if you look closely the 590 beats the 6990 most of the time. So well done nvidia. Great card.
Quote: Whoa - £600 @ Aria. For an extra £60 I could get THREE MSI GTX560Ti Twin Frozr II OC editions and have them in tri-SLI.
At the end of the day it's down to how much money you are happy to part with against performance. A 590 is around £575 on Scan, so about £35-40 more than a hair dryer, sorry, 6990, and £220ish less than 580 SLI.
In most games, by the looks of it, 580 SLI cleans house, with the 6990 performing better than the 590, but I personally could not live with a wailing banshee in my rig, and I'm not prepared to splash out £800ish on 2 x 580s, so I'm more than happy to pay a little extra to get a 590 over the 6990 for the substantially reduced noise and more refined drivers.
Having said that, I'd love to see someone do a direct comparison of 2 x 570s against a 590 and 6990, as it does seem to be the best bang for £/$ setup from what I have seen.
There's also been a new GPU-Z out this week that shows the card specs of both of the dual cards much better. I would hope the memory overclock in today's review is an oversight of GPU-Z.
You can "alternatively" power the 8-pin connectors to draw more power (if you care less about that sort of thing) and get 580-based clocks out of it. But unless you manage the cooling, I'd not suggest it for 24/7.
The price of these dual cards is fantastic, but to be fair, dual cards are by nature high priced.
I'll go back to it again: just as with the 6990/xfire/6970/xfire OC3D reviews, some of the graphs are still showing significant drops in framerates for overclocked runs in benches of the games - which just can't be the case. Something is definitely not right.
Imo you can pick either of these dual cards. If you favor DX11 games, I'd get the 6990 (well, not me, cos I like PhysX and quality), and if you play DX9/10 you can pick the GTX590 or 6990. Both of them are spanking performance for what they are, and if you complain about the price of both, you really ought not to be considering them.
Fastest single GPU card for me every time.
And btw, I would love to see Bad Company 2 in the benchmarks.
Quote: Tom, now that you've tested both GTX 590 and AMD HD6990, which one would you recommend in a WATERCOOLED setup?
Well the primary problem with the HD6990 is heat and noise. A quick look at the graphs will show that the HD6990 whips the GTX590 where there is a difference. So with watercooling solving the heat/noise issue, it's an easy pick for the HD6990. But you'd still be better off going GTX570 SLI, assuming your budget won't stretch to GTX580s in SLI.
Quote: We'll be rejigging our game benchmarks soon and BC2 is likely to make an appearance. Although personally I can tell you the GTX590 will eat it for breakfast at any setting.
As far as I can tell, Bad Company 2 does like AMD/ATI more, but my problem is AMD/ATI drivers. My question is whether a single GTX570 (for now, SLI later) will be stronger than a GTX 295 (never mind the DX 10/11)?
I do play BC2 (too much) and I don't know if it's worth the jump.
Quote: Had these 590s sat at 800/1600/4000 stable and issue-free, so either these are fantastic silicon or yours is a little, erm.. lacking.
This card definitely wouldn't go any further and keep increasing the scores.
Can't believe that for the 1st time TTL seems to be pretty much the last reviewer to put a 590 vid up.
Game-wise VB is correct and we will be bringing some new games in to test with; we just want to choose wisely.
We are waiting to test Crysis 2 and Shift 2 to see if they are demanding enough and not just gay console ports like MOH and COD.
@ NEPAS
We tested originally on the beta driver and then again on the new driver after it was released; I just forgot to GPU-Z the new drivers. Not bad considering the card was only here for 36 hours too :S
YouTube is being a PitFA today. That and I'm meant to be having some time off!
Enjoy your time off Tom, I know you will be living it up Saturday

Quote: Tom, now that you've tested both GTX 590 and AMD HD6990, which one would you recommend in a WATERCOOLED setup?
Wait for the video dude, it's covered in that, but tbh it's pretty much covered in this review as well.
I know you stated best bang for pound was SLI GTX 570; could you possibly please do a review of that?
I'm really annoyed that the HD 6990 is better; I had really high hopes for the GTX 590. Hopefully there will be a non-reference one with more power connectors so the clocks can be ramped up a little more and push it closer to GTX 580 SLI.
Quote: It's a bit odd imo; if you are going to be spending £600 on a GPU, you would think the average user of that GPU would have a res of at least 1900x1200.
Just saying I can see what they aimed at. Scoring-wise at higher resolutions I think it's a bit disappointing.
I would have thought that with the GPU used from the 580 it would perform a bit better.
And spending 600 quid on a GPU is a bit silly anyway, for me at least.
All cards nowadays perform well at higher resolutions anyway.
For me, getting a card like the 590 or 6990 isn't about getting the highest performance but the WOOT factor, or the show-off factor. These cards are more for benchers in my point of view anyway.
People like Tom, for instance, who love searching for the limit and fiddling around with it.
Doesn't mean, though, I like what nvidia has done with it and also what they're aiming for.
So yeah, I say way to go green team. Red team isn't doing badly either, but should have paid more attention to sound and heat.
So it's the 6990 then. Thanks Tom. Me neither. I actually sold my 5970 hoping to jump on the GTX 590. It looks like I'm gonna have to stick with AMD still.


TBH with this sort of money to spend I have to say the sensible money is 2x GTX570s. Dual waterblocks and some serious overclocking would be nerd p0rn.
The problem with this is that the cost of 2 waterblocks would bring the total to around $1000 for two GTX 570s in SLI, whereas a single GTX 590 or HD6990 with a waterblock would cost $800 for practically the same performance.
Haha, point taken. But you could argue you can add another 6990 for CrossfireX later too lolz.
Yes, but then instead of having a hoover running all the time, you will have two hoovers running all the time.
I'm sure you've tested the GTX570 in SLI against the HD6990 to make such outlandish claims. GTX570 can be overclocked to around GTX580 performance, so two would be GTX580 SLI speeds.
Now if you want to claim that a HD6990 or the GTX590 is equal to a GTX580 SLI setup, good luck with that.
TBH it also annoys me that you can get an HD 6990 for £540 and a GTX 590 for £580+ even though the GTX 590 on paper is worse.
*EDIT: Well, the cheapest 590 and the cheapest 6990.
Quote: I'm sure you've tested the GTX570 in SLI against the HD6990 to make such outlandish claims. GTX570 can be overclocked to around GTX580 performance, so two would be GTX580 SLI speeds. Now if you want to claim that a HD6990 or the GTX590 is equal to a GTX580 SLI setup, good luck with that.
.. which you do realize is the equivalent of suggesting 2x480 in SLI, and it would have more memory and cost less money. The 480 is now £196 and the 570 around £260 (at Scan as of right now, for example).
Watercooled or 3rd-party cooled, I honestly never thought I'd see this suggestion after all the palaver.
They have only been on sale for 8hrs in the UK; no-one has had the opportunity to really see what this baby can do, single or SLI'd. I will sit on the fence and wait a few weeks before I decide which is the best step forward for me.
By then the prices may have dropped a bit too.
Then again, first-day purchases of such hardware are a bit stupid in my opinion.
I am looking to replace my 2 GTX 280 cards with either 2 560 Tis, or if the price goes down, 2 570s. However, the 590 is looking very interesting, and given time to let the drivers mature, I might pick one up for the sole reason that it runs quiet and gives my audio card some breathing room, as it's right on top of one 280 card now and does get a bit hot.
HOWEVER, here's my question: I currently run 2560 x 1600 and I was curious if I'm going to run into a VRAM issue and if in my case I'm better off with the 6990?
Crysis 2 is just a gay port Tom. Just played it on the PC and there are no user settings to set the texture quality or the AA; it's complete bulls**t. I guess we can put Crytek on the list of PC sellouts. Good game though. There is no DX 11 or 64-bit built in to the game as of yet; Crytek say they will bring out a patch in the next few weeks.
Yeah, atm the game is an obvious console port; it doesn't even scale properly with multi-GPUs atm. Hopefully with the patches it will become more of a PC game and a less obvious port. Crysis that's playable on max settings isn't Crysis.
Apparently the "hooky" download of the game that was leaked also had 64-bit bins in it.
http://www.techpowerup.com/142842/EVGA-Storms-Forth-GTX-590-Launch-with-Four-Classified-Series-Products.html
Tom, any chance of you doing a quick video to show how this card runs 3 monitors? Would really like to know if it's as good as Eyefinity.
I think the 590 is geared to use full PCI Express 2.0 x16 bandwidth at full load and triple-monitor resolutions, so it doesn't always need to be faster; maybe it'll cripple at 3D Vision, and enthusiast overclockers, gamers etc. would go for single-core SLI instead for those reasons and more: their motherboard is designed for it, and as quad SLI is a rare performer in games and these are GeForce cards, one 590 is ideal for the media and entertainment computer and motherboards with one PCI Express 2.0 x16 slot for graphics. Getting a proper enthusiast rig needs the motherboard to begin with. I'll be upgrading when I've lived a lot of overclocking life on my cards I think. The 590 might need PCI frequency OCs to get more from it, and I read the review and they didn't get a massive OC on the 590.
Thanks for another video Tom! Hope the weather is nicer in England, starting to get swamplike down here again in Georgia
-Gentlemen
Nvidia has a serious power problem with the 590: if they increase the clocks on the 590, the power goes above PCI Express limits. So overclock at your own risk. Just remember that in this case you are risking your motherboard as well as your graphics card.
Nope. Every card with an additional power source plugged into the PCB, in addition to the PCIe slot feed, is intended to go over the PCIe limits.
There's nothing stopping any manufacturer putting 4x 8-pin PCIe power connectors on a card. There are no limits. You can have a 700W++++ card if you want.
The 590 "could" be overclocked to 580 levels, and probably beyond, not by conventional methods - AND - if the circuitry around them is up to it, which will depend on the build.
Time will tell, just keep an eye on those overclocking records.
Yes, you are correct, high-end graphics cards usually have additional power inputs; they can be 6-pin, 8-pin or both. That said, the following is an excerpt from the PCI-E 2.0 spec:
A11: PCI-SIG has developed a new specification to deliver increased power to the graphics card in the system. This new specification is an effort to extend the existing 150 watt power supply for high-end graphics devices to 225/300 watts. The PCI-SIG has developed some boundary conditions (e.g. chassis thermal, acoustics, air flow, mechanical, etc.) as requirements to address the delivery of additional power to high-end graphics cards through a modified connector. A new 2x4 pin connector supplies additional power in the 225/300W specification. These changes will deliver the additional power needed by high-end GPUs. The new PCI-SIG specification was completed in 2007.
What that means is that at maximum, with the two 8-pin connectors plus the 75 watts from the PCIe slot, there is a maximum of 375 watts available under PCIe 2.0. Nvidia says the GeForce GTX 590 is a 365W board. By the way, there is a reason Nvidia did not add a third 8-pin connector, or a fourth for that matter: they will not build a card outside of the PCIe 2.0 specification. If they did, they would have to warn anyone who installed such a card that they are voiding the warranty of their motherboard. If it were as simple as just adding power ad hoc, Nvidia probably would have done it so they could run the 590 at higher clock rates.
Sure, a manufacturer could put as many power connectors on a board as they want. Doing so would put the product outside the PCIe 2.0 specification, so no one would be stupid enough to install the thing. Which is why none of them are doing it; neither Nvidia nor AMD have engineers so moronic that they would design a board out of spec.
I hope that cleared things up for you.
Well no actually, two 6950s in Crossfire will outperform two 570s in SLI because the 69xx cards scale better in dual and triple card configurations. The other advantage of the 6950 is that they are substantially less expensive. If you are worried about noise with these cards, get them with an aftermarket cooler; they are still cheaper than a pair of 570s. If you decide to BIOS-mod them they will be faster than a pair of 580s.
Thing you need to bear in mind also is that that specification is years old (2007), when the thought of anything coming close to 365W was crazy talk, even though there are/were professional cards that make a mockery of that.
Another point is that the PCI-SIG would not specify that the slot AND external power together put forward a limit to the power considerations regarding it. They would ONLY concentrate on the slot itself. All the quoted paper suggests is "with the addition of the suggested/new" 8- and 6-pin power connectors (i.e. the quoted NEW 2x4-pin) - this is at a point in time (2007) when we moved from 4-pin Molex as very much a standard to 6- and 8-pin PCIe.
Time has moved on, and besides manufacturers of mobos moving onto PCIe 2.0a/b/c/etc. addendums to the original PCI-SIG submission for the original 2.0, there's nothing preventing the PSU manufacturers suggesting a 10-pin PCIe power connector. Or to hell with it, here's a 20-pin.
Reading the PCI-SIG on 3.0 last week, there is no mention of wattage boundaries, only the inference that "they can supply more power" whilst at the same time "will be more efficient" - which can be read a number of ways. Plenty of bandwidth speech.
In effect there is no limit put forward by the 2.0 specification, except for the slot itself and suggestions of what can be added to it, taking into account the technology available at the time of writing.
The PCIe 2.0 spec sets the total power standard for high-performance PCIe cards. The specification is very clear about the overall power and thermal limits regardless of power source. It is true that the PCIe 3.0 standard "may" increase some of those power/thermal limits; however, the PCIe 3.0 standard is not finalized and is not available on any motherboard available to the public.
How bizarre, cos you can look at the 150W specification here: http://www.pcisig.com/specifications/pciexpress/graphics/ .. and you can search for anything in addition (if you have membership) and there's nothing to be found, outside of just 150W (and suggestions on how to achieve up to 365W). It used to have 'suggestions' of how to reach 300W, which are now deleted (or struck through, as is the method).
Also, you can download the 3.0 base spec plus the to-come 3.1 spec.
The PCIe 3.0 spec was released to the PCI-SIG partners on November 18, 2010. PCI-SIG expects the PCIe 3.0 specifications to undergo rigorous technical vetting and validation before being released to the public. This process, which was followed in the development of prior generations of the PCIe Base and various form factor specifications, includes the corroboration of the final electrical parameters with data derived from test silicon and other simulations conducted by multiple members of the PCI-SIG.
The PCIe 3.0 final production spec is likely to change as the many PCI-SIG stakeholders produce functioning silicon from the current spec. As a result, neither Intel nor AMD plan on including PCIe 3.0 on their current chipsets (Sandy Bridge, Bulldozer). Both companies have "suggested" that they don't intend to integrate PCIe 3.0 until late 2012 or 2013. This is early speculation from both companies, so those estimates could be substantially delayed.
Now on to the matter of the physics. The main reason there is a 375 watt per slot limit in the PCIe 2.0 spec is that when you put that much power in, you need to get that much heat out. Given the space constraints of the form factor, the PCI-SIG partners agreed that 375 watts as an upper thermal and electrical limit per slot would be sufficient. The limit can be doubled simply by using two PCIe 2.0 slots (Crossfire, SLI).
It is unlikely that the PCI-SIG partners will increase the 375 watt limit in the PCIe 3.0 production spec for two main reasons: one, the increased production cost is prohibitive, and two, as the pitch of GPU silicon is reduced, the number of transistors that can be included will increase while producing far less heat. That is to say that future GPUs are likely to use less power and produce less heat while improving performance. There simply isn't any need to increase the thermal or electrical profiles.
I have two questions for you, Rastalovich:
If Nvidia could have simply added another power connector, why didn't they?
Is it because they are stupid or because they are smart?
375W wouldn't be put forward as a limit due to heat dissipation within a PC case, as they know full well you can put 4x cards in xfire/SLI within said case - 8x if you count the productivity servers you can install parallel GPU setups in.
To insist 375W was the max and then allow the further addition of PCIe x16 electrical slots would be silly, don't you think?
EDIT: I have a feeling that maybe you're not seeing my view of how the system works, so I've put together a "brief" explanation in the best layman's terms I can:
In the model we're looking at, there are 3 prominent bodies:
ATX
PCI
Graphic card manufacturers
ATX come out with the standards of which power supply manufacturers are suggested to abide by when producing psus for the industry.
PCI have standards that apply to the use of busses, in the main, that are most commonly looked at as slots (even though they can be integrated also).
Graphic card manufacturers obviously produce the cards in our little model that display stuff - basically. AMD, Intel, nVidia, Silicone, Matrox and a few more.
Aside from these 3 there are many groups, with their own standards to which the above three bear in mind when putting forward their own studies/papers/standards, which range from safety people, environmental people, electrical, motherboard and other component people, the list does go on quite a bit.
Between all these groups there is a whole load of interaction, co-operation and studies. As an example, a graphic card manufacturer will come forward wanting to make an oem card that oems can use in their mass produced pcs aimed at business and the public. The oem has told them, as they usually do -very strongly- that they don't want ANY external power connections to this card, but it has to be more powerful than the present integrated/embedded selection. In this case, the gfx people can think of power and look directly at what PCI have put forward. They'll comply with their papers on what they've had motherboard manufacturers in turn comply with.
As time goes by, the consumer market gets more demanding. When PCI came out with their new standard for their gfx slot, they gave everyone the boundaries at which the power would/could be used. As the gfx cards advanced, their manufacturers saw the easy option of adding an additional Molex cable to the side of their card's PCB to go beyond what PCI had stated would be available. Everyone spoke, and PCI added their errata/addendum to their previous paper on power use. It now includes "to achieve the power required for blah blah, a single Molex is used" and so on.
Now the PCI's paper will include this addition. The bar has been raised as far as the gfxcard people are concerned, and time continues to move on, advances are made on this new power level.
Oems (HP, Dell, Acer, etc) btw are still insisting on NO extra pcb plugs.
The gfx people have reached a new era in their R&D; they need to surpass this poxy Molex supply. They talk with the ATX people, who come up with a new type of connector (the 6-pin PCIe for example). They produce their new paper, ATX x.x, and in turn PCI will catch wind of this - run a bunch of tests, do some studies, tell everyone they're happy, and bring out their new errata/addendum to the existing PCI standard.
Now the PCI's paper will include yet another raising of the power level supporting the *new* 6 pin pcie connector.
Repeat for the use of 2x 6pin, 8pin and 6+2pin, 2x 8pin and so forth.
Theoretically, the gfxcard people and the ATX people could be in discussion about a new 10 pin or 6+2+2 pcie connector. Each of the people will talk to each other, tests will be done, as per usual, regulations and studies will be re-issued with a new proposed power level. The ATX people say they'll bring out psus with 1x 8+2+2 connectors for the lower end of the market and 2x for ... possible dualing of these newer cards. Bringing a possible new power threshold that 2x 10pin connectors on a single gfxcard can handle. (purely theory, I can't see this happening with the proposed new die shrinks also - but who knows - it is possible)
As each of these groups talks with the others and conducts their own internal tests, the bar is continuously raised. A quoted paper stretching back to 2007 regarding the power levels for PCIe use can only make suggestions about what is currently available. It, at that time, had little idea that 2x 8-pin might become that popular.
One thing is for sure: stress to PCBs due to the plugging, unplugging, pull and such like of additional power sources is not favoured, which is why a lot of OEMs dislike them. One of the defences against a 2x 10-pin power arrangement is that it emulates the mobo power connectors, which would come too close to stressing the rear or top of the card. But ingenious inventions could work around it somehow. 3x 8-pin is obviously suggesting a similar cable to what mobos have now. Sticking those at the back of an 11-inch PCB is not wanted, I don't think.
375 watts, is that the max 2 x 8-pin + mobo power can give? Because the AMD 6990 can draw 450 watts with the faster BIOS, according to AMD's website a few days ago.
8-pin = 150 watts
mobo PCI Express slot = 75 watts
I think.
Quote: They would go along with the PCI-SIG suggestion of what they've ratified in conjunction with what the ATX x.x standard put forward as a method of power supply, i.e. "we've made an 8-pin PCIe connector" - and PCI-SIG adjust their documents accordingly once it's passed through the ECN.
So this whole back and forth started because I said:
Nvidia has a serious power problem with the 590: if they increase the clocks on the 590 the power goes above PCI Express limits. So overclock at your own risk. Just remember that in this case you are risking your motherboard as well as your graphics card.
and you replied:
Nope. Every card with an additional power source plugged into the PCB, in addition to the PCIe slot feed, is intended to go over the PCIe limits.
There's nothing stopping any manufacturer putting 4x 8-pin PCIe power connectors on a card. There are no limits. You can have a 700W++++ card if you want.
The 590 "could" be overclocked to 580 levels, and probably beyond, not by conventional methods - AND - if the circuitry around them is up to it. Which will depend on the build.
Time will tell, just keep an eye on those overclocking records.
Now the original question that I replied to was:
"why is the Nvidia 590 clocked so low"
I stand by my answer. Nvidia says that the 590 at load draws 365 watts, and that the clock rates they set for the reference 590 are to ensure compliance with PCIe electrical/thermal standards. The fact that Nvidia did not characterize this as a problem doesn't make it any less the case. The fact is, if Nvidia could have set the reference clocks higher they would have. Nvidia would love to claim the fastest single graphics card title; as it stands, AMD's 6990 holds that title, costs between $75 and $100 less and uses less power. The one key drawback of the 6990 reference design is noise, so cheers to Nvidia for making the 590 quiet. That said, the OEMs already have aftermarket cooling in the pipeline for the 6990; soon they will be very quiet as well, and you can bet it won't carry a $75 premium.
That said, a pair of 6950s in Crossfire will outperform both the 590 and the 6990. Two 6950s in Crossfire will outperform the more powerful 570s in SLI because the 69xx cards scale better in dual and triple card configurations. There are very, very quiet versions of the 6950 available from several OEMs. The 6950 is a lot less expensive than any of the other above options. So I say if you need extreme performance and you're smart (like a good value), get a pair of 6950s and call it a day.
8-pin: 150 watts each
PCIe slot: 75 watts
So 2 x 8-pin + PCIe slot = 375 watts.
Wow, if there is a faster BIOS that can cause a 75 watt increase in power draw over the PCIe spec, that is one BIOS I would have to say "not a chance in bleep" to. There is no way I would flash that to my card. Crazy..
J, can you post a link? That is one train wreck I just have to see.
Thanks man
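
To make the power arithmetic in this exchange concrete, here is a small sketch using the commonly cited PCIe CEM ceilings (75W from the x16 slot, 75W per 6-pin and 150W per 8-pin connector). The connector layouts shown are the reference GTX 590/HD 6990 arrangement plus a hypothetical 6-pin + 8-pin card for comparison.

```python
# Nominal board power ceilings from the commonly cited PCIe CEM figures:
# x16 slot = 75 W, 6-pin connector = 75 W, 8-pin connector = 150 W.
SLOT_W, SIX_PIN_W, EIGHT_PIN_W = 75, 75, 150

def board_power_budget(six_pin: int = 0, eight_pin: int = 0) -> int:
    """Nominal ceiling for a card fed by the slot plus the given
    number of 6-pin and 8-pin auxiliary connectors."""
    return SLOT_W + six_pin * SIX_PIN_W + eight_pin * EIGHT_PIN_W

print(board_power_budget(eight_pin=2))             # 375 W: dual 8-pin reference 590/6990
print(board_power_budget(six_pin=1, eight_pin=1))  # 300 W: hypothetical 6+8-pin card
```

That 375W figure is why the 590's quoted 365W board power sits just under the limit being argued about, and why the 6990's 450W "Extreme" BIOS discussed below steps outside it.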
If by "suggestion" you are referring to industry-accepted standards, then I agree. Trade organizations create standards, IP companies design to those standards, manufacturers build to those standards. The result is a larger and more stable market for the consumer, which at the end of the day is what it's all about.
As do/will the power statements.
You're right, 375 watts was not put forward as a total thermal limit within a PC case. 375 watts is the electrical/thermal limit of a single PCIe 2.0 slot. PCIe does not restrict the total number of potential PCIe 2.0 slots. ATX, however, seems to think that 7 expansion slots are enough. So for the most part, 7 PCIe slots are the most you can get on a standard ATX motherboard.
Almost; 375 was the most that was wanted/could be supplied, within reason, given the wants of the gfx people and what could be supplied by ATX power within reason.
PCI (slots/busses) and ATX do not between themselves alone decide how many slots a mobo can supply for expansion. There is no standard for this. It's mostly down to controllers, chipsets to service the busses and what the mobo manufacturer has in mind. They often tie up a number of lanes with onboard devices - again, really depending on what embedded companies they have 'onboard' with their design.
No, because by putting that load on another PCB in another PCIe 2.0 slot, the surface area for thermal dissipation has at least doubled. What is silly is suggesting putting that same thermal load "in this case 750 watts" on a single PCIe 2.0 card. Thank you for making my point for me.
That's true; best I can tell, you think industry standards are just suggestions because there is no enforcement body "other than the marketplace". What you seem to be saying is that in theory a graphics chip manufacturer could design and build a single 750 watt graphics board. OK, sure, in theory; my point is they won't, because of those pesky industry standards. That, and putting that much heat in that small a form factor without an extraordinary cooling solution is a good way to start a fire.
2x 8-pin sockets 'can' and will supply beyond 450W if required; it can be required for overclocking 2x 8-pin cards, especially when going beyond a stock cooler.
Here oneseraph: http://www.amd.com/uk/products/desktop/graphics/amd-radeon-hd-6000/hd-6990/Pages/amd-radeon-hd-6990-overview.aspx#4
Thanks oneseraph and Rastalovich
Looks like AMD and Nvidia both agree with oneseraph.
"Dual-BIOS Support
The AMD Radeon HD 6990 graphics card features dual-BIOS capabilities. This feature is controlled by the Unlocking Switch, which toggles between the factory-supported Performance BIOS of 375W (BIOS1), and an Extreme Performance BIOS (BIOS2) that can potentially unlock higher clock speeds and up to 450W of mind-blowing performance!
Caution:
Do not use the 450W setting unless you are familiar with overclocking and are using high-quality system components to ensure maximum system stability. If you encounter system instability or other unexpected system performance while using the 450W setting, return the graphics card to the factory-supported 375W setting, as your system may not be properly equipped to handle the increased demands of the 450W setting.
The following procedure describes how to switch between BIOS settings using the Unlocking Switch on your AMD Radeon HD 6990 graphics card.
Locate the yellow caution sticker adjacent to the AMD CrossFireX connector on your AMD Radeon HD 6990 graphics card. This sticker covers the Unlocking Switch and must be removed to access and change dual-BIOS switch positions.
WARNING: Before proceeding, thoroughly review the documentation for your AMD Radeon HD 6990 graphics card and assure that your computer meets all minimum system requirements.
Remove the sticker and set the Unlocking Switch to the desired setting:
Position 1 450W Extreme Performance BIOS (BIOS2).
Position 2 (shipping position) 375W factory-supported Performance BIOS (BIOS1).
WARNING: AMD graphics cards are intended to be operated only within their associated specifications and factory settings. Operating your AMD graphics card outside of specification or in excess of factory settings, including but not limited to overclocking, may damage your graphics card and/or lead to other problems, including but not limited to, damage to your system components (including your motherboard and components thereon (e.g. memory)); system instabilities (e.g. data loss and corrupted images); shortened graphics card, system component and/or system life; and in extreme cases, total system failure. AMD does not provide support or service for issues or damages related to use of an AMD graphics card outside of specifications or in excess of factory settings. You may also not receive support or service from your system manufacturer.
DAMAGES CAUSED BY USE OF YOUR AMD GRAPHICS PROCESSOR OUTSIDE OF SPECIFICATION OR IN EXCESS OF FACTORY SETTINGS ARE NOT COVERED UNDER YOUR AMD PRODUCT WARRANTY AND MAY NOT BE COVERED BY YOUR SYSTEM MANUFACTURERS WARRANTY."
I have read all the back and forth between you and oneseraph. I decided to do a little research of my own. I thought you both might be full of it. Here is what I found out.
Standard ATX allows 7 expansion slots.
http://www.formfactors.org/FFDetail.asp?FFID=1&CatID=1 - look at 3.3.1 Expansion Slots.
oneseraph 1, Rastalovich 0.
PCIe power limit: 375 watts.
oneseraph 2, Rastalovich 0.
Quote: No, because you can put 1000W, for example, on a PCIe 2.0 card if you chose to do so, as long as you manage the waste (heat) effectively. They could create a 4x GPU card that was 22 inches long and only fitted in a customized case if they really wanted to. And it would be within the PCIe base standards.
Both Nvidia and AMD have stated that they are staying within the PCIe 375 watt limit.
oneseraph 3, Rastalovich 0.
Basically everything I look up points to oneseraph being right. No offense, but the more you comment the less you appear to know. I suggest letting it go mate, you are on the wrong side of the debate.
This is what happens when you operate so far out of spec. First you get the following hilarious disclaimer.
"WARNING: AMD graphics cards are intended to be operated only within their associated specifications and factory settings. Operating your AMD graphics card outside of specification or in excess of factory settings, including but not limited to overclocking, may damage your graphics card and/or lead to other problems, including but not limited to, damage to your system components (including your motherboard and components thereon (e.g. memory)); system instabilities (e.g. data loss and corrupted images); shortened graphics card, system component and/or system life; and in extreme cases, total system failure. AMD does not provide support or service for issues or damages related to use of an AMD graphics card outside of specifications or in excess of factory settings. You may also not receive support or service from your system manufacturer."
Really, going beyond the PCIe 375 watt limit by a solid 75 watts could damage your system components (including your motherboard and components thereon, e.g. memory). Wow, that's surprising. Could cause complete system failure, you say? Hmm, that doesn't sound good. If I do this, you're not going to help me, you say.
Anyway, you see what I mean.
Thanks for the link J man, I haven't laughed that hard in a while.
Excellent research, and best of luck with your ATX standard mobo with the AGP for advanced graphics and ISA lanes for the 7 expansion slots. Let us know how you get on with that; I assume your current mobo fits these standards.
There is no debate as to whether PCI-SIG mention 375W as the operational limit, but as I'm trying to explain to you about how the system works, this is/was dependent on what was available at the time of testing and writing of the said document, i.e. 2x 8-pin ATX PCIe power connectors - it would not be envisaged that anyone would go beyond that at the time.
Back with the base 2.0 specification they would have stated 75W for the slot, plus *whatever additional power could feasibly and realistically be supplied*. They added Molex, upped the power; 6-pin, upped the power; 8-pin, upped the power... til the documents now read 375. Another change comes along and so the documents will/may change.
As a statement of intent, for sure both parties stated they were sticking within this, and they have. They're all nice and friendly behind the scenes, even though they poke tongues out at each other when the other's not looking.
This isn't a political battleground or anything buddy, I'm merely trying to explain to you how the system works.
At the current moment in time, PCIe 2.0 (latest 2.1) 'should' (I can't confirm this) be being used within modern up-to-date mobos being released. 2.1 is (should be) the final stepping stone to 3.0; it is *practically*, for all intents and purposes, the same in everything except data usage (for argument's sake). Whether this itself carries the same electrical properties as 3.0 I can't tell you, as the documents within PCI-SIG, which you'll need membership to look at maybe, do not give specific numbers on what the power requirement for base 3.0 will be. I did speculate the other week, knowing that both nvidia and amd have released these cards that so flagrantly play the "I'm not touching you" game with 375W, amd especially as they have a bigger hand in mobos these days, that they are in fact using "3.0" in-house and are gearing up for it for future releases.
The reasoning behind many of the failures when going beyond 375 in testing gfx cards is down to how these 8-pin sockets are supplied. If you go down the standard route of conventionally hooking up your PSU and using 8-pin designated cables, your system can, and usually will, shut down as the PSU trips out. BUT if you supply the 8-pin socket with a combination of power sources, using adapters in the main, much more than 375 can be there if the card requires it - i.e. what overclockers will tend to do if they're intent on breaking the boundaries.
For sure, there will be disclaimers all over the websites of these manufacturers explaining to you how awful it will be for your system if you go beyond 375; they'll probably even use it as a means to refuse warranty. But hey - it's a disclaimer - Intel have disclaimers about how many volts they want you to put over their CPUs, and how much notice of that do enthusiasts take? In today's climate we need disclaimers on bridges saying that dangling your baby off the edge could result in harm and the bridge people won't be held responsible.
They're harking back to the old requirement of plugging molex connectors into mobos if you intended to use the second PCIe slot for graphics.
EDIT: oops, of course what I should be saying is: how dare they, this breaks so many regulations, it's ridiculous!
No, cos you've been around so long that you already know how much of a pain in the arse, opinionated, waffle-spouting member I can be.
Really only trying to advise how the respective document system works, and I probably attract the arguments by the way I write things. Sue me; it took me ages to get an English qualification, whilst at the same time I got math, electronics and physics, and went on to work in the field where we get these document releases.
And despite how members may take things, or increasingly tend to over the last so many years, I honestly don't mean any offense by any of it. If someone takes it, I'll be the first to apologise.
Is it bad I find it funny when people try to argue with Rasta?
This post pretty much says it all; pain in the arse is right. You say you have an English qualification, so no excuse there. You claim something about math, electronics and physics, so no excuse there. You claim that you will be the first to apologize if someone is offended.
Well, I am offended. Everyone who has suggested that you are wrong has been responded to with off-point, often rude, sarcastic and hyperbolic comments. So all these people don't know what they are talking about? You think you are the only person who understands the way things work? My, how lucky we all are that you are here to inform the rest of us. What would we idiots do without you.
What a load of SH*T!
The company I work for is a member of the PCI Special Interest Group and I am a consulting engineer. So when I say that you are wrong, I think everyone on this forum will appreciate my meaning.
So here goes: you are wrong.
Now anyone can argue for the sake of argument. It takes real character to admit when you are wrong. So what's it gonna be, do you have the stones to admit when you are wrong or are you that other guy?
And I do apologise, unreservedly, if any offense is taken.
I can't confirm this, I can't confirm that; what a bunch of doubletalk. All these excuses just to get out of admitting you're wrong. It's like cupojoe said, arguing for the sake of argument.
Man up and admit you're wrong.
Aye, calm it down now you lot please. /warning shot across the bow
Roger that! Apologies to all for my part, that includes you Rastalovich.
I share your sentiment.

