
Nvidia RTX 3090 and RTX 3080 Specifications Leak

Are you ready for Nvidia's fastest gaming GPU to date?


The specifications of Nvidia's RTX 3090 and 3080 have been leaked through Videocardz, revealing the CUDA core counts, the TGPs and the memory specs of Nvidia's upcoming Ampere products. 

This report seemingly confirms that Nvidia's RTX 3090 will utilise 24GB of GDDR6X memory, and 20GB variants of the RTX 3080 have also been reported, suggesting that Nvidia wants to increase the memory capacities available on consumer graphics cards. That said, Nvidia's RTX 3080 will launch with 10GB models, which should be more than enough for today's games. 

Nvidia's RTX Ampere lineup will feature Nvidia's 2nd Generation Ray Tracing Cores and 3rd Generation Tensor Cores, offer PCIe 4.0 connectivity, and support both HDMI 2.1 and DisplayPort 1.4a. 

With a TGP of 350W, many custom models of Nvidia's RTX 3090 will be powered by two 8-pin PCIe power connectors. However, Nvidia's Founders Edition version of the graphics card should use the company's new 12-pin GPU power connector. Nvidia's RTX 3090, RTX 3080 and RTX 3070 are all due to be announced next month and launch sometime in September. 

Nvidia's marketing mentions the company's use of a 7nm manufacturing process, according to Videocardz. Below is a chart detailing the leaked specifications of Nvidia's RTX 3000 series. 

 
              RTX 3090     RTX 3080     RTX 2080 Ti FE  RTX 3070
GPU           GA102-300    GA102-200    TU102-300       GA104-300
Node          7nm          7nm          12nm            7nm
CUDA Cores    5248         4352         4352            ?
Boost Clock   1695MHz      1710MHz      1635MHz         ?
Memory        24GB GDDR6X  10GB GDDR6X  11GB GDDR6      8GB GDDR6
Memory Clock  19.5 Gbps    19 Gbps      14 Gbps         16 Gbps
Memory Bus    384-bit      320-bit      352-bit         256-bit
Bandwidth     936 GB/s     760 GB/s     616 GB/s        512 GB/s
TGP           350W         320W         260W            220W
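
As a sanity check, the bandwidth row follows directly from the other two memory rows: GDDR bandwidth is the per-pin data rate multiplied by the bus width. A minimal sketch in Python, using the figures from the chart above:

```python
# Sanity check of the leaked figures: GDDR bandwidth is the effective
# per-pin data rate (Gbps) times the bus width (bits), divided by 8
# to convert bits to bytes.
cards = {
    "RTX 3090":    (19.5, 384),  # (memory clock in Gbps, bus width in bits)
    "RTX 3080":    (19.0, 320),
    "RTX 2080 Ti": (14.0, 352),
    "RTX 3070":    (16.0, 256),
}

for name, (gbps, bus_bits) in cards.items():
    print(f"{name}: {gbps * bus_bits / 8:.0f} GB/s")
# RTX 3090: 936 GB/s, RTX 3080: 760 GB/s,
# RTX 2080 Ti: 616 GB/s, RTX 3070: 512 GB/s
```

All four cards line up, so the leaked numbers are at least internally consistent.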


You can join the discussion on Nvidia's leaked RTX 3090 and RTX 3080 specifications on the OC3D Forums.


Most Recent Comments

28-08-2020, 09:10:50

Bagpuss
Well, I'll wait for benchmarks before passing final judgement, but those specs on the 3080 are very underwhelming.



My 2080 has 512GB/s of bandwidth, so 760GB/s on the 3080 is only a ~48% uptick, which suggests actual in-game performance will only be 20% or so better than the 2080 Ti.
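
Taking the quoted 512GB/s at face value and the leaked 760GB/s from the chart above, the uplift does work out at just under 50%:

```python
# Back-of-the-envelope check of the bandwidth uplift quoted above.
old_bw = 512  # GB/s, the commenter's figure for their RTX 2080
new_bw = 760  # GB/s, the leaked RTX 3080 figure
print(f"uplift: {(new_bw - old_bw) / old_bw:.0%}")  # ~48%
```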


I really can't see moving from a 2080 to a 3080 being a worthwhile upgrade, TBH.


Only moving to a 3090 would make any kind of sense in performance terms, but absolutely no sense whatsoever in terms of the financial outlay to get it.


As I suspected all along, waiting for the 4080 in 2022/23 is the only sensible upgrade path from my 2080.


Oh well, I guess that shiny new OLED TV will be where I spend my money this year.

28-08-2020, 09:22:47

trawetSluaP
Thing is, we don't know what gains there will be from the new architecture. We'll need to wait for proper benchmarks.

28-08-2020, 10:10:44

MiNo
I cannot understand the 24 GB.

This obviously will cost a lot to produce - so why this massive increase? Is it needed to get the bandwidth/speed increase? Or are there games that need this to hold textures? Seeing how people get along nicely on 8/11GB now, it seems like a larger increase than what normal 'evolution' would demand. Even 16GB on a top card seems generous.

Are there other uses/benefits for more RAM on the cards?

28-08-2020, 10:37:05

Bagpuss
Quote:
Originally Posted by MiNo View Post
I cannot understand the 24 GB.

This obviously will cost a lot to produce - so why this massive increase?

It's more about giving Nvidia the option of releasing a more powerful 3080 Super/Ti in 2021 with more than 10GB of VRAM to counter any move from AMD and their RDNA2 cards.

24GB serves no useful purpose; it's a completely unnecessary amount of VRAM. It also maybe gives Nvidia a bogus reason to justify the absurd price the 3090 will no doubt cost.

28-08-2020, 10:37:06

AlienALX
Quote:
Originally Posted by MiNo View Post
I cannot understand the 24 GB.

This obviously will cost a lot to produce - so why this massive increase? Is it needed to get the bandwidth/speed increase? Or are there games that need this to hold textures? Seeing how people get along nicely on 8/11GB now, it seems like a larger increase than what normal 'evolution' would demand. Even 16GB on a top card seems generous.

Are there other uses/benefits for more RAM on the cards?
The higher you go in resolution, the higher the memory bandwidth needs to be. Hence the multiples: memory capacity is tied to the width of the memory bus. You can't cut the memory amount down without derping the memory controller and decreasing the bandwidth along with it.

As an example, take the 1080 Ti with its 11GB:

Memory bus width: 352-bit
Memory bandwidth: 484 GB/s

vs the Titan Xp:

Memory bus width: 384-bit
Memory bandwidth: 547.58 GB/s

So you can see the bandwidth you lose by derping the memory controller and reducing the memory amount by just 1GB.
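
The mechanism behind that drop: each GDDR5X chip sits on its own 32-bit channel, so removing one 1GB chip also removes 32 bits of bus width and the bandwidth that comes with it. A minimal sketch, assuming roughly 11 Gbps memory on the 1080 Ti and 11.4 Gbps on the Titan Xp:

```python
# Each GDDR5X chip occupies a 32-bit channel, so dropping a 1GB chip
# also drops 32 bits of memory bus and its share of the bandwidth.
def bandwidth_gbs(gbps_per_pin: float, bus_bits: int) -> float:
    return gbps_per_pin * bus_bits / 8

titan_xp = bandwidth_gbs(11.4, 384)     # 12 chips x 32-bit = 384-bit
gtx_1080_ti = bandwidth_gbs(11.0, 352)  # 11 chips x 32-bit = 352-bit
print(f"Titan Xp: {titan_xp:.1f} GB/s")    # ~547.2 GB/s
print(f"1080 Ti:  {gtx_1080_ti:.1f} GB/s") # ~484.0 GB/s
```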

Now, memory bandwidth was kinda important on the Xp and the Ti, but nowhere near as important as it will be now. 4K is incredibly demanding on memory bandwidth, and next-gen games (especially with RT) will be very demanding on VRAM. So if they cut the VRAM down to 10GB, all of that power would be hobbled by the low memory bandwidth.

The bigger the textures get, the higher that bandwidth needs to be. This is why AMD were so stupid to use HBM: at the time, having all that bandwidth did squat. However, with PCIe 4.0, and the fact that NVMe drives on PCIe 4.0 are ludicrously fast, they can finally utilise the storage speed too.

You know how, for ages, it wasn't worth storing a game on an SSD because the load times barely improved, apart from a small handful of titles?

Well you can totally expect that to change once these new consoles launch.

Why wasn't it improved before? Because the consoles did not come with SSDs.

You also need to bear in mind that even though the PC has seen a massive uptick in users (because of games like Fortnite and PUBG and their popularity among younger gamers), games are still coded primarily with the consoles in mind. Hence, any improvements for the PC (i.e. much faster storage, multiple GPUs, remember those?) need to be added in later, and developers usually do as little of that as they need to.

Apart from Rockstar, who genuinely do put in a lot of effort on their, ahem, "ports".

Quote:
Originally Posted by Bagpuss View Post
It's more about giving Nvidia the option of releasing a 3080 Super/Ti in 2021 with more than 10GB of VRAM.


24GB serves no useful purpose other than that, and maybe a bogus reason to justify the absurd price the 3090 will no doubt be priced at.
That is not true. See the above, and what happens when you derp the memory by just 1GB.

There are many falsehoods doing the rounds at the moment as to why Nvidia are "ripping us off innit" and "their margins are higher than evarrr!", both of which are BS.

Apparently Nvidia work to a 60% margin. Always have, always will. The reason GPUs have gotten expensive? Because we keep demanding more and more performance, so they deliver it.
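
Reading that as a fixed gross margin, the selling price simply scales with the build cost, which is the point: a dearer chip means a dearer card even if the margin never moves. A toy calculation with entirely made-up build costs:

```python
# At a fixed gross margin, price moves in lockstep with build cost:
# price = cost / (1 - margin).
margin = 0.60  # the 60% figure quoted above
for build_cost in (250, 400, 600):  # USD, purely illustrative
    print(f"build ${build_cost} -> sell ${build_cost / (1 - margin):.0f}")
# build $250 -> sell $625; build $400 -> sell $1000; build $600 -> sell $1500
```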

Also, like I did elsewhere, I will try to explain why the 2080Ti cost so much. Again, these are facts; ignore them if you like, but it's very narrow-minded to do so!

1. Nvidia were not going to release Turing; it was going to be Ampere. Samsung's node failed and it was delayed, and Nvidia had to wait for Samsung to rework the node to make it even usable. It started as 8nm; now it is 7nm. So they needed a whole node revision just to get it to work.

2. The 2080Ti die was absolutely frigging ENORMOUS: 754mm². Compare that to the 1080Ti at around 471mm², and Turing is roughly 60% larger than Pascal.

3. Nvidia did not want to use TSMC, which is why they got involved with Samsung. Why? Because TSMC are really expensive. Turing was basically a slightly shrunken Pascal on TSMC with the Tensor cores bolted on, hence the massive die. Massive monolithic dies cost a fortune, and failure rates soar (because one bad area means a dead core).

4. Nvidia had a deal with TSMC to provide only working dies, i.e. TSMC would swallow some of the dead ones. That means TSMC would have added at least some of that cost back onto the 2080Ti die; there's no way they would have swallowed it all.

5. TSMC *are* expensive. AMD are OK because they go to TSMC for Ryzen with its chiplet design, meaning lots of working dies per wafer. However, as already explained, the 2080Ti die was bloody huge, meaning huge cost (see the rough sketch below).
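
To put rough numbers on points 2 and 5: a bigger die means fewer candidates per wafer, and under a classic Poisson yield model the fraction of defect-free dies also falls as the area grows. A back-of-the-envelope sketch; the wafer cost and defect density are placeholder values, not real TSMC figures:

```python
import math

def dies_per_wafer(die_area_mm2: float, wafer_diameter_mm: float = 300) -> int:
    """Common approximation for die candidates on a round wafer."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    edge_loss = math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2)
    return int(wafer_area / die_area_mm2 - edge_loss)

def poisson_yield(die_area_mm2: float, defects_per_cm2: float = 0.1) -> float:
    """Poisson yield model: fraction of dies that land zero defects."""
    return math.exp(-defects_per_cm2 * die_area_mm2 / 100)

WAFER_COST = 8000  # USD per wafer -- a hypothetical number
for name, area in [("TU102 (2080 Ti)", 754), ("GP102 (1080 Ti)", 471)]:
    good = dies_per_wafer(area) * poisson_yield(area)
    print(f"{name}: ~{good:.0f} good dies, ~${WAFER_COST / good:.0f} per die")
# TU102 (2080 Ti): ~32 good dies, ~$246 per die
# GP102 (1080 Ti): ~74 good dies, ~$108 per die
```

On those toy numbers, each good Turing die costs well over twice as much as a good Pascal die, which is the direction of the argument above.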


And that is quite probably why Turing should get spanked by Ampere: Ampere is the ground-up design, not Turing. As I explained, Turing was a slightly shrunken Pascal on a massive die with Tensor cores.

Hence the supposed enormous uplift in RT performance.

To add:

These Samsung dies are not as good as TSMC's, BTW. The enormous 2080Ti used a 250W TGP; the 3090 uses 350W.

The 3090 is a failed Quadro, but not in the usual sense. It is not a complete failure; it just uses way too much power, so it can't be used as a Quadro, as those things need to go in rack servers and the like and need to behave perfectly when it comes to thermals. You cannot shove a 500W+ card (overclocked) into a server.

With us, the home users, they will leave that to us: water blocks, loads of airflow, big-ass coolers, etc.
