AMD "Fusion" Vision for the Future - on-die GPU

AMD - Future vision and integration technologies


AMD have fallen slightly behind in the CPU wars in the eyes of many technology enthusiasts. With Intel ramping up the Core 2 Duo, AMD's previous title of "best gaming chip" has fallen by the wayside. However, AMD are by no means struggling, and they are certainly not resting on their laurels. After a press conference attended by OC3D, featuring talks from Bob Drebin, ATI's (now former) CTO, and Phil Hester, AMD's CTO, we were given a brief but interesting insight into where AMD are going.

How AMD/ATI see the market

AMD stated that the market is going for more and more cores and more and more processing power, but is forgetting that those cores have to actually be utilised to be worth anything. Software support seems to be pretty far behind hardware at the moment, and although this is being worked on, is it really worth having 32 cores on a CPU?
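To put a rough number on that worry, here is a back-of-the-envelope sketch (mine, not AMD's) using Amdahl's law: if only part of a program can run in parallel, piling on cores gives rapidly diminishing returns. The 80% figure is an assumption purely for illustration.

```c
#include <stdio.h>

/* Amdahl's law: speedup = 1 / ((1 - p) + p / n), where p is the
 * fraction of the work that can run in parallel and n is the number
 * of cores. p = 0.80 is an assumed figure for illustration only. */
int main(void) {
    const double p = 0.80;
    const int cores[] = {1, 2, 4, 8, 16, 32};
    for (int i = 0; i < 6; i++) {
        double speedup = 1.0 / ((1.0 - p) + p / cores[i]);
        printf("%2d cores -> %5.2fx speedup\n", cores[i], speedup);
    }
    return 0;
}
```

Even with 80% of the work parallelised, 32 cores only buy around a 4.4x speedup - which is exactly why the software side matters so much.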

What AMD see as the future is adding a proper instruction set to x86. It was done before with MMX, SSE and x64, and they think we now need an instruction set that adds support for GPUs on CPUs. It may seem like I'm jumping ahead a little here, but I'll come back to that.
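For a flavour of what "adding an instruction set" means in practice, here is how software already detects MMX and SSE at runtime - a minimal sketch assuming GCC on x86, whose __builtin_cpu_supports() wraps the CPUID instruction. A GPU extension would presumably be advertised through a new feature flag in much the same way.

```c
#include <stdio.h>

/* Runtime detection of existing x86 instruction-set extensions.
 * Assumes GCC on an x86 machine; __builtin_cpu_supports() queries
 * the CPUID instruction under the hood. */
int main(void) {
    printf("MMX:  %s\n", __builtin_cpu_supports("mmx")  ? "yes" : "no");
    printf("SSE:  %s\n", __builtin_cpu_supports("sse")  ? "yes" : "no");
    printf("SSE2: %s\n", __builtin_cpu_supports("sse2") ? "yes" : "no");
    /* A GPU-on-CPU extension would need its own flag here -- no such
     * flag exists today, which is rather AMD's point. */
    return 0;
}
```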

[Slide: AMD's future plan]

Remember that this is how AMD see it going. Intel's views have not been made clear to us, and it may be that they have plans of their own. I have also heard that nVidia plan on making CPUs... but whether they can do that on their own, without the specialist knowledge of a good CPU firm, is another matter.

I will say that AMD were first off the blocks with 64-bit support, and Intel chose to adopt their instruction set for it...

The Hardware side

AMD's view is that the GPU has become such a useful tool in the computing world that it now needs to sit at the heart of systems. They also focused on the phrase "performance-per-watt-per-dollar", something AMD say the industry needs to start focusing upon. With data centres becoming more and more CPU-heavy, AMD want to show how they can reduce the power overhead.
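As a concrete illustration of that metric - with entirely hypothetical numbers of my own, not AMD's - divide throughput by power draw and again by system cost:

```c
#include <stdio.h>

/* "Performance-per-watt-per-dollar" as a single figure of merit.
 * Both systems below are invented examples; the point is that a
 * slower but cheaper, cooler part can win on the combined metric. */
int main(void) {
    double discrete_box   = 400.0 / 250.0 / 2000.0; /* GFLOPS / W / $ */
    double integrated_box = 300.0 / 150.0 / 1500.0;
    printf("discrete CPU + GPU: %f GFLOPS per watt per dollar\n", discrete_box);
    printf("integrated CPU/GPU: %f GFLOPS per watt per dollar\n", integrated_box);
    return 0;
}
```

On those made-up numbers, the integrated box delivers three-quarters of the raw performance yet two-thirds more performance-per-watt-per-dollar.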

[Slide: maximum power usage]

The slide shows that, in the current model, the CPU > PCI-e bus > GPU interface hinders performance per watt. AMD have plans to stop this happening.

Why GPU on-die?

The GPU has come a long way in recent years, becoming a massively parallel processing unit. With a large number of pipes, each getting faster and faster, the GPU can do things that the CPU will struggle at. As an example AMD used Folding@Home, a distributed computing project that utilises people's PCs to fold proteins. I won't say too much more on this other than that it runs a whole lot faster on the GPU than it does on the CPU, due to its massively parallel nature.
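To see why, consider the shape of the work. The toy force loop below is my own sketch (not Folding@Home's actual code): each particle's force is computed from read-only positions, so every iteration of the outer loop is independent. A GPU can hand one iteration to each of its many pipes, while a CPU core has to walk through them one at a time.

```c
#include <math.h>
#include <stdio.h>

#define N 1024

/* Toy pairwise-force step. force[i] depends only on the read-only
 * pos[] array, so all N outer iterations can run simultaneously on
 * a massively parallel processor. */
void compute_forces(const float pos[N][3], float force[N][3]) {
    for (int i = 0; i < N; i++) {            /* on a GPU: one pipe per i */
        float f[3] = {0.0f, 0.0f, 0.0f};
        for (int j = 0; j < N; j++) {
            if (j == i) continue;
            float d[3], r2 = 1e-6f;          /* softening avoids divide-by-zero */
            for (int k = 0; k < 3; k++) {
                d[k] = pos[j][k] - pos[i][k];
                r2 += d[k] * d[k];
            }
            float inv_r3 = 1.0f / (r2 * sqrtf(r2));
            for (int k = 0; k < 3; k++)
                f[k] += d[k] * inv_r3;
        }
        for (int k = 0; k < 3; k++)
            force[i][k] = f[k];
    }
}

int main(void) {
    static float pos[N][3], force[N][3];
    for (int i = 0; i < N; i++)
        pos[i][0] = (float)i;                /* particles spaced along a line */
    compute_forces(pos, force);
    printf("force on particle 0: %g\n", force[0][0]);
    return 0;
}
```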

AMD also went on to demonstrate a rather fun physics simulation on both the CPU and GPU - showing that the GPU manages to give far faster and more detailed physics than even the fastest AMD chip can currently manage.

Above is what AMD think this could lead to, in a basic way; let's see a little more detail:

[Slide: a flexible approach]

Here we see ATI demonstrating that you can utilise the power of the GPU in many different ways. The first example is perhaps for a lower-end desktop or a low-power HTPC/laptop. With Vista now fully utilising the GPU for OS acceleration, this would mean there needn't be a discrete graphics card, with the additional power overhead that comes with it.

Example two, however, is a far more powerful machine that could use several CPU cores and several GPU cores for different things: perhaps a high-end, high-power commercial unit for vehicle simulation or medical imaging.

For me the exciting thing comes when you have a dual-core CPU handling the game engine/AI, the embedded GPU handling physics and anything else that can easily be programmed for it, and a discrete, ultra-fast add-in GPU for the graphics. This is where we see AMD admitting that they believe there will always be discrete cards and that their business is not to get rid of them. Remember, they want "performance-per-watt-per-dollar".
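Here is a sketch of that division of labour, with plain threads standing in for the three processors. All of the names are invented, and a real game would of course do this with proper engine code; the point is simply that the three jobs run concurrently.

```c
#include <pthread.h>
#include <stdio.h>

/* Three concurrent roles from the scenario above: CPU cores run the
 * game engine and AI, the on-die GPU runs physics, and a discrete
 * add-in GPU renders. Each thread just announces its job. */
static void *cpu_cores(void *arg)    { puts("CPU cores:    game engine + AI"); return NULL; }
static void *on_die_gpu(void *arg)   { puts("on-die GPU:   physics");          return NULL; }
static void *discrete_gpu(void *arg) { puts("discrete GPU: graphics");         return NULL; }

int main(void) {
    pthread_t t[3];
    pthread_create(&t[0], NULL, cpu_cores, NULL);
    pthread_create(&t[1], NULL, on_die_gpu, NULL);
    pthread_create(&t[2], NULL, discrete_gpu, NULL);
    for (int i = 0; i < 3; i++)
        pthread_join(t[i], NULL);
    return 0;
}
```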

[Slide: platform choices]

Above you can see some platform choices for vendors. This is where AMD can learn from ATI (aside from their quite obviously excellent GPUs). ATI have managed to get themselves into a lot of sectors of the technology market: from mobile phones to HDTVs and palmtop PCs, ATI are present in a great many media applications. AMD see this as an area where their platform needs to be pervasive.


Thus we lead on to Torrenza, AMD's "open development platform". This platform is something AMD hope to build their whole range of next-gen products around. With applications ranging across mobile platforms, networking and even gaming platforms, Torrenza looks to be a neat solution to a complex problem. AMD are openly encouraging the development of a number of specialised processors, and with the huge diversity that abounds in the tech world, they have cited a few examples: Java, XML, floating point and media processing. Torrenza allows these to be integrated alongside the more traditional components of a system.
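There is no public Torrenza API to point at, but the software pattern it implies is a familiar one: enumerate whatever specialised processors the platform exposes and route work to them, falling back to the CPU otherwise. Everything in this sketch - the types, the names, the lone XML engine - is invented for illustration.

```c
#include <stdio.h>

/* Invented illustration of the Torrenza idea: the platform reports
 * which specialised processors are plugged in, and software routes
 * each job to an accelerator if one is present, else to the CPU. */
typedef enum { ACCEL_JAVA, ACCEL_XML, ACCEL_FP, ACCEL_MEDIA } accel_kind;

typedef struct {
    accel_kind kind;
    const char *name;
    void (*run)(const char *job);
} accelerator;

static void run_on_cpu(const char *job) { printf("CPU fallback: %s\n", job); }
static void run_on_xml(const char *job) { printf("XML engine:   %s\n", job); }

int main(void) {
    /* Pretend the platform found one XML accelerator at boot. */
    accelerator found[] = { { ACCEL_XML, "xml0", run_on_xml } };
    const int nfound = 1;

    const char *job = "parse invoice.xml";
    void (*run)(const char *) = run_on_cpu;
    for (int i = 0; i < nfound; i++)
        if (found[i].kind == ACCEL_XML)
            run = found[i].run;
    run(job);
    return 0;
}
```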


As you can see, AMD are reaching across a huge area with this technology, hoping they can offer an all-in-one solution to every computing need.


Before we get too excited about this technology - and it is exciting - let us pause and look at some issues:

• AMD's GPU-on-CPU x86 instruction set has to be accepted as a standard (just as x64 was).
• Software has to be written to utilise the new technology, and there is little enough software support even for dual cores as it is.
• AMD were a little sketchy on what would happen regarding graphics memory. Would it be integrated onto the motherboard, or would the on-die GPU share the slower system memory?
• Even sketchier were the details of the speed of these GPUs, although we were assured they would be "many times" faster than a chipset graphics solution (referring to Intel's GMA parts).
• What are AMD doing in the short term? I heard mention in passing of "some exciting new products coming soon", but nothing solid apart from the 2007 release of the native quad core.

Aside from those worries, AMD's future plans look pretty exciting. If they manage to pull it off and their on-die GPU x86 instruction set gets industry approval, this may once again be an exciting time to be a fan of AMD. With ATI's blisteringly fast GPUs now owned by AMD (n.b. the ATI brand stays) and AMD still up there in the processor race, perhaps we are going to see an AMD that reaches far further than just making exceptionally good CPUs.

With Vista on the way, this is perhaps the optimal time for this kind of innovation. It would reduce OEMs' overheads by putting graphical support onto the CPU, and mean that "Vista Ready" laptops and desktops could start out with some pretty powerful graphics hardware, ultimately giving the end-user a better "Vista experience".

As a last word, here's something that made me chuckle. AMD used the "Top500" supercomputer list as an example to show the power of an on-die GPU. With a typical system estimated at 1,000 nodes, they came up with this:

[Slide: a petaflop!]

A petaflop, anyone? (Spread across 1,000 nodes, that works out to roughly a teraflop per node.)

Discuss in our Forums


GPU = Graphics Processing Unit
CPU = Central Processing Unit

All pictures are courtesy of AMD


Most Recent Comments

30-10-2006, 10:18:22

We went to an ATI/AMD press conference, and I try to explain (nearly) all that we heard.

Take a look

30-10-2006, 10:52:03

i understood hardly any of that, but it all seems very exciting.

1 question though, if GPU and CPU are on the same chip, what will this mean in terms of heat? surely chips are going to get hotter, probably back to the prescott days?

30-10-2006, 10:57:37

Well, this is another thing that wasn't covered - we will have to see.

But remember, AMD's aim is "performance-per-watt-per-dollar", so hopefully they will be making them low-power parts!

30-10-2006, 12:26:46


from what i can tell this definitely looks like the future of graphics!

11-12-2006, 22:21:03

i reckon we will be seeing "media center" pcs built into televisions, or the whole thing the size of a hddvd with a minimal heatsink, heatpipes and a coupla fans (or they may even utilise liquid cooling).

this way, i reckon we will be going back to the apple designs (all in one computers inc the monitor), but at the same slimness as of todays' TFTs.

at least for the general/media person.

14-01-2007, 14:43:51

well its one of those things you got to wait and see to get more answers

14-01-2007, 14:50:37

Bah and what of upgradability.

14-01-2007, 14:53:16

Well, there would be a standard socket for all CPU/GPU combos, and AMD envisage having a range of choices from single-core CPU through to tri-core CPU/GPU

14-01-2007, 14:55:00

Right but if you want to upgrade your gpu youd need a new cpu.

14-01-2007, 14:57:52

Not necessarily, as AMD/ATI say that discrete graphics cards on PCI-e will still be used no matter what

Then the on-die GPU could be used for other stuff (physics etc)

14-01-2007, 15:17:32

Who comes up with the names? Yotta FLOP

14-01-2007, 16:01:38

PP Mguire
See this is exactly what i was talking about on another forum back in early 2006. They thought i was crazy and i got flamed, but yet look at what they are doing! This is awesome. *points to self* cant wait.

14-01-2007, 16:10:02

Then my only question is how will overclocking be affected :P
