AMD "Fusion" Vision for the Future - on-die GPU
AMD - Future vision and integration technologies
Published: 8th November 2006 | Source: ATI
AMD have fallen slightly behind in the CPU wars in a lot of technology enthusiasts' eyes. With Intel ramping up Core 2 Duo, AMD's previous title of "best gaming chip" has fallen by the wayside. However, AMD are by no means struggling, and they are certainly not resting on their laurels. After a press conference attended by OC3D, featuring talks from ATI's (ex) CTO Bob Drebin and AMD's CTO Phil Hester, we were given a brief but interesting insight into where AMD are going.
How AMD/ATI see the market
AMD stated that the market is pushing for more and more cores and more and more processing power, but is forgetting that these have to actually be utilised to be worth anything. Software support seems to be lagging well behind hardware at the moment, and although this is being worked on, is it really worth having 32 cores on a CPU?
What AMD see as the future is adding a proper instruction set to x86. It was done with MMX, SSE and x64, and they think we need an instruction set that adds support for GPUs on CPUs. It may seem I'm jumping ahead a little here, but I'll come back to that.
I will say that AMD were first off the blocks with 64-bit support, and Intel chose to adopt AMD's instruction set for that...
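To make the instruction-set point concrete, here is roughly how software picks up an existing x86 extension today: check the CPUID feature flags, then use the new instructions, falling back to plain x86 if they aren't there. A GPU-on-CPU extension would presumably be exposed to programmers the same way. This is only a sketch using SSE as the stand-in; none of it is AMD's proposed instruction set.

```c
/* Illustrative only: detecting and using an x86 extension (SSE here).
 * A future GPU instruction-set extension would presumably be exposed
 * to software in the same detect-then-use fashion.
 * Build with something like: gcc -msse demo.c */
#include <stdio.h>
#include <xmmintrin.h>          /* SSE intrinsics */

int main(void)
{
    /* Read the CPUID feature flags via a GCC builtin. */
    if (!__builtin_cpu_supports("sse")) {
        puts("No SSE support - fall back to plain x86 code.");
        return 0;
    }

    /* One SSE instruction adds four single-precision floats at once. */
    __m128 a = _mm_set_ps(4.0f, 3.0f, 2.0f, 1.0f);
    __m128 b = _mm_set_ps(8.0f, 7.0f, 6.0f, 5.0f);
    __m128 sum = _mm_add_ps(a, b);

    float out[4];
    _mm_storeu_ps(out, sum);
    printf("%.1f %.1f %.1f %.1f\n", out[0], out[1], out[2], out[3]);
    return 0;
}
```

The important part is the runtime check: new extensions only become useful once software detects them and takes the fast path, which is exactly why AMD keep stressing that any GPU-on-CPU instructions would need to be adopted as a standard.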
The Hardware side
AMD's view is that the GPU has become such a useful tool in the computing world that it now needs to sit at the heart of systems. They also focused on the phrase "performance per watt per dollar", something AMD say the industry needs to start focusing on. With data centres becoming more and more CPU-heavy, AMD want to show how they can reduce the power overhead.
Why GPU on-die?
The GPU has come a long way in recent years, becoming a massively parallel processing unit. With a large number of pipelines and ever-increasing clock speeds, the GPU can do things that the CPU will struggle at. As an example AMD used Folding@Home, a distributed computing project that uses people's PCs to fold proteins. I won't say too much more on this other than that it runs a whole lot faster on the GPU than it does on the CPU because of the GPU's massively parallel nature.
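To give a feel for why, here is a minimal sketch of that kind of workload written in CUDA. It is purely illustrative: the Folding@Home GPU client of the time ran on ATI hardware through a different interface, and the "particle update" below is a toy stand-in, not real folding code. The point is the shape of the problem: every element can be updated independently, so a GPU can throw one thread at each particle while a CPU core walks through them one at a time.

```cuda
// Illustrative CUDA, not the real Folding@Home client: each particle
// update is independent, so the GPU runs one thread per particle.
#include <stdio.h>
#include <cuda_runtime.h>

// CPU version (shown only for contrast): one particle per loop step.
void step_cpu(float *pos, const float *vel, int n, float dt)
{
    for (int i = 0; i < n; ++i)
        pos[i] += vel[i] * dt;
}

// GPU version: every thread handles one particle, all at once.
__global__ void step_gpu(float *pos, const float *vel, int n, float dt)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        pos[i] += vel[i] * dt;
}

int main(void)
{
    const int n = 1 << 20;                  // ~1 million particles
    const float dt = 0.01f;

    float *pos, *vel;
    cudaMallocManaged(&pos, n * sizeof(float));
    cudaMallocManaged(&vel, n * sizeof(float));
    for (int i = 0; i < n; ++i) { pos[i] = 0.0f; vel[i] = 1.0f; }

    // Launch enough 256-thread blocks to cover every particle.
    step_gpu<<<(n + 255) / 256, 256>>>(pos, vel, n, dt);
    cudaDeviceSynchronize();

    printf("pos[0] after one step: %f\n", pos[0]);
    cudaFree(pos);
    cudaFree(vel);
    return 0;
}
```

The launch configuration simply covers the whole data set with 256-thread blocks; with hundreds of pipelines available, those blocks run side by side, which is the parallelism AMD are pointing at.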
AMD also went on to demonstrate a rather fun physics simulation on both the CPU and the GPU, showing that the GPU manages to give far faster and more detailed physics than even the fastest current AMD chip can.
Above is a basic view of what AMD think this could lead to; let's see a little more detail:
In example two, however, we see a far more powerful machine that could use several CPU cores and several GPU cores for different things, perhaps a high-end, high-power commercial unit for vehicle simulation or medical imaging.
For me the exciting thing comes when you have a dual-core CPU handling the game engine and AI, the embedded GPU handling physics and anything else that can easily be programmed for it, and a discrete ultra-fast add-in GPU for the graphics. This is where we see AMD admitting that they believe there will always be discrete cards and that their business is not to get rid of them. Remember, they want "performance per watt per dollar".
Thus we lead on to Torrenza, AMD's "open development platform". This platform is something AMD hope to build their whole range of next-gen products around. With applications ranging from mobile platforms to network appliances and even gaming platforms, Torrenza looks to be a neat solution to a complex problem. AMD are openly encouraging the development of a number of specialised processors; given the huge diversity that abounds in the tech world, AMD cited a few examples: Java, XML, floating point and media processing. Torrenza allows these to be integrated alongside the more traditional components of a system.
Before we get too excited about this technology (and it is exciting), let us pause and look at some issues:
• AMD's GPU-on-CPU x86 instruction set has to be accepted as a standard (just as x64 was).
• Software has to be written to utilise this new technology, and there is little enough support even for dual cores.
• AMD were a little sketchy on what would happen regarding graphics memory. Would it be integrated onto the motherboard, or would the GPU share the slower system memory?
• Even sketchier were the details of the speed of these GPUs, although we were assured they would be "many times" faster than a chipset graphics solution (referring to Intel's GMA parts).
• What are AMD doing in the short term? I heard mention in passing of "some exciting new products coming soon", but nothing solid apart from the 2007 release of the native quad core.
Aside from those worries, AMD's future plans look pretty exciting. If they manage to pull it off and their on-die GPU x86 instruction set gains industry approval, this may once again be a great time to be a fan of AMD. With ATI's blisteringly fast GPUs now owned by AMD (n.b. the ATI brand stays) and AMD still up there in the processor race, perhaps we are going to see an AMD that reaches far further than just making exceptionally good CPUs.
With Vista on the way, this is perhaps the optimal time for this kind of innovation. It would reduce OEMs' overheads by putting graphics support onto the CPU, and mean that "Vista Ready" laptops and desktops could start with some pretty powerful graphics hardware, ultimately giving the end-user a better "Vista experience".
As a last word, here's something that made me chuckle. AMD used the "Top500" companies' websites as an example to show the power of an on-die GPU. With an estimated 1000 nodes typically, they came up with this:
GPU = Graphics Processing Unit
CPU = Central Processing Unit
All pictures are courtesy of AMD