AMD "Fusion" Vision for the Future - on-die GPU

AMD - Future vision and integration technologies

Introduction

AMD have fallen slightly behind in the CPU wars in the eyes of a lot of technology enthusiasts. With Intel ramping up the Core 2 Duo, AMD's previous title of "best gaming chip" has fallen by the wayside. However, AMD are by no means struggling, and they are certainly not resting on their laurels. After a press conference attended by OC3D, featuring talks from Bob Drebin, ATI's (ex) CTO, and Phil Hester, AMD's CTO, we were given a brief but interesting insight into where AMD are going.


How AMD/ATI see the market

AMD stated that the market is pushing for more and more cores and ever more processing power, while forgetting that all of this has to actually be utilised to be worth anything. Software support lags well behind hardware at the moment, and although this is being worked on, is it really worth having 32 cores on a CPU?

What AMD see as the future is adding a proper instruction set extension to x86. It was done with MMX, SSE and x86-64, and they think we now need an extension that adds support for GPUs on CPUs. It may seem I'm jumping ahead a little here, but I'll come back to that.
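To make "adding an instruction set" concrete, here is a minimal sketch in C using the real SSE intrinsics that shipped with one of the extensions just mentioned. The analogy is mine, not AMD's stated plan: presumably a GPU-on-CPU extension would surface its parallel hardware to compilers and programmers in a broadly similar way.

#include <stdio.h>
#include <xmmintrin.h>  /* SSE intrinsics - a real, shipping x86 extension */

/* Adds four floats in a single SSE instruction (addps). */
void add4(const float *a, const float *b, float *out)
{
    __m128 va = _mm_loadu_ps(a);            /* load 4 floats */
    __m128 vb = _mm_loadu_ps(b);
    _mm_storeu_ps(out, _mm_add_ps(va, vb)); /* 4 adds at once */
}

int main(void)
{
    float a[4] = {1, 2, 3, 4}, b[4] = {10, 20, 30, 40}, r[4];
    add4(a, b, r);
    printf("%g %g %g %g\n", r[0], r[1], r[2], r[3]);
    return 0;
}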

[Image: AMD's future plan]

Remember that this is how AMD see it going. Intel's views have not been made clear to us, and it may be that they have plans of their own. I have also heard that nVidia plan on making CPUs... but whether they can do that on their own, without the specialist knowledge of a good CPU firm, is another thing.

I will say that AMD were first off the blocks with 64-bit support, and Intel chose to adopt their instruction set for that...


The Hardware side


AMD's view is that the GPU has become such a useful tool in the computing world that it now needs to sit at the heart of our systems. They also focused on the phrase "performance-per-watt-per-dollar", something AMD say the industry needs to start focusing upon. With data centres becoming more and more CPU-heavy, AMD want to show how they can reduce the power overhead.
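To make that metric concrete (the figures here are illustrative assumptions, not AMD's): a part delivering 100 GFLOPS at 100 W and $500 scores 100 / (100 × 500) = 0.002 GFLOPS per watt-dollar, and halving the power draw at the same performance and price doubles that score.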

[Image: max power usage]

This shows how the CPU > PCI-e bus > GPU interface hinders performance-per-watt in the current model. AMD have plans to change this.


Why GPU on-die?


The GPU has come a long way in recent years, becoming a massively parallel processing unit. With a large number of pipelines and ever-increasing clock speeds, the GPU can do things that the CPU struggles at. As an example AMD used Folding@home, a distributed computing project that utilises people's PCs to fold proteins. I won't say much more on this other than that, due to its massively parallel nature, it runs a whole lot faster on the GPU than it does on the CPU.
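To illustrate why this sort of workload maps so well onto hundreds of pipes, here is a generic N-body-style force loop in C (my own sketch, not Folding@home's actual code): every iteration of the outer loop is independent of the others, so a GPU can in principle work on many bodies simultaneously while a CPU grinds through them a few at a time.

#include <math.h>

/* Softened inverse-square forces between n bodies. Each fx[i], fy[i]
   depends only on the positions arrays, never on another iteration's
   output - exactly the independence a massively parallel GPU exploits. */
void accumulate_forces(int n, const float *x, const float *y,
                       float *fx, float *fy)
{
    for (int i = 0; i < n; i++) {               /* independent per body */
        float ax = 0.0f, ay = 0.0f;
        for (int j = 0; j < n; j++) {
            if (j == i) continue;
            float dx = x[j] - x[i];
            float dy = y[j] - y[i];
            float r2 = dx * dx + dy * dy + 1e-9f; /* softening term */
            float inv = 1.0f / (r2 * sqrtf(r2));  /* 1 / r^3 */
            ax += dx * inv;
            ay += dy * inv;
        }
        fx[i] = ax;
        fy[i] = ay;
    }
}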

AMD also went on to demonstrate a rather fun physics simulation on both the CPU and GPU - showing that the GPU manages far faster and more detailed physics than even the fastest current AMD chip.

Above is what AMD think this could lead to in a basic way; let's see a little more detail:

[Image: flexible approach]

Here we see ATI demonstrating that you can utilise the power of the GPU in many different ways. The first one we see is perhaps for a lower-end desktop or low-power HTPC/laptop. With Vista now fully utilising the GPU for OS acceleration, this would mean that there needn't be a discrete graphics card, with the additional power overhead that comes with it.

In example two, however, we see a far more powerful machine that could use several CPU cores and several GPU cores for different things - perhaps a high-end, high-power commercial unit for vehicle simulation or medical imaging.

For me the exciting thing comes when you have a dual-core CPU handling the game engine/AI, the embedded GPU handling physics and anything else that can easily be programmed in, and a discrete, ultra-fast add-in GPU for the graphics. This is where we see AMD admitting that they believe there will always be discrete cards, and that their business is not to get rid of them. Remember, they want "performance-per-watt-per-dollar".
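As a rough sketch of that division of labour in C (every name below is invented for illustration; AMD showed no API of this kind):

#include <stdio.h>

typedef struct { int tick; } Frame;

/* Invented stubs, one per processor in the scheme above. */
static void update_ai_on_cpu(Frame *f)    { f->tick++; } /* CPU cores: engine + AI */
static void run_physics_on_igpu(Frame *f) { (void)f; }   /* on-die GPU: physics    */
static void render_on_addin_gpu(Frame *f) { (void)f; }   /* discrete GPU: graphics */

int main(void)
{
    Frame f = { 0 };
    for (int i = 0; i < 3; i++) {  /* a few frames */
        update_ai_on_cpu(&f);
        run_physics_on_igpu(&f);
        render_on_addin_gpu(&f);
    }
    printf("simulated %d frames\n", f.tick);
    return 0;
}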

[Image: platform choices]

Above you can see some platform choices for vendors. This is where AMD can learn from ATI (aside from their obviously excellent GPUs). ATI have managed to get themselves into a lot of sectors of the technology market: from mobile phones to HDTVs and palmtop PCs, ATI are embedded in a lot of media applications. AMD see this as somewhere their platform needs to be equally pervasive.


Torrenza


Thus we lead on to Torrenza, AMD's "open development platform". This platform is something AMD hope to build their whole range of next-gen products around. With applications ranging from mobile platforms and networking to gaming, Torrenza looks to be a nice solution to a complex problem. AMD are openly encouraging the development of a number of specialised processors, and with the huge diversity that abounds in the tech world, they cited a few examples: Java, XML, floating point and media processing. Torrenza allows these to be integrated alongside the more traditional components of a system.
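A pattern like the following is what such a platform would encourage - probe for the specialised part, use it if fitted, fall back to the CPU otherwise. Everything here is invented for illustration; Torrenza concerns the socket and interconnect, not any API like this:

#include <stdbool.h>
#include <string.h>

/* Invented probe: in a real system this would detect a specialised
   processor (an XML engine, say) sitting alongside the CPU. */
static bool xml_engine_present(void) { return false; }

/* Placeholder implementations - real ones would do actual parsing. */
static long parse_xml_software(const char *doc)    { return (long)strlen(doc); }
static long parse_xml_accelerated(const char *doc) { return (long)strlen(doc); }

/* Use the accelerator when fitted, plain CPU code otherwise. */
long parse_xml(const char *doc)
{
    return xml_engine_present() ? parse_xml_accelerated(doc)
                                : parse_xml_software(doc);
}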

[Image: Torrenza]

As you can see, AMD are reaching across a huge area with this technology, hoping they can give an all-in-one solution to all computing needs.


Conclusion


Before we get too excited about this technology - and it is exciting - let us pause and look at some issues:

• AMD's GPU-on-CPU x86 instruction set extension has to be accepted as a standard (just like x64).
• Software has to be written to utilise the new technology, and there is little enough support for dual cores as it is.
• AMD were a little sketchy on what would happen regarding graphics memory. Would it be integrated onto the motherboard, or would the GPU share the slower system memory?
• Even sketchier were the details of the speed of these GPUs, although we were assured they would be "many times" faster than a chipset graphics solution (referring to Intel's GMA parts).
• What are AMD doing in the short term? I heard mention in passing of "some exciting new products coming soon", but nothing solid apart from the 2007 release of the native quad core.

Aside from those worries, AMD's future plans look pretty exciting. If they manage to pull it off and their on-die GPU x86 instruction set gets industry approval, this may once again be an exciting time to be a fan of AMD. With ATI's blisteringly fast GPUs now owned by AMD (n.b. the ATI brand stays) and AMD still up there in the processor race, perhaps we are going to see an AMD that reaches far further than just making exceptionally good CPUs.

With Vista on the way, this is perhaps the optimal time for this kind of innovation. It would reduce OEMs' overheads by adding graphical support onto the CPU, and mean that "Vista Ready" laptops/desktops could start with some pretty powerful graphics hardware, ultimately giving the end user a better "Vista experience".

As a last word, here's something that made me chuckle. AMD used the "Top500" supercomputer list's website as an example of the power of an on-die GPU. Assuming a typical system of around 1,000 nodes, they came up with this:

[Image: a petaflop!]

A petaflop, anyone?
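That headline figure is easy to sanity-check, although the per-node throughput is my assumption rather than a number AMD quoted: 1,000 nodes × ~1 TFLOPS of GPU throughput per node ≈ 1 PFLOPS, i.e. 10^15 floating-point operations per second.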

Discuss in our Forums


Key:

GPU = Graphics Processing Unit
CPU = Central Processing Unit

All pictures are courtesy of AMD


Most Recent Comments

04-11-2006, 07:52:08

Toxcity
[QUOTE=Kempez]It's different from CSS as it's much slower. 20FPS and above is fine to play in Oblivion.

I play it at 1680 x 1050 on my 7900GTX and get over 30FPS :)[/QUOTE]

Christ! :worship:

I'm playing it at 1280x768... Not even the best my screen can do! :(


Damn!

04-11-2006, 08:14:43

Kempez
[QUOTE=Toxcity]Christ! :worship:

I'm playing it at 1280x768... Not even the best my screen can do! :(


Damn![/QUOTE]

Aye not so bad...but the X1950XTX does it at 1920 x 1200

04-11-2006, 10:01:37

Toxcity
WoaH! Lucky git! :worship:

04-11-2006, 12:27:43

Rastalovich
Oblivion is a poorly written game tbh, it feels and acts similarly to Gothic (I think it was).

Some1 needs to give them a game engine.

04-11-2006, 13:07:58

Ham
[QUOTE=Rastalovich]Oblivion is a poorly written game tbh, it feels and acts similarly to Gothic (I think it was).

Some1 needs to give them a game engine.[/QUOTE]

Have to disagree there. I've played through it 2 and 1/2 times now and the first time it only choked on full-full settings. Once tweaked (both settings and the config files) it ran at a fluid ~60fps. It's a fantastic game and many people waited a long time for it, due to Bethesda spending so much time polishing it. It's a very resource-hungry game, so even in a year or two's time it will still challenge high-end gaming rigs due to the open-air nature of the game. Just as Morrowind does now.

04-11-2006, 13:20:05

Kempez
It's an awesome looking game with a demanding engine due to large distances and high details. I love it and it's awesome

04-11-2006, 15:09:39

Sticky Mick
Yeah, I agree.
The only thing going against it is the overzealous system specs needed to run it, but that's the future of gaming. We're always on the lookout for photorealistic graphics, but they take up so much in system resources, and you've gotta move them about somehow.

But tbh I can't see much point in wanting anything more than 30-35 FPS. We watch TV and video at only 25 FPS - and bear in mind that the majority of affordable TFTs still can't match the response times of CRTs.

04-11-2006, 15:19:01

Rastalovich

[QUOTE=Ham]Have to disagree there. I've played through it 2 and 1/2 times now and the first time it only choked on full-full settings. Once tweaked (both settings and the config files) it ran at a fluid ~60fps. It's a fantastic game and many people waited a long time for it, due to Bethesda spending so much time polishing it. It's a very resource-hungry game, so even in a year or two's time it will still challenge high-end gaming rigs due to the open-air nature of the game. Just as Morrowind does now.[/QUOTE]



Having to have an outstanding rig to make a game nice 'n' glossy doesn't make it well written.

It's programming trends like this that require an OS to have 512MB to be workable, and games of years to come to require 10GB and a quad-core processor just to run the credits.

The two games mentioned are poor. Imo btw.

05-11-2006, 03:28:55

Sticky Mick
[QUOTE=Rastalovich]Having to have an outstanding rig to make a game nice 'n' glossy doesn't make it well written.

It's programming trends like this that require an OS to have 512MB to be workable, and games of years to come to require 10GB and a quad-core processor just to run the credits.

The two games mentioned are poor. Imo btw.[/QUOTE]

I have to agree there Rast.

I've often wondered how much influence system performance has on the programming of games. Would most games of today still run well on something as low as a PIII/800 if they were properly optimised and cleaned up?
Because the industry has stated that an Intel C2D/XR3ITURBONUTTERBARSTEWARD PC is now entry level, is this giving developers the excuse they need to produce poorly written, unoptimised code, because entry-level PCs will run it without a problem anyway?

A prime example was the Lock On flight sim. The system specs demanded a Pentium 800 or equivalent, but it ran like a pig on my AMD 64/3000+. Since they've cleaned up the code and optimised it, it runs well on my PIII/750 rig and like a dream on the AMD.

05-11-2006, 03:53:53

Rastalovich
A very silly example of a fairly well written game is WoW. I don't mean in terms of the way it stores information about other players, cos online games in general still suck at that (take a single character and run into a crowd of 100s/1000s and you'll see what I mean - if you don't have the gigs, your PC will be spankered) - less so for ones written in the eastern sector, but even so.

I mean in terms of how glossy it looks in comparison to other online games that demand higher base specs just to play. It can look damn good on the simplest of PCs (minus the lag factor mentioned above). Compare that to RFOnline: OK, the game looks really good, nice effects, but the resources required are enormous in comparison to WoW (there are a handful of years between the original release of RFO in Korea and WoW, I grant you - but even so).

Bang the developers' heads together imo.
