AMD reveals an Exascale MEGA APU in a new academic paper

For years AMD has been planning to create large APUs for the high-performance computing (HPC) market, though these plans come with design challenges of their own that need to be overcome. 
 
While on paper it may seem easy to design a massive APU, in reality such designs are almost impossible to manufacture and present issues given the hugely different design characteristics of a CPU and a GPU. 
 
One of the largest issues comes when manufacturing large CPU/GPU dies, with yields decreasing and costs rising as you create larger products. Imagine a silicon wafer with a certain number of defects scattered across it. When dies are small, each wafer produces many chips, so those defects ruin only a small proportion of the batch. As die sizes grow, the number of chips per wafer decreases, which means the same defects destroy a larger proportion of the products on each wafer. 
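The yield argument above can be sketched with a simple back-of-the-envelope calculation. The figures below are purely illustrative (they are not taken from AMD's paper), using a rough dies-per-wafer approximation and the classic Poisson yield model, where the chance a die escapes all defects falls exponentially with its area:

```python
import math

def dies_per_wafer(wafer_diameter_mm, die_area_mm2):
    """Rough approximation of candidate dies on a circular wafer,
    with a simple circumference term to account for edge loss."""
    wafer_area = math.pi * (wafer_diameter_mm / 2) ** 2
    return int(wafer_area / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def yield_rate(die_area_mm2, defect_density_per_mm2):
    """Poisson yield model: probability that a die contains zero defects."""
    return math.exp(-defect_density_per_mm2 * die_area_mm2)

# Illustrative numbers: 300 mm wafer, 0.001 defects per mm^2.
# Compare a small chiplet against a large monolithic die.
for area in (100, 400, 800):
    n = dies_per_wafer(300, area)
    y = yield_rate(area, 0.001)
    print(f"{area:4d} mm^2 die: ~{n} dies/wafer, yield ~{y:.0%}")
```

With these assumed numbers, a 100 mm^2 chiplet yields around 90% good dies, while an 800 mm^2 monolithic die drops below 50%, on top of fitting far fewer candidates on each wafer, which is exactly the pressure pushing AMD towards many small dies on an interposer.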
 
According to this paper, AMD wants to get around this “large die issue” by building their Exascale APUs from a large number of smaller dies connected via a silicon interposer. This is similar to how AMD GPUs connect to HBM memory and can, in theory, be used to connect two or more GPU dies, or in this case CPU and GPU dies, to create what is effectively a larger final chip from several smaller parts. 
 
In the image below you can see that this APU uses eight CPU dies/chiplets and eight GPU dies/chiplets to create an exascale APU that can effectively act as a single unit. If these CPU chiplets use AMD’s Ryzen CPU architecture they will each have a minimum of 4 CPU cores, giving this hypothetical APU a total of 32 CPU cores and 64 threads. 

This new APU type will also use onboard memory, using a next-generation memory type that can be stacked directly onto a GPU die, rather than being stacked beside a GPU like HBM. Combine this with an external bank of memory (perhaps DDR4) and AMD’s new GPU memory architecture and you will have a single APU that can work with a seemingly endless amount of memory and compute easily using both CPU and GPU resources via HSA (Heterogeneous System Architecture).

In this chip both the CPU and GPU portions can use the package's onboard memory as well as external memory, opening up a lot of interesting possibilities for the HPC market, possibilities that neither Intel nor Nvidia can provide themselves. 

 


Right now this new “Mega APU” is in the early design stages, with no planned release date. It is clear that this design uses a new GPU architecture beyond Vega and a next-generation memory standard which offers advantages over both GDDR and HBM. 

Building a large chip using several smaller CPU and GPU dies is a smart move from AMD, allowing them to create separate components on manufacturing processes that are optimised and best suited to each component, and allowing each constituent piece to be used in several different CPU, GPU or APU products. 

For example, CPUs could be built on a performance-optimised node, while GPU clusters could be optimised for enhanced silicon density, with interposers being created using a cheaper process, as their simple functions do not require cutting-edge process technology.

This design method could be the future of how AMD creates all of their products, with both high-end and low-end GPUs being made from different numbers of the same chiplets, and future consoles, desktop APUs and server products using many of the same CPU or GPU chiplets/components.

 

You can join the discussion on AMD’s Exascale “Mega APU” and its modular design on the OC3D Forums. 

 
