Look at a Formula One race car: Objectively, it’s an absurd way to move a human around. But when speed is everything, we’ll go to any lengths. The same is true of supercomputers – massive, high-performance systems dedicated to solving some of humanity’s greatest challenges in fields such as medicine, climate science and cosmology. The algorithms running on these systems – from AI training to multiphysics simulation to manipulation of massive medical “omics” datasets – have one thing in common: They all need the maximum possible bandwidth between memory and processing.
And that means High Bandwidth Memory (HBM), a very special kind of DRAM where multiple memory die are stacked vertically and integrated with a processor in one chip package. Why do this? Physics! You can’t get around the fact that a bit can’t travel a foot across a circuit board in less than a nanosecond. Those few inches between memory DIMMs and the CPU matter at supercomputer speeds. But move the memory right next to the compute – shrinking those distances down to millimeters – and you can get far more bandwidth. Closer is faster.
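To make the distance argument concrete, here’s a rough back-of-envelope sketch. The trace lengths and the “signals travel at roughly half the speed of light on a board” rule of thumb are illustrative assumptions, not Micron figures:

```python
# Back-of-envelope: one-way signal propagation time over a board trace vs. an interposer.
# Assumes ~0.5c propagation speed (a common PCB rule of thumb) and illustrative distances.

C = 3.0e8                 # speed of light in vacuum, m/s
PROPAGATION_FACTOR = 0.5  # assumed fraction of c for a copper trace

def one_way_delay_ns(distance_m: float) -> float:
    """Time for a signal to traverse a trace of the given length, in nanoseconds."""
    return distance_m / (C * PROPAGATION_FACTOR) * 1e9

dimm_trace = 0.10   # ~10 cm (a few inches) from a DIMM slot to the CPU
hbm_trace = 0.005   # ~5 mm across a package to the processor

print(f"DIMM to CPU:      {one_way_delay_ns(dimm_trace):.2f} ns one way")
print(f"HBM to processor: {one_way_delay_ns(hbm_trace):.3f} ns one way")
```

Under those assumptions, the DIMM path costs roughly 0.67 ns each way versus about 0.03 ns for the in-package hop – and shorter, cleaner paths are also what make HBM’s very wide memory interfaces practical.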
The challenges go beyond “just” packaging. Now that our partner’s silicon is cohabiting with ours, we need a whole new level of collaboration and integration between two engineering teams.
I talked to recovering DRAM circuit designer Girish Cherussery to learn more. As HBM product management director in Micron’s Compute & Networking Business Unit, Girish is immersed in this fusion of technology and business, all in pursuit of ultimate performance. You will be fascinated by what he has to say.
Learn more at micron.com/ultra-bandwidth-solutions.