SK hynix has a big announcement for the industry: it has developed a new generation of high-bandwidth memory, or HBM. HBM is used in accelerators such as data center GPUs, FPGAs, and AI accelerators, and it is even making its way to the Intel Sapphire Rapids Xeon, as shown at Hot Chips 33. Developing a new generation is a big deal for the industry since it means more performance available to chip designers.
SK hynix Announces Its Ultra-Fast HBM3
One somewhat fun part of the new technology is the naming. It is called “HBM3,” but it is actually a fourth-generation technology: there was the original HBM, then HBM2, and now HBM2e shipping in current devices. HBM3 is slotted as the next generation beyond that.
The new chips offer 819GB/s of bandwidth per stack. That is not double HBM2e, but at roughly 78% faster than HBM2e’s 460GB/s it is still a big jump. SK hynix also says that its new chips come in 16GB and 24GB capacities per stack, which means that capacity can grow without requiring more stacks.
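As a quick sanity check on those figures, HBM stack bandwidth is pin speed times interface width. The 1024-bit interface is standard across HBM generations; the 6.4Gbps HBM3 and 3.6Gbps HBM2e pin speeds used below are assumptions consistent with the quoted 819GB/s figure, not numbers from this announcement.

```python
# Peak per-stack HBM bandwidth: pin speed (Gbps) * bus width (bits) / 8 bits-per-byte.
def hbm_stack_bandwidth_gbps(pin_speed_gbps: float, bus_width_bits: int = 1024) -> float:
    """Peak per-stack bandwidth in GB/s for a given pin speed."""
    return pin_speed_gbps * bus_width_bits / 8

hbm3 = hbm_stack_bandwidth_gbps(6.4)   # assumed HBM3 pin speed -> 819.2 GB/s
hbm2e = hbm_stack_bandwidth_gbps(3.6)  # assumed fastest HBM2e pin speed -> 460.8 GB/s
print(f"HBM3: {hbm3:.1f} GB/s, HBM2e: {hbm2e:.1f} GB/s, uplift: {hbm3 / hbm2e - 1:.0%}")
```

That uplift works out to about 78%, which is why the headline number is a large step short of doubling.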
SK hynix says that the new HBM3 chips are made by stacking 12 DRAM dies, each ground to a thickness of around 30 micrometers, connected with TSV (through-silicon via) interconnects. The company also says the chips have on-chip ECC, which is basically becoming standard at this point.
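For a sense of what those 12 stacked dies imply, here is a rough per-die capacity estimate, assuming the 24GB stack splits its capacity evenly across the dies (an assumption; the announcement does not break this down).

```python
# Rough per-die math for the 24GB, 12-die HBM3 stack, assuming an even
# capacity split across dies (not stated in the announcement).
total_gb = 24                        # stack capacity in gigabytes
dies = 12                            # DRAM dies per stack
per_die_gbit = total_gb / dies * 8   # GB per die converted to gigabits
print(f"~{per_die_gbit:.0f} Gb per die")
```

Under that assumption, each die would be a 16Gb DRAM, in line with current high-density DRAM dies.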
SK hynix is not going to replace the memory in your notebook or desktop with HBM3 any time soon. HBM2e only hit mass production last year, and this is a development announcement rather than a mass production announcement. Further, the cost of these HBM chips is high enough that they are likely only to find placement in high-end devices in the data center or other places where high-value parts are often found. Memory bandwidth is a key challenge in modern chips, so a technology offering more bandwidth will be quickly adopted in the market. Still, while we are seeing HBM2e in products we review at STH in 2021, we expect to see HBM3 in products soon after it hits mass production.