Samsung HBM3E 12H 36GB In Production Q2 2024


Samsung has new HBM memory coming. The company announced that its next-generation memory will use a 12-high stack, yielding 36GB of capacity per stack. This matters for next-generation accelerators, as AI GPUs consume ever-greater quantities of memory.


The new HBM3E memory increases the layer count from eight layers (24GB total) to twelve layers (36GB total). For context on the NVIDIA H100 to H200 transition, that is going from five 16GB HBM3 stacks on the H100 to six 24GB HBM3E stacks on the H200. More memory often means more can be done on a single GPU or AI accelerator for today's hot applications, AI training and inference. Since working with larger models often means higher-quality results, memory capacity is set to keep expanding.
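The capacity arithmetic above can be sketched quickly. This is an illustrative calculation only, assuming a 24Gb (3GB) DRAM die per layer for both the 8-high and 12-high parts; shipping products may expose slightly less than the nominal total (the H200, for example, lists 141GB usable rather than the nominal 144GB).

```python
# Nominal HBM stack capacity: layer count x per-die capacity.
# Assumes 3GB (24Gb) dies, per the 8-high 24GB / 12-high 36GB figures above.

def stack_capacity_gb(layers: int, gb_per_die: int = 3) -> int:
    """Nominal capacity of one HBM stack in GB."""
    return layers * gb_per_die

hbm3e_8h = stack_capacity_gb(8)    # 8-high HBM3E stack
hbm3e_12h = stack_capacity_gb(12)  # 12-high HBM3E stack

h100_total = 5 * 16        # five 16GB HBM3 stacks on the H100
h200_total = 6 * hbm3e_8h  # six 24GB HBM3E stacks on the H200 (nominal)

print(hbm3e_8h, hbm3e_12h, h100_total, h200_total)  # → 24 36 80 144
```

Swapping six 8-high stacks for six 12-high stacks on a future accelerator would take the same nominal math to 216GB, which is why the height-matched 12H package matters.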

To do this, Samsung says the new HBM3E 12H memory uses “advanced thermal compression non-conductive film (TC NCF), allowing the 12-layer products to have the same height specification as 8-layer ones to meet current HBM package requirements.” (Source: Samsung)

Samsung says that TC NCF helps mitigate die warping. As more chips are stacked atop one another, packaging becomes a challenge. Compute chip makers do not want the HBM stacks to become taller, since their chips might then need filler material to reach the same height for cooling. Stacking more dies means each must be thinner, which raises the risk of warping as heat is applied. Samsung says the gap between chips is now only 7 micrometers.

Samsung also says it is using different-sized bumps between the chips to help with heat dissipation and thermal expansion.

Final Words

Samsung simply said that its new HBM3E 36GB memory will start mass production in the first half of this year, which is industry code for Q2 of a given year; if something is being produced in Q1, companies say Q1. Unlike Micron's HBM3E announcement this week, Samsung did not name a customer. Still, with NVIDIA soaking up so much of the top-bin HBM supply for some of the highest-cost accelerators (Cerebras is higher), we think HBM3E in 36GB packages will find buyers in almost any quantity produced.
