Supermicro SYS-821GE-TNHR NVIDIA HGX H200 8-GPU Tray Overview
On the front of the system, the top section is the GPU tray, which can be removed using the two latches on either side.

If there is an issue with an HBM package on one of the GPUs, for example, this entire assembly can be removed from the cold aisle and swapped in around a minute without disturbing any cabling.

We previously saw a version of this system and tray in our A Look at the Liquid Cooled Supermicro SYS-821GE-TNHR 8x NVIDIA H100 AI Server piece. This version has a fairly early NVIDIA H200 8-GPU baseboard.

At the front of the baseboard, we have the NVIDIA NVLink switch heatsinks, which have grown with each generation. In the upcoming Blackwell generation, the design goes from four NVLink switches to two, and they move to the center of the GPUs. That will be a big change.
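For those curious how this fabric shows up in software, here is a minimal sketch using the pynvml bindings (assuming the nvidia-ml-py package; this is our illustration, not a Supermicro tool) that counts the active NVLink links on each GPU. On an HGX H100/H200 baseboard, each SXM GPU should report 18 active links, all routed through these four NVLink switches:

```python
# Hedged sketch: count active NVLink links per GPU via pynvml
# (pip install nvidia-ml-py). On an HGX H100/H200 baseboard, each
# SXM GPU should show 18 active links through the four NVSwitches.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        active = 0
        for link in range(pynvml.NVML_NVLINK_MAX_LINKS):
            try:
                if pynvml.nvmlDeviceGetNvLinkState(handle, link):
                    active += 1
            except pynvml.NVMLError:
                continue  # link index not present on this GPU
        print(f"GPU {i} ({name}): {active} active NVLink links")
finally:
    pynvml.nvmlShutdown()
```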

Behind those heatsinks, we have the eight NVIDIA H200 GPUs.

We have covered the NVIDIA H200 many times, but these GPUs take the Hopper architecture and upgrade the memory to HBM3e with 141GB of capacity per GPU. Across the eight GPUs, that gives the system 1.128TB of HBM3e memory.
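The math is simple (8 x 141GB = 1,128GB), and a similar hedged sketch can double-check it against what the GPUs actually report. The 141GB figure below comes from the article; on a live system, the pynvml memory query would return the real values:

```python
# Hedged sketch: verify total HBM3e capacity across the eight H200s.
# The 141GB-per-GPU figure is from the article, not queried live.
import pynvml

PER_GPU_GB = 141  # HBM3e per NVIDIA H200
NUM_GPUS = 8
print(f"Expected: {PER_GPU_GB * NUM_GPUS / 1000:.3f}TB")  # 1.128TB

pynvml.nvmlInit()
try:
    total_bytes = 0
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        total_bytes += pynvml.nvmlDeviceGetMemoryInfo(handle).total
    print(f"Reported: {total_bytes / 1e12:.3f}TB")
finally:
    pynvml.nvmlShutdown()
```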

On the back, we get the Astera Labs PCIe retimers along with their heatsinks.

We also get an array of connectors that handle the baseboard's massive power delivery along with its PCIe connectivity.

Next, let us take a quick look inside the system to see the midplane.