Supermicro SYS-821GE-TNHR Power Delivery
The system comes standard with six 3kW power supplies, which we featured on STH previously when we looked at the liquid-cooled version of the server. Those provide 4+2 redundancy.

For those who want greater 4+4 redundancy, the center two fans can optionally be replaced with two additional PSUs.

Generally with this system, we see just over 2kW at idle and around 10kW at peak. It is a bit hard to compare GPU server power consumption directly since this server, for example, has around 1kW of NICs installed, a full set of memory, and so forth. It is fun to have a system where the networking alone uses power on par with a standard 2U server.

As we went into earlier, Supermicro built this 8U platform because, even with the additional height, it is dense enough for most racks that deliver under 60kW of power. By increasing the chassis size, Supermicro gained class-leading serviceability along with a few percent lower power consumption versus a shorter 6U chassis.
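To put that trade-off in perspective, here is a minimal sketch, assuming a standard 42U rack and using the roughly 10kW peak figure we measured. The function and parameter names (systems_per_rack, rack_power_kw, and so on) are purely illustrative, not anything from Supermicro.

```python
# Hypothetical rack-planning sketch: estimate how many 8U, ~10kW-peak systems
# fit in a rack, given both the space limit and the power budget.
# Assumes a standard 42U rack; figures are illustrative, not vendor data.

def systems_per_rack(rack_u=42, rack_power_kw=60.0, system_u=8, system_peak_kw=10.0):
    """Return (system count, limiting factor) for a given rack."""
    by_space = rack_u // system_u                      # e.g. 42U // 8U = 5 systems
    by_power = int(rack_power_kw // system_peak_kw)    # e.g. 60kW // 10kW = 6 systems
    count = min(by_space, by_power)
    limit = "space" if by_space <= by_power else "power"
    return count, limit

if __name__ == "__main__":
    for budget in (30, 40, 50, 60):
        count, limit = systems_per_rack(rack_power_kw=budget)
        print(f"{budget}kW rack: {count} systems ({limit}-limited)")
```

With these assumed numbers, racks below about 50kW are power-limited anyway, so the extra 2U of height costs little, while a 50-60kW rack lands at five systems and becomes space-limited right around the budget Supermicro is designing for.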
STH Server Spider: Supermicro SYS-821GE-TNHR
In the second half of 2018, we introduced the STH Server Spider as a quick reference for where a server system's aptitude lies. Our goal is to give a quick visual depiction of the types of parameters a server is targeted at.

This server is not the most dense, but that is perhaps the point. It hits a density target that aligns with the maximum power delivery of ~50-60kW racks in data centers. Still, since it is a taller chassis, we must rate it as slightly less dense, which is always a fun part of having a density metric.
Final Words
We first showed the liquid-cooled version of this 8U platform in 2023, so now in 2025, with the NVIDIA H200 upgrade, it feels like we know this system well. Supermicro's deliberate choice to lower density is one that perhaps more in the industry should follow.

Every major component in this server can be removed and swapped without removing the chassis, except the midplane, which is removed via handles through the top. This is really important in AI clusters: if and when a system fails, it must be repaired, and Supermicro's design allows the system to remain racked and cabled during service. In the last three months, I have taken apart roughly 20 AI servers from different vendors, and there is a good reason this design is the industry's standard and a wildly successful model for Supermicro.

Overall, if you are looking for NVIDIA HGX H200 servers, the Supermicro SYS-821GE-TNHR is a great platform that has sold in high volumes and shows a lot of refinement compared to the 8-GPU servers we have reviewed over the past decade.