Liquid Cooling Infrastructure: From the GPU to the Cooling Tower
The Supermicro NVIDIA GB300 NVL72 is an entirely liquid-cooled system. The GPUs, CPUs, NVLink switches, ConnectX-8 NICs, and other thermally significant components all require liquid cooling. Supermicro designs and produces the complete cooling infrastructure for these systems, and that full coverage was on display during the tour.
Rack Manifolds and Blind-Mate Connections
At the rack level, both horizontal and vertical manifolds distribute coolant to and from each server or switch tray. The vertical manifolds in ORV3 racks are designed for blind mating, so a tray's coolant supply and return connections engage automatically as it slides into the rack.

The other manifolds in Supermicro’s more traditional racks are engineered for single-handed operation.

A technician can insert or remove even larger coolant hoses with one hand, which is a meaningful operational consideration when servicing dozens of racks and carrying a replacement component in the other hand.
In-Row Cooling Distribution Units
The in-row CDU on display provides approximately 1.8 MW of cooling capacity, sufficient to support more than ten NVL72 racks from a single unit.
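As a rough sanity check on that claim, a quick calculation works out. The per-rack figure below is an assumption for illustration; actual GB300 NVL72 rack power varies by configuration and is not from Supermicro's spec sheet:

```python
# Back-of-the-envelope check: how many racks can one in-row CDU support?
cdu_capacity_kw = 1800   # ~1.8 MW CDU cooling capacity, per the unit on display
rack_power_kw = 150      # assumed per-rack load for a GB300 NVL72 (illustrative)

racks_supported = cdu_capacity_kw // rack_power_kw
print(racks_supported)   # 12 racks, consistent with "more than ten"
```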

Inside these CDUs are redundant pumps, flow monitoring, inlet and outlet temperature sensing, and pH monitoring of the coolant loop.

Maintaining proper pH is essential to protecting the cold plates, manifolds, and other components in the loop from degradation over time.

Supermicro also designs the secondary cooling loop piping that runs from the CDU out to the individual racks.

Needless to say, on these large systems, there is a lot of liquid-cooling piping.

As a quick aside, we have shown this a few times, but Supermicro also makes in-rack CDUs.

The in-row CDUs free up more space in the compute racks and lower the total number of pumps required for an AI Factory.
Rear Door Heat Exchangers
The Supermicro NVIDIA GB300 NVL72 rack expels residual heat through forced-air convection even with direct liquid cooling on the primary components. To capture that air-side heat before it enters the data center aisle, Supermicro produces rear door heat exchangers that mount directly to the back of the rack. These function analogously to an automotive radiator: warm air passes through the heat exchanger fins, transferring heat to a secondary liquid cooling loop that carries it out of the facility.

Two variants were on display: a 50-kilowatt unit and an 80-kilowatt unit, both designed for the ORV3 rack specification. A 50-kilowatt unit for standard 19-inch EIA racks was also present for customers deploying liquid-cooled hardware in conventional rack infrastructure. All units integrate with Supermicro monitoring software, providing fan speed telemetry, flow rate data, and leak detection status.
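The coolant flow those ratings imply follows from the standard heat transfer relation Q = ṁ·cp·ΔT. A back-of-the-envelope sketch, assuming plain water coolant and a 10°C loop temperature rise (both illustrative figures, not Supermicro specifications):

```python
# Coolant flow required to carry a given heat load: Q = m_dot * cp * dT
# Assumptions: water coolant, 10 C temperature rise across the exchanger.
CP_WATER = 4186.0       # specific heat of water, J/(kg*K)
KG_PER_LITER = 1.0      # approximate density of water

def flow_lpm(heat_load_w: float, delta_t_k: float) -> float:
    """Liters per minute of water needed to absorb heat_load_w at delta_t_k rise."""
    kg_per_s = heat_load_w / (CP_WATER * delta_t_k)
    return kg_per_s / KG_PER_LITER * 60.0

print(round(flow_lpm(50_000, 10.0)))   # 50 kW unit: ~72 L/min
print(round(flow_lpm(80_000, 10.0)))   # 80 kW unit: ~115 L/min
```

A tighter allowable ΔT pushes the required flow rate up proportionally, which is one reason flow-rate telemetry matters in the monitoring stack.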
Liquid-to-Air Sidecar: Retrofitting Air-Cooled Facilities
Not every data center is prepared to support direct liquid cooling at the rack level. For those facilities, Supermicro offers a liquid-to-air sidecar. This self-contained unit installs adjacent to the rack and contains a heat exchanger, fans, and redundant pumps.

The liquid cooling loop from the rack connects to the sidecar, where heat is transferred to air and exhausted through the rear of the unit.

The sidecar includes the same sensor integration and management hooks as the rest of the Supermicro cooling line.
Next, let us get to the outdoor cooling towers before wrapping up.


