Outdoor Cooling Towers
The final stop in the cooling chain is the facility water loop. Supermicro offers cooling towers in both 1-megawatt and 5-megawatt configurations. The main advantage of the smaller configuration is its shorter lead time.

Warm facility water, after absorbing heat from the CDU heat exchangers, is pumped out to these towers, where it passes through cooling coils with large fans driving airflow across them.

The goal is not refrigeration. The goal is to return the water temperature close to ambient before it cycles back through the facility.
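To get a feel for the scale involved, here is a hedged back-of-envelope sketch (not a Supermicro spec) of how much water flow it takes to carry a given heat load at a given temperature rise, using the standard relation Q = m·c_p·ΔT for water:

```python
# Illustrative only: water flow needed to carry a heat load at a given
# delta-T. Assumes c_p ~= 4.186 kJ/(kg*K) and ~1 kg per liter for water.
def water_flow_lpm(heat_kw: float, delta_t_c: float) -> float:
    """Volumetric flow in liters/minute to move heat_kw at delta_t_c."""
    c_p = 4.186  # specific heat of water, kJ/(kg*K)
    kg_per_s = heat_kw / (c_p * delta_t_c)  # mass flow from Q = m*c_p*dT
    return kg_per_s * 60.0  # ~1 L per kg of water

# A 1 MW load with an assumed 10 C water-side temperature rise:
print(round(water_flow_lpm(1000.0, 10.0)))  # ~1433 L/min
```

The 10 C delta-T is an assumption for illustration; the point is simply that a megawatt-class loop moves on the order of a ton and a half of water per minute, which is why the towers need large fans and substantial supporting equipment.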

When we say cooling towers, we mean more than just the boxes; there is quite a bit of additional equipment surrounding the towers themselves.

Supermicro manufactures these towers to compress the supply chain lead time between a customer order and the moment their AI cluster is fully operational.
Final Words
The breadth of what Supermicro designs and manufactures in-house for AI infrastructure is genuinely impressive. Starting from the server nodes that go into the NVL72 rack and working outward through the cold plates, cooling manifolds, CDUs, rear door heat exchangers, outdoor cooling towers, and spanning the complete SuperCloud software suite, this is a company that has made a deliberate strategic choice to own as much of the supply chain as possible.

The rationale is not difficult to understand. When the world’s largest AI infrastructure operators are deploying hundreds or thousands of racks on compressed timelines, the ability to deliver every component of the system from a single vendor with a unified supply chain and pre-validated integration is a meaningful competitive advantage.

Each NVL72 rack represents millions of dollars of hardware. Delays caused by mismatched cooling components or unvalidated networking configurations are not acceptable at this scale.

The generational leap from NVIDIA B200 to B300 is substantial in ways that extend well beyond GPU memory capacity. The integration of ConnectX-8 networking directly onto the HGX baseboard, delivering 800Gbps per GPU, doubles the east-west bandwidth available to every GPU in the system. That is as significant an upgrade for distributed training and inference at scale as the jump from HBM3 to HBM3E. Supermicro is positioned to deliver that full generational upgrade as a complete, integrated system, which is ultimately the only way this hardware gets deployed successfully at the scale the industry is targeting.
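The bandwidth math behind that claim is simple enough to check. A quick sketch, assuming 72 GPUs per NVL72 rack each with an 800Gbps ConnectX-8 link, as the text describes:

```python
# Back-of-envelope aggregate east-west (scale-out) bandwidth for one
# NVL72 rack, per the 800 Gbps-per-GPU figure in the text.
gpus_per_rack = 72
gbps_per_gpu = 800  # ConnectX-8 on the HGX baseboard

total_tbps = gpus_per_rack * gbps_per_gpu / 1000
print(total_tbps)  # 57.6 Tbps of scale-out bandwidth per rack
```

At 400Gbps per GPU (the prior generation's figure implied by "doubles"), the same rack would top out at 28.8 Tbps, which is the gap distributed training jobs feel directly.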
Hopefully, you enjoyed this one. We started by looking at only a few servers, and ended up covering just about all the hardware you would need to get racks installed in a modern Supermicro NVIDIA B300 or GB300 AI Factory.


