This is going to be one of the most fun reviews we have done. Specifically, we are going to see how liquid cooling works as part of this Gigabyte H262-ZL0 review. The Gigabyte H262-ZL0 is a liquid-cooled 2U 4-node AMD EPYC server. As a result, we had a little challenge. Our liquid cooling lab is not quite ready, so we are leveraging CoolIT Systems’ lab for this, since they make the PCLs, or passive coldplate loops, in the Gigabyte server. We recently went up to Calgary, Alberta, Canada, where I built an entire liquid cooling solution, from assembling the Gigabyte server to connecting everything. I brought twelve 64-core AMD EPYC Milan CPUs, 64x 64GB DDR4-3200 DIMMs, several Kioxia NVMe SSDs, and more up to Canada so that we could test out a reasonably high-end configuration.
The net result is that we have a solution built on a test bench that was capable of cooling over 20,000 AMD EPYC 7003 “Milan” cores using roughly the water flow of a garden hose. This is more than your typical STH server review, as this is going to be really cool (pun not intended, but welcomed).
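To put some rough numbers behind that claim, the heat water can carry away is simply flow rate times specific heat times temperature rise. Here is a minimal sketch of that arithmetic; the flow rate, temperature rise, and 280W-per-CPU figures are illustrative assumptions, not measurements from the CoolIT lab:

```python
# Back-of-the-envelope check on the "garden hose" claim.
# Assumed numbers for illustration only -- not measured values from this setup.
GALLON_LITERS = 3.785          # liters per US gallon
WATER_HEAT_CAPACITY = 4186     # J/(kg*K), specific heat of water
WATER_DENSITY = 1.0            # kg/L, close enough for facility water

flow_gpm = 15                  # assumed garden-hose-class flow rate
delta_t_c = 22                 # assumed water temperature rise in degrees C

flow_kg_per_s = flow_gpm * GALLON_LITERS * WATER_DENSITY / 60
heat_removed_kw = flow_kg_per_s * WATER_HEAT_CAPACITY * delta_t_c / 1000

cores = 20000
cpus = cores / 64                 # 64-core EPYC 7003 "Milan" parts
cpu_heat_kw = cpus * 280 / 1000   # assuming 280W TDP per CPU

print(f"Water at {flow_gpm} GPM can carry away ~{heat_removed_kw:.0f} kW")
print(f"{cores} Milan cores dissipate ~{cpu_heat_kw:.0f} kW")
```

With those assumed inputs, both sides land in the same high-80s kW range, which is why a garden-hose-class flow rate is enough for this many cores.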
Gigabyte H262-ZL0 Hardware Overview
This review is going to be a bit different. We are going to start with the chassis, but then we are going to get to the liquid-cooled nodes as well as the entire liquid cooling loop. Before we get too far in this, let me just say this is one of the pieces that I think genuinely may be better in a video than it is in its written form. We have that video here:
As always, we suggest opening this in its own YouTube browser, tab, or app for the best viewing experience.
Gigabyte H262-ZL0 Chassis Overview
The Gigabyte H262-ZL0 is the company’s 2U 4-node server. Each node houses two AMD EPYC CPUs. When we are discussing this server, the -ZL0 means that the server is liquid-cooled with only the CPUs being cooled by the liquid loops. There is a -ZL1 variant that adds cooling for the RAM as well as the NVIDIA ConnectX fabric adapters. We decided not to do that version simply because I already had >$100,000 of gear in my bags and we needed to de-risk the installation so it could be completed in a single day of filming. For longtime STH readers, we have reviewed a number of Gigabyte air-cooled 2U4N units, but this is the first liquid-cooled Gigabyte server we have reviewed.
The twenty-four drive bays are NVMe capable. Each of the four nodes gets access to six 2.5″ drive bays. We will note these are SATA capable as well, but with current SATA and NVMe pricing being similar, we do not see many users opting for lower-performance SATA instead.
One nice feature is that these drive trays are tool-less. It took Steve longer to snap these photos than it did to install all of the drives.
Gigabyte has a fairly deep chassis at 840mm or just over 33 inches. One can see that Gigabyte also has a service guide printed on the top of the chassis. We are going to work our way back in this review.
The system still has a middle partition between the storage and the nodes/power supplies that houses the cooling fans and the chassis management controller. The chassis management controller helps tie together all four systems from a monitoring and management perspective via a single rear network port.
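Because the CMC presents that single network port for all four nodes, it can also be polled programmatically. As a rough sketch, assuming the CMC exposes a standard Redfish service; the address, credentials, and the exact objects it enumerates are placeholders, not confirmed details of this unit:

```python
import requests

# Placeholder address and credentials -- substitute your own CMC's values.
CMC = "https://192.168.1.100"
AUTH = ("admin", "password")

# /redfish/v1/Chassis is the standard Redfish chassis collection; whether the
# Gigabyte CMC enumerates each node here is an assumption for this sketch.
resp = requests.get(f"{CMC}/redfish/v1/Chassis", auth=AUTH, verify=False)
resp.raise_for_status()

for member in resp.json().get("Members", []):
    chassis = requests.get(f"{CMC}{member['@odata.id']}", auth=AUTH, verify=False).json()
    print(chassis.get("Id"), chassis.get("PowerState"), chassis.get("Status", {}).get("Health"))
```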
The fans are interesting. We only have four fans, but spaces for eight. With liquid cooling, we are removing ~2kW of heat from the chassis without the fans, so there is a lot less need for fan cooling. In the video, we have a microphone next to these fans in an open running system with all four nodes going, and they are still relatively quiet.
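For a quick sanity check on that ~2kW figure, the CPU sockets alone account for roughly that much. A minimal sketch, assuming 280W TDP parts (actual SKU TDPs vary):

```python
# Rough per-chassis heat budget, assuming 280W TDP per CPU (actual SKU TDPs vary).
nodes = 4
cpus_per_node = 2
cpu_tdp_w = 280

liquid_cooled_w = nodes * cpus_per_node * cpu_tdp_w   # heat leaving via the PCLs
print(f"CPU heat removed by liquid: ~{liquid_cooled_w / 1000:.1f} kW")
# Everything else -- DIMMs, NVMe drives, NICs, VRMs -- stays on air,
# which is why four fans instead of eight are enough here.
```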
Here is a look at how the nodes plug into the chassis from the rear side of the chassis.
For another perspective, here is a view looking through the node tunnels to the connectors and fans. Each tunnel houses two dual-socket nodes. The nodes simply slide into the tunnel and can be easily removed for service.
The power supplies are 2.2kW 80Plus Platinum units. The reason we can use these PSUs with a dense system like this is that overall system power is lower since the CPU heat is being removed via the liquid cooling loops instead of by banks of high-speed fans.
One can also see the CMC network port next to the bottom PSU and a quick node map so it is easy to tell which node is which.
Next, let us get to the big show and take a look at the liquid-cooled nodes in the server.