Gigabyte G242-Z10 Power Consumption
We settled on an AMD EPYC 7402P configuration with 256GB of RAM (8x 32GB) and the QLogic quad-port 25GbE NIC, and varied the GPUs in the system.
Perhaps the most interesting result here is that the GeForce RTX 2080 Ti Blower configuration drew the most power. It seems the extra heat exhausted by the blower-style coolers pushed the chassis fans to spin harder, which added power consumption at the server level. That ran counter to our hypothesis, but it makes sense given the ramping fan speeds we observed. We would have liked to test the new Tesla V100S and the Radeon Instinct MI60 as well, but we did not have those GPUs available and could not source units to borrow. Still, this is a decent set to give some sense of scaling.
STH Server Spider: Gigabyte G242-Z10
In the second half of 2018, we introduced the STH Server Spider as a quick reference for where a server system’s aptitude lies. Our goal is to give a quick visual depiction of the types of workloads a server is targeted at.
The Gigabyte G242-Z10 is not the densest system. It has four 3.5″ drive bays, but it is not a capacity storage server. It has two 2.5″ NVMe bays and a single M.2 slot, but it is not an NVMe storage server. There are two GPUs per rack U along with a single CPU and 8x DIMM slots. While this is not the densest system, it is perfect for the many existing racks and facilities that are not chasing 40kW+ per rack loads.
This is one of those platforms that I wish we had in the lab. It is perhaps one of the easiest 2U servers on which to service GPUs, since Gigabyte is doing something novel with the G242-Z10’s PCIe risers. In terms of density, since we do not currently have the highest-density racks, this is just about perfect.
We noted a few minor points for improvement. There are small things, such as relocating the battery, that we pointed out as nice-to-have upgrades. Still, the overall attention to detail on items such as the tool-less drive trays is excellent.