ASRock Rack 8U8X-GNR2 SYN B200 Power Consumption
With twelve 3kW 80Plus Titanium power supplies, we have 36kW of total capacity. Luckily, we do not need that much power. Instead, these are rated for 6+6 operation. Put simply, six PSUs are primary and six provide redundancy.
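As a quick sanity check on that math, here is a minimal sketch of how 6+6 redundancy translates into usable capacity. This is just our own illustration of the arithmetic, not any vendor tooling:

```python
# Illustrative 6+6 PSU redundancy math (not vendor software)
psu_count = 12
psu_rating_kw = 3.0      # each 80Plus Titanium unit is rated for 3kW
redundant_units = 6      # six of the twelve exist purely for failover

total_kw = psu_count * psu_rating_kw                        # 36kW installed
usable_kw = (psu_count - redundant_units) * psu_rating_kw   # 18kW usable with full 6+6 redundancy

print(f"Installed: {total_kw}kW, usable with 6+6 redundancy: {usable_kw}kW")
```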

Here is a quick look at the Delta power supplies. These are high-efficiency, high-quality units, which is important for GPU servers.

Generally, we got just over 2.9kW at idle with this system configured with ten NVIDIA ConnectX-7 NICs. Each GPU usually idles in the 140W range, and there are a ton of additional components in these systems. Under max load, we saw just over 12.5kW. Depending on the application, you may see peaks well below that figure, especially if your workloads are memory bandwidth bound. Still, with 18kW + 18kW of power supplies, that is plenty.
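For a rough back-of-the-envelope check on those figures, here is the arithmetic using the numbers from our testing. The breakdown is illustrative only:

```python
# Back-of-the-envelope headroom check using our measured figures (illustrative only)
idle_kw = 2.9            # measured idle draw with 10x ConnectX-7 NICs installed
max_observed_kw = 12.5   # measured peak under max load
gpu_idle_w = 140         # typical per-GPU idle draw
gpu_count = 8

gpu_idle_kw = gpu_count * gpu_idle_w / 1000   # ~1.12kW of idle draw is just the GPUs
usable_psu_kw = 6 * 3.0                       # 18kW usable in 6+6 redundant mode

print(f"GPUs alone at idle: {gpu_idle_kw:.2f}kW of {idle_kw}kW total")
print(f"Headroom at max load: {usable_psu_kw - max_observed_kw:.1f}kW")
```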
Perhaps the more important aspect is that you need to have a lot of power in each rack to run these AI servers, and the power consumption needs are only going up.
STH Server Spider: ASRock Rack 8U8X-GNR2 SYN B200
In the second half of 2018, we introduced the STH Server Spider as a quick reference for where a server system's aptitude lies. Our goal is to give a quick visual depiction of the types of workloads a server is targeted at.

These new systems have more GPU compute than the previous generation, but they are also 2U taller. Let us take a step back here. We have a system with over 1.4TB of HBM3e memory and over 4Tbps of networking throughput. That is more bandwidth than many of the 32-port 100GbE switches out there, and this is just a single server. There is a reason these servers end up connected to the 51.2T class of NVIDIA Spectrum-X switches. While this is not the most NICs per U we have seen, it still works out to 500Gbps per U, which we have to say is dense on the networking side. Lots of GPU compute, networking, and new CPUs are the point of this system.
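To put the 4Tbps figure in context, here is the simple arithmetic, assuming 400Gbps per ConnectX-7 port, which is consistent with the ten-NIC configuration we tested:

```python
# Throughput context, assuming 400Gbps per ConnectX-7 port
nic_count = 10
nic_gbps = 400
chassis_u = 8

total_gbps = nic_count * nic_gbps     # 4000Gbps = 4Tbps of NIC bandwidth
per_u_gbps = total_gbps / chassis_u   # 500Gbps per rack unit in an 8U chassis
switch_32x100_gbps = 32 * 100         # a 32-port 100GbE switch tops out at 3.2Tbps

print(f"Server: {total_gbps}Gbps total, {per_u_gbps:.0f}Gbps per U")
print(f"32-port 100GbE switch: {switch_32x100_gbps}Gbps")
```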
Final Words
After testing the ASRock Rack 6U8X-EGS2 H200 last year, this system feels quite familiar. In many ways, the layout of this 8U8X-GNR2 SYN B200 system is similar to the previous generation.

At the same time, there are some big updates. Not only is this system 2U taller, but we have a new generation of NVIDIA Blackwell GPUs. The new Intel Xeon 6 CPUs mean we have faster RAM, more cores, and no PCH. Still, the PCIe subsystem is what differentiates this style of AI server from the NVIDIA GB200 NVL72 racks.

The new Blackwell GPUs at 1kW each mean that this system requires high-power racks. At the same time, they offer an enormous upgrade over the Hopper generation.

It is always neat to see how ASRock Rack approaches building these large AI servers. This is a mix of fast new technologies and practical engineering that yields a utilitarian design. In many ways, that is the trend in this class of AI servers.
