ASRock Rack 8U8X-GNR2 SYN B200 Rear Hardware Overview
For the rear, there is a lot going on. Twelve power supplies, nine fan modules, and two NIC trays are just part of what goes into the system.

Here is the rear I/O block. There are a few neat bits. First, there are only two USB 3 ports in the rear, while there were four in front. We still get the power and reset buttons, and status LEDs, along with our two Intel i350-AM2 1GbE ports and out-of-band management port. We are not going to go into the management in this review, other than to say this uses industry-standard IPMI management based on an ASPEED AST2600 chipset with HTML5 iKVM, monitoring, and so forth. Perhaps the big takeaway is that this system is one of the few we see with more USB ports on the front than the rear.
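For those who want to script against that BMC, here is a minimal sketch that polls the AST2600's standard IPMI interface with ipmitool from Python. The BMC address and credentials are placeholders for illustration, not values from this system.

```python
# Minimal sketch: querying the BMC over standard IPMI using ipmitool.
# The BMC host and credentials below are hypothetical placeholders.
import subprocess

BMC_HOST = "192.0.2.10"   # placeholder BMC IP on the management network
BMC_USER = "admin"        # placeholder credentials
BMC_PASS = "password"

def ipmi(*args: str) -> str:
    """Run an ipmitool command over the lanplus (RMCP+) interface and return stdout."""
    cmd = ["ipmitool", "-I", "lanplus",
           "-H", BMC_HOST, "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

if __name__ == "__main__":
    print(ipmi("chassis", "status"))   # power state and chassis info
    print(ipmi("sensor", "list"))      # temperatures, fan speeds, PSU rails
```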

Getting the 1GbE and management ports to the rear involves a path through the chassis using the thin 1GbE cables we showed earlier. Those cables plug into the back of the ASRock Rack 4UXG_IOB and expose the three front ports at the rear of the chassis. Instead of needing both front and rear I/O, you can pick and choose. One neat feature in the future might be different colored cables so it is easier to trace them from the front to the rear.

We have already shown the two riser slots for the North-South network cards. In this case, these are NVIDIA ConnectX-7 NICs. Now we will get into the rest of the rear of the chassis. As a quick aside, if you look to the middle, you can see where ASRock Rack essentially added 2U with two power supply slots on either side and three fan modules. That extra 2U accommodates the HGX tray growing from 4U in the NVIDIA H200 generation to 6U in the B200 generation.

First, at the bottom, here is the massive set of MCIO cables that go into the PCIe switch board. This shot alone has a line of sixteen MCIO cables connected for 128 lanes of PCIe Gen5.
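As a quick sanity check on that cabling, here is a rough back-of-the-envelope calculation, assuming x8 lanes per MCIO cable (implied by 128 lanes across sixteen cables) and PCIe Gen5's 32 GT/s per-lane signaling rate; the bandwidth figure is approximate.

```python
# Sixteen MCIO cables carrying 128 PCIe Gen5 lanes implies x8 per cable.
cables = 16
total_lanes = 128
lanes_per_cable = total_lanes // cables                 # 8 lanes per MCIO cable

gen5_gtps_per_lane = 32                                 # PCIe Gen5 signaling rate, GT/s
approx_gbps_per_lane = gen5_gtps_per_lane * 128 / 130   # 128b/130b encoding overhead

aggregate_gbs = total_lanes * approx_gbps_per_lane / 8  # bits -> bytes
print(f"{lanes_per_cable} lanes per cable, ~{aggregate_gbs:.0f} GB/s aggregate per direction")
# -> 8 lanes per cable, ~504 GB/s aggregate per direction (before protocol overhead)
```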

ASRock Rack has quite a few structures built around these inside the chassis for power delivery, fan control, and more.

Since this area is behind the NVIDIA HGX B200 8-GPU baseboard, these components often need heatsinks to stay cool.

Aside from the front fans on this bottom 6U area, there are nine rear fan modules as well.

With the new chassis, all of the fan modules are the same type, which is an upgrade in this generation.

These are easy hot-swap fans that just pop into place.

Next, some may wonder what those big connectors on either side of the PCIe switch board are for. They are for the NIC trays, which also have their own fans.

Here is a look at the modules with four NVIDIA ConnectX-7 400GbE NICs inside.

Each tray has four PCIe Gen5 x16 low-profile slots housing the NICs, each of which ties 1:1 to a GPU on the HGX board.

These NVIDIA ConnectX-7 NICs can come in Ethernet or InfiniBand flavors, although Ethernet feels like it has a lot of momentum right now, especially the NVIDIA Spectrum-X flavor. Each GPU gets 400Gbps of dedicated external connectivity.
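To put that 1:1 GPU-to-NIC mapping in perspective, here is a quick tally of the scale-out bandwidth using only the figures above (eight GPUs, 400Gbps per ConnectX-7).

```python
# Eight GPUs, each with a dedicated 400Gbps ConnectX-7 port to the outside world.
gpus = 8
gbps_per_nic = 400

total_gbps = gpus * gbps_per_nic
print(f"{total_gbps} Gbps of north-south bandwidth, or {total_gbps // 8} GB/s")
# -> 3200 Gbps of north-south bandwidth, or 400 GB/s
```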

The trays slide out with a lever, making them easy to service.

Next, we have the twelve power supplies.

For the system, we get twelve 3kW 80Plus Titanium power supplies made by Delta. That is up from eight in the NVIDIA H200 generation. ASRock Rack is doing full 6+6 redundancy, and it is cool to see the company offering that higher-spec full redundancy.

Each of the 3kW PSUs is also interesting since it supplies 12V power to the main server components while also supplying 54V power to the eight-GPU NVIDIA Blackwell board. Some vendors use two different types of power supplies to provide the two different voltages.

Six PSUs are installed on each side. Since each PSU supplies both 12V and 54V, we do not need to worry as much about which PSU is installed into which slot.

With those twelve power supplies installed, or 36kW of total power supply capacity, we have a lot of power for our system.
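For a rough sense of the power budget, here is the simple math behind the 6+6 configuration; the usable figure simply assumes half the supplies must be able to carry the full load to maintain N+N redundancy.

```python
# Twelve 3kW supplies in a 6+6 (N+N) configuration.
psus = 12
kw_per_psu = 3

installed_kw = psus * kw_per_psu     # 36 kW of installed PSU capacity
usable_kw = installed_kw // 2        # 18 kW while retaining full 6+6 redundancy
print(f"Installed: {installed_kw} kW, usable with N+N redundancy: {usable_kw} kW")
```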

We now have all of the components installed in the rear of the system.

Next, let us get to the block diagram and performance.


