ASRock Rack 1U10E-ROME/2T Internal Overview
Inside the server, we see a fairly standard layout. At the front of the system are the ten drive bays, followed by a fan partition and then the motherboard. Something we found intriguing is the large amount of space between the drive backplane and the fans. Our sense is that the chassis is designed so it can potentially accept larger drives. Looking forward, as more designs become 2.5″-only, removing that space could shave 1-2 inches from the length of a system like this.
The backplane is particularly intriguing as it offers both SATA and NVMe connectivity. In our external overview, we showed PCIe Gen4 linkage using this setup. Many servers do not wire for both SATA and NVMe to save costs. This is a very nice and flexible solution enabled by ASRock Rack's implementation of AMD EPYC's flexible SerDes. ASRock Rack is configuring some I/O lanes as SATA, which allows for this configuration. That is a wise move since, in a 1U system, we are practically limited in how many PCIe lanes can be exposed and used.
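To make the lane-budget tradeoff concrete, here is a rough sketch of the math. A single-socket EPYC 7002 exposes 128 flexible SerDes lanes that firmware can allocate as PCIe or SATA; the figures below are illustrative back-of-envelope numbers, not the actual board schematics.

```python
# Illustrative lane-budget math for a single-socket EPYC 7002 ("Rome") 1U design.
# The 128-lane figure is AMD's published SerDes count; the allocation below
# is a hypothetical example, not ASRock Rack's actual wiring.

TOTAL_LANES = 128

def lanes_for_front_bays(nvme_bays: int, lanes_per_drive: int = 4) -> int:
    """Lanes consumed if every front bay runs as x4 PCIe NVMe."""
    return nvme_bays * lanes_per_drive

all_nvme = lanes_for_front_bays(10)   # 10 bays x4 = 40 lanes
remaining = TOTAL_LANES - all_nvme    # left for the x16 slots, OCP NIC, M.2, 10GbE

print(all_nvme, remaining)
```

Running any of the front bays as SATA instead frees those lanes back into the budget, which is why reusing the same SerDes for SATA is attractive in a lane-constrained 1U system.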
After the drive backplane, we get an array of six 1U fan modules to keep the system cool. There are two items of note. First, ASRock Rack is using a flexible cutout for CPU cooling. We generally prefer the hard plastic airflow guides, but the solution works. Second, the fan partition can be removed using a lever to unlock the entire assembly. That can help ease service times in a tight 1U system.
The CPU socket is a standard AMD EPYC 7002 series SP3 socket flanked by eight DDR4-3200 DIMM slots, four on either side. Using 128GB LRDIMMs, as an example, one can get up to 1TB of memory. For lower-cost hosting nodes, one can use 16GB DIMMs for 128GB in the machine.
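The capacity math behind those two configurations is simple enough to sketch, assuming one DIMM per slot across the eight slots:

```python
# Memory capacity math for the eight DIMM slots, mirroring the text's examples.

DIMM_SLOTS = 8

def total_memory_gb(dimm_size_gb: int, slots: int = DIMM_SLOTS) -> int:
    """Total capacity with identical DIMMs in every slot."""
    return dimm_size_gb * slots

print(total_memory_gb(128))  # 1024 GB (1TB) with 128GB LRDIMMs
print(total_memory_gb(16))   # 128 GB for lower-cost hosting nodes
```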
The AMD EPYC 7002 series has a number of options. Perhaps the most intriguing are the “P” series of processors that are single-socket only parts. AMD has 16, 24, 32, and 64 core versions of these optimized CPUs that carry lower costs as well. We think that the P series in this system is going to be the most popular due to pricing.
The CPU and memory area takes up the majority of the top portion of the server motherboard’s PCB. Below the DIMMs we have the PCIe expansion solution. Here we can see the ASRock Rack ROMED8QM-2T motherboard with its onboard ASPEED AST2500 BMC. On the right of this photo, we have a single M.2 slot that can handle up to M.2 22110 (110mm) SSDs. For the primary expansion area, we have two PCIe Gen4 x16 slots.
The top of these two slots has a PCIe riser. We confirmed this is operating at PCIe Gen4 speeds using an NVIDIA-Mellanox ConnectX-5 NIC. The bottom slot does not have room for a riser.
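A quick back-of-envelope calculation shows why Gen4 matters for a NIC like the ConnectX-5. These figures use the published per-lane transfer rates and 128b/130b encoding; real-world throughput is somewhat lower due to protocol overhead.

```python
# Approximate unidirectional bandwidth of an x16 link by PCIe generation.
# Gen3 and Gen4 both use 128b/130b encoding; protocol overhead is ignored.

GT_PER_LANE = {"gen3": 8.0, "gen4": 16.0}  # transfer rate in GT/s per lane

def x16_bandwidth_gbs(gen: str) -> float:
    """Rough unidirectional GB/s for a x16 link of the given generation."""
    raw_gbps = GT_PER_LANE[gen] * 16       # 16 lanes
    return raw_gbps * (128 / 130) / 8      # encoding overhead, then bits -> bytes

print(round(x16_bandwidth_gbs("gen3"), 1))  # ~15.8 GB/s
print(round(x16_bandwidth_gbs("gen4"), 1))  # ~31.5 GB/s
```

Doubling per-lane speed from Gen3 to Gen4 means an x8 Gen4 device gets roughly the bandwidth of an x16 Gen3 device, which is part of why Gen4 risers and slots are valuable in lane-constrained 1U systems.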
Between the two slots, we see an OCP NIC 2.0 slot. We tried our normal Broadcom 25GbE NICs in this slot and they worked well. We will note that this is an A/B slot, so one can also use faster NICs that require the additional B connector. At the same time, in the next year or so we expect most of the market to transition to the OCP NIC 3.0 form factor since that is designed for a newer generation with PCIe Gen4 and better serviceability.
One nice small touch we wanted to show is the custom heatsink near the PCIe slot that holds the riser. The heatsink extends around the PCIe slot and is indented so the riser can fit. It may not seem like much, but this is a nice custom touch that helps cool the onboard Intel X550 10GbE controller.
We did want to discuss one area that we dubbed "the corner." This corner shows two parts of the system that we wish were changed. First, the DIMM slots are single-latch units with a fixed side. We have mis-seated DIMMs on the fixed side of this design a non-trivial number of times, having encountered it on a number of workstation motherboards previously. A number of STH readers have also run into issues getting proper seating with this DIMM slot design. We wish these were dual-latch solutions; however, this is a well-known design in the industry.
Behind the DIMM slots, we have the power and USB cables. ASRock Rack did a good job of keeping these connectors relatively easy to service in a small space, but it is still harder than on larger, less compact motherboards. That is simply due to this board needing a relatively high connector density to make the platform work. Still, it can be hard to reach some connectors that are surrounded by other components.
In our hardware overview, one can see an array of high-density cables bringing NVMe and SATA connectivity to the front of the chassis. As more systems support PCIe Gen4, and PCIe Gen5 in 2022, we expect to see more motherboards take a similar approach: a compact PCB with cables extending PCIe reach to various components. We are starting to see this happen more already, and it will become more common in the future.
Next, we are going to get to our test configuration and management before getting to performance, power consumption, and our final words.