ASRock Rack 1U2N2G-ROME/2T Review: 1U 2-Node AMD EPYC GPU Server

ASRock Rack 1U2N2G ROME 2T Overview

This one is a bit crazy. We have a server from ASRock Rack that is truly something different. This server offers two nodes, each with an AMD EPYC 7002/7003 CPU, its own BMC, 10Gbase-T networking, and even a GPU. If that sounds crazy, it just might be, and in this review, we are going to see why.

ASRock Rack 1U2N2G-ROME/2T Hardware Overview

For this one, we have an accompanying video because it is such a unique platform. You can find that video here:

As always, we suggest opening the video in its own YouTube tab, browser, or app for a better viewing experience than the embedded player. Now, let us get to the hardware, and we are going to cover the chassis and the external hardware in this section.

ASRock Rack 1U2N2G-ROME/2T External Hardware Overview

The front of the system is where things are immediately different. The server is a 1U platform at 650mm or 25.6 in deep. The front panel of the server does not have any hot-swap storage.

ASRock Rack 1U2N2G ROME 2T Front

The reason for this is that the server is designed with two nodes, each supporting an AMD EPYC CPU as well as a GPU. That means most of the front is either a few ports or vents for airflow. Node 1 is on the left while Node 2 is on the right. As we will see, these are independent nodes.

ASRock Rack 1U2N2G ROME 2T Front Node 1

Each node has an out-of-band management port, two USB 3 ports, a VGA port, and then two network ports. These network ports are 10Gbase-T ports powered by an Intel X550 NIC.
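
Since each node has its own BMC behind that out-of-band management port, one chassis shows up as two independent management endpoints on the network. As a minimal sketch, assuming the BMCs expose the standard Redfish API (common on current server BMCs) and using placeholder addresses and credentials, both nodes can be polled like this:

```python
# Minimal sketch: poll the power state of both nodes' BMCs via Redfish.
# Assumes Redfish is enabled on the BMCs; IPs and credentials are placeholders.
import requests
from requests.auth import HTTPBasicAuth

BMC_NODES = {
    "node1": "192.0.2.11",  # example TEST-NET addresses, replace with real BMC IPs
    "node2": "192.0.2.12",
}
AUTH = HTTPBasicAuth("admin", "password")  # placeholder credentials

def node_power_state(bmc_ip: str) -> str:
    base = f"https://{bmc_ip}/redfish/v1"
    # The Systems collection lists the host(s) managed by this BMC.
    # verify=False only because BMCs typically ship with self-signed certificates.
    systems = requests.get(f"{base}/Systems", auth=AUTH, verify=False, timeout=10).json()
    # Each node's BMC manages a single system, so take the first member.
    member = systems["Members"][0]["@odata.id"]
    system = requests.get(f"https://{bmc_ip}{member}", auth=AUTH, verify=False, timeout=10).json()
    return system.get("PowerState", "Unknown")

if __name__ == "__main__":
    for name, ip in BMC_NODES.items():
        print(name, node_power_state(ip))
```

The same pattern scales to a rack of these systems; each 1U chassis simply contributes two BMC endpoints instead of one.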

ASRock Rack 1U2N2G ROME 2T Front Node 2

Importantly though, not only are these nodes not hot-swappable, but there is also limited room for upgrading them. There are no real hot-swap storage or NIC expansion options.

The rear of the chassis shows two primary features. There are two GPU expansion slots, one on either side, and in the middle, we have the power supplies.

ASRock Rack 1U2N2G ROME 2T Rear

The power supplies are 1.6kW rated Delta units. There is a catch though. These power supplies are only rated for 1.6kW at 200V-240V input. If you are running at 100-120V, then the power supply limit is 1kW. In a system like this, it is easy to configure the two nodes to draw more than 1kW combined, which is only 500W per node, so the higher line voltage matters.
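
To put rough numbers on that, here is a quick back-of-the-envelope sketch in Python. The per-node component figures are assumptions for illustration (for example, a 225W TDP EPYC SKU plus a 300W data center GPU), not measurements from this system:

```python
# Back-of-the-envelope per-node power budget against the shared 1.6kW/1kW PSU limits.
# Component wattages are illustrative assumptions, not measured values.
PSU_LIMIT_HIGH_LINE = 1600  # watts at 200-240V input
PSU_LIMIT_LOW_LINE = 1000   # watts at 100-120V input
NODES = 2

# Hypothetical per-node configuration (adjust for the actual SKUs used).
cpu_tdp = 225            # e.g., an AMD EPYC 7002/7003 SKU
gpu_power = 300          # e.g., a double-width data center GPU
platform_overhead = 125  # memory, NIC, fan share, VRM losses (rough guess)

node_draw = cpu_tdp + gpu_power + platform_overhead
total_draw = node_draw * NODES

for label, limit in [("200-240V", PSU_LIMIT_HIGH_LINE), ("100-120V", PSU_LIMIT_LOW_LINE)]:
    headroom = limit - total_draw
    print(f"{label}: budget {limit}W, estimated draw {total_draw}W, headroom {headroom}W")
```

With those assumed figures, the 200V-240V budget leaves some headroom, while the same build on 100-120V would exceed the 1kW limit.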

ASRock Rack 1U2N2G ROME 2T 1.6kW PSU

Power from the redundant PSUs is fed into a power distribution board. This power distribution board is shared between the two nodes and directly powers the motherboards, fans, and GPUs in this system.

ASRock Rack 1U2N2G ROME 2T Power Distribution Board With GPU Shrouds

The two GPU slots are double-width, sized for most of today’s professional and data center GPUs.

ASRock Rack 1U2N2G ROME 2T Riser Node 2

The risers on either side are designed to hold the GPUs and are cabled to the motherboards. To achieve the proper orientation on either side of the chassis, the Node 2 riser is flipped.

ASRock Rack 1U2N2G ROME 2T Riser Node 1
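
Because the GPU slots sit on cabled risers rather than directly on the motherboards, a quick sanity check after installing a card is that the link trains at the expected PCIe generation and width. Here is a short sketch using the pynvml bindings to NVIDIA's management library, assuming an NVIDIA data center GPU is installed in the node (other vendors need their own tooling):

```python
# Sketch: verify that GPUs on the cabled risers trained at the expected PCIe link.
# Assumes NVIDIA GPUs and the nvidia-ml-py (pynvml) package; run on each node.
import pynvml

pynvml.nvmlInit()
try:
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        name = pynvml.nvmlDeviceGetName(handle)
        cur_gen = pynvml.nvmlDeviceGetCurrPcieLinkGeneration(handle)
        cur_width = pynvml.nvmlDeviceGetCurrPcieLinkWidth(handle)
        max_gen = pynvml.nvmlDeviceGetMaxPcieLinkGeneration(handle)
        max_width = pynvml.nvmlDeviceGetMaxPcieLinkWidth(handle)
        print(f"GPU {i} {name}: PCIe Gen{cur_gen} x{cur_width} "
              f"(max Gen{max_gen} x{max_width})")
finally:
    pynvml.nvmlShutdown()
```

Note that some GPUs drop to a lower link generation at idle, so check the reading under load if the current generation looks low.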

Next, let us take a look at the nodes.

4 COMMENTS

  1. Because the PSUs are redundant, that would seem to suggest that you can’t perform component level work on one node without shutting down both?

  2. I’m a little surprised to see 10Gbase-T rather than SFPs in something that is both so clearly cost optimized and which has no real room to add a NIC. Especially for a system that very, very much doesn’t look like it’s intended to share an environment with people, or even make a whole lot of sense as a small business/branch server (since hypervisors have made it so that only very specific niches would prefer more nodes vs. a node they can carve up as needed).

    Have the prices shifted enough that 10Gbase-T is becoming a reasonable alternative to DACs for 10Gb rack level switching? Is that still mostly not the favored approach; but 10Gbase-T provides immediate compatibility with 1GbE switches without totally closing the door on people who need something faster?

  3. I see a basic dual compute node setup there.

    A dual port 25GbE, 50GbE, or 100GbE network adapter in that PCIe Gen4 x16 slot and we have an inexpensive resilient cluster in a box.
