Gigabyte R181-NA0 1U 10x U.2 NVMe Server Review


Gigabyte R181-NA0 Server Overview

The Gigabyte R181-NA0 is a standard 1U platform with one big feature up front: 10x 2.5″ U.2 NVMe SSD hot-swap bays. This is, by far, the headline feature of the server. The chassis itself is only about 28.75″ deep, which means it will fit in just about any standard server rack.

Gigabyte R181 NA0 Front

Taking a look overhead, one can see the basic layout. The NVMe SSDs are in the front, with the fan partition just behind them pulling air through the chassis. Airflow is ducted over the CPU sockets from two fans for redundancy and more efficient cooling. The CPU sockets are flanked by DDR4 DIMM slots, and in the rear of the chassis we have expansion slots, NVMe cabling, and redundant power supplies.

Gigabyte R181 NA0 Top View Internal

The fans are Delta models rated in this server for up to 23,000 RPM. One item that is relatively difficult for server vendors to implement in 1U due to space constraints is hot-swap fans. The Gigabyte R181-NA0 fans do not have hot-swap carriers; instead, one needs to pull the fan cable off of the header during replacement. This is not too hard, but it is slightly more involved than replacing hot-swap fans. Modern fans are extremely reliable, so the argument can be made that it is unlikely one will ever need to replace a fan.

Gigabyte R181 NA0 Delta Fans

The dual LGA3647 CPU sockets target Intel Xeon Scalable processors. You can see that the sockets are each flanked by twelve DDR4 DIMM slots. That means the system is ready for large memory footprints (up to 3TB) today, and has the potential to utilize Intel Optane Persistent Memory alongside traditional RAM with the Cascade Lake generation.
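
That 3TB figure works out if you assume 128GB modules in all 24 slots; the module size is our assumption, not a tested configuration, but the arithmetic is easy to sanity check:

```python
# Back-of-envelope check on the maximum memory capacity.
# Assumption: 12 DIMM slots per socket, two sockets, 128GB LRDIMMs.
sockets = 2
dimms_per_socket = 12
dimm_size_gb = 128

total_gb = sockets * dimms_per_socket * dimm_size_gb
print(f"{sockets * dimms_per_socket} DIMMs x {dimm_size_gb}GB = "
      f"{total_gb}GB (~{total_gb / 1024:.0f}TB)")
# -> 24 DIMMs x 128GB = 3072GB (~3TB)
```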

Gigabyte R181 NA0 CPU Sockets And Memory

That mass of light blue cables carries the PCIe signaling from the motherboard to the front NVMe U.2 drive bays. You can see that Gigabyte’s design team uses a combination of PCIe cards and motherboard ports to provide the PCIe lanes for the front drive bays.

Gigabyte R181 NA0 SATA DOM And PCIe Cards With Cables

One of the cards used is a PCIe 3.0 x16 card that occupies one of the server’s PCIe expansion slots and provides connectivity for four drives. The motherboard supports another riser in this assembly which, in a 2U server, would provide a PCIe slot above the power supplies. Since there is no room above the power supplies in this 1U form factor, the secondary riser instead provides two more PCIe headers for U.2 drives.
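
To put the cabling in context, here is a rough lane budget for the ten front bays. The per-source split below is our reading of the photos rather than Gigabyte's official routing, so treat it as an illustration:

```python
# Rough PCIe lane budget for 10x U.2 NVMe front bays (4 lanes each).
# The per-source split is an assumption based on what is visible in the
# chassis, not an official Gigabyte block diagram.
LANES_PER_U2_DRIVE = 4
drive_bays = 10

sources = {
    "PCIe 3.0 x16 riser card (4 drives)": 16,
    "Secondary riser headers (2 drives)": 8,
    "OCP slot card + motherboard ports (4 drives)": 16,
}

required = drive_bays * LANES_PER_U2_DRIVE
provided = sum(sources.values())
print(f"Required: {required} lanes, cabled: {provided} lanes")
for name, lanes in sources.items():
    print(f"  {name}: x{lanes}")
```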

Gigabyte R181 NA0 NVMe CNV3124

One of the motherboard’s two OCP slots is also occupied by a card providing cabled connectivity for the PCIe lanes required by the front U.2 drive bays.

Gigabyte R181 NA0 NVMe VROC Key And U2 Connections

A quick note here on VROC. The system supports Intel VROC, which is Intel’s RAID solution for NVMe SSDs; specifically, it works with certain Intel NVMe SSDs. VROC requires a physical hardware key. On the Gigabyte R181-NA0, the key header sits under the PCIe 3.0 x16 card in the riser slot. Removing the riser is relatively easy, but upgrading VROC after the server is deployed in a data center may be inconvenient with this layout. On the other hand, it is unlikely that one will want to do so, which makes this more a matter of installation order than a real drawback.
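
On Linux, VROC-managed NVMe RAID volumes surface as standard md arrays, so a quick look at /proc/mdstat is one way to confirm an array is assembled after the key and drives are installed. This is a minimal sketch, not VROC-specific tooling:

```python
# Minimal sketch: list md RAID arrays (which is how VROC volumes appear
# on Linux) by reading /proc/mdstat. Assumes a Linux host.
from pathlib import Path

def list_md_arrays(mdstat_path: str = "/proc/mdstat") -> list[str]:
    arrays = []
    for line in Path(mdstat_path).read_text().splitlines():
        # Array lines look like: "md126 : active raid1 nvme0n1[0] nvme1n1[1]"
        if line.startswith("md") and " : " in line:
            arrays.append(line.strip())
    return arrays

if __name__ == "__main__":
    for entry in list_md_arrays():
        print(entry)
```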

Since we built the server, we also wanted to show off a must-have feature and configuration item: SATA DOMs. The gold SATA DOM ports power the modules without an external power cable if the SATA DOMs support the feature. We suggest ordering your server with 64GB or 128GB modules. Doing so allows you to install an OS such as VMware ESXi, or a Linux distribution, without utilizing the 10x NVMe bays for that low-value role. Our advice is to use SATA DOMs to maximize your investment in NVMe storage.
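
After installing the OS to the DOMs, it is worth confirming that the boot volume really does live on SATA and that all ten NVMe bays remain free for data. Here is a minimal sketch, assuming a Linux host and the usual device naming (sdX for SATA, nvmeXnY for NVMe):

```python
# Minimal sketch: separate SATA devices (where the DOMs show up) from
# NVMe namespaces using /sys/block. Assumes standard Linux device naming.
import os

def classify_block_devices(sys_block: str = "/sys/block"):
    sata, nvme = [], []
    for dev in sorted(os.listdir(sys_block)):
        if dev.startswith("nvme"):
            nvme.append(dev)
        elif dev.startswith("sd"):
            sata.append(dev)
    return sata, nvme

if __name__ == "__main__":
    sata, nvme = classify_block_devices()
    print(f"SATA devices (DOMs/boot): {sata}")
    print(f"NVMe namespaces (data bays): {nvme}")
```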

Gigabyte R181 NA0 SATA DOMs Installed

Even with all of the PCIe connectivity heading to the front of the chassis, there are still I/O customization opportunities. There is an OCP networking slot for your basic networking connectivity, and a PCIe x16 slot for your 100GbE or EDR InfiniBand connectivity needs.

Gigabyte R181 NA0 OCP And Riser

Moving to the rear of the chassis, we see the redundant 1.2kW 80Plus Titanium power supplies. There are a few legacy ports, including a VGA port and two USB 3.0 ports for KVM cart physical connectivity.

Gigabyte R181 NA0 Rear

Networking-wise, there is a single management LAN port and an RJ-45 style serial console port. Standard out-of-the-box networking is provided by an Intel i350 NIC with two 1GbE ports. If you configure one of these servers, you are most likely going to add 25/40/50/100GbE through the OCP or expansion slots, so the 1GbE will likely be used for provisioning and management rather than data. The 10x NVMe SSDs can push data so fast that 1GbE, and realistically even 10GbE, would be a bottleneck for most applications.
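
A quick back-of-envelope calculation shows why. Assuming roughly 3GB/s of sequential read per U.2 drive, which is a typical figure for this class of SSD rather than a measured result from this review:

```python
# Back-of-envelope: aggregate NVMe read bandwidth vs. network links.
# The ~3GB/s per-drive figure is an assumed typical number, not a
# measurement from this review.
drives = 10
per_drive_gbps = 3 * 8          # ~3 GB/s sequential read -> 24 Gbit/s
aggregate_gbps = drives * per_drive_gbps

for link_gbps in (1, 10, 25, 100):
    ratio = aggregate_gbps / link_gbps
    print(f"{link_gbps:>3} GbE link: drives can outrun it ~{ratio:.0f}x over")
# -> ~240 Gbit/s of flash vs. a 1GbE (240x) or even 10GbE (24x) link
```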

Next, we will look at the management interface and a block diagram of the platform. We are then going to look at performance, power consumption, and then give our final thoughts on the platform.

REVIEW OVERVIEW
Design & Aesthetics: 9.2
Performance: 9.7
Feature Set: 9.4
Value: 9.3
The Gigabyte R181-NA0 1U 10x U.2 NVMe server is the form factor we see as the dominant hyper-converged and software-defined storage platform for Intel Xeon Scalable systems. Gigabyte's design team delivered a server that can easily be filled with SSDs and used for high-performance storage clusters.

12 COMMENTS

  1. Only 2 UPI links? If true, it’s not really made for the Gold 6 series or Platinum, and to be honest, why would you need such an amount of compute power in a storage server?

    I’ll stick to a brand that can handle 24 NVMe-drives with just one CPU.

  2. @Misha,

    You are obviously referring to a different CPU brand, since there is no single Intel CPU which can support 10x NVMe drives.

  3. @BinkyTO,

    10 NVMe’s is only 40 PCIe lanes, so it should be possible with Xeon Scalable; you just don’t have many lanes left for other equipment.

  4. @misha hyperconverged is one of the fastest-growing sectors and a multi-billion dollar hardware market. You’d need two CPUs since you can’t do even a single 100G link on an x8.

    I’d say this looks nice

  5. @Tommy F
    Only two 10.4 GT/s UPI links and you are easily satisfied.
    24×4=96 PCIe lanes, so there are 32 left; 4 for the chipset etc. and a boot drive leaves you with 28 PCIe 3.0 lanes.
    28 PCIe lanes × 985MB/s × 8 bits = ~220 Gbit/s, good enough for 2018.
    And let’s not forget octa-channel memory (DDR4-2666) on 1 CPU (7551P) for $2,200 vs. 2x 5119T ($1,555 each) with only 6-channel DDR4-2400.

    In 2019 EPYC 2 will be released with PCIe-4, which has double the speed of PCIe-3.

    That is not taking into account Spectre, Meltdown, Foreshadow, etc.

  6. @Patrick there’s a small error in the legend of the storage performance results. Both colors are labeled with read performance where I expect the black bars to represent write performance instead.

    What I don’t see is the audience for this solution. With an effective raw capacity of 20TB maximum (and probably a 1:10 ratio between disk and platform cost), why would anyone buy this platform instead of a dedicated JBOF or other ruler form factor platforms? The cost per TB as well as the storage density of the server reviewed here seems significantly worse.

  7. David- thanks for the catch. It is fixed. We also noted that we tried 8TB drives; we just did not have a set of 10 for the review. 2TB is now on the lower end of the capacity scale for new enterprise drives. There is a large market for these 10x NVMe 1U’s, which is why the form factor is so prevalent.

    Misha – although I may personally like EPYC, and we have deployed some EPYC nodes into our hosting cluster, this review was not focused on that as an alternative. Most EPYC systems still crash if you try to hot-swap an NVMe SSD, while that feature just works on Intel systems. We actually use mostly NVMe AICs to avoid remote hands trying to remove/insert NVMe drives on EPYC systems.

    Also, your assumption that you will be able to put a 2nd generation EPYC in an existing system and have it run PCIe Gen4 to all of the devices is incorrect. You are then using both CPUs and systems that do not currently exist to compare to a shipping product.

  8. Few things:
    1. Such a system with a single EPYC processor would save a customer money, since you can do everything with one CPU.
    2. 10 NVMe drives with those fans: if you heavily use those CPUs (let’s say 60-90%), the speed of those NVMe drives will drop rapidly, since the fans will run faster, pull in more air from outside, and cool the drives too much, which reduces SSD read speed. I didn’t see anything about this mentioned in the article.

  9. @Patrick – You can also buy EPYC systems that do NOT crash when hot-swapping an NVMe SSD; you even mentioned it in an earlier thread on STH.

    I did not assume that you can swap out an EPYC 1 for an EPYC 2 and get PCIe 4.0. When it is just for more compute speed, it should work (same socket), as promised many times by AMD. When you want to make use of PCIe 4.0, you will need a new motherboard. When you want to upgrade from Xeon to Xeon Scalable you have no choice; you have to upgrade both the motherboard and the CPU.

  10. Hetz, we have a few hundred dual Xeon E5 and Scalable 10x NVMe 1U’s and have never seen read speeds drop due to fans.

    @Patrick don’t feed the troll. I don’t envy that part of your job.
