Gigabyte R181-NA0 Server Overview
The Gigabyte R181-NA0 is a standard 1U platform with a big feature up front: 10x U.2 2.5″ NVMe SSD hot-swap bays. This is, by far, the headline feature of the server. The chassis itself is only about 28.75″ deep, which means it will fit in just about any standard server rack.
Taking a look overhead, one can see the basic layout. The NVMe SSDs are in the front, with a fan partition just behind them pulling air through the chassis. Airflow is ducted over the CPU sockets from two fans for redundancy and more efficient cooling. The CPU sockets are flanked by DDR4 DIMM slots, and in the rear of the chassis we have expansion slots, NVMe cabling, and redundant power supplies.
The fans are Delta models rated at up to 23,000 RPM in this server. One item that is relatively difficult for server vendors to implement, due to space constraints, is hot-swapping 1U fans. The Gigabyte R181-NA0 fans do not have hot-swap carriers. Instead, one needs to pull the fan cable off of the header during replacement. This is not too hard, but it is slightly more involved than replacing hot-swap fans. Modern fans are extremely reliable, so the argument can be made that it is unlikely you will ever need to replace one.
The dual LGA3647 CPU sockets target the Intel Xeon Scalable platform. You can see that the sockets are flanked by twelve DDR4 DIMM slots each. That means the system is ready for large memory footprints (up to 3TB) today, and has the potential to utilize Intel Optane Persistent Memory alongside traditional RAM with the Cascade Lake generation.
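As a quick sanity check on that 3TB figure, here is the arithmetic behind it. The per-DIMM size is our assumption (128GB modules across all 24 slots):

```python
# Rough capacity math for the 24 DDR4 DIMM slots (12 per CPU socket).
# Assumes 128GB modules, the largest standard DIMMs for this generation.
sockets = 2
dimms_per_socket = 12
dimm_size_gb = 128

total_gb = sockets * dimms_per_socket * dimm_size_gb
total_tb = total_gb / 1024

print(f"{sockets * dimms_per_socket} DIMMs x {dimm_size_gb}GB = {total_gb}GB ({total_tb:.0f}TB)")
```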
That mass of light blue cables is used to convey the PCIe signaling from the motherboard to the front NVMe U.2 drive bays. You can see that Gigabyte’s design team is using PCIe cards plus motherboard ports to provide PCIe lanes for the front drive bays.
One of the cards used is a PCIe 3.0 x16 card that occupies one of the server’s PCIe expansion slots. This provides connectivity for four drives. The motherboard supports another riser in this assembly which, in a 2U server, provides a PCIe slot above the power supplies. In this 1U form factor, since there is no room above the power supplies, the secondary riser provides two more PCIe headers for U.2 drives.
One of the motherboard’s two OCP slots is also occupied by a card providing cabled connectivity for the PCIe lanes required by the U.2 front drive bays.
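Adding up the lanes makes the cabling scheme clearer. Each U.2 bay needs a PCIe 3.0 x4 link; the per-source drive counts below follow the text for the x16 card and the secondary riser, while attributing the remaining four drives to the OCP card is our assumption:

```python
# Lane accounting for the 10 front U.2 bays, each needing a PCIe 3.0 x4 link.
# Drive counts per source: x16 card and riser per the article; the remainder
# on the OCP card is assumed.
lanes_per_drive = 4
sources = {"PCIe 3.0 x16 card": 4, "secondary riser": 2, "OCP slot card": 4}

total_drives = sum(sources.values())
total_lanes = total_drives * lanes_per_drive
print(f"{total_drives} drives x {lanes_per_drive} lanes = {total_lanes} PCIe lanes")
```

That 40-lane total is why a single expansion card cannot feed all ten bays and the design spreads the load across the slot, riser, and OCP mezzanine.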
A quick note here on VROC. The system supports Intel VROC, which is Intel’s RAID solution for NVMe SSDs. Specifically, it works with some Intel NVMe SSDs. VROC requires a physical key. On the Gigabyte R181-NA0, this sits under the PCIe 3.0 x16 card in the riser slot. Removing the riser is relatively easy, but upgrading VROC in a data center using this configuration may be difficult. On the other hand, it is unlikely that one will wish to do so, which makes this more a matter of ordering installation steps.
Since we built the server, we also wanted to show off a must-have feature and configuration item: SATA DOMs. The gold SATA DOM ports power the modules without an external power cable if the SATA DOMs support the feature. We suggest ordering your server with 64GB or 128GB modules. Doing so allows you to install an OS such as VMware ESXi, or a Linux distribution, without utilizing the 10x NVMe bays for that low-value role. Our advice is to use SATA DOMs to maximize your investment in NVMe storage.
Even with all of the PCIe connectivity heading to the front of the chassis, there are still I/O customization opportunities. There is an OCP networking port for your basic networking connectivity. There is also a PCIe x16 port for your 100GbE or EDR Infiniband connectivity needs.
Moving to the rear of the chassis, we see the redundant 1.2kW 80Plus Titanium power supplies. There are a few legacy ports, including a VGA port and two USB 3.0 ports for KVM cart physical connectivity.
Networking-wise, there is a single management LAN port and also an RJ-45 style serial console port. Standard out-of-the-box networking is provided by an Intel i350 NIC with two 1GbE ports. If you configure one of these servers, you are most likely going to add 25/40/50/100GbE through the OCP or expansion slots. This 1GbE will likely be used more for provisioning and management rather than data. The 10x NVMe SSDs are able to push data so fast that 1GbE, and realistically 10GbE, would be a bottleneck for most applications.
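A back-of-the-envelope comparison shows just how severe that bottleneck is. The ~3GB/s per-drive figure is our assumption, a typical sequential read rate for a PCIe 3.0 x4 U.2 SSD:

```python
# Compare aggregate NVMe read throughput to common NIC speeds.
# Assumes ~3GB/s sequential reads per U.2 drive (typical PCIe 3.0 x4 SSD).
drives = 10
per_drive_gbytes = 3.0  # GB/s, assumed

nvme_gbits = drives * per_drive_gbytes * 8  # convert GB/s to Gbps
for nic_gbits in (1, 10, 100):
    ratio = nvme_gbits / nic_gbits
    print(f"{nic_gbits:>3}GbE: storage can outrun the NIC roughly {ratio:.0f}x")
```

Even a 100GbE link can be saturated by a fraction of the array, which is why this class of server is usually deployed with the fastest networking available.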
Next, we will look at the management interface and a block diagram of the platform. We are then going to look at performance, power consumption, and then give our final thoughts on the platform.