Gigabyte R181-NA0 1U 10x U.2 NVMe Server Review


Gigabyte R181-NA0 CPU Performance

As mentioned earlier in this article, we swapped in sample pairs of processors from across the Intel Xeon Scalable range. We did not tell the company we were doing this, so it had no input on the process or the CPUs used. At the same time, we wanted to provide a view of the different CPU levels. Intel has around 50 publicly available Xeon Scalable (Skylake-SP) SKUs, and the STH/ DemoEval lab has just over half of them on hand for testing. We did not have time to run through every set, so instead we picked a few samples to show how stepping up CPU levels impacts performance.

Running through our standard test suite generated over 1,000 data points for each set of CPUs. We are cherry-picking a few to give a sense of CPU scaling.

Python Linux 4.4.2 Kernel Compile Benchmark

This is one of the most requested benchmarks for STH over the past few years. The task is simple: we take a standard configuration file and the Linux 4.4.2 kernel from kernel.org, then compile with the standard auto-generated configuration utilizing every thread in the system. We express results in compiles per hour to make them easier to read.
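For readers who want to replicate the idea at home, here is a minimal sketch of the method, assuming the linux-4.4.2 sources from kernel.org are already unpacked in the working directory. This illustrates the concept, not our exact Linux-Bench harness:

```python
import multiprocessing
import subprocess
import time

# Assumes the linux-4.4.2 tree from kernel.org is unpacked in the
# current directory; this mirrors the idea, not STH's exact harness.
threads = multiprocessing.cpu_count()
subprocess.run(["make", "defconfig"], check=True)  # standard auto-generated config

start = time.time()
subprocess.run(["make", f"-j{threads}"], check=True)  # use every thread
elapsed = time.time() - start

# Express the result the same way as the chart: compiles per hour
print(f"{3600 / elapsed:.2f} compiles per hour")
```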

Gigabyte R181 NA0 Linux Kernel Compile Benchmark CPU Options

In this test, we see a fairly wide scaling range between the top and bottom configurations. If you care about CPU performance, Intel has a wide range of Xeon Scalable options available for this server.

c-ray 1.1 Performance

We have been using c-ray for our performance testing for years now. It is a ray tracing benchmark that is extremely popular for showing differences between processors under multi-threaded workloads. We are using our new Linux-Bench2 8K render to show the differences.
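As a rough illustration, driving the multi-threaded c-ray binary for an 8K frame looks something like the sketch below. The binary name, scene file, and flags follow the common c-ray-mt distribution and are assumptions, not our exact Linux-Bench2 invocation:

```python
import multiprocessing
import subprocess
import time

# c-ray-mt reads the scene on stdin and writes a PPM image to stdout;
# the scene file name and 8K geometry here are illustrative assumptions.
threads = multiprocessing.cpu_count()
start = time.time()
with open("scene") as scene, open("out.ppm", "wb") as out:
    subprocess.run(
        ["./c-ray-mt", "-t", str(threads), "-s", "7680x4320"],
        stdin=scene, stdout=out, check=True,
    )
print(f"8K render time: {time.time() - start:.1f} s")
```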

Gigabyte R181 NA0 C Ray 8K Benchmark CPU Options

Here the difference between the lower-end Xeon Bronze configurations and the Xeon Silver 4116 is so large that it skews the scale of our chart. If you are buying this server, we suggest looking at the higher end of the Silver range at a minimum, and more likely the Xeon Gold 6100 range.

7-zip Compression Performance

7-zip is a widely used, cross-platform compression/ decompression program. We started using it during our early days of Windows testing, and it is now part of Linux-Bench.
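If you want a quick look at your own hardware, 7-Zip ships with a built-in benchmark. A minimal wrapper might look like the sketch below; the output parsing is a rough assumption based on the summary lines the tool prints:

```python
import subprocess

# `7z b` runs the built-in LZMA benchmark and prints compression and
# decompression throughput; the summary lines start with "Avr:"/"Tot:".
result = subprocess.run(["7z", "b"], capture_output=True, text=True, check=True)
for line in result.stdout.splitlines():
    if line.lstrip().startswith(("Avr:", "Tot:")):
        print(line)
```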

Gigabyte R181 NA0 7zip Benchmark CPU Options

Our compression test shows a similar ranking among the CPU options. Modern servers rely on compression at a number of points in their workloads, so this is a useful chart for gauging relative performance.

OpenSSL Performance

OpenSSL is widely used to secure communications between servers. It is an important component in many server stacks. We first look at our sign tests:
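The underlying idea can be reproduced with the standard `openssl speed` tool; here is a minimal sketch, assuming RSA 2048 as the algorithm (the exact key sizes we chart may differ):

```python
import multiprocessing
import subprocess

# Runs one speed worker per thread; openssl prints signs/s and
# verifies/s for each key size at the end of the run.
threads = multiprocessing.cpu_count()
subprocess.run(
    ["openssl", "speed", "-multi", str(threads), "rsa2048"],
    check=True,
)
```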

Gigabyte R181 NA0 OpenSSL Sign Benchmark CPU Options

Here are the verify results:

Gigabyte R181 NA0 OpenSSL Verify Benchmark CPU Options

Here you can see the impact of high core counts and clock speeds. When you look at the total investment in a server, the CPUs are only a portion of the cost, and moving up the stack is an opportunity to better leverage the rest of the system investment.

Chess Benchmarking

Chess is an interesting use case since it has almost unlimited complexity. Over the years, we have received a number of requests to bring back chess benchmarking. We have been profiling systems and are ready to start sharing results:
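We are not detailing our harness here, but as a stand-in illustration, a chess engine with a built-in benchmark such as Stockfish produces a comparable nodes-per-second figure. Treat the engine choice and the parsing below as assumptions, not our actual test:

```python
import subprocess

# Stockfish's built-in bench searches a fixed set of positions and
# reports "Nodes searched" and "Nodes/second"; note that it prints
# the summary on stderr. The engine is a stand-in, not STH's tool.
result = subprocess.run(["stockfish", "bench"], capture_output=True, text=True)
for line in (result.stderr + result.stdout).splitlines():
    if "Nodes" in line:
        print(line)
```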

Gigabyte R181 NA0 Chess Benchmark CPU Options

Here is a good example of where a relatively small cost increase can yield a massive performance improvement, both moving from Xeon Bronze to the Silver/ Gold 5100 lines and from the Silver/ Gold 5100 lines to the Gold 6100 series.

GROMACS STH Small AVX2/ AVX-512 Enabled

We have a small GROMACS molecule simulation we previewed in our first AMD EPYC 7601 Linux benchmarks piece. In Linux-Bench2 we use a “small” test for single- and dual-socket machines; our medium test is more appropriate for higher-end dual- and quad-socket machines. Our GROMACS test will use the AVX-512 and AVX2 extensions if they are available.
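For a sense of how such a run is driven, here is a rough sketch of a GROMACS throughput measurement; the `bench.tpr` input and step count are placeholders rather than our actual “small” case:

```python
import multiprocessing
import subprocess

# mdrun reports ns/day in its log; GROMACS picks AVX-512 or AVX2
# kernels automatically when built with the matching SIMD support.
# bench.tpr and the step count are illustrative placeholders.
threads = multiprocessing.cpu_count()
subprocess.run(
    ["gmx", "mdrun", "-s", "bench.tpr", "-nsteps", "10000",
     "-ntomp", str(threads), "-resethway"],
    check=True,
)
```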

Gigabyte R181 NA0 GROMACS STH Small Benchmark CPU Options

Here you can see the impact of dual-port FMA AVX-512 on the Intel Xeon Scalable CPUs, specifically the Gold 6100 and Platinum 8100 series parts. The enormous gap between the Gold 6100 series and the Gold 5100 series and below is caused by the feature being enabled only on higher-end CPUs. You can read more about this in our Intel Xeon Scalable Processor Family Microarchitecture Overview. Suffice it to say, the feature has an enormous impact on AVX-512 workloads.

Next, we are going to conclude our Gigabyte R181-NA0 review with storage performance, power consumption, and our final words.

REVIEW OVERVIEW
Design & Aesthetics: 9.2
Performance: 9.7
Feature Set: 9.4
Value: 9.3
The Gigabyte R181-NA0 1U 10x U.2 NVMe server is the form factor we see as the dominant hyper-converged and software-defined storage platform for Intel Xeon Scalable systems. Gigabyte's design team delivered a server that can easily be filled with SSDs and used for high-performance storage clusters.

12 COMMENTS

  1. Only 2 UPI links? If true, it’s not really made for the Gold 6100 series or Platinum, and to be honest, why would you need such an amount of compute power in a storage server?

     I’ll stick to a brand that can handle 24 NVMe drives with just one CPU.

  2. @Misha,

    You are obviously referring to a different CPU brand, since there is no single Intel CPU which can support 10x NVMe drives.

  3. @BinkyTO,

     10 NVMe drives is only 40 PCIe lanes, so it should be possible with Xeon Scalable; you just don’t have many lanes left for other equipment.

  4. @misha hyperconverged is one of the fastest growing sectors and a multi-billion dollar hardware market. You’d need two CPUs since you can’t do even a single 100G link on an x8.

    I’d say this looks nice

  5. @Tommy F
     Only two 10.4 GT/s UPI links and you are easily satisfied.
     24 × 4 = 96 PCIe lanes, so there are 32 left; minus 4 for the chipset etc. and a boot drive, that leaves you with 28 PCIe 3.0 lanes.
     28 PCIe lanes × 985 MB/s × 8 bits = ~220 Gbit/s, good enough for 2018.
     And let’s not forget octa-channel memory (DDR4-2666) on one CPU (7551P) for $2,200 vs. 2 x 5119T ($1,555 each) with only six-channel DDR4-2400.

     In 2019, EPYC 2 will be released with PCIe 4.0, which has double the speed of PCIe 3.0.

     Not taking into account Spectre, Meltdown, Foreshadow, etc.

  6. @Patrick there’s a small error in the legend of the storage performance results. Both colors are labeled with read performance where I expect the black bars to represent write performance instead.

     What I don’t see is the audience for this solution. With an effective raw capacity of 20TB maximum (and probably a 1:10 ratio between disk and platform cost), why would anyone buy this platform instead of a dedicated JBOF or other ruler-format platforms? The cost per TB as well as the storage density of the server reviewed here seems significantly worse.

  7. David - thanks for the catch. It is fixed. We also noted that we tried 8TB drives; we just did not have a set of 10 for the review. 2TB is now on the lower end of the capacity scale for new enterprise drives. There is a large market for these 10x NVMe 1U systems, which is why the form factor is so prevalent.

     Misha – although I may personally like EPYC, and we have deployed some EPYC nodes into our hosting cluster, this review was not focused on that as an alternative. Most EPYC systems still crash if you try to hot-swap an NVMe SSD, while that feature just works on Intel systems. We actually use mostly NVMe AICs to avoid remote hands trying to remove/ insert NVMe drives on EPYC systems.

     Also, your assumption that you will be able to put a second-generation EPYC in an existing system and have it run PCIe Gen4 to all of the devices is incorrect. You are comparing CPUs and systems that do not currently exist to a shipping product.

  8. A few things:
     1. Such a system with a single EPYC processor would save the customer money, since you can do everything with one CPU.
     2. 10 NVMe drives with those fans: if you heavily use those CPUs (say 60-90%), the fans will spin faster, pulling more air from outside and cooling the drives too much, which reduces SSD read speed. I didn’t see anything about this in the article.

  9. @Patrick – You can also buy EPYC systems that do NOT crash when hot-swapping an NVMe SSD; you even mentioned it in an earlier thread on STH.

     I did not assume that you can swap out the EPYC 1 for an EPYC 2 and get PCIe 4.0. If it is just for more compute speed, it should work (same socket), as AMD has promised many times. If you want to make use of PCIe 4.0, you will need a new motherboard. When you wanted to upgrade from Xeon to Xeon Scalable, you had no choice; you had to upgrade both the motherboard and the CPU.

  10. Hetz, we have a few hundred dual Xeon E5 and Xeon Scalable 10x NVMe 1U systems and have never seen read speeds drop due to fans.

    @Patrick don’t feed the troll. I don’t envy that part of your job.
