Gigabyte R272-Z32 Review: This 24x NVMe AMD EPYC 7002 Server is a Home Run


Gigabyte R272-Z32 Performance

For this exercise, we are using our legacy Linux-Bench scripts, which give us the cross-platform "least common denominator" results we have been collecting for years, alongside several results from our updated Linux-Bench2 scripts. Starting with our 2nd Generation Intel Xeon Scalable benchmarks, we added a number of our workload testing features to the mix as the next evolution of our platform.

At this point, our benchmarking sessions take days to run and we are generating well over a thousand data points. We are also running workloads for software companies that want to see how their software works on the latest hardware. As a result, this is a small sample of the data we are collecting and can share publicly. Our position is always that we are happy to provide some free data but we also have services to let companies run their own workloads in our lab, such as with our DemoEval service. What we do provide is an extremely controlled environment where we know every step is exactly the same and each run is done in a real-world data center, not a test bench.

We are going to show off a few results, and highlight a number of interesting data points in this article.

Python Linux 4.4.2 Kernel Compile Benchmark

This is one of the most requested benchmarks for STH over the past few years. The task is simple: we take the Linux 4.4.2 kernel from kernel.org, apply the standard auto-generated configuration, and compile it utilizing every thread in the system. We are expressing results in terms of compiles per hour to make the results easier to read:

Gigabyte R272 Z32 Linux Kernel Compile Benchmarks
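Since readers often ask exactly what this measures, here is a minimal sketch of the metric, assuming a workflow like ours; the actual Linux-Bench script differs in detail:

```shell
# Hedged sketch of the compiles-per-hour metric; assumes the linux-4.4.2
# sources and normal build dependencies are present. Not the exact
# Linux-Bench script.
compiles_per_hour() {
  # $1 = elapsed seconds for one full compile
  awk -v s="$1" 'BEGIN { printf "%.2f\n", 3600 / s }'
}

# A timed, all-thread build would look like:
# cd linux-4.4.2 && make defconfig
# start=$(date +%s)
# make -j"$(nproc)" >/dev/null 2>&1
# elapsed=$(( $(date +%s) - start ))
# compiles_per_hour "$elapsed"
compiles_per_hour 120   # a 120-second build works out to 30.00 compiles/hour
```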

Other platforms are better suited to the lower-end AMD EPYC offerings. With a full set of 16 DIMM slots and all of the storage I/O here, we think the AMD EPYC 7702P, EPYC 7502P, and EPYC 7402P will be popular in the Gigabyte R272-Z32. Gigabyte's platform allows one to fully utilize the feature set of these AMD CPUs: a lot of cores, RAM, and I/O.

c-ray 1.1 Performance

We have been using c-ray for our performance testing for years now. It is a ray tracing benchmark that is extremely popular to show differences in processors under multi-threaded workloads. We are going to use our 8K results which work well at this end of the performance spectrum.

Gigabyte R272 Z32 C Ray 8K Benchmark
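For those unfamiliar with the tool, an 8K c-ray-mt run looks roughly like the following; the flags and scene file are our reconstruction, as Linux-Bench wraps the benchmark with its own scene:

```shell
# Illustrative c-ray-mt invocation (flags assumed; Linux-Bench supplies its
# own scene file and wrapper):
#   ./c-ray-mt -t "$(nproc)" -s 7680x4320 -i scene -o /dev/null
# At 8K there is enough work to keep even 128 threads busy:
pixels=$((7680 * 4320))
echo "8K frame: $pixels pixels"
```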

We wanted to take a quick pause and note the enormous performance delta these processor options have. They range from an 8-core $450 CPU to a 64-core $7000 CPU.

7-zip Compression Performance

7-zip is a widely used compression/decompression program that works cross-platform. We started using the program during our early days of Windows testing. It is now part of Linux-Bench.

Gigabyte R272 Z32 7zip Compression Benchmark
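7-Zip includes a built-in benchmark mode, which is roughly what our harness exercises. A sketch follows; the summary-line column layout is an assumption based on typical p7zip output:

```shell
# The real run would be 7-Zip's built-in benchmark across all threads:
#   7z b -mmt="$(nproc)"
# Its "Tot:" summary line ends with a total MIPS rating. Parsing a sample
# line (numbers below are illustrative, not measured):
sample="Tot:             693   31660  219399"
total_mips=$(printf '%s\n' "$sample" | awk '/^Tot:/ {print $NF}')
echo "total rating: $total_mips MIPS"
```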

For some perspective here, the Intel Xeon Platinum 8280 and Platinum 8276L SKUs we tested fall between the 24-core AMD EPYC 7402P ($1250) and the AMD EPYC 7502P ($2300). You will need two Intel Xeon CPUs to run a 24-bay NVMe system like the Gigabyte R272-Z32.

OpenSSL Performance

OpenSSL is a cryptographic toolkit widely used to secure communications between servers, and it is an important component in many server stacks. We first look at our sign tests:

Gigabyte R272 Z32 OpenSSL Sign Benchmark

Here are the verify results:

Gigabyte R272 Z32 OpenSSL Verify Benchmark
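The sign and verify figures above come from runs along these lines; the exact flags are our assumption, and the harness may differ:

```shell
# A multi-process RSA speed run would be invoked as:
#   openssl speed -multi "$(nproc)" rsa2048
# Each rsa result line ends with signs/s and verifies/s. Parsing a sample
# line (the values below are illustrative, not measured):
sample="rsa 2048 bits 0.000510s 0.000015s 1961.2 66001.1"
set -- $sample
sign_per_s=$6
verify_per_s=$7
echo "sign/s=$sign_per_s verify/s=$verify_per_s"
```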

With the AMD EPYC 7002 series, AMD again is pushing its single-socket value proposition by offering “P” SKUs. These “P” SKUs offer a substantial discount over non-P parts and as a result, we see the AMD EPYC 7702P being a much more common option in the Gigabyte R272-Z32 than the EPYC 7742.

UnixBench Dhrystone 2 and Whetstone Benchmarks

Some of the longest-running tests at STH are the venerable UnixBench 5.1.3 Dhrystone 2 and Whetstone results. They are certainly aging; however, we constantly get requests for them, and many angry notes when we leave them out. UnixBench is widely used, so we are including it in this data set. Here are the Dhrystone 2 results:

Gigabyte R272 Z32 UnixBench Dhrystone 2 Benchmark

Here are the Whetstone results:

Gigabyte R272 Z32 UnixBench Whetstone Benchmark
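For anyone reproducing these, UnixBench 5.1.3's Run script takes test names and a parallel copy count; the invocation below is a hedged example, with the test names taken from the 5.1.3 script:

```shell
# UnixBench 5.1.3 selects tests by name; Dhrystone 2 and Whetstone are
# "dhry2reg" and "whetstone-double" in its Run script:
COPIES=$(nproc 2>/dev/null || echo 4)
#   ./Run -c "$COPIES" dhry2reg whetstone-double
echo "would run UnixBench with $COPIES parallel copies"
```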

From a raw CPU performance perspective, the single AMD EPYC 7702P and EPYC 7742 configurations are competitive with dual Intel Xeon Platinum 8280/ Platinum 8276 systems.

Chess Benchmarking

Chess is an interesting use case since it has almost unlimited complexity. Over the years, we have received a number of requests to bring back chess benchmarking. We have been profiling systems and are ready to start sharing results:

Gigabyte R272 Z32 Chess Benchmark
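Without going into our exact engine and settings, purely as an illustration of the genre, an engine such as Stockfish ships a built-in bench command (this is our example, not necessarily what our harness uses):

```shell
# Stockfish's built-in benchmark takes hash size (MB), threads, and depth:
#   stockfish bench 16 "$(nproc)" 13
THREADS=$(nproc 2>/dev/null || echo 4)
echo "example: stockfish bench 16 $THREADS 13"
```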

AMD has a very strong single-socket value proposition. We went over the market impact of their revolutionary architecture in AMD EPYC 7002 Series Rome Delivers a Knockout. The other key item to keep in mind when configuring the Gigabyte R272-Z32 is that the entire platform can run using a single CPU. Depending on your software-defined storage application, NIC being used, and workloads you may want to run on the platform, you have a fairly wide range of CPU options from AMD. We focused on the five AMD EPYC “P” series parts and two others that we think may make sense in a single-socket configuration like the Gigabyte R272-Z32.

Next, we are going to cover storage performance and power consumption before getting to our final words.

REVIEW OVERVIEW
Design & Aesthetics: 9.5
Performance: 9.6
Feature Set: 9.6
Value: 9.5
Patrick has been running STH since 2009 and covers a wide variety of SME, SMB, and SOHO IT topics. Patrick is a consultant in the technology industry and has worked with numerous large hardware and storage vendors in Silicon Valley. The goal of STH is simply to help users find information about server, storage, and networking building blocks. If you have any helpful information, please feel free to post on the forums.

17 COMMENTS

  1. isn’t that only 112 pcie lanes total? 96 (front) + 8 (rear PCIe slot) + 8 (2 m.2 slots). did they not have enough space to route the other 16 lanes to the unused PCIe slot?

  2. M.2 to U.2 converters are pretty cheap.
    Use Slot 1 (16x PCIe4) for the network connection 200 gbit/s
    Use Slot 5 and the 2xM.2 for the NVMe-drives.

  3. We used targets on each drive, not as a big RAID 0. The multiple simultaneous drive access fits our internal use pattern more closely and is likely closer to how most will be deployed.

  4. CLL – we did not have a U.2 Gen4 NVMe SSD to try. Also, some of the lanes are hung off PCIe Gen3 lanes so at least some drives will be PCIe Gen3 only. For now, we could only test with PCIe Gen3 drives.

  5. Thanks Patrick for the answer. For our application we would like to use one big RAID. Do you know if it is possible to configure this on the EPYC system? With Intel this seems to be possible by spanning disks over VMDs using VROC.

  6. Intel is making life easy for AMD.
    “Xeon and Other Intel CPUs Hit by NetCAT Security Vulnerability, AMD Not Impacted”
    CVE-2019-11184

  7. I may be missing something obvious, and if so please let me know. But it seems to me that there is no NVMe drive in existence today that can come near saturating an x4 NVMe connection. So why would you need to make sure that every single one of the drive slots in this design has that much bandwidth? Seems to me you could use x2 connections and get far more drives, or far less cabling, or flexibility for other things. No?

  8. Patrick,

    If you like this Gigabyte server, you would love the Lenovo SR635 (1U) and SR655 (2U) systems!
    – Universal drive backplane; supporting SATA, SAS and NVMe devices
    – Much cleaner drive backplane (no expander cards and drive cabling required)
    – Support for up to 16x 2.5″ hot-swap drives (1U) or 32x 2.5″ drives (2U);
    – Maximum of 32x NVMe drives with 1:2 connection/over-subscription (2U)

  9. Thanks for the pointer, it must have skipped my mind.
    One comment about that article: It doesn’t really highlight that the front panel drive bays are universal (SATA, SAS and NVMe).
    This is a HUGE plus compared to other offerings, the ability to choose the storage interface that suits the need, at the moment, means that units like the Lenovo have much more versatility!

  10. This is what I was looking to see next for EPYC platforms… I always said it has Big Data written all over it… A 20 server rack has 480 drives… 10 of those is 4800 drives… 100 is 48000 drives… and at a peak of ~630W each, that’s an astounding amount of storage at around 70KW…
    I can see Twitter and Google going HUGE on these since their business is data… Of course DropBox can consolidate on these even from Naples…
