Gigabyte R272-Z32 Review: This 24x NVMe AMD EPYC 7002 Server is a Home Run


Gigabyte R272-Z32 Storage Performance

Since we usually use Intel DC P3520 2TB drives for 24-bay storage arrays, we wanted to show those alongside some Micron 9300 Pro 3.84TB SSDs. We do not have 24x Micron 9300 Pros, so instead we filled the remaining bays and wanted to show performance across all 24 drives.

Gigabyte R272 Z32 24 Bay NVMe Performance

One of the reasons that we typically use the Intel DC P3520s, aside from having them in quantity, is that 24-bay dual Intel Xeon systems rely on PCIe switches, with only a PCIe 3.0 x16 uplink to one or two switches. With so little uplink bandwidth on most Intel Xeon systems, using lower-speed SSDs does not matter since the PCIe switch uplink is the bottleneck.
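
To put rough numbers on that bottleneck, here is a quick back-of-the-envelope sketch. The figures are assumptions for illustration: roughly 1GB/s of usable bandwidth per PCIe 3.0 lane, x4 lanes per U.2 drive, and one or two x16 switch uplinks.

```python
# Back-of-the-envelope PCIe switch oversubscription for a 24-bay Xeon design.
# Assumption: ~1 GB/s usable per PCIe 3.0 lane, x4 lanes per U.2 drive.
GB_S_PER_GEN3_LANE = 1.0

drives, lanes_per_drive = 24, 4
downstream_gb_s = drives * lanes_per_drive * GB_S_PER_GEN3_LANE  # 96 GB/s demand

for uplinks in (1, 2):  # one or two x16 uplinks to the switch(es)
    uplink_gb_s = uplinks * 16 * GB_S_PER_GEN3_LANE
    ratio = downstream_gb_s / uplink_gb_s
    print(f"{uplinks}x x16 uplink: {uplink_gb_s:.0f} GB/s serving "
          f"{downstream_gb_s:.0f} GB/s of drives -> {ratio:.0f}:1 oversubscribed")
```

Even in the best two-uplink case, the drives can demand three times what the switches can pass to the CPUs.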

With the Gigabyte R272-Z32 and the AMD EPYC 7002 series, each drive gets a direct connection. The 10x Micron 9300 Pro array we have is faster than virtually every 24-bay dual Intel Xeon system because no PCIe switch is needed. Add the results of the 14x Intel DC P3520 SSDs, extrapolate how fast 24x Micron 9300 Pro SSDs would be, and it is not close. If you want a fast NVMe storage server, the Gigabyte R272-Z32 delivers single-socket performance that exceeds dual Intel Xeon Scalable processor performance.
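
For reference, our runs used an independent target on each drive rather than one big RAID 0 (see the comments below). A minimal sketch of that style of test, assuming the devices enumerate as /dev/nvme0n1 through /dev/nvme23n1 and that fio is installed, might look like this:

```python
# Minimal sketch of a per-drive fio sweep: one job per NVMe device so the
# drives are hit simultaneously but independently (no RAID 0 spanning them).
import subprocess

DEVICES = [f"/dev/nvme{i}n1" for i in range(24)]  # assumed enumeration

# Options before the first --name are global and apply to every job.
cmd = ["fio", "--ioengine=libaio", "--direct=1", "--rw=read", "--bs=128k",
       "--iodepth=32", "--runtime=60", "--time_based", "--group_reporting"]
for i, dev in enumerate(DEVICES):
    cmd += [f"--name=drive{i}", f"--filename={dev}"]  # one job per device

# Read-only workload; a write test against raw devices would destroy data.
subprocess.run(cmd, check=True)
```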

Normally, we test network performance as well. We managed just over 100Gbps using the Mellanox ConnectX-5 PCIe Gen4 card with two ports active. Still, we wish there were another PCIe Gen4 x16 interface. We also did not have other PCIe Gen4 network cards to test against, so charting a single result seemed redundant.
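
For those wanting to reproduce the network number, a simple approach (sketched below with hypothetical peer addresses) is to run one iperf3 client process per port so both links are loaded at once:

```python
# Sketch: drive both ConnectX-5 ports simultaneously with one iperf3 client
# process per port. Peer addresses are hypothetical, and an iperf3 server
# (iperf3 -s) must already be running on each remote end.
import subprocess

PORT_PEERS = ["192.0.2.10", "192.0.2.11"]  # assumed per-port remote IPs

procs = [subprocess.Popen(["iperf3", "-c", peer, "-P", "8", "-t", "30"])
         for peer in PORT_PEERS]
for p in procs:
    p.wait()  # each process prints its per-stream and summary throughput
```

Running separate processes per port sidesteps any single-process limits and keeps the per-port results easy to read off.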

Gigabyte R272-Z32 Power Consumption

For this, we wanted to present two sets of numbers: one using the AMD EPYC 7702P 64-core part without the storage in use, and then a maximum-effort run hammering storage and networking along with the CPU. This system can draw a massive amount of power, especially with up to 24x 15W drives, so we thought it was important to give a range.

  • Idle: 0.13kW
  • STH 70% CPU Load: 0.28kW
  • 100% Load: 0.32kW
  • Maximum Recorded: 0.63kW

That is a great showing. The impact here is that one can consolidate multiple Intel Xeon E5-2600 V4 systems into a single-socket Gigabyte R272-Z32 that also uses fewer switch ports and PDU ports. As a result, the overall data center power savings are excellent.
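
As a quick sketch of what those numbers mean for rack planning (the 10kW per-rack budget here is an assumption for illustration, not a measured figure):

```python
# How many R272-Z32 systems fit an assumed 10 kW rack budget at each of the
# measured power levels from this review.
MEASURED_KW = {"idle": 0.13, "70% CPU": 0.28, "100% CPU": 0.32, "max recorded": 0.63}
RACK_BUDGET_KW = 10.0  # assumed per-rack power budget

for load, kw in MEASURED_KW.items():
    print(f"{load:>12}: {kw:.2f} kW each -> {int(RACK_BUDGET_KW // kw)} systems")
```

Even budgeting at the worst-case 0.63kW maximum, that is 15 of these 24-bay systems in a 10kW rack.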

Note: these results were taken using a 208V Schneider Electric / APC PDU at 17.6°C and 72% RH. Our testing window shown here had a ±0.3°C and ±2% RH variance.

STH Server Spider: Gigabyte R272-Z32

In the second half of 2018, we introduced the STH Server Spider as a quick reference to where a server system’s aptitude lies. Our goal is to start giving a quick visual depiction of the types of parameters that a server is targeted at.

STH Server Spider Gigabyte R272 Z32

Here, CPU density is not great with only one CPU per 2U; Gigabyte sells other models with eight AMD EPYC 7002 CPUs in 2U. The system does have a full set of 16 DIMMs, which helps memory density slightly. This box is mainly focused on high-performance storage applications with its 24x NVMe SSDs.

Final Words

In every server generation, there is a system or type of system that just gets it right. For years, we have seen generations of systems that offered incremental improvements but still relied upon dual-socket servers. Even on the latest April 2019 2nd generation Intel Xeon Scalable systems, bandwidth is constrained by too few PCIe lanes for both a 100GbE network connection and 24x NVMe bays, even with two CPUs installed. With the Gigabyte R272-Z32, one gets 24x front panel U.2 NVMe bays, two M.2 boot drives, two rear SATA boot drives, and PCIe expansion room for 100GbE connectivity, all using one CPU. That includes even the $450 AMD EPYC 7232P, which can also address up to 4TB of memory (although we would not suggest that configuration).

There are a few items that we would change. The web management interface could use a boot-to-BIOS option and could be a bit faster to load. The two M.2 slots could be sacrificed for a full PCIe Gen4 x16 slot. A sturdier airflow guide with additional SATA SSD mounting points would also be welcome. These are areas of potential improvement, but they do not take away from the fact that the Gigabyte R272-Z32 is a great platform and one of the first to fully utilize what the AMD EPYC 7002 series has to offer. If you read our AMD EPYC 7002 Series Rome Delivers a Knockout article, you will see it indeed offers a lot.

If you are looking for a software-defined storage or hyper-converged platform with next-generation capabilities, the Gigabyte R272-Z32 is a step beyond what traditional vendors offer in dual-socket Intel Xeon servers. Gigabyte did a great job of creating an expansive platform around the new chips.

Where to Buy

We have gotten a lot of questions asking where one can buy these servers. That is pretty common since some of this gear is harder to find online. ThinkMate has these servers on its configurator, so we are going to point folks there.


Let us know if you find this helpful, and we can include it in future reviews as well.

21 COMMENTS

  1. Isn’t that only 112 PCIe lanes total? 96 (front) + 8 (rear PCIe slot) + 8 (2x M.2 slots). Did they not have enough space to route the other 16 lanes to the unused PCIe slot?

  2. M.2 to U.2 converters are pretty cheap.
    Use Slot 1 (x16 PCIe Gen4) for the 200Gbit/s network connection.
    Use Slot 5 and the 2x M.2 for the NVMe drives.

  3. We used targets on each drive, not as a big RAID 0. The multiple simultaneous drive access fits our internal use pattern more closely and is likely closer to how most will be deployed.

  4. CLL – we did not have a U.2 Gen4 NVMe SSD to try. Also, some of the lanes are hung off PCIe Gen3 lanes so at least some drives will be PCIe Gen3 only. For now, we could only test with PCIe Gen3 drives.

  5. Thanks Patrick for the answer. For our application, we would like to use one big RAID. Do you know if it is possible to configure this on the EPYC system? With Intel, this seems to be possible by spanning disks over VMDs using VROC.

  6. Intel is making life easy for AMD.
    “Xeon and Other Intel CPUs Hit by NetCAT Security Vulnerability, AMD Not Impacted”
    CVE-2019-11184

  7. I may be missing something obvious, and if so please let me know. But it seems to me that there is no NVMe drive in existence today that can come near saturating an x4 NVMe connection. So why would you need to make sure that every single one of the drive slots in this design has that much bandwidth? Seems to me you could use x2 connections and get far more drives, or far less cabling, or flexibility for other things. No?

  8. Patrick,

    If you like this Gigabyte server, you would love the Lenovo SR635 (1U) and SR655 (2U) systems!
    – Universal drive backplane; supporting SATA, SAS and NVMe devices
    – Much cleaner drive backplane (no expander cards and drive cabling required)
    – Support for up to 16x 2.5″ hot-swap drives (1U) or 32x 2.5″ drives (2U);
    – Maximum of 32x NVMe drives with 1:2 connection/over-subscription (2U)

  9. Thanks for the pointer; it must have slipped my mind.
    One comment about that article: It doesn’t really highlight that the front panel drive bays are universal (SATA, SAS and NVMe).
    This is a HUGE plus compared to other offerings, the ability to choose the storage interface that suits the need, at the moment, means that units like the Lenovo have much more versatility!

  10. This is what I was looking to see next for EPYC platforms… I always said it has Big Data written all over it… A 20-server rack has 480 drives… 10 of those is 4800 drives… 100 is 48000 drives… and at a peak of ~630W each, that’s an astounding amount of storage at around 70kW…
    I can see Twitter and Google going HUGE on these since their business is data… Of course DropBox can consolidate on these even from Naples…

  11. Can such a server have all the drives connected via a RAID controller like the MegaRAID 9560-16i?
    Before commenting about software RAID or ZFS: in my application, the bandwidth provided by this RAID controller is far more than needed, with the comfort of worry-free RAID 60 and of hiding the peak write latencies very well thanks to the battery-backed cache.

  12. Mr. Kennedy, I recently bought an R152-Z31 with an AMD EPYC 7402P, which has the same motherboard as the R272-Z32 (the MZ32-AR0), to test our apps with the AMD processor. Now I am confused about the difference between ranks on RAM memory modules. In your review, I saw you used a Micron module; on the Micron web page (https://www.crucial.com/compatible-upgrade-for/gigabyte/mz32-ar0) there are several 16GB models. Could you advise me on a model that will at least work fine? I would appreciate it a lot if you could help me.
