Gigabyte R272-Z32 Review: This 24x NVMe AMD EPYC 7002 Server is a Home Run

Gigabyte R272-Z32 Topology

With AMD EPYC 7001 servers, topology was a big deal. Each AMD EPYC 7001 chip had four NUMA nodes per socket, which had implications for applications. With the AMD EPYC 7002 series, by default, each socket is a single NUMA node.

Gigabyte R272 Z32 Topology

What you are seeing there is multiple NVMe SSDs and NICs sitting on a single NUMA node. That NUMA node happens to be an AMD EPYC 7702P 64-core CPU. In the first generation, Intel’s common retort to the AMD competition was that AMD needed more NUMA nodes to hit its core counts. Now the tables have turned. Intel cannot hit 64 cores in a single-socket or even dual-socket configuration with the second generation Intel Xeon Scalable CPUs. Instead, it must resort to a quad-socket, quad NUMA node design to hit 64 cores. This is the power of the AMD EPYC 7002 series.
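
If you want to confirm this single NUMA node layout from the operating system, here is a minimal sketch, assuming a Linux host, that reads the same standard sysfs data tools like lstopo and numactl report:

```python
# Minimal sketch: list each NUMA node and the CPUs it contains via standard
# Linux sysfs paths. On a single-socket EPYC 7002 system at default settings,
# this should show only node0 spanning all threads of the installed CPU.
import glob
import os

for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    with open(os.path.join(node, "cpulist")) as f:
        cpus = f.read().strip()
    print(f"{os.path.basename(node)}: CPUs {cpus}")
```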

Gigabyte R272 Z32 System Block Diagram

Many will completely miss this, but one of the PCIe x8 slots is not functional in the Gigabyte R272-Z32 design. That is because the switched PCIe port is instead dedicated as a PCIe x16 link, run as 4×4 PCIe lanes, to the front panel U.2 drive bays. There is actually only a single available PCIe Gen4 x8 slot in the system, along with two PCIe 3.0 x4 M.2 slots. That still allows the use of a 100GbE NIC, but it must be a PCIe Gen4 NIC, not a Gen3 NIC, since a Gen3 x8 link tops out at roughly 63Gbps of usable bandwidth. This is one of those strange cases where we actually wish the two M.2 slots were sacrificed to make this a PCIe Gen4 x16 slot.
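
If you want to verify that a NIC in that slot has actually trained at Gen4, the negotiated link speed and width are exposed through sysfs on Linux. A quick sketch; the PCI address below is a placeholder you would swap for your NIC’s address from lspci:

```python
# Minimal sketch: print the negotiated and maximum PCIe link speed/width for a
# device. DEVICE is a placeholder address; find the real one with `lspci`.
DEVICE = "0000:c1:00.0"

base = f"/sys/bus/pci/devices/{DEVICE}"
for attr in ("current_link_speed", "current_link_width",
             "max_link_speed", "max_link_width"):
    with open(f"{base}/{attr}") as f:
        print(f"{attr}: {f.read().strip()}")
```

A Gen4 x8 link should report 16.0 GT/s at a width of 8; seeing 8.0 GT/s means the card is only running at Gen3 speeds.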

Another interesting topology note is that the Intel i350 gigabit LAN interface and BMC sit off of the WAFL PCIe 2.0 interface. That means this platform is designed for AMD EPYC 7002 processors, not EPYC 7001 CPUs. That is common for newer platforms and we expect the AMD EPYC 7003 CPUs will go into this platform as well when they are launched in the future.

Gigabyte R272-Z32 Management

The Gigabyte R272-Z32 utilizes the newer MegaRAC SP-X interface. This is a more modern HTML5 UI that performs more like today’s web pages and less like pages from a decade ago. We like this change. Here is the dashboard.

Gigabyte MegaRAC SP X Dashboard

One item we noticed is that this new solution takes a long time to log in. We used a stopwatch to time the interval between submitting credentials at the login prompt and the dashboard becoming functional. It took around 26 seconds.

Gigabyte MegaRAC SP X Loading Dashboard
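
The 26 seconds above was measured against the web UI with a stopwatch. For a repeatable number on how quickly the BMC itself answers logins, one option is timing a Redfish session creation, which MegaRAC SP-X class BMCs generally expose. This is only a sketch using the Python requests library; the address and credentials are placeholders, and it measures API responsiveness rather than full dashboard render time:

```python
# Rough sketch: time how long the BMC takes to create a Redfish session.
# BMC_IP, USER, and PASSWORD are placeholders; adjust for your environment.
import time
import requests

BMC_IP, USER, PASSWORD = "192.0.2.10", "admin", "password"

start = time.monotonic()
resp = requests.post(
    f"https://{BMC_IP}/redfish/v1/SessionService/Sessions",
    json={"UserName": USER, "Password": PASSWORD},
    verify=False,  # BMCs commonly ship with self-signed certificates
    timeout=60,
)
elapsed = time.monotonic() - start
print(f"Session create: HTTP {resp.status_code} in {elapsed:.1f}s")
```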

You will find standard BMC IPMI management features here, such as the ability to monitor sensors. Here is an example:

Gigabyte MegaRAC SP X Sensor Readings
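
The same readings can also be pulled programmatically. Here is a hedged sketch using the standard Redfish thermal endpoints, again assuming Redfish is enabled on the BMC; the address and credentials are placeholders, and exact sensor names vary by platform:

```python
# Minimal sketch: read temperature sensors from the first chassis via the
# standard Redfish Thermal resource. BMC_IP and AUTH are placeholders.
import requests

BMC_IP, AUTH = "192.0.2.10", ("admin", "password")

def get(path):
    r = requests.get(f"https://{BMC_IP}{path}", auth=AUTH, verify=False, timeout=30)
    r.raise_for_status()
    return r.json()

chassis = get("/redfish/v1/Chassis")["Members"][0]["@odata.id"]
for temp in get(f"{chassis}/Thermal").get("Temperatures", []):
    print(f'{temp.get("Name")}: {temp.get("ReadingCelsius")} C')
```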

Other items, such as the CPU inventory, are also available. One can see this particular CPU is the new AMD EPYC 7702P high core count chip with 64 cores in a single socket.

Gigabyte MegaRAC SP X CPU Inventory AMD EPYC 7702P

One of the other features is the new HTML5 iKVM for remote management. We think this is a great solution. Some other vendors have implemented iKVM HTML5 clients but did not implement virtual media support in them at the outset. Gigabyte has this functionality and power control support all from a single browser console.

Gigabyte IPMI Remote iKVM

We want to emphasize that this is a key differentiation point for Gigabyte. Many large system vendors such as Dell EMC, HPE, and Lenovo charge for iKVM functionality. This feature is an essential tool for remote system administration these days. Gigabyte’s inclusion of the functionality as a standard feature is great for customers who have one less license to worry about.

Gigabyte MegaRAC SP X Power Control

The Power Control feature is fairly standard. We wish it had a reboot-to-BIOS option. Our particular test unit defaulted to SMT=Off every time we installed a new CPU. As a result, we had to sit and watch for the keystroke prompt to enter the BIOS on each reboot so we could turn SMT back on and change the other settings we wanted.
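
If you hit the same SMT default, it is easy to check the current state from the installed OS before deciding whether another trip into the BIOS is needed. A minimal sketch, assuming a reasonably recent Linux kernel that exposes the standard SMT control file:

```python
# Minimal sketch: report whether SMT is currently enabled using the standard
# Linux sysfs control file. Typical values are 'on', 'off', or 'forceoff'.
SMT_PATH = "/sys/devices/system/cpu/smt/control"

try:
    with open(SMT_PATH) as f:
        print(f"SMT state: {f.read().strip()}")
except FileNotFoundError:
    print("SMT control file not found; the kernel may be too old to expose it")
```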

Gigabyte MegaRAC SP X Firmware Update BMC And BIOS

Gigabyte also includes the ability to update both BMC and BIOS firmware from the web interface. That is a feature most modern servers have, although, as a point of comparison, Supermicro charges an additional $20 license to update the BIOS via its web interface.

Gigabyte R272-Z32 Test Configuration

Here is the test configuration we used for the Gigabyte R272-Z32:

  • System: Gigabyte R272-Z32
  • CPUs: AMD EPYC 7702P, 7502P, 7402P, 7302P, 7232P, 7262
  • Memory: 8x Micron DDR4-3200 32GB RDIMMs (256GB total)
  • Boot SSD: Intel DC S3710 400GB
  • Storage SSDs: 10x Micron 9300 3.84TB, 14x Intel DC P3520 2TB
  • Networking: Mellanox ConnectX-5 VPI PCIe Gen4 (CX556A) 100GbE

Overall, this gave us a solid foundation to get some performance numbers that we are going to show next.

REVIEW OVERVIEW

  • Design & Aesthetics: 9.5
  • Performance: 9.6
  • Feature Set: 9.6
  • Value: 9.5
Patrick has been running STH since 2009 and covers a wide variety of SME, SMB, and SOHO IT topics. Patrick is a consultant in the technology industry and has worked with numerous large hardware and storage vendors in Silicon Valley. The goal of STH is simply to help users find some information about server, storage, and networking building blocks. If you have any helpful information, please feel free to post on the forums.

17 COMMENTS

  1. Isn’t that only 112 PCIe lanes total? 96 (front) + 8 (rear PCIe slot) + 8 (2x M.2 slots). Did they not have enough space to route the other 16 lanes to the unused PCIe slot?

  2. M.2 to U.2 converters are pretty cheap.
    Use Slot 1 (x16 PCIe 4.0) for the 200 Gbit/s network connection.
    Use Slot 5 and the 2x M.2 for the NVMe drives.

  3. We used targets on each drive, not as a big RAID 0. The multiple simultaneous drive access fits our internal use pattern more closely and is likely closer to how most will be deployed.

  4. CLL – we did not have a U.2 Gen4 NVMe SSD to try. Also, some of the lanes are hung off PCIe Gen3 lanes so at least some drives will be PCIe Gen3 only. For now, we could only test with PCIe Gen3 drives.

  5. Thanks Patrick for the answer. For our application we would like to use one big RAID. Do you know if it is possible to configure this on the EPYC system? With Intel this seems to be possible by spanning disks over VMDs using VROC.

  6. Intel is making life easy for AMD.
    “Xeon and Other Intel CPUs Hit by NetCAT Security Vulnerability, AMD Not Impacted”
    CVE-2019-11184

  7. I may be missing something obvious, and if so please let me know. But it seems to me that there is no NVMe drive in existence today that can come near saturating an x4 NVMe connection. So why would you need to make sure that every single one of the drive slots in this design has that much bandwidth? Seems to me you could use x2 connections and get far more drives, or far less cabling, or flexibility for other things. No?

  8. Patrick,

    If you like this Gigabyte server, you would love the Lenovo SR635 (1U) and SR655 (2U) systems!
    – Universal drive backplane supporting SATA, SAS, and NVMe devices
    – Much cleaner drive backplane (no expander cards and drive cabling required)
    – Support for up to 16x 2.5″ hot-swap drives (1U) or 32x 2.5″ drives (2U);
    – Maximum of 32x NVMe drives with 1:2 connection/over-subscription (2U)

  9. Thanks for the pointer, it must have skipped my mind.
    One comment about that article: It doesn’t really highlight that the front panel drive bays are universal (SATA, SAS and NVMe).
    This is a HUGE plus compared to other offerings, the ability to choose the storage interface that suits the need, at the moment, means that units like the Lenovo have much more versatility!

  10. This is what I was looking to see next for EPYC platforms… I always said it has Big Data written all over it… A 20 server rack has 480 drives… 10 of those is 4800 drives… 100 is 48000 drives… and at a peak of ~630W each, that’s an astounding amount of storage at around 70KW…
    I can see Twitter and Google going HUGE on these since their business is data… Of course DropBox can consolidate on these even from Naples…
