Gigabyte R272-Z32 Review: This 24x NVMe AMD EPYC 7002 Server is a Home Run

Gigabyte R272 Z32 Cover

The Gigabyte R272-Z32 was one of our first AMD EPYC 7002 “Rome” generation platforms. The company did a lot with the first-generation AMD EPYC 7001 series, so it is little surprise that its first product for the new generation is excellent. To tantalize our readers slightly: the Gigabyte R272-Z32 is a single-socket AMD EPYC 7002 platform that can be configured to be more powerful than most dual-socket Intel Xeon Scalable systems, with more RAM capacity and PCIe storage. A game-changing platform for some ecosystems is something we always welcome. In our review, we are going to go over the high points, but also a few items we would like to see improved in future revisions.

Gigabyte R272-Z32 Overview

The front of the chassis is dominated by what may be its most important feature: 24x U.2 NVMe bays. We are going to discuss the wiring later, but the headline feature here is that all 24 bays have four full lanes of PCIe and do not require a switch. Most Intel-based solutions must use PCIe switches, which limit NVMe bandwidth through oversubscription. With the Gigabyte R272-Z32 powered by AMD EPYC 7002 processors, one can get full bandwidth to each drive. Perhaps more impressively, this is enabled even with the lowly $450 AMD EPYC 7232P SKU, and there is still enough PCIe bandwidth left over for a 100GbE networking adapter.
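The lane budget behind that claim is easy to sketch. Here is a quick back-of-the-envelope calculation, assuming the standard 128 high-speed I/O lanes of a single-socket EPYC 7002 part (the layout split is our illustration, not Gigabyte's documentation):

```python
# Rough PCIe lane budget for a 1P AMD EPYC 7002 system
# like the R272-Z32 (numbers from the review).
EPYC_7002_LANES = 128          # high-speed I/O lanes on a 1P EPYC 7002

front_nvme = 24 * 4            # 24x U.2 bays, x4 lanes each, no switch
remaining = EPYC_7002_LANES - front_nvme

print(f"Front NVMe lanes: {front_nvme}")   # 96
print(f"Lanes left over:  {remaining}")    # 32, plenty for a 100GbE NIC
```

Since a 100GbE adapter typically needs only an x8 or x16 connection, the leftover 32 lanes are more than sufficient.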

Gigabyte R272 Z32 Front

The rear of the server is very modern. Redundant 1.2kW 80Plus Platinum power supplies are on the left. There are two 2.5″ SATA hot-swap bays in the rear which will most likely be used for OS boot devices.

Gigabyte R272 Z32 Rear

Legacy ports include a serial console port and VGA port. There are three USB 3.0 ports as well for local KVM support. For remote iKVM, there is an out-of-band management port. Rounding out the standard rear I/O is dual 1GbE networking via an Intel i350 NIC.

Opening the chassis up, one can see the airflow design from the NVMe SSDs to the fan partition to the CPU and then the rest of the motherboard. We wanted to call out Gigabyte for great use of the front label as a quick reference guide. These labels are commonly found on servers from large OEMs such as Dell EMC, HPE, and Lenovo, but are often missing from smaller vendors' designs. This is not the most detailed quick reference guide, but it provides many of the basics someone servicing the chassis exterior would need. In colocation with remote hands, these types of guides can be extremely helpful.

Gigabyte R272 Z32 Overview

The motherboard itself is a Gigabyte MZ32-AR0. This is a successor to the Gigabyte MZ31-AR0 we reviewed in 2017. Make no mistake, Gigabyte made some major improvements in this revision.

One of the big improvements is a shift from one to two M.2 NVMe slots. The dual M.2 option is limited to M.2 2280 (80mm) SSDs. For boot devices, that is plenty. It will not, however, fit the Intel Optane DC P4801X cache drive, as an example, nor M.2 22110 (110mm) SSDs with power loss protection (PLP). This is a great feature, but one we would actually sacrifice for more PCIe expansion. We are going to explain why next.

Gigabyte R272 Z32 M.2 Slots

If you look at the physical motherboard layout, there appear to be three open PCIe x16 slots. The other four are utilized by the risers that provide front U.2 drive bay connectivity. The remaining three slots, however, are not all functional.

Gigabyte R272 Z32 CNV3024 NVMe Risers

Instead, there is only one functional slot, Slot #5, which is x16 physical but only PCIe Gen4 x8 electrical. To us, this is a big deal. It means that to fully utilize a 100GbE adapter, one must have a PCIe Gen4 capable NIC. With a PCIe Gen3 NIC in this slot, one is limited to just over 50Gbps of bandwidth. For a server with 96 PCIe lanes going to the front panel U.2 drives, it would have been nice to have a PCIe Gen4 x16 slot for 200GbE or 2x 100GbE links for external networking. That would be worth losing the two M.2 drives, especially with the dual SATA boot drive option.
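The bandwidth ceiling is straightforward to estimate. A rough sketch of the raw line rates (real TLP and protocol overhead shaves these numbers further, which is where the just-over-50Gbps practical figure for Gen3 x8 comes from):

```python
# Raw PCIe line rate for Gen3 vs. Gen4 at x8 width.
# Both generations use 128b/130b encoding.
def pcie_gbps(gt_per_s, lanes):
    return gt_per_s * lanes * 128 / 130

gen3_x8 = pcie_gbps(8, 8)    # ~63 Gbps raw: tight for 100GbE
gen4_x8 = pcie_gbps(16, 8)   # ~126 Gbps raw: enough for 100GbE
```

This is why a Gen4-capable NIC is required to feed a 100GbE link through this x8 electrical slot.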

The OCP slot would normally be a networking option; however, it is instead used to connect four more front NVMe bays.

Gigabyte R272 Z32 OCP Slot Usage

The AMD EPYC 7002 socket itself is not going to be backward compatible with the first-generation AMD EPYC 7001 series if you want PCIe Gen4, but it will be compatible with the future EPYC 7003 series. One of the great features is that this solution has a full set of 16 DDR4 DIMM slots. If one wants, one can use 8x DIMMs at DDR4-3200 speeds or 16x at DDR4-2933, but ensure that you ask your rep for the right rank/speed modules.
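The trade-off between the two DIMM populations is easy to quantify. A quick theoretical memory bandwidth sketch, assuming the eight memory channels of an EPYC 7002 part with a 64-bit (8-byte) transfer per channel:

```python
# Theoretical peak memory bandwidth for the two DIMM-population
# options: 1 DIMM per channel at DDR4-3200 vs. 2 per channel at 2933.
CHANNELS = 8
BYTES_PER_TRANSFER = 8  # 64-bit channel width

def mem_gbs(mt_per_s):
    return CHANNELS * mt_per_s * BYTES_PER_TRANSFER / 1000  # GB/s

one_dpc = mem_gbs(3200)   # 8x DIMMs @ DDR4-3200 -> 204.8 GB/s
two_dpc = mem_gbs(2933)   # 16x DIMMs @ DDR4-2933 -> ~187.7 GB/s
```

Filling all 16 slots doubles capacity at the cost of roughly 8% of peak theoretical bandwidth.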

Gigabyte R272 Z32 CPU And RAM

Fans in the Gigabyte R272-Z32 are hot-swap designs, as one would expect in a chassis like this. We have seen this fan design before from Gigabyte, and the fans are fairly easy to install. The one part of the cooling solution we did not love is the relatively flimsy airflow guide. We would have preferred a more robust solution like those we have seen on other Gigabyte designs (e.g. the Gigabyte R280-G2O GPU/GPGPU rackmount server). That would have allowed Gigabyte to design in space for additional SATA III drives using internal mounting on the airflow shroud.

Gigabyte R272 Z32 CPU And RAM Cover And Fans

The last point we wanted to show with this solution is the cabling. As PCIe connections increase in distance, we will, as an industry, see more cables; the blue data cables in the Gigabyte R272-Z32 are prime examples. The MZ32-AR0 is a standard form factor motherboard, which means it is easy to replace, but it requires a lot of cabling. As you can see, the cables are tucked away around the redundant power supplies' power distribution. This design adds more cables, but it also means one can fairly easily replace the motherboard in the future.

At the end of the day, Gigabyte made perfectly reasonable tradeoffs in order to quickly bring this platform to market. With 96 of the 128 high-speed I/O lanes being used for front panel NVMe, Gigabyte is breaking new ground here. At the same time, Gigabyte has an opportunity to make a few tweaks to this design to create an absolute monster for the hyper-converged market if it is not already so.

Next, we are going to talk topology since that is drastically different on these new systems. We are then going to look at the management aspects before getting to performance, power consumption, and our final words.

REVIEW OVERVIEW
Design & Aesthetics: 9.5
Performance: 9.6
Feature Set: 9.6
Value: 9.5
Patrick has been running STH since 2009 and covers a wide variety of SME, SMB, and SOHO IT topics. Patrick is a consultant in the technology industry and has worked with numerous large hardware and storage vendors in Silicon Valley. The goal of STH is simply to help users find information about server, storage, and networking building blocks. If you have any helpful information, please feel free to post on the forums.

17 COMMENTS

  1. isn’t that only 112 pcie lanes total? 96 (front) + 8 (rear PCIe slot) + 8 (2 m.2 slots). did they not have enough space to route the other 16 lanes to the unused PCIe slot?

  2. M.2 to U.2 converters are pretty cheap.
    Use Slot 1 (16x PCIe4) for the network connection 200 gbit/s
    Use Slot 5 and the 2xM.2 for the NVMe-drives.

  3. We used targets on each drive, not as a big RAID 0. The multiple simultaneous drive access fits our internal use pattern more closely and is likely closer to how most will be deployed.

  4. CLL – we did not have a U.2 Gen4 NVMe SSD to try. Also, some of the lanes are hung off PCIe Gen3 lanes so at least some drives will be PCIe Gen3 only. For now, we could only test with PCIe Gen3 drives.

  5. Thanks Patrick for the answer. For our application we would like to use one big RAID. Do you know if it is possible to configure this on the EPYC system? With Intel this seems to be possible by spanning disks over VMDs using VROC.

  6. Intel is making life easy for AMD.
    “Xeon and Other Intel CPUs Hit by NetCAT Security Vulnerability, AMD Not Impacted”
    CVE-2019-11184

  7. I may be missing something obvious, and if so please let me know. But it seems to me that there is no NVMe drive in existence today that can come near saturating an x4 NVMe connection. So why would you need to make sure that every single one of the drive slots in this design has that much bandwidth? Seems to me you could use x2 connections and get far more drives, or far less cabling, or flexibility for other things. No?

  8. Patrick,

    If you like this Gigabyte server, you would love the Lenovo SR635 (1U) and SR655 (2U) systems!
    – Universal drive backplane; supporting SATA, SAS and NVMe devices
    – Much cleaner drive backplane (no expander cards and drive cabling required)
    – Support for up to 16x 2.5″ hot-swap drives (1U) or 32x 2.5″ drives (2U);
    – Maximum of 32x NVMe drives with 1:2 connection/over-subscription (2U)

  9. Thanks for the pointer, it must have skipped my mind.
    One comment about that article: It doesn’t really highlight that the front panel drive bays are universal (SATA, SAS and NVMe).
    This is a HUGE plus compared to other offerings, the ability to choose the storage interface that suits the need, at the moment, means that units like the Lenovo have much more versatility!

  10. This is what I was looking to see next for EPYC platforms… I always said it has Big Data written all over it… A 20 server rack has 480 drives… 10 of those is 4800 drives… 100 is 48000 drives… and at a peak of ~630W each, that’s an astounding amount of storage at around 70KW…
    I can see Twitter and Google going HUGE on these since their business is data… Of course DropBox can consolidate on these even from Naples…
