Gigabyte R272-Z32 Topology
With AMD EPYC 7001 series servers, topology was a big deal. Each AMD EPYC 7001 CPU presented four NUMA nodes per socket, which meant applications often needed NUMA-aware tuning. With the AMD EPYC 7002 series, by default, each socket is a single NUMA node.
What you are seeing there is multiple NVMe SSDs and NICs sitting on a single NUMA node. That NUMA node happens to be an AMD EPYC 7702P 64-core CPU. While in the first generation, Intel’s common retort to the AMD competition was that AMD needed more NUMA nodes to hit core counts, the tables have turned. Intel cannot hit 64 cores in a single or even dual-socket configuration with the second generation Intel Xeon Scalable CPUs. Instead, it must resort to a quad-socket or quad NUMA node design to hit 64 cores. This is the power of the AMD EPYC 7002 series.
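For readers who want to verify this layout on their own systems, `numactl --hardware` on Linux prints the node-to-CPU mapping. As a rough illustration (the sample output and function name below are our own, not from this review), here is a small Python sketch that parses that mapping:

```python
import re

def parse_numa(output: str) -> dict:
    """Map NUMA node ID -> list of CPU IDs from `numactl --hardware` text."""
    nodes = {}
    for line in output.splitlines():
        m = re.match(r"node (\d+) cpus:((?: \d+)*)", line)
        if m:
            nodes[int(m.group(1))] = [int(c) for c in m.group(2).split()]
    return nodes

# Illustrative sample mimicking a single-NUMA-node system; a real EPYC 7702P
# box would list 128 logical CPUs under node 0 with SMT on.
sample = (
    "available: 1 nodes (0)\n"
    "node 0 cpus: 0 1 2 3\n"
    "node 0 size: 257840 MB\n"
)
print(parse_numa(sample))  # {0: [0, 1, 2, 3]}
```

On a live system you would feed the function the real command output (for example via `subprocess.run(["numactl", "--hardware"], ...)`) instead of the sample string.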
Many will completely miss this, but one of the PCIe x8 slots is not functional in the Gigabyte R272-Z32 design. That is because the switched PCIe port is instead dedicated as a PCIe x16 link providing 4x4 PCIe lanes to the front-panel U.2 drive bays. There is only a single available PCIe Gen4 x8 slot in the system, along with two PCIe 3.0 x4 M.2 slots. That still allows the use of a 100GbE PCIe Gen4 NIC, but you must use a Gen4 NIC, not a Gen3 NIC, to utilize 100GbE on this platform. This is one of those strange cases where we actually wish the two M.2 slots had been sacrificed to make this a PCIe Gen4 x16 slot.
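The Gen4 requirement follows from simple link arithmetic. The sketch below is our own back-of-envelope calculation (not from the review) comparing usable PCIe x8 bandwidth against the 100GbE line rate:

```python
# Back-of-envelope PCIe bandwidth check (our illustration, not the review's).
# PCIe Gen3 signals at 8 GT/s per lane and Gen4 at 16 GT/s, both using
# 128b/130b encoding, so usable bandwidth is roughly rate * lanes * 128/130.

def pcie_usable_gbps(rate_gtps: float, lanes: int) -> float:
    """Approximate usable PCIe bandwidth in Gbit/s (128b/130b encoding)."""
    return rate_gtps * lanes * 128 / 130

gen3_x8 = pcie_usable_gbps(8.0, 8)    # ~63 Gbit/s: cannot carry 100GbE
gen4_x8 = pcie_usable_gbps(16.0, 8)   # ~126 Gbit/s: enough headroom for 100GbE
print(round(gen3_x8), round(gen4_x8))  # 63 126
```

A Gen3 x8 link tops out around 63 Gbit/s of usable bandwidth, well short of 100GbE, while Gen4 x8 clears it with room to spare, which is why only a Gen4 NIC can run the port at full speed in this slot.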
Another interesting topology note is that the Intel i350 gigabit LAN interface and BMC sit off of the WAFL PCIe 2.0 interface. That means this platform is designed for AMD EPYC 7002 processors, not EPYC 7001 CPUs. That is common for newer platforms, and we expect the AMD EPYC 7003 CPUs will go into this platform as well when they launch.
Gigabyte R272-Z32 Management
As one can see, the Gigabyte R272-Z32 utilizes a newer MegaRAC SP-X interface. This interface is a more modern HTML5 UI that performs more like today’s web pages and less like pages from a decade ago. We like this change. Here is the dashboard.
One item we noticed is that this new solution takes a long time to log in. We used a stopwatch to time between the login prompt and the dashboard being functional. It took around 26 seconds.
You will find standard BMC IPMI management features here, such as the ability to monitor sensors. Here is an example:
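The same sensor data is also reachable from the command line: `ipmitool sensor` prints pipe-delimited readings. As a quick illustration (the sample row and field names below are our assumptions, not output captured from this system), a few lines of Python can split them into labeled fields:

```python
def parse_sensor_line(line: str) -> dict:
    """Split one `ipmitool sensor` row (pipe-delimited) into labeled fields."""
    fields = [f.strip() for f in line.split("|")]
    return {"name": fields[0], "value": fields[1],
            "unit": fields[2], "status": fields[3]}

# Illustrative sample row; a real system prints many such lines from
# `ipmitool sensor` (run locally as root, or remotely against the BMC
# with the -H/-U/-P options).
sample = "CPU1_Temp | 45.000 | degrees C | ok | na | 5.000 | 10.000 | 95.000 | 100.000 | na"
reading = parse_sensor_line(sample)
print(reading["name"], reading["value"], reading["status"])  # CPU1_Temp 45.000 ok
```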
Other tasks such as the CPU inventory are available. One can see this particular CPU is a new AMD EPYC 7702P high-core count chip with 64 cores in the single socket.
One of the other features is the new HTML5 iKVM for remote management. We think this is a great solution. Some other vendors have implemented iKVM HTML5 clients but did not implement virtual media support in them at the outset. Gigabyte has this functionality and power control support all from a single browser console.
We want to emphasize that this is a key differentiation point for Gigabyte. Many large system vendors such as Dell EMC, HPE, and Lenovo charge for iKVM functionality. This feature is an essential tool for remote system administration these days. Gigabyte’s inclusion of the functionality as a standard feature is great for customers who have one less license to worry about.
The Power Control feature is fairly standard. We wish it had a reboot-to-BIOS option. Our particular test unit defaulted to SMT=Off every time we installed a new CPU. As a result, we had to sit and wait for the keystroke window to enter the BIOS on each reboot so we could turn SMT on and change other settings we wanted.
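On Linux, the SMT state the BIOS handed to the OS can be confirmed without another trip into setup. The sysfs path below is the standard kernel location; the function itself is our own small sketch:

```python
from pathlib import Path

def smt_state(control_path: str = "/sys/devices/system/cpu/smt/control") -> str:
    """Return the kernel-reported SMT state, e.g. 'on' or 'off'.

    With SMT=Off in the BIOS, this does not read 'on', and a 64-core
    EPYC 7702P shows only 64 logical CPUs instead of 128 -- a quick way
    to catch the default we kept hitting on this test unit.
    """
    return Path(control_path).read_text().strip()
```

The `control_path` parameter exists only so the function is easy to exercise against a sample file; in practice you would call it with no arguments on the server itself.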
Gigabyte also includes the ability to update both BMC and BIOS firmware from the web interface. That is a feature most modern servers have. By contrast, Supermicro charges an additional $20 license to update the BIOS via its web interface.
Gigabyte R272-Z32 Test Configuration
Here is the test configuration we used for the Gigabyte R272-Z32:
- System: Gigabyte R272-Z32
- CPUs: AMD EPYC 7702P, 7502P, 7402P, 7302P, 7232P, 7262
- Memory: 8x Micron DDR4-3200 32GB RDIMMs (256GB total)
- Boot SSD: Intel DC S3710 400GB
- Storage SSDs: 10x Micron 9300 3.84TB, 14x Intel DC P3520 2TB
- Networking: Mellanox ConnectX-5 VPI PCIe Gen4 (CX556A) 100GbE
Overall, this gave us a solid foundation to get some performance numbers that we are going to show next.