Gigabyte H261-Z60 Server Review 2U4N AMD EPYC for Dense Compute


Gigabyte H261-Z60 Storage Performance

Storage in the Gigabyte H261-Z60 is primarily driven by SATA interfaces. As a result, storage performance is not as much of a focus as it would be for the NVMe-equipped H261-Z61 variant. Still, we wanted to compare high-quality SSDs and HDDs to give a sense of what one can get out of each node.

Gigabyte H261-Z60 Storage Performance (chart)

Deploying today, unless cost is an enormous constraint, we would opt for 1.92TB SSDs over hard drives. We see the Gigabyte H261-Z60 primarily using its SATA drive bays for boot devices and relying on NVMe or the network for primary storage.

Compute Performance and Power Baselines

One of the biggest areas where manufacturers can differentiate their 2U4N offerings is cooling capacity. As modern processors heat up, they lower clock speeds, which decreases performance. Fans spin faster to compensate, which increases power consumption and affects power supply efficiency.

STH goes to extraordinary lengths to test 2U4N servers in a real-world type of scenario. You can see our methodology here: How We Test 2U 4-Node System Power Consumption.

STH 2U 4-Node Power Comparison Test Setup Example (image)

Since this was our first AMD EPYC test, we used four 1U servers from different vendors to compare power consumption and performance. The STH “sandwich” ensures that each system is heated on the top and bottom, just as it would be in a dense deployment.

This type of configuration has an enormous impact on some systems. All 2U4N systems must be tested in a similar manner or else performance and power consumption results are borderline useless.

Compute Performance to Baseline

We loaded the Gigabyte H261-Z60 nodes with 256 cores and 512 threads' worth of AMD EPYC CPUs. Each node also had a 10GbE OCP NIC and a 100GbE PCIe x16 NIC. We then ran one of our favorite workloads on all four nodes simultaneously for 1400 runs. We threw out the first 100 runs' worth of data and considered the systems sufficiently heat soaked from the 101st run onward. The remaining runs are used to keep the machines warm until all systems have completed their runs. We also used the same CPUs in both sets of test systems to remove silicon differences from the comparison.
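
For readers who want to approximate a similar heat-soak procedure on their own hardware, here is a minimal sketch. This is not the actual STH test harness; the benchmark command, thread count, and reporting are hypothetical placeholders, and only the run count and warm-up cutoff follow the description above.

```python
#!/usr/bin/env python3
# Minimal heat-soak benchmark loop (illustrative sketch, not the STH harness).
import statistics
import subprocess
import time

TOTAL_RUNS = 1400   # total iterations per node, as described above
WARMUP_RUNS = 100   # discard these; the system is considered heat soaked afterwards
BENCH_CMD = ["./my_benchmark", "--threads", "128"]  # hypothetical workload command

scores = []
for run in range(1, TOTAL_RUNS + 1):
    start = time.monotonic()
    subprocess.run(BENCH_CMD, check=True, capture_output=True)
    elapsed = time.monotonic() - start
    if run > WARMUP_RUNS:  # keep only heat-soaked runs
        scores.append(elapsed)

print(f"Heat-soaked runs: {len(scores)}")
print(f"Mean runtime:     {statistics.mean(scores):.2f}s")
print(f"Std deviation:    {statistics.stdev(scores):.2f}s")
```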

Gigabyte H261-Z60 Compute Performance (chart)

As you can see, the Gigabyte H261-Z60 nodes are able to cool CPUs essentially on par with their 1U counterparts, which is a testament to how well the system is designed. We had to narrow the Y-axis here to show that there was any difference at all; if we had used a 0-101% axis, the difference would have been less than a pixel. This is a great result for the Gigabyte H261-Z60.
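
For context on how a chart like this is read, here is a tiny sketch of baseline normalization, expressing each result as a percentage of the 1U reference. The scores below are made-up placeholders, not our measured data; only the method is what the chart reflects.

```python
# Baseline normalization sketch; the scores are hypothetical placeholders.
baseline_1u = 1000.0  # hypothetical 1U reference score
results = {
    "1U reference":  1000.0,
    "H261-Z60 node":  995.0,  # hypothetical dense-node score
}

for name, score in results.items():
    pct = 100.0 * score / baseline_1u
    print(f"{name}: {pct:.1f}% of baseline")
```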

REVIEW OVERVIEW
Design & Aesthetics: 9.7
Performance: 9.5
Feature Set: 9.4
Value: 9.3

Our Gigabyte H261-Z60 server review shows how this 2U 4-node AMD EPYC server is targeted at dense compute deployments. We tested the server with 256 cores and 512 threads to see how it performs versus four 1U servers while using fewer cables, using fewer components, and achieving lower power consumption.

5 COMMENTS

  1. Great STH review!

    One thing though – how about linking the graphics to full-size image files? It's really hard to read the text inside these images…

  2. My old Dell C6105 burned in a fire last May, and I hadn't fired it up for a year or more before that, but I recall using a single patch cable to access the BMC functionality on all 4 nodes. There may be critical differences, but that ancient 2U4N box certainly provided single-cable access to all 4 nodes.

    Other than the benefits of HTML5 and remote media, what's the standout benefit of the new CMC?
