Dell EMC PowerEdge C6525 Review 2U4N AMD EPYC Kilo-thread Server


Compute Performance and Power Baselines

One of the biggest areas where manufacturers can differentiate their 2U4N offerings is cooling capacity. As modern processors heat up, they lower clock speeds, decreasing performance. Fans spin faster to cool the system, which increases power consumption and affects power supply efficiency.

Dell EMC PowerEdge C6525 Chassis Fans

STH goes to extraordinary lengths to test 2U4N servers in a real-world scenario. You can see our methodology here: How We Test 2U 4-Node System Power Consumption.

In this configuration, what we are testing is the ability of these dense 2U4N systems to keep processors cool compared to baseline 1U systems. In previous generations, many systems showed wide performance deltas between 1U1N (1U 1-node) and 2U4N (2U 4-node) platforms because processors would throttle under load. An ideal result is a 2U4N platform that matches 1U1N performance, since that would mean cooling is sufficient to allow processors to run at their maximum turbo frequencies.

Dell EMC PowerEdge C6525 Compute Performance to Baseline

We loaded the PowerEdge C6525 with a few configurations to compare its performance against our average 1U system performance. We then ran one of our favorite workloads on all four nodes simultaneously for 1400 runs. We threw out the first 100 runs' worth of data and considered the system sufficiently heat soaked by the 101st run, after days of testing. The remaining runs are used to keep the machine warm until all systems have completed their runs. We also used the same CPUs in both sets of test systems to remove silicon differences from the comparison.

This is an extraordinarily labor-intensive process, but it is done to eliminate variables and simulate a large installation that has a significant load.
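For readers who want to run a similar analysis on their own logs, a minimal sketch of the heat-soak filtering step might look like the following. The CSV layout, file name, and node labels are assumptions for illustration; this is not STH's actual tooling.

```python
# Hypothetical sketch of the heat-soak filtering described above.
# Assumes each node logs one benchmark score per run to a CSV shaped like:
#   node,run_index,score
import csv
from collections import defaultdict
from statistics import mean

HEAT_SOAK_RUNS = 100  # discard the first 100 warm-up runs per node

def load_scores(path):
    """Return {node: [score, ...]} ordered by run index, skipping warm-up runs."""
    runs = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            runs[row["node"]].append((int(row["run_index"]), float(row["score"])))
    return {
        node: [score for idx, score in sorted(entries) if idx > HEAT_SOAK_RUNS]
        for node, entries in runs.items()
    }

if __name__ == "__main__":
    scores = load_scores("c6525_runs.csv")  # illustrative file name
    for node, vals in sorted(scores.items()):
        print(f"{node}: {len(vals)} runs kept, mean score {mean(vals):.2f}")
```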

Dell PowerEdge C6525 V. 4x 1U 2P EPYC 7452 Server Performance

Please note that this chart does not use a zero-based Y-axis. The differences are under 0.8%, so on a zero-based axis they would all look as though they sit exactly on the 100% marker. The key takeaway is that we are within test variance across runs, even using a large number of runs. We did not see any notable thermal throttling, even with the chassis in a heavy-stress environment.
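As a simple illustration of how a chart like this is read, each node's mean score can be expressed as a percentage of the 1U baseline mean. The numbers in this sketch are hypothetical placeholders chosen to stay within the sub-0.8% window, not measured results.

```python
# Illustrative only: expressing per-node results relative to the 1U baseline mean.
# The scores below are hypothetical placeholders, not measured data.
baseline_mean = 1000.0                 # hypothetical average 1U1N score
node_means = {"node1": 1004.2,         # hypothetical C6525 per-node means
              "node2": 998.7,
              "node3": 1002.1,
              "node4": 996.5}

for node, score in node_means.items():
    relative = score / baseline_mean * 100
    print(f"{node}: {relative:.2f}% of baseline")  # all within the sub-0.8% window
```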

With these results, it seems the PowerEdge C6525 is running its CPUs at full performance levels without throttling them in the process. We did not get to test higher-end CPUs since Dell’s iDRAC can pop field-programmable fuses in AMD EPYC CPUs that lock them to Dell systems. Normally we would test with different CPUs, but we cannot do that with our lab CPUs given Dell’s firmware behavior.

Next, we will continue with our power consumption testing followed by our STH Server Spider and final words.

Dell EMC PowerEdge C6525 Power Consumption to Baseline

One of the other, sometimes overlooked, benefits of the 2U4N form factor is power consumption savings. We ran our standard STH 80% CPU utilization workload, which is a common figure for a well-utilized virtualization server, on both the 1U servers and the PowerEdge C6525 in our sandwich test setup. With dense servers, heat is a concern, so we replicate what one would likely see in the field. This is the only way to get useful comparison information for 2U4N servers.
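For context, an approximately 80% per-core utilization target can be imitated with a duty-cycled busy loop like the sketch below. This is a rough stand-in to illustrate the load level only; it is not the actual STH workload.

```python
# Rough sketch of an ~80% per-core CPU load using a duty-cycled busy loop.
# This is NOT the actual STH workload; it only illustrates the utilization target.
import multiprocessing as mp
import os
import time

PERIOD = 0.1  # seconds per duty cycle
DUTY = 0.8    # busy for roughly 80% of each period

def burn(duty=DUTY, period=PERIOD):
    while True:
        start = time.perf_counter()
        while time.perf_counter() - start < duty * period:
            pass                          # busy-wait (CPU bound)
        time.sleep(period * (1 - duty))   # idle for the remainder of the period

if __name__ == "__main__":
    workers = [mp.Process(target=burn, daemon=True) for _ in range(os.cpu_count())]
    for w in workers:
        w.start()
    time.sleep(60)  # hold the load for one minute in this example
```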

STH 2U 4 Node Power Comparison Test Setup

Here is what we saw in terms of power consumption compared to our baseline nodes.

Dell PowerEdge C6525 V. 4x 1U 2P EPYC 7452 Servers Power Consumption

As you can see, we are getting a small but noticeable power efficiency improvement with the PowerEdge C6525’s 2U4N design. Had Dell used fully independent nodes, each with its own cooling, we would expect to see higher power consumption here than in our control set.

Dell EMC PowerEdge C6525 Chassis 80 Plus Platinum 2400W

This is a 2-3% power consumption improvement. Part of that may come from running the power supplies at higher efficiency levels, and part of it is the shared cooling. When one combines power consumption savings with increased density, that can be a big win for data center operators. If Dell switches to 80 Plus Titanium power supplies, even 1% more efficiency here may have a tangible TCO benefit for these dense systems.
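To put a 2-3% saving in rough perspective, a back-of-the-envelope calculation might look like the sketch below. The chassis wattage, PUE, and electricity price are assumptions chosen for illustration, not figures from this review.

```python
# Back-of-the-envelope view of a 2-3% power saving per chassis.
# Every input here is an assumption for illustration, not a measured value.
chassis_power_w = 1600      # assumed average wall draw for a loaded 4-node chassis
savings_fraction = 0.025    # midpoint of the 2-3% improvement
pue = 1.5                   # assumed facility PUE (cooling overhead)
usd_per_kwh = 0.10          # assumed electricity price
hours_per_year = 24 * 365

watts_saved = chassis_power_w * savings_fraction
kwh_saved = watts_saved * pue * hours_per_year / 1000
print(f"~{watts_saved:.0f} W saved per chassis, "
      f"~${kwh_saved * usd_per_kwh:.0f} per year including facility overhead")
```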

Next, we will get to our STH Server Spider and our final words.

15 COMMENTS

  1. “We did not get to test higher-end CPUs since Dell’s iDRAC can pop field-programmable fuses in AMD EPYC CPUs that lock them to Dell systems. Normally we would test with different CPUs, but we cannot do that with our lab CPUs given Dell’s firmware behavior.”

    I am astonished by just how much of a gargantuan dick move this is from Dell.

  2. Could you elaborate here or in a future article on how blowing some OTP fuses in the EPYC CPU so it will only work on Dell motherboards improves security? As far as I can tell, anyone stealing the CPUs out of a server simply has to steal some Dell motherboards to plug them into as well. Maybe there will also be third-party motherboards not for sale in the US that take these CPUs.

    I’m just curious to know how this improves security.

  3. This is an UNREAL review. Compared to the principle tech junk Dell pushes all over. I’m loving the amount of depth on competitive and even just the use. That’s insane.

    Cool system too!

  4. “Dell’s iDRAC can pop field-programmable fuses in AMD EPYC CPUs that locks them to Dell systems”?? i’m not quickly finding any information on that? please do point to that or even better do an article on that, sounds horrible.

  5. I’m digging this system. I’ll also agree with the earlier commenters that STH is on another level of depth and insights. Praise Jesus that Dell still does this kind of marketing. Every time my Dell rep sends me a principled tech paper I delete and look if STH has done a system yet. It’s good you guys are great at this because you’re the only ones doing this.

  6. To whom it may concern, Dell’s explanation:

    Layer 1: AMD EPYC-based System Security for Processor, Memory and VMs on PowerEdge

    The first generation of the AMD EPYC processors have the AMD Secure Processor – an independent processor core integrated in the CPU package alongside the main CPU cores. On system power-on or reset, the AMD Secure Processor executes its firmware while the main CPU cores are held in reset. One of the AMD Secure Processor’s tasks is to provide a secure hardware root-of-trust by authenticating the initial PowerEdge BIOS firmware. If the initial PowerEdge BIOS is corrupted or compromised, the AMD Secure Processor will halt the system and prevent OS boot. If no corruption, the AMD Secure Processor starts the main CPU cores, and initial BIOS execution begins.

    The very first time a CPU is powered on (typically in the Dell EMC factory) the AMD Secure Processor permanently stores a unique Dell EMC ID inside the CPU. This is also the case when a new off-the-shelf CPU is installed in a Dell EMC server. The unique Dell EMC ID inside the CPU binds the CPU to the Dell EMC server. Consequently, the AMD Secure Processor may not allow a PowerEdge server to boot if a CPU is transferred from a non-Dell EMC server (and a CPU transferred from a Dell EMC server to a non-Dell EMC server may not boot).

    Source: “Defense in-depth: Comprehensive Security on PowerEdge AMD EPYC Generation 2 (Rome) Servers” – security_poweredge_amd_epyc_gen2.pdf

    PS: I don’t work for Dell, and also don’t purchase their hardware – they have some great features, and some unwanted gotchas from time to time.

  7. Wish the 1GbE management NIC would just go away. There is no need to have this per blade. It would be simple for Dell to route the connections to a dumb unmanaged switch chip in that center compartment and then run a single port for the chassis. Wiring up lots of cables to each blade is messy. Better yet, place two ports allowing daisy chaining every chassis in a rack and eliminate the management switch entirely.

  8. It’s part of AMD’s Secure Processor, and it allows the CPU to verify that the BIOS isn’t modified or corrupted; if it is, it refuses to POST. It’s not exactly an eFuse, more of a cryptographic signing scheme where the Secure Processor validates that the computer is running a trusted BIOS image. The iDRAC 9 can even validate the BIOS image while the system is running. The iDRAC can also reflash the BIOS back to a non-corrupt, trusted image. On the first boot of an EPYC processor in a Dell EMC system, it gets a key to verify with; this is what can stop the processor from working in other systems as well.

  9. Honestly, there is no reason Dell can’t have it both ways with iDRAC. iDRAC is mostly software and could be virtualized, with each VM having access to one set of the hardware interfaces. This would cut their costs by three, roughly, while giving them their current management solution. After all, how often do you access all four at once?
