The Dell EMC PowerEdge C6525 is a “kilo-thread” class server. This 2U 4-node (2U4N) design uses AMD EPYC 7002 “Rome” (and likely future EPYC 7003 “Milan”) processors to pack up to 512 cores and 1024 threads into a single chassis. Even moving to a midrange AMD EPYC 7002 processor such as the EPYC 7452, one can still get 256 cores and 512 threads in a single chassis, which is more than Intel Xeon currently offers. In our review, we are going to take a look at the C6525 and show what it has to offer.
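As a quick sanity check of that density math, here is a minimal sketch of the core and thread counts. It assumes two sockets per node and four nodes per chassis, using the 64-core EPYC 7742 for the top-bin figure alongside the 32-core EPYC 7452; the script is purely illustrative.

```python
# Illustrative density math only; assumes a 2U4N chassis with two sockets per node.
# Core counts: EPYC 7742 = 64 cores, EPYC 7452 = 32 cores (both SMT2 parts).
NODES_PER_CHASSIS = 4
SOCKETS_PER_NODE = 2

for sku, cores_per_socket in {"EPYC 7742": 64, "EPYC 7452": 32}.items():
    cores = NODES_PER_CHASSIS * SOCKETS_PER_NODE * cores_per_socket
    threads = cores * 2  # SMT2 doubles threads on EPYC 7002
    print(f"{sku}: {cores} cores / {threads} threads per 2U chassis")
```

With these assumptions the script prints 512 cores / 1024 threads for the 64-core SKU and 256 cores / 512 threads for the EPYC 7452, matching the figures above.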
Dell EMC PowerEdge C6525 2U4N Server Overview
Since this is a more complex system, we are first going to look at the system chassis. We are then going to focus our discussion on the design of the four nodes. We also have a video version of this review for those who prefer to listen along. Our advice is to open it in another YouTube tab and listen as you go through the review.
Since our full review text is approaching 5,000 words as we add this video, the written review has more detail. Still, we know some prefer to consume content in different ways, so we are adding that option.
Dell EMC PowerEdge C6525 Chassis Overview
The Dell EMC PowerEdge C6525 is a 2U 4-node system. Fitting four nodes into a single 2U enclosure effectively doubles rack compute density compared to four 1U servers. On the front of the chassis, one gets one of three options: 12x 3.5″ drives, 24x 2.5″ drives, or a diskless configuration. Since 2.5″ storage is popular in these platforms, that is the configuration we are testing. For the 2.5″ version, there are also 24x SATA/SAS as well as 16x SATA/SAS plus 8x NVMe configurations.
There are two other features we wanted to point out on the front of the chassis. First, each node gets power and reset buttons, along with LED indicators, all mounted on the rack ears. This is fairly common on modern 2U4N servers to increase density. Second, while many vendors use uneven drive spacing to add venting in the middle of their chassis, the PowerEdge C6525 locates its extra vents outside the drive array. Dense systems like this need extra airflow, and these vents often provide it.
Moving to the rear of the chassis, we see the four nodes. Dell denotes the model number on a blue handle. If you had both C6420 and C6525 models in your data center, this would give an easy indicator of what you are looking at. Both systems share the Dell C6400 chassis, which is why we see “C6400” on the right ear of the C6525. Even though it is the same chassis, we asked, and Dell told us you cannot just mix and match nodes. Instead, the chassis needs a firmware swap to support either Intel or AMD nodes. Thinking further into system lifecycles, it would have been great if this were not the case, since upgrades would then be a matter of simply plugging nodes into any available slot in racks full of C6400 chassis. It would also allow for greener operations by making the C6400 agnostic to Intel or AMD. Some other vendors have this capability in their 2U4N chassis.
In the rear of the chassis, we have the power supplies in the middle with two nodes on either side. These are redundant hot-swap 80Plus Platinum power supplies. Our test unit uses 2400W power supplies, but 2000W and 1600W options are also available. With AMD EPYC, 1600W is going to be a stretch in a full 8x CPU configuration since AMD CPUs tend to run at higher TDPs. We also wish that Dell started using 80Plus Titanium units in these designs. Other vendors have those options, and adding a small amount of power efficiency would help lower operating costs and be a bit more environmentally friendly.
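To show why 1600W gets tight, here is a rough power-budget sketch. The CPU TDP and per-node overhead figures below are our own assumptions for illustration, not Dell or AMD specifications.

```python
# Rough, illustrative chassis power budget; all values below are assumptions.
CPU_COUNT = 8              # 2 sockets x 4 nodes
CPU_TDP_W = 225            # assumed mid/high-bin EPYC 7002 TDP
NODE_OVERHEAD_W = 150      # assumed memory, NICs, drives, VRM losses per node
NODE_COUNT = 4
CHASSIS_OVERHEAD_W = 100   # assumed shared fans and distribution losses

total_w = (CPU_COUNT * CPU_TDP_W
           + NODE_COUNT * NODE_OVERHEAD_W
           + CHASSIS_OVERHEAD_W)
print(f"Estimated full-load draw: {total_w}W")  # 2500W with these assumptions
```

Even with conservative assumptions, a fully loaded eight-socket chassis can land well above 1600W, which is why the larger power supply options make sense for higher-TDP EPYC SKUs.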
Dell has a service cover that runs atop the chassis. This provides useful information for a field service tech regarding chassis-level integration.
Inside, one can see four pairs of 80mm fans. We will see how these perform in our efficiency testing. There are effectively two ways that 2U4N systems are designed these days. One option is an array of 80mm fans, such as what the C6525 is using. The other common design is to put smaller 40mm fans on each node. Typically, arrays of larger fans like the C6525 uses translate to lower power consumption. Some organizations prefer to have the entire node assembly, including fans, be removable for easier service. Given the reliability of fans these days, for STH’s own use we prefer the midplane 80mm fan design the C6525 has, and later in this review we will show why it helps lower overall system power consumption.
Inside the chassis, there is a center channel with power cables and other chassis-level features. One item we will note here is that the PowerEdge C6525 does not have an iDRAC BMC in the chassis (iDRAC is found on the nodes). We have commonly seen this design for years, dating back to the Dell PowerEdge C6100 XS23-TY3 Cloud Server. Many 2U4N systems that are more influenced by hyper-scaler requirements and use industry-standard BMCs (e.g. the AST2500 series these days) tend to add a BMC here. We have even seen examples with a dedicated chassis management NIC and port.
The channel design keeps all cabling out of the nodes. Dell has large walls on either side of this channel which ensure these cables are not inadvertently snagged by nodes. The entire section is designed to be cooled via the power supplies, leaving the 80mm fan complex to cool the nodes.
Before we move on to the nodes, we wanted to point out the differences in the systems you can put in this chassis. Here are the C6525 and C6420 side-by-side, along with the C4140, which is a different class of system.
Now that we have looked at the exterior, it is time to look at what each node in the C6525 offers.