The Dell EMC PowerEdge R7525 is the fastest dual-socket mainstream x86 server you can buy today. To make that happen, Dell EMC employs a number of different tricks. The configuration we are testing has 24x NVMe SSDs which make for an impressive storage array. Beyond that, this server can handle up to the AMD EPYC 7H12, a 280W TDP 64-core CPU. When those CPUs launched almost a year ago, they were primarily reserved for customers building supercomputers. Adding to the server’s allure is an array of expansion options that further showcase the system’s capabilities. In our review, we are going to get into the details and show you why we came away impressed with this server.
Dell EMC PowerEdge R7525 Hardware Overview
Since our reviews these days feature more images than in previous years, we wanted to split this section into external views and internal views of the hardware.
We also have a video (above) with B-roll showing more angles than we can cover on the web. As always, we suggest opening the video in a new browser tab to listen along or check out the views. As a quick note, we have done a few videos on this system previously, in our AMD PSB piece as well as our 160 PCIe Lane Design piece.
Dell EMC PowerEdge R7525 External Overview
On the front of the system Dell sent, we see a “fancy” faceplate with its small status screen. This is a configurable option. Frankly, this look is shared with Dell’s Xeon servers of this generation and looks great. It is also costly. Standard features include USB service ports as well as a monitor port for cold aisle service. The system can be configured with Quick Sync 2, which is Dell EMC’s fast setup via a mobile device option, not to be confused with Intel’s QuickSync, which is an Intel GPU video transcoding technology.
On the front of our test system, we have 24x 2.5″ drive bays. Dell EMC has other offerings available, including SAS and SATA options as well as configurations with fewer drive bays. Some will not want this much drive bay connectivity, while others will certainly want as much storage as possible. As we would expect from a PowerEdge, this is highly configurable.
Something that is immediately different on this AMD EPYC server versus its Intel Xeon counterparts is that all 24x NVMe SSD bays are directly connected to the motherboard with dedicated lanes. That takes 24x PCIe x4 links, or 96 lanes in total. 96 is also the total PCIe lane count we get in current 2nd Generation Intel Xeon Scalable systems like the Dell EMC PowerEdge R740xd. The big difference here is that we do not need PCIe switches to hit this lane count. The other, and perhaps more profound, difference is that we get PCIe Gen4 lanes with EPYC instead of the legacy PCIe Gen3 lanes with Intel Xeon.
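To put that lane math in perspective, here is a quick back-of-the-envelope sketch comparing aggregate front-plane bandwidth on Gen3 versus Gen4 lanes. The per-lane throughput figures below are approximate usable rates we are assuming for illustration, not Dell EMC or AMD specifications.

```python
# Back-of-the-envelope: aggregate front-plane PCIe bandwidth for 24x direct-attached NVMe drives.
# Per-lane throughput figures are approximate usable rates (rough assumptions for illustration).
DRIVES = 24
LANES_PER_DRIVE = 4
GBPS_PER_LANE = {"PCIe Gen3": 0.985, "PCIe Gen4": 1.969}  # GB/s per lane, approximate

total_lanes = DRIVES * LANES_PER_DRIVE  # 96 lanes, all without PCIe switches on the R7525
for gen, per_lane in GBPS_PER_LANE.items():
    print(f"{gen}: {total_lanes} lanes ≈ {total_lanes * per_lane:.0f} GB/s aggregate")
```

Running this shows roughly 95 GB/s of aggregate front-plane bandwidth on Gen3 versus roughly 189 GB/s on Gen4, which is why the switchless Gen4 topology matters for an all-NVMe configuration like this one.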
Since our internal overview section is quite long, we will just quickly mention here that the backplane is itself interesting. Dell is using a single 24x drive PCB, where many other vendors in the market use three 8x drive backplane PCBs. That is a small design detail that immediately sticks out when you review a lot of servers.
On the rear of the system, we get normal PowerEdge flexibility. Instead of looking at either side, we wanted to focus on what we still get, even with 24x NVMe drives using 96 PCIe Gen4 lanes. In a system like the PowerEdge R740xd, we would have very limited expansion. Here, we have two full-height and two low-profile expansion slots along with an OCP NIC 3.0 slot. We will get into more detail on this in our internal overview, but a unique feature here is that none of the I/O, including the two 1GbE ports (Broadcom), the two SFP+ ports (Intel), and the iDRAC, USB, and VGA ports, is connected to the motherboard. Dell has a truly modular design.
On either side of the rear of the chassis, we have 1.4kW 80Plus Platinum Dell EMC power supplies. As we saw in our power consumption testing, this larger capacity is likely needed given how much one can configure in this system.
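As a rough illustration of why 1.4kW supplies make sense here, the sketch below adds up a hypothetical peak power budget. The per-component wattages are our own illustrative assumptions, not Dell EMC figures; only the 280W CPU TDP and drive count come from the configuration above.

```python
# Illustrative worst-case power budget; wattages below are rough assumptions, not Dell EMC specs.
budget_w = {
    "2x EPYC 7H12 (280W TDP each)": 2 * 280,
    "24x NVMe SSDs (~15W each under load, assumed)": 24 * 15,
    "RAM, NICs, fans, board overhead (assumed)": 300,
}
total = sum(budget_w.values())
print(f"Estimated peak draw: ~{total} W")  # ~1220 W, approaching a single 1.4kW PSU's capacity
```

Under those assumptions, a heavily loaded configuration can approach the capacity of a single supply, so the 1.4kW units keep the system within a redundant 1+1 envelope.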
One other small note is that the rear of the system has a sturdy handle. One can remove this handle, but it can also be used as a tie-down point for a bit of cable management if needed. If the low-profile card slots are used, the handle may need to be removed for cabling. Again, this is a small touch, but a nice one to see.
Next, we are going to look inside the system and see what it has to offer under the cover.