The Dell PowerEdge R760 is this generation’s design study in server engineering from Dell. In our review, we are going to see how the company’s new mainstream 2U dual-socket Intel Xeon server compares to the rest of the industry. This is going to be a very in-depth server review as usual, so let us get to it.
Dell PowerEdge R760 Hardware Overview
As we have been doing, we are going to split this into an internal and external overview. We also have a video accompanying this article that you can find here:
We suggest watching it in its own browser, tab, or app for a better viewing experience. With that, let us get to the hardware.
Dell PowerEdge R760 External Hardware Overview
Looking at the front of the 2U PowerEdge R760, we can see a partially populated 24x 2.5″ design. Dell has other options for front storage, including 3.5″ bays, more SAS, more NVMe, and so forth, but we are only showing one configuration here.
On the left side, we get service buttons and then the eight SAS bays. Four of those bays are populated with 1.6TB SAS SSDs. We would expect many of our readers to focus more on NVMe storage these days, but many Dell customers are accustomed to using Broadcom-based PERC controllers for SAS arrays.
One of the cool features of the Dell backplane design is that it can put SAS components directly on the backplane, leaving the PCIe slot area free.
On the other side, we get the service tag, then USB console ports, a VGA port, and the power button. The big feature on this side is the NVMe drive connectivity. Here we have eight 3.2TB NVMe SSDs.
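As a quick aside, the service tag printed on this pull-out label is also exposed to the operating system via DMI, which is handy when opening support cases. Here is a minimal sketch, assuming a Linux install with root privileges since the DMI serial file is typically root-readable:

```python
# Read the Dell Service Tag (DMI product serial) on a Linux host.
# Assumes root privileges; /sys/class/dmi/id/product_serial is usually
# only readable by root. This is a standard sysfs path, not Dell-specific.
from pathlib import Path

def read_service_tag() -> str:
    return Path("/sys/class/dmi/id/product_serial").read_text().strip()

if __name__ == "__main__":
    print(f"Service Tag: {read_service_tag()}")
```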
Dell’s 24x 2.5″ backplane is partitioned into three separate PCBs, so one can customize the drive configuration as we see in this unit. Some vendors use a single backplane, but these smaller boards are easier to replace and preserve the SAS/SATA option. The physical design of these backplanes is the fanciest we have seen on a server to date.
Looking to the rear of the system, we can see a lot of customization potential.
First, we have the power supplies. Our unit has two 1.4kW 80Plus Platinum units. These are probably what we would target for configurations with higher-end CPUs but without high-power GPUs, accelerators, or fully populated NVMe drive bays in the front. 80Plus Platinum is good, but we are seeing more servers with 80Plus Titanium power supplies. Dell has optional Titanium-level power supplies as well, but many are still Platinum rated.
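To put the Platinum versus Titanium difference in perspective, here is a rough back-of-the-envelope calculation. The efficiency figures are the approximate 80Plus 230V targets at 50% load (roughly 94% for Platinum and 96% for Titanium), and the load figure is hypothetical, so treat this as an illustration rather than a measurement of these specific Dell power supplies:

```python
# Rough illustration of Platinum vs. Titanium PSU losses at the wall.
# Efficiency values are approximate 80Plus 230V targets at 50% load,
# not measured numbers for these Dell power supplies.
dc_load_watts = 700          # hypothetical DC load on one PSU
platinum_eff = 0.94          # ~80Plus Platinum at 50% load (230V)
titanium_eff = 0.96          # ~80Plus Titanium at 50% load (230V)

platinum_wall = dc_load_watts / platinum_eff
titanium_wall = dc_load_watts / titanium_eff

print(f"Platinum wall draw: {platinum_wall:.1f} W")
print(f"Titanium wall draw: {titanium_wall:.1f} W")
print(f"Savings per PSU:    {platinum_wall - titanium_wall:.1f} W")
```

At that hypothetical load, the difference works out to roughly 15W per power supply, which is why higher-efficiency units matter more as rack densities climb.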
The power supplies are on either side of the chassis, which is a bit different from the more common design where both sit next to each other. Dell has a slender PSU here, giving it a bit more room for airflow around the power supplies.
Next is an STH favorite feature: the Dell BOSS. M.2 SSDs are very reliable these days, so we often see them housed inside the chassis. Dell has a solution, the Dell BOSS, that takes two M.2 SSDs and makes them rear-replaceable.
This is a solution designed for boot media. The BOSS controller is a lower-end RAID controller, so one can mirror two M.2 SSDs and then use that array for boot. This is important for OSes like Windows and VMware ESXi, while most Linux distributions can simply use software RAID. Still, it is an easy-to-deploy solution.
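For readers who skip the BOSS and take the Linux software RAID route mentioned above, a boot mirror across two M.2 drives is straightforward. Here is a minimal sketch; the device names are hypothetical, and partitioning, EFI, and bootloader setup are deliberately left out:

```python
# Sketch: create a RAID 1 mirror across two M.2 SSDs with mdadm on Linux.
# Device names are hypothetical; adjust for the actual system. mdadm may
# prompt for confirmation if it finds existing metadata on the drives.
import subprocess

DEVICES = ["/dev/nvme0n1", "/dev/nvme1n1"]  # hypothetical M.2 boot devices

def create_boot_mirror(md_device: str = "/dev/md0") -> None:
    subprocess.run(
        ["mdadm", "--create", md_device,
         "--level=1", "--raid-devices=2", *DEVICES],
        check=True,
    )

def show_raid_status() -> str:
    # /proc/mdstat summarizes all md arrays and their sync state.
    with open("/proc/mdstat") as f:
        return f.read()

if __name__ == "__main__":
    create_boot_mirror()
    print(show_raid_status())
```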
One small item worth noting here is just how complex the BOSS solution is. Other vendors use motherboard M.2 slots or put a Marvell controller on a simple riser. Dell has custom sheet metal, cables, a captive thumb screw, and more.
Standard onboard I/O on the server is essentially non-existent; all of the I/O is provided via modules and cards. Still, we wanted to cover what is in this server and show how it is implemented.
Dell has a custom, Dell-specific Broadcom dual RJ45 module for its base networking.
We can see this is a Broadcom NetXtreme BCM5720 dual 1GbE solution.
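From the OS side, this NIC is easy to spot by its PCI IDs. A small sketch, assuming a Linux host: the Broadcom vendor ID is 0x14e4 and the BCM5720 commonly reports device ID 0x165f, though exact IDs can vary by card variant:

```python
# Enumerate PCI devices via sysfs and flag Broadcom BCM5720 ports.
# Vendor 0x14e4 is Broadcom; device 0x165f is the common BCM5720 ID,
# but exact IDs can vary by variant.
from pathlib import Path

BROADCOM_VENDOR = "0x14e4"
BCM5720_DEVICE = "0x165f"

def find_bcm5720() -> list[str]:
    matches = []
    for dev in Path("/sys/bus/pci/devices").iterdir():
        vendor = (dev / "vendor").read_text().strip()
        device = (dev / "device").read_text().strip()
        if vendor == BROADCOM_VENDOR and device == BCM5720_DEVICE:
            matches.append(dev.name)  # e.g. "0000:04:00.0"
    return matches

if __name__ == "__main__":
    for addr in find_bcm5720():
        print(f"BCM5720 found at PCI address {addr}")
```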
Next, we have the OCP NIC 3.0 port with an Intel E810-XXV dual 25GbE NIC installed. You can learn more about OCP NIC 3.0 form factors here, but this is the 4C + OCP connector and uses the SFF with an internal lock design to keep the card in place. That means that one has to open the system and remove risers to service this NIC. SFF with Pull Tab can be serviced without opening the system, which is why we see it on servers designed to minimize service costs like hyper-scale servers. Dell’s business model includes significant service revenue, so it is likely less keen to use that design here.
The iDRAC service port, USB ports, and VGA port are on another custom module. This module also has chassis intrusion detection.
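Beyond the dedicated service port, the iDRAC also exposes a standard Redfish API over the network, which is how many shops automate inventory and health checks. Here is a minimal sketch using Python requests; the address and credentials are placeholders, and while System.Embedded.1 is the usual iDRAC system resource path, verify it on your own system:

```python
# Sketch: query basic system inventory from an iDRAC over Redfish.
# The IP address and credentials are placeholders; System.Embedded.1 is
# the usual iDRAC system resource path, but confirm it for your setup.
import requests

IDRAC_HOST = "https://192.0.2.10"     # placeholder iDRAC address
CREDENTIALS = ("root", "calvin")      # replace with real credentials

def get_system_summary() -> dict:
    resp = requests.get(
        f"{IDRAC_HOST}/redfish/v1/Systems/System.Embedded.1",
        auth=CREDENTIALS,
        verify=False,  # iDRACs ship with self-signed certificates
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    return {
        "Model": data.get("Model"),
        "ServiceTag": data.get("SKU"),
        "PowerState": data.get("PowerState"),
        "BiosVersion": data.get("BiosVersion"),
    }

if __name__ == "__main__":
    print(get_system_summary())
```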
In terms of the risers, the PowerEdge R760 can handle an impressive eight PCIe Gen5 slots, and that does not include the OCP NIC 3.0 slot.
Just above the OCP NIC 3.0 slot, there is a dual low-profile riser. We will cover the connectors in more detail later, but Dell is using high-density board-to-board connectors rather than cables to attach its risers. It does, however, have some cable-in-the-middle designs like this dual PCIe Gen4 slot riser.
This is the middle riser where we have a PCIe 25GbE NIC.
This is another cable-in-the-middle dual PCIe Gen5 riser.
This is a great exterior design, but now let us get inside the server to see how it works.