Dell PowerEdge R6715 Review A Spiffy 1U AMD EPYC Server

Dell PowerEdge R6715 Internal Hardware Overview

Let us now move from the front to the rear of the interior of the chassis.

Dell PowerEdge R6715 Liquid Cooled Internal Overview 1

Here is the front storage backplane with the MCIO connections for the front drives.

Dell PowerEdge R6715 Backplane 5

Behind the storage backplane are the 1U fan modules.

Dell PowerEdge R6715 Fan 1

The fan modules Dell uses are a big step up from those in many whitebox servers since they are easy to swap. This is a small but nice feature.

Dell PowerEdge R6715 Fan 4

At the heart of the system is a single AMD Socket SP5. That is for a single AMD EPYC 9004/9005 series processor, and Dell supports up to 160 cores in the server. Something else neat with this platform is that there are 24 DDR5 RDIMM slots, which is 12 channels with 2 DIMMs per channel (2DPC). That is a lot of cores and memory for a single-socket server.
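The slot math above can be sketched in a few lines. This is just back-of-the-envelope arithmetic; the 96 GB DIMM size is a hypothetical example for illustration, not a tested configuration:

```python
# Memory topology math for AMD Socket SP5 as described above.
channels = 12
dimms_per_channel = 2          # 2DPC
slots = channels * dimms_per_channel

# Hypothetical DIMM capacity; actual supported sizes depend on the build.
dimm_gb = 96
total_gb = slots * dimm_gb

print(f"{slots} slots -> {total_gb} GB total")  # 24 slots -> 2304 GB total
```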

Dell PowerEdge R6715 RDIMMs 2

The big difference with this platform is that the CPU is liquid cooled. While there are still fans in the system, those fans are for the other components. The CPU is cooled via this liquid cooling block.

Dell PowerEdge R6715 CPU 2

Here is a quick look at the bottom of the cooling block.

Dell PowerEdge R6715 Liquid Cooling Block Bottom 1

As a quick aside, with the AMD Socket SP5 and 24x DDR5 RDIMMs there is not enough room in a 19″ rack chassis for a second CPU next to the first one.

Dell PowerEdge R6715 Inside 5

Providing connectivity is an array of onboard MCIO connectors.

Dell PowerEdge R6715 Inside 4

Another small but nice feature is the set of quick-release latches on the motherboard.

Dell PowerEdge R6715 Inside 6

Here is the other side of the power supply connecting into the server.

Dell PowerEdge R6715 Inside 1

Here is the motherboard’s onboard OCP NIC 3.0 slot.

Dell PowerEdge R6715 Riser 4 2

Here is another look at the center below that large riser complex. We can see the two boot SSDs and the iDRAC card here.

Dell PowerEdge R6715 Riser 2 2
Dell PowerEdge R6715 Internal Rear

Next, let us get the system up and running.

9 COMMENTS

  1. I think I saw the reference in this post, but it might have been somewhere else. I don’t understand why people still test Linux with Ubuntu or Debian – or VMs with Proxmox – and not Fedora. Fedora is the cutting edge of mainstream operating systems, using the latest kernel, adding new tech asap, retiring obsolete tech asap. Fedora Server is dead easy to use. Its package system IMO is superior to everything else. I can set up one of these MinisForum minipcs from box to customised server in less than an hour. From what I understand, getting Ubuntu or Proxmox to work with the latest kernel one needs to perform the operating system equivalent of a root canal. What’s the point of having the latest and greatest hardware if the operating system doesn’t exploit the technology? Even with Fedora, there is a few months’ wait unless one wants to play with Rawhide.

  2. @Mike: Because Ubuntu and Debian have well tested stable versions, and most people running servers want reliability above all else (hence the redundant power supplies, redundant Ethernet, etc.) If you want bleeding edge hardware support and you can tolerate the occasional kernel crash that’s when you go with something that keeps packages more up to date, although Fedora is a bit of an odd choice for that as it still lags behind other distros like Arch or Gentoo that are typically only a day or so behind upstream kernel releases.

    But I do question why you need the latest kernel on a server since they usually don’t get new hardware added to them very often, they typically come with slightly older well tested hardware that already has good driver support, and most server admins don’t like using their machines as guinea pigs either, especially when a kernel bug could easily take the machine out and prevent it from booting at all. It sounds more like something relevant to a workstation or gaming PC where you’re regularly upgrading parts, and you don’t need redundant PSUs etc. because uptime isn’t a primary concern.

  3. Ubuntu is the most popular Linux distribution for servers. Usually, Red Hat and Ubuntu are the two pre-installed Linux distributions that almost every server OEM has options for. If you want to do NVIDIA AI, then you’re using Ubuntu for the best support. RHEL is maybe second.

    If you’ve got to pick a distribution, you’d either have to go RHEL or Ubuntu-Debian. I don’t see others as even relevant, other than for lab environments.

    If STH were a RHEL shop, I’d understand, as that’s a valid option. Ubuntu is the big distribution that’s out there, so that’s the right option. They just happened to use Ubuntu since before it was the biggest. I’d say a long time ago when they started using Ubuntu it was valid to ask Ubuntu or CentOS, but they’ve made the right choice in hindsight even though at the time I thought they were dumb for not using CentOS. CentOS is gone so that’s how that turned out.

  4. In the Dell R6715 technical guide, they publish memory speeds at 5200 MT/s even though the server (air cooled) uses 6400 MT/s DIMMs. Does the liquid cooled version of the R6715 also downshift the memory speed?

  5. Patrick, I understand that the second bank always creates a reduced memory bandwidth. What I don’t understand is why the R6715 documentation indicates 5200 MT/s for the memory bandwidth when running 6400 MT/s DIMMs. The HPE DL325 has the same published numbers in its documentation. Why aren’t these servers achieving full bandwidth on 6400 MT/s DIMMs when running a single, fully populated bank?
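To put rough numbers on the 6400 vs. 5200 MT/s question in the comments above, here is an illustrative sketch of theoretical peak bandwidth, assuming a standard 64-bit (8-byte) DDR5 data bus per channel and 12 channels; real-world achieved bandwidth will be lower:

```python
def peak_bw_gbs(mts: int, channels: int = 12, bus_bytes: int = 8) -> float:
    """Theoretical peak memory bandwidth in GB/s (decimal):
    transfers/s * bytes per transfer per channel * channel count."""
    return mts * bus_bytes * channels / 1000

print(peak_bw_gbs(6400))  # 614.4 GB/s at 6400 MT/s
print(peak_bw_gbs(5200))  # 499.2 GB/s at 5200 MT/s
```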
