Dell PowerEdge R770 Internal Hardware Overview
Taking a quick look at the system overview, we are going to go from front to rear, but this is a really cool picture if you enjoy server hardware. Even with all of those riser options, there are minimal cables in the rear of the system. On the front, the cables are generally focused on storage and sit low in the chassis. With PCIe Gen5, signal integrity has become more challenging, which usually means more cables in systems, but that is less the case here.

Behind the storage is the fan partition. It pulls out for easier access to the cabling. You can also see that below the fans and in the center there are some great foam-blocked cable paths.

Although the entire fan partition can be removed, each fan is also hot-swappable.

Here is a quick look at the center airflow section.

PCIe in servers often comes out of the north and south sides of the chip, while the east-west directions are usually reserved for memory. Here we can see some of the front PCIe MCIO connectors that are used for front NVMe storage in our configuration. You can also see the headers where the fan partition plugs in.
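If you want to see how those MCIO-cabled front drives map back to PCIe from the OS side, here is a minimal sketch using the standard Linux sysfs layout. This is not Dell-specific, and the addresses and paths will differ from system to system; it simply resolves each NVMe controller to its PCIe address.

```python
# Minimal sketch: map each NVMe controller on a Linux host back to its PCIe
# address, so you can see which root ports the front MCIO-cabled drives land on.
# Uses the standard sysfs layout; run it on the target system itself.
import glob
import os

for ctrl in sorted(glob.glob("/sys/class/nvme/nvme*")):
    # /sys/class/nvme/nvmeX is a symlink into the PCI device tree,
    # e.g. .../0000:17:00.0/nvme/nvme0, so the controller BDF is two
    # path components above the class entry.
    real = os.path.realpath(ctrl)
    parts = real.split("/")
    pci_addr = parts[-3] if len(parts) >= 3 else "unknown"

    model_path = os.path.join(ctrl, "model")
    model = open(model_path).read().strip() if os.path.exists(model_path) else "?"
    print(f"{os.path.basename(ctrl)}: {model} @ {pci_addr}")
```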

On the right side we get more MCIO connectors and the latch point for the fan partition.

The same is true on the left, along with power for the NVMe backplanes.

Dell’s airflow guide has some configurability as well. As you can see in the photo below, the top of the left section of the guide is open to allow airflow to the NVIDIA H100 NVL GPU, while airflow through the right section is largely blocked. That is an example of how Dell directs airflow in the chassis.

Under that airflow guide is a mass of CPUs and memory. Inside are dual Intel Xeon 6700 series processors.

Each processor gets eight channels of DDR5 memory with two DIMMs per channel, for a total of 32 ECC RDIMM slots across the two sockets.
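As a quick sanity check on that slot math, here is a tiny sketch. The slot counts come from the configuration above; the 64GB module size is just an assumed example for illustration, not the actual population of our test system.

```python
# Quick sanity check on the R770's DIMM slot count, plus a capacity estimate
# using an assumed (hypothetical) module size.
sockets = 2
channels_per_socket = 8
dimms_per_channel = 2

total_slots = sockets * channels_per_socket * dimms_per_channel
print(f"Total RDIMM slots: {total_slots}")  # 2 * 8 * 2 = 32

assumed_dimm_gb = 64  # example module size, not the tested configuration
print(f"Capacity if fully populated with {assumed_dimm_gb}GB RDIMMs: "
      f"{total_slots * assumed_dimm_gb}GB")
```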

Dell has an OCP-inspired HPM (Host Processor Module) design, so we see the riser slots across the rear of the motherboard with the OCP-style slots at the rear edge.

I open many servers each year, and this just looks great.

Just behind the power supplies, there are power connectors for components like the NVIDIA H100 NVL GPU we have installed.

Here is the other side.

And the top view.

Dell is using the OCP DC-SCM for iDRAC and local management, with an “attic,” which is always fun.
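Since iDRAC exposes a standard Redfish service, here is a hedged sketch of pulling basic system inventory from it out-of-band. The hostname and credentials are placeholders, and the exact fields returned can vary by iDRAC firmware version.

```python
# Sketch: query basic system inventory from iDRAC over Redfish.
# The BMC address and credentials below are placeholders.
import requests

IDRAC_HOST = "https://idrac.example.local"   # placeholder BMC address
AUTH = ("root", "calvin")                    # replace with real credentials

resp = requests.get(
    f"{IDRAC_HOST}/redfish/v1/Systems/System.Embedded.1",
    auth=AUTH,
    verify=False,  # many BMCs ship with self-signed certs; verify in production
    timeout=10,
)
resp.raise_for_status()
system = resp.json()

print(system.get("Model"), system.get("SKU"))
print("Memory (GiB):", system.get("MemorySummary", {}).get("TotalSystemMemoryGiB"))
print("Power state:", system.get("PowerState"))
```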

In the center, we have the BOSS (Boot Optimized Storage Solution) module.

Then there is the OCP NIC 3.0 slot. It uses an internal latch design, which is a bummer since replacing a NIC takes a lot of disassembly. Other vendors use SFF with pull-tab designs, which are more common in cloud servers since they are easy to service without removing risers.

That OCP NIC 3.0 slot is actually just one of two, since in the top riser of the center stack we have another one.

Something worth noting is that this design is great for the configuration flexibility of the PowerEdge R770. At the same time, as someone who takes apart dozens of servers across vendors each year, this is not the easiest system to service by any means. In packing in so much functionality, the system has a lot of dependencies where some components need to be removed before others can be accessed. In some circles that is going to be a controversial take, but there were more than a few times when servicing a part seemed straightforward yet required a lot of steps.

Next, let us get to the topology.