Dell PowerEdge R670 Internal Hardware Overview
Of course, the first step is removing the lid, which is done with a single latch. Some manufacturers struggle with the structural rigidity of 1U servers, so you often see many screws added. Here, it is just a latch, which is good.

Also, Dell has its service guide inside the lid. These guides are becoming more standard in the industry these days, but they are a nice feature that Dell has been including for many generations.

As you can see, the front is the storage section, then we have the fans, followed by the CPUs and memory, then the rear I/O.

Dell has hot-swap 1U fan modules. These are great and a class-leading design, as they are easy to service. Hot-swap fans are notoriously more difficult to implement in 1U servers than in 2U servers, so many vendors still expect service to involve pulling a fan cable/connector.

Here is another look from the other side.

Those fans are charged with cooling the entire system. A small feature you may have noticed is that the fans are positioned so that each CPU heatsink gets airflow from fans in two different modules. This helps provide redundancy in the event a fan module fails.

The server takes two Intel Xeon 6700E, 6700P, or 6500P CPUs. In our system, we have dual Intel Xeon 6767P processors, which are very in-demand SKUs. We have tested multiple AI servers with these exact 64-core SKUs because they provide a balance of cores, clock speeds, and memory in a socket.

Here we have a full set of 16x DDR5 RDIMMs per CPU for 32 total. The higher-end Intel Xeon 6900P series sockets have 12-channel memory, but are practically limited to one DIMM per channel, or 24 DIMMs per dual-socket system, due to the physical width required to fit that many DIMMs. It may seem counterintuitive, but you can plug in more memory capacity on Intel's 8-channel Xeon 6700 series platforms than you can on the company's higher-end 12-channel Xeon 6900 series platforms.
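To put that in rough numbers, here is a quick back-of-the-envelope comparison. The 128GB DIMM size is a hypothetical example for illustration only, not what shipped in our system; the point is simply that 8 channels at 2 DIMMs per channel gives more slots than 12 channels at 1 DIMM per channel.

```python
# Back-of-the-envelope DIMM slot and capacity comparison (dual-socket systems).
# The 128GB DIMM size is a hypothetical example, not the configuration we tested.
DIMM_GB = 128

xeon_6700_dimms = 8 * 2 * 2    # 8 channels x 2 DIMMs per channel x 2 sockets = 32
xeon_6900_dimms = 12 * 1 * 2   # 12 channels x 1 DIMM per channel x 2 sockets = 24

print(f"Xeon 6700 platform: {xeon_6700_dimms} DIMMs, {xeon_6700_dimms * DIMM_GB} GB")
print(f"Xeon 6900 platform: {xeon_6900_dimms} DIMMs, {xeon_6900_dimms * DIMM_GB} GB")
```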

This is a small one, but Dell is able to accomplish this with relatively tame-looking heatsinks. In this generation, we have seen examples like the Massive Microsoft Azure HBv4 AMD EPYC Genoa Heatsinks, but here Dell has relatively little overhang past the socket.

Also around the socket, you can see a number of PCIe cabled connectors.

These can be used for functions like providing the front storage with PCIe lanes.
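If you are curious how those cabled lanes show up from the OS side, a generic Linux sysfs walk will list each NVMe controller along with its negotiated PCIe link. This is just a sketch assuming the standard /sys/class/nvme layout; it is not a Dell-specific tool.

```python
# Minimal sketch: list NVMe controllers and their negotiated PCIe links via sysfs.
# Assumes a standard Linux /sys/class/nvme layout; run on the server itself.
import os

NVME_CLASS = "/sys/class/nvme"

def read_attr(pci_dir, name):
    """Read a sysfs attribute, returning 'n/a' if it is missing."""
    try:
        with open(os.path.join(pci_dir, name)) as f:
            return f.read().strip()
    except OSError:
        return "n/a"

for ctrl in sorted(os.listdir(NVME_CLASS)):
    # The "device" symlink points at the controller's PCIe function.
    pci_dir = os.path.realpath(os.path.join(NVME_CLASS, ctrl, "device"))
    speed = read_attr(pci_dir, "current_link_speed")  # e.g. "32.0 GT/s PCIe"
    width = read_attr(pci_dir, "current_link_width")  # e.g. "4"
    print(f"{ctrl}: {os.path.basename(pci_dir)} at {speed} x{width}")
```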

Here is an example of the storage backplane with a cabled connection. Something Dell offers here is a front PERC H975i that we will discuss more in our management section.

In our management section, we will go into more detail on how this works, but having the PERC H975i in the front of the server means it is not taking up PCIe slots in the rear. That is a very useful arrangement given the limited number of slots in a 1U server.

Dell also gives a nod to the serviceability of its systems with this locking latch. That is more akin to what hyper-scalers used to do.

Between the memory and the power supplies, we can see auxiliary power connectors for use if you have a higher-power NIC or GPU.

In the center, we get our main x16 riser connectors. These used to be called “GENZ” connectors, but that created a lot of confusion in the industry.

One other small feature is that there is an additional cabled connector for the OCP NIC 3.0 in the event you want to get more PCIe bandwidth to the slot. This has become a very common design element in new servers.
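As a rough illustration of why that extra cable can matter, here is the approximate raw PCIe Gen5 bandwidth at x8 versus x16. The lane counts are illustrative only; we are not stating exactly how Dell wires this particular slot.

```python
# Approximate usable PCIe Gen5 bandwidth per direction at different link widths.
# 32 GT/s per lane with 128b/130b encoding is roughly 31.5 Gbit/s per lane.
GEN5_GBIT_PER_LANE = 32 * (128 / 130)

for lanes in (8, 16):
    gbit = lanes * GEN5_GBIT_PER_LANE
    print(f"Gen5 x{lanes}: ~{gbit:.0f} Gbit/s (~{gbit / 8:.0f} GB/s) per direction")
```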

Here is the Dell iDRAC management card. This is what provides the rear I/O functionality and the BMC features.
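Since iDRAC exposes the industry-standard Redfish API, here is a minimal sketch of querying it out-of-band. The hostname and credentials are placeholders, and verify=False is only appropriate for a lab box still using the default self-signed certificate.

```python
# Minimal Redfish query against an iDRAC (hostname and credentials are placeholders).
import requests

IDRAC = "https://idrac.example.com"   # placeholder iDRAC address
AUTH = ("root", "changeme")           # placeholder credentials

# Default iDRAC certificates are self-signed, hence verify=False for a lab setup.
resp = requests.get(f"{IDRAC}/redfish/v1/Systems", auth=AUTH, verify=False, timeout=10)
resp.raise_for_status()

# Print the system resources the BMC exposes.
for member in resp.json().get("Members", []):
    print(member["@odata.id"])
```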

This is a small piece, but one that we thought we would show. Dell has a cable/airflow guide in the middle of its chassis. That may sound like a small feature, but it is fairly uncommon to see these in hyper-scale servers. Remember, someone had to design this with all of the cabling possibilities in mind, validate the airflow with it in place, and then get it into the system.

Finally, Dell offers a number of rear I/O options, including OCP NIC 3.0 slots. We covered these in our OCP NIC 3.0 Form Factors The Quick Guide. Having the internal lock means that Dell gets a bit more space at the rear, but servicing a NIC takes much longer. You will also notice that while it may look like we have two OCP NIC 3.0 slots here, one is labeled for a Dell BOSS card that we do not have in this system, but that we had in the Dell PowerEdge R6715 we reviewed.

As a fun aside, our system did not come with any NICs installed. Luckily, we have stacks of NICs to install since we have perhaps the best network test setup of any review site, one capable of pushing 800Gbps of bi-directional traffic through an NVIDIA ConnectX-8 C8240 800G Dual 400G NIC even in a PCIe Gen5 server.
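For context on how that fits within a PCIe Gen5 server, here is a quick sanity check on the numbers: 800Gbps of bi-directional traffic is roughly 400Gbps in each direction, while a Gen5 x16 link is good for roughly 500Gbps per direction. This assumes the NIC sits on a full x16 Gen5 link.

```python
# Sanity check: 800 Gbit/s bi-directional is ~400 Gbit/s in each direction.
lanes = 16
gen5_gbit_per_lane = 32 * (128 / 130)    # ~31.5 Gbit/s usable per lane
link_gbit = lanes * gen5_gbit_per_lane   # ~504 Gbit/s per direction on Gen5 x16

need_per_direction = 400                 # Gbit/s in each direction
headroom = link_gbit - need_per_direction
print(f"Gen5 x16: ~{link_gbit:.0f} Gbit/s per direction, need {need_per_direction}, "
      f"headroom ~{headroom:.0f} Gbit/s")
```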
Next, let us get to the system topology.


