Dell PowerEdge R670 Review: A 1U Intel Xeon 6 Speedster


Dell PowerEdge R670 Internal Hardware Overview

Of course, the first step is removing the lid, which is done with a single latch. Some manufacturers struggle with the structural rigidity of 1U servers and end up adding many screws to compensate. Here, it is just a latch, which is good.

Dell PowerEdge R670 Front Angled 1

Also, Dell has its service guide inside the lid. These are becoming more standard across the industry these days, but it is a nice feature that Dell has been including for many generations.

Dell PowerEdge R670 Service Information 1

As you can see, the front is the storage section, then we have the fans, followed by the CPUs and memory, and finally the rear I/O.

Dell PowerEdge R670 Hot Swap Fan Bay 2

Dell has hot-swap 1U fan modules. These are great and a class-leading design since they are easy to service. Hot-swap fans are notoriously more difficult to implement in 1U servers than in 2U servers, so many vendors still expect service to involve someone pulling a fan cable/connector.

Dell PowerEdge R670 Hot Swap Fans 4

Here is another look from the other side.

Dell PowerEdge R670 Hot Swap Fans 5

Those fans are charged with cooling the entire system. A small feature you may have noticed is that the fan modules are positioned so that each CPU heatsink gets airflow from fans in two different modules. This helps provide redundancy in the event a fan module fails.

Dell PowerEdge R670 Heat Sink 3

The server takes two Intel Xeon 6700E, 6700P, or 6500P CPUs. In our system, we have dual Intel Xeon 6767P processors, which are very in-demand SKUs. We have tested multiple AI servers with these exact 64-core SKUs because they provide a good balance of core count, clock speed, and per-socket memory capability.

Dell PowerEdge R670 DDR5 DIMM Slots 2

Here we have a full set of 16x DDR5 RDIMMs per CPU, running two DIMMs per channel across the eight memory channels, for 32 DIMMs in total. The higher-end Intel Xeon 6900P series sockets have 12-channel memory but are practically limited to 24 DIMMs per system due to the physical width of fitting that many DIMMs. It may seem counterintuitive, but you can plug more memory capacity into Intel’s 8-channel Xeon 6700 series platforms than you can into the company’s higher-end 12-channel Xeon 6900 series platforms.
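To put rough numbers on that, here is a quick back-of-the-envelope sketch of the DIMM math. It assumes two DIMMs per channel on the 8-channel Xeon 6700 series versus one DIMM per channel on the 12-channel Xeon 6900 series, and uses an illustrative 128GB RDIMM size rather than any specific Dell configuration.

```python
# Back-of-the-envelope DIMM math, assuming 2DPC on Xeon 6700 series and 1DPC on
# Xeon 6900 series, with an illustrative 128GB RDIMM size (not a Dell-specified
# configuration).

def dual_socket_capacity_gb(channels_per_socket, dimms_per_channel, dimm_gb=128, sockets=2):
    """Total RDIMM capacity in GB for a dual-socket platform."""
    return channels_per_socket * dimms_per_channel * sockets * dimm_gb

xeon_6700 = dual_socket_capacity_gb(channels_per_socket=8, dimms_per_channel=2)   # 32 DIMMs
xeon_6900 = dual_socket_capacity_gb(channels_per_socket=12, dimms_per_channel=1)  # 24 DIMMs

print(f"Xeon 6700 series, 2DPC: {xeon_6700} GB")  # 4096 GB across 32 DIMMs
print(f"Xeon 6900 series, 1DPC: {xeon_6900} GB")  # 3072 GB across 24 DIMMs
```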

Dell PowerEdge R670 DDR5 DIMM Slots 5

This is a small one, but Dell is able to accomplish this cooling with relatively tame-looking heatsinks. In this generation, we have seen examples like the Massive Microsoft Azure HBv4 AMD EPYC Genoa Heatsinks, but here Dell has relatively little overhang past the socket.

Dell PowerEdge R670 Hot Swap Fans 6

Also around the socket, you can see a number of PCIe cabled connectors.

Dell PowerEdge R670 Inside 5

These can be used for functions like providing the front storage with PCIe lanes.

Dell PowerEdge R670 Inside 7

Here is an example of the storage backplane with a cabled connection. Something that Dell offers here is a front PERC H975i, which we will discuss more in our management section.

Dell PowerEdge R670 Inside 20

In our management section, we will go into more detail about how this works, but having the PERC H975i at the front of the server means that it is not taking up a PCIe slot at the rear. This is a very useful setup given the limited number of slots in a 1U server.

Dell PowerEdge R670 IDRAC 10 PERC H975i Front

Dell also has a nod to serviceability with this locking latch. That is more akin to what hyper-scalers used to do.

Dell PowerEdge R670 Inside 8

Between the memory and the power supplies, we can see power connectors for use if you have a higher-power NIC or GPU.

Dell PowerEdge R670 Inside 9

In the center, we get our main x16 riser connectors. These used to be called “GENZ” connectors, but that naming created a lot of confusion in the industry.

Dell PowerEdge R670 Inside 13

One other small feature is that there is an additional cabled connector for the OCP NIC 3.0 slot in the event you want to get more PCIe bandwidth to it. This has become a very common design element in new servers.

Dell PowerEdge R670 Inside 14

Here is the Dell iDRAC management card. This is what provides the rear management I/O and the BMC features.
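For those curious, the iDRAC exposes the industry-standard Redfish API alongside its web UI, so you can pull basic inventory from the BMC with a few lines of Python. This is a minimal sketch, not a Dell-specific recipe: the address and credentials are placeholders, and it only walks the standard /redfish/v1/Systems collection.

```python
# Minimal Redfish inventory pull from an iDRAC. The address and credentials are
# placeholders; this only walks the standard /redfish/v1/Systems collection.
import requests
import urllib3

urllib3.disable_warnings()                 # iDRAC ships with a self-signed cert

IDRAC_HOST = "https://192.0.2.10"          # placeholder iDRAC address
session = requests.Session()
session.auth = ("root", "changeme")        # placeholder credentials
session.verify = False

# Enumerate systems, then read the model, CPU count, and memory summary of each.
systems = session.get(f"{IDRAC_HOST}/redfish/v1/Systems").json()
for member in systems.get("Members", []):
    system = session.get(f"{IDRAC_HOST}{member['@odata.id']}").json()
    print(system.get("Model"),
          system.get("ProcessorSummary", {}).get("Count"), "CPUs,",
          system.get("MemorySummary", {}).get("TotalSystemMemoryGiB"), "GiB")
```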

Dell PowerEdge R670 Inside 17

This is a small piece, but one that we thought we would show. Dell has a cable/airflow guide in the middle of its chassis. That may sound like a small feature, but it is fairly uncommon to see these in hyper-scale servers. Remember, someone had to design this while thinking about all of the cabling possibilities, check the airflow with it in place, and then get it into the system.

Dell PowerEdge R670 Inside 16

Finally, Dell offers a number of rear I/O options, including OCP NIC 3.0 slots. We cover the different form factors in our OCP NIC 3.0 Form Factors The Quick Guide; using the internal lock means that Dell gets a bit more space at the rear, but servicing a NIC takes much longer. You will also notice that while it may look like we have two OCP NIC 3.0 slots here, one is labeled for a Dell BOSS card that we do not have in this system, but that we had in the Dell PowerEdge R6715 we reviewed.

Dell PowerEdge R670 Riser 4 9

As a fun aside, our system did not come with any NICs installed. Luckily, we have stacks of NICs to install since we have perhaps the best network test setup of any review site, one that is even able to push 800Gbps of bi-directional traffic through an NVIDIA ConnectX-8 C8240 800G Dual 400G NIC in a PCIe Gen5 server.
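For context on why "in a PCIe Gen5 server" is the key caveat there, here is a rough sketch of the link math. It assumes 128b/130b encoding, ignores other protocol overhead, and reads 800Gbps bi-directional as roughly 400Gbps in each direction.

```python
# Rough PCIe Gen5 x16 bandwidth math, assuming 128b/130b encoding and ignoring
# other protocol overhead; "800Gbps bi-directional" is read as ~400Gbps each way.

GT_PER_LANE_GEN5 = 32        # GT/s per lane for PCIe Gen5
LANES = 16
ENCODING = 128 / 130         # 128b/130b line encoding

per_direction_gbps = GT_PER_LANE_GEN5 * LANES * ENCODING
print(f"PCIe Gen5 x16 per direction: ~{per_direction_gbps:.0f} Gbps")  # ~504 Gbps

# ~400Gbps of TX plus ~400Gbps of RX fits within a Gen5 x16 link, while a full
# 800Gbps in a single direction would need the NIC's native PCIe Gen6 link.
print("400Gbps each way fits:", 400 <= per_direction_gbps)
```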

Next, let us get to the system topology.

2 COMMENTS

  1. The Dell honeycomb faceplate looks more photogenic to me than HP and any post-IBM Lenovo design.

    In my opinion air-cooled dual-socket 1U servers never made much sense, because the fans are just too small. Now that power consumption has gone up, the sensible choice for 1U is liquid cooling.

  2. Surprised to see it score so low in the spider chart for Capacity since you mention more than once that it can hit over 1PB in a single U. It seems with flash, high capacity and performance are one and the same in 2026.
