The Dell PowerEdge R670 is a powerful 1U server. The dual-socket 1U server has been around for eons, but Dell has managed to give this generation new tricks that greatly increase its usefulness. More storage, faster networking, and new accelerators add to the capabilities of a server line that keeps getting better with each generation. Let us now get into the Dell PowerEdge R670.
Dell PowerEdge R670 External Hardware Overview
Something you may not have noticed, unless you track Dell’s bezels, is that the bezel makes the server look cool and secures the SSDs against accidental removal. It also adds just under 2mm (roughly 0.08in) to the server’s standard depth of 815.14mm or 32.09in without the bezel. We should also note that Dell offers a front I/O configuration that cannot be used with the bezel, and that option sounds very exciting.
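As a quick sanity check on those dimensions, here is the conversion math at 25.4mm per inch, treating the ~2mm bezel figure as an approximation:

```python
# Sanity-checking the chassis depth figures; 25.4 mm per inch.
MM_PER_IN = 25.4

depth_mm = 815.14
print(f"Chassis depth: {depth_mm / MM_PER_IN:.2f} in")  # ~32.09 in

bezel_mm = 2.0  # approximate depth the bezel adds
print(f"Bezel adds: ~{bezel_mm / MM_PER_IN:.2f} in")    # ~0.08 in
```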

On the left, we get our rack ear.

On the right, we get the other rack ear with a USB Type-C service port and a power button.

In the center, we get something different. Typically, you might see 8x or 10x 2.5″ U.2 NVMe drive bays on the front. Here, we have 16x E3.S drive bays, with options to go up to 20x on this front face.

With 61.44TB SSDs, each block of eight drives gives us just under 0.5PB of storage, so the 16-bay configuration lands just under 1PB, and the 20-drive configuration can surpass 1PB per U. Not too long ago, that was an exotic storage chassis with a lower-volume SSD form factor. Now, it is a standard PowerEdge feature.
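For reference, here is the capacity math behind those figures, counting 1PB as 1,000TB:

```python
# Per-U capacity math for the E3.S front bays with 61.44TB drives.
drive_tb = 61.44

for bays in (8, 16, 20):
    total_tb = bays * drive_tb
    print(f"{bays} bays: {total_tb:,.2f} TB (~{total_tb / 1000:.2f} PB)")

# 8 bays:  491.52 TB (~0.49 PB)    -> just under 0.5PB per block of eight
# 16 bays: 983.04 TB (~0.98 PB)    -> just under 1PB
# 20 bays: 1,228.80 TB (~1.23 PB)  -> past the 1PB-per-U mark
```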

The benefit of the E3.S design is that it is a thinner SSD form factor, which allows for higher density.

On the other side, we get 16x drives as well.

Looking ahead to PCIe Gen6 servers, the U.2 connector will no longer be supported, and the EDSFF connector will take its place. We were a bit early in calling this transition, but it is happening.
Also on the front we get the Dell service tag.

On the rear, there are many options, but the basic features of dual power supplies, slots for add-in cards, and the rear I/O block are all here.

In our system, we have redundant 800W power supplies, one on each side. This helps with cabling in racks that have PDUs on both sides.

Here are the two 800W 80Plus Platinum power supplies.

On the rear, we get the VGA port, two USB Type-A ports, and the out-of-band iDRAC port.
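Since that iDRAC port is the out-of-band management path, here is a minimal sketch of pulling PSU inventory over the iDRAC’s Redfish API. The address, credentials, and exact resource path here are assumptions for illustration, and paths can vary by iDRAC firmware generation:

```python
# Minimal sketch: query PSU inventory out-of-band via iDRAC Redfish.
# The iDRAC address, credentials, and chassis path below are assumptions.
import requests

IDRAC = "https://192.168.0.120"  # hypothetical iDRAC address
AUTH = ("root", "calvin")        # replace with real credentials

resp = requests.get(
    f"{IDRAC}/redfish/v1/Chassis/System.Embedded.1/Power",
    auth=AUTH,
    verify=False,  # lab-only; verify TLS certificates in production
)
resp.raise_for_status()

# The DMTF Power schema exposes a PowerSupplies collection.
for psu in resp.json().get("PowerSupplies", []):
    name = psu.get("Name")
    watts = psu.get("PowerCapacityWatts")
    health = psu.get("Status", {}).get("Health")
    print(f"{name}: {watts}W, health={health}")
```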

There are three low-profile risers in this configuration. Dell has other options for different card sizes, rear storage, and more.

Since we need to balance out the external/internal overviews a bit, here is Riser 2, which has two PCIe Gen5 x16 low-profile slots.
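For context, here is a rough back-of-the-envelope look at what each of those Gen5 x16 slots can move, accounting only for the 128b/130b line encoding:

```python
# Back-of-the-envelope bandwidth for a PCIe Gen5 x16 slot.
gt_per_s = 32         # Gen5 signaling rate per lane, in GT/s
encoding = 128 / 130  # 128b/130b line-encoding efficiency
lanes = 16

gbps_per_lane = gt_per_s * encoding     # usable Gb/s per lane
total_gb_s = gbps_per_lane * lanes / 8  # GB/s per direction
print(f"~{total_gb_s:.1f} GB/s per direction")  # ~63.0 GB/s
```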

Dell has a great tool-less riser experience. Also, as a hallmark of modern servers, we have cables adding PCIe lanes to the risers.

Riser 4 is another low-profile riser slot.

The riser has a neat feature where both the front and the rear of cards can be secured using tool-less supports. It is a small feature, but a nice one.

Next, let us get inside the server.



