Dell PowerEdge R670 Power Consumption
Our configuration had dual 800W 80Plus Platinum power supplies. We needed both, since with two 350W TDP CPUs, plus NICs, cooling, and the front SSDs and memory, we would be over 800W.
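To put quick numbers on that: the two 350W TDP CPUs alone are roughly 700W, which leaves only about 100W of headroom on a single 800W supply before fans, memory, drives, and NICs are counted, so drawing from both PSUs is the practical choice.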

A small but nice feature is that Dell monitors power not just at the CPU and PSU level, but also at the DRAM level, where we can see 17-19W per socket at idle (1DPC 64GB DDR5 RDIMMs). That puts us in the 120-140W range per socket at idle before counting any other system components. This matters because if you are consolidating from previous generations of servers, idle and load power consumption will be notably higher on current-generation machines, but the trade-off is significantly higher density.
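The per-component readings we reference here come from iDRAC's telemetry. If you want to trend the chassis-level figure yourself, the standard Redfish Power resource that iDRAC and most current BMCs expose is enough. Here is a minimal sketch; the management IP, credentials, and chassis ID are placeholders and will vary by platform and firmware:

```python
# Minimal sketch: polling chassis power draw over Redfish from a BMC such as iDRAC.
# The host, credentials, and chassis ID below are placeholders, not values from this review.
import time
import requests

BMC_HOST = "https://192.0.2.10"           # placeholder management IP
AUTH = ("root", "calvin")                  # replace with real credentials
POWER_URL = f"{BMC_HOST}/redfish/v1/Chassis/System.Embedded.1/Power"

def read_power_watts() -> float:
    """Return the chassis-level PowerConsumedWatts reading."""
    # verify=False skips TLS verification for the BMC's self-signed certificate.
    resp = requests.get(POWER_URL, auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    return resp.json()["PowerControl"][0]["PowerConsumedWatts"]

if __name__ == "__main__":
    # Log a reading every 30 seconds to compare idle versus loaded draw.
    while True:
        print(f"{time.strftime('%H:%M:%S')}  {read_power_watts():.0f} W")
        time.sleep(30)
```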

We were often in the 420-460W range at idle, and under load we got into the 930-950W range before adding high-power NICs and optics. With two NVIDIA ConnectX-7 400GbE NICs and DR4 optics, we were over 1kW for this configuration.

This is a lot of server, and it uses a good amount of power. We would probably call it a lot, except that it is relatively little power per node, and even per U, compared to current-generation AI servers.
STH Server Spider: Dell PowerEdge R670
In the second half of 2018, we introduced the STH Server Spider as a quick reference to where a server system’s aptitude lies. Our goal is to start giving a quick visual depiction of the types of parameters that a server is targeted at.

Having the ability to add multiple high-speed PCIe Gen5 NICs in the rear, along with up to 20x E3.S NVMe SSDs in the front, means that even for a single-node 1U system, we get an excellent amount of flexibility. Where 1U servers excel is in density, and we can see that here. Of course, if you want to build servers around 3.5″ hard drives or GPUs/ AI accelerators, there are more optimized platforms for those use cases these days.
Final Words
We review so many systems that in many ways it is fair to say our opinions regress to the mean. At the same time, it is very apparent from the availability of front I/O options, great fan modules, multiple rear configurations, solid cooling for the CPUs, both PERC and iDRAC, and even down to the little custom bits like the cable organizer/ airflow guide in the rear, that Dell's design team did a great job on the PowerEdge R670.

The addition of the E3.S bays was excellent, and it offers something very different by packing even more storage into a 1U footprint. 1PB/U (or more) using standard compute servers is now easy to achieve, as is 400Gbps-per-port networking. You could, of course, use less dense storage or networking, or decide to put three NVIDIA GPUs in the rear risers.
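For a sense of the math behind the 1PB/U claim: assuming 61.44TB-class E3.S drives (the exact capacity point is an assumption; your drives may differ), 20 bays × 61.44TB works out to roughly 1.23PB of raw flash in a single U, before any RAID or formatting overhead.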
Thinking a bit critically, however, I do think that the 1U form factor's position as the standard dual-socket offering is being challenged in 2026. There is a lot of wisdom and merit to using 2U servers, given that today's servers use more power per node but offer much higher density in terms of CPU, memory, storage, and I/O versus previous-generation servers. Of course, Dell makes the PowerEdge R770, which shares a lot with this PowerEdge R670, if you are of that mindset. Perhaps that is the entire point: one can pick the right chassis density and still get a similar experience.

Overall, the Dell PowerEdge R670 shows what happens when a great engineering team tackles ever-expanding performance and density frontiers in standard 1U servers.



The Dell honeycomb faceplate looks more photogenic to me than HP's and any post-IBM Lenovo design.
In my opinion air-cooled dual-socket 1U servers never made much sense, because the fans are just too small. Now that power consumption has gone up, the sensible choice for 1U is liquid cooling.
Surprised to see it score so low in the spider chart for Capacity, since you mention more than once that it can hit over 1PB in a single U. Seems with flash, high capacity and performance are one and the same in 2026.