ASUS RS700-E11-RS12U Review: A New 1U Intel Xeon Sapphire Rapids Server


ASUS RS700-E11-RS12U Power Consumption

On the power consumption side, we saw something that we were not entirely expecting given the performance. With the two 1.6kW 80Plus Titanium power supplies, the server idled at only 247W. That was much lower than we saw on the 2U platform we tested recently with the same processors, and is likely a combination of higher-efficiency PSUs and different factory BIOS tuning.

ASUS RS700 E11 RS12U 1.6kW 80Plus Platinum PSU 2

At 70% load, we hit 801W and we saw a maximum of 970W. We will quickly note that there is a lot of room to increase power consumption in this platform by adding more and higher-power devices, so the 1.6kW PSUs from Gospower seem like good choices.
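To put those figures in context, here is a quick back-of-the-envelope sketch (hypothetical Python, not part of our test methodology) comparing the measured wall-power numbers against a single 1.6kW supply, since in a 1+1 redundant configuration one PSU must be able to carry the entire load by itself. It treats wall watts as roughly equal to PSU output watts, which slightly overstates the load given the high-efficiency supplies.

```python
# Hypothetical back-of-the-envelope PSU headroom check using the wall-power
# figures measured in this review (247W idle, 801W at ~70% load, 970W peak).
# Assumption: wall watts ~= PSU output watts, which is slightly pessimistic
# since the high-efficiency supplies only lose a few percent at these loads.

PSU_RATED_WATTS = 1600  # one 1.6kW supply; with 1+1 redundancy a single unit
                        # must be able to carry the whole server on its own

measured = {
    "idle": 247,
    "~70% load": 801,
    "peak observed": 970,
}

for label, watts in measured.items():
    load_pct = watts / PSU_RATED_WATTS * 100
    headroom = PSU_RATED_WATTS - watts
    print(f"{label:>14}: {watts:4d}W -> {load_pct:4.1f}% of one PSU, "
          f"{headroom}W left for add-in devices")
```

Even at the observed peak, a single supply would only be running at roughly 61% of its rating, which is why there is still plenty of headroom for a GPU or additional add-in cards.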

ASUS RS700 E11 RS12U 1.6kW 80Plus Platinum PSU 1

Next, let us get to the STH Server Spider.

STH Server Spider ASUS RS700-E11-RS12U

In the second half of 2018, we introduced the STH Server Spider as a quick reference to where a server system’s aptitude lies. Our goal is to start giving a quick visual depiction of the types of parameters that a server is targeted at.

STH Server Spider ASUS RS700 E11 RS12U

This is a 1U platform that offers the full CPU and memory configuration of this Xeon generation, making it a balanced and dense platform. Even as a 1U server, there is a lot of NVMe storage up front and PCIe Gen5 expandability in the rear, especially with the internal slot for the PIKE RAID controller/HBA and the on-motherboard M.2 SSD slots.

While this platform can support GPUs and has an external fan option to add a dual-width GPU to the full-height riser, it is not meant to be the densest GPU/accelerator platform.

Final Words

Overall, the design with forward heat pipes and heatsink ears performed very well for us. ASUS did a great job here.

ASUS RS700 E11 RS12U CPU Heatsink 2

Beyond that, the server itself packs a ton of functionality into a 1U platform: 32 DIMM slots, two 4th Gen Intel Xeon Scalable CPUs, 12x NVMe drives plus two M.2 boot devices, two full-height PCIe Gen5 slots, one low-profile slot, and even an internal PIKE card. Plus, we get 10Gbase-T networking as standard. There is a ton here.

ASUS RS700 E11 RS12U Front

Overall, the ASUS RS700-E11-RS12U was a pleasant surprise, delivering better-than-expected results in this 1U server.

5 COMMENTS

  1. It’s amazing how fast the power consumption is going up in servers… My ancient 8-core-per-processor 1U HP DL360 Gen9 in my homelab has 2x 500W Platinum power supplies (granted, it also has 8x 2.5″ SATA drive slots versus 12 slots in this new server, no on-board M.2 slots, and fewer RAM slots).

    So IF someone gave me one of these new beasts for my basement server lab, would my wife notice that I hijacked the electric dryer circuit? Hmmm.

  2. @carl, you don’t need 1600W PSUs to power these servers. Honestly, I don’t see a use case where this server uses more than 600W, even with one GPU – I guess ASUS just put in the best-rated 1U PSU they could find.

  3. Data Center (DC) “(server room) Urban renewal” vs “(rack) Missing teeth”: From when I first landed in a sea of ~4000 Xeon servers in 2011 until I powered off my IT career in said facility in 2021, pre-existing racks went from 32 servers per rack to many fewer per rack (“missing teeth”).

    Yes, the cores per socket went from Xeon 4, 6, 8, 10, and 12 up to Epyc Rome 32s. And yes, with each server upgrade I was getting more work done per server, but fewer servers per rack in the circa-2009 original server rooms in this corporate DC after maxing out power/cooling.

    Yes, we upped our power/cooling game in the 12-core Xeon era with immersion cooling as we built out a new server room. My first group of vats had 104 servers (2U/4-node chassis) per 52U vat… The next group of vats, with the 32-core Romes, we could not fill (yes, still more cores per vat though)… So again, losing ground on a real estate basis.

    ….

    So do we just agree that, as this hockey-stick curve of server power usage grows quickly, we live with a growing “missing teeth” issue over upgrade cycles, and perhaps start looking at 5 – 8 year “urban renewal” cycles (rebuilding a given server room’s power/cooling infrastructure at great expense) instead of the 2010-ish 10 – 15 year cycles?

    For anyone running their own data center, this will greatly affect their TCO spreadsheets.

  4. @altmind… I am not sure why you can’t imagine it using more than 600W when it used much, much more (+200W at 70% load, +370W at peak) in the review, all without a GPU.

    @Carl, I think there is room for innovation in the DC space, but I don’t see the power/heat density trend changing, and it is not exactly easy to “just run more power” to an existing DC, let alone cooling.

  5. Which leads to the current nominal way out of the “Urban renewal” vs “Missing teeth” dilemma as demand for compute rises and the power/cooling needs of each new compute device rise: “burn down more rain forest” (build more data centers as our cattle herd grows).

    But I’m not sure every place wants to be Northern Virginia, nor wants to devote a growing percentage of its energy grid to facilities that use a lot of power (thus requiring more hard-to-site power generation facilities).

    As for “I think there is room for innovation in the DC space”, this seems to be a basic physics problem that I don’t see any solution for on the horizon. Hmmm.
