Intel Xeon D-2141I Benchmarks and Review 8 Core Skylake-D

Intel Xeon D-2141I Power Consumption

We wanted to post a few figures from our testing that show the real selling point of these chips: low power.

Idle is around 57W and maximum power consumption hits just over 124W. While you do get more performance from the Intel Xeon D-2141I as an edge device than from the previous Xeon D-1500 generation, you pay for that performance with higher power consumption.

Note that these results were taken using a 208V Schneider Electric / APC PDU at 17.7°C and 72% RH. Our testing window shown here had a +/- 0.3°C and +/- 2% RH variance.

Intel Xeon D-2141I Market Positioning

These chips are not released in a vacuum; instead, they have competition on both the Intel and AMD sides. When you purchase a server and select a CPU, it is important to weigh the value of a platform against its competitors.

Intel Xeon D-2141I v. Intel Xeon

From where we sit, the Intel Xeon D-2141I (that is an “i” not an “L”) is a solid mainstream CPU. Platforms with the CPU should cost just under $1000 with the motherboard and cooler included. If you are in a space-constrained environment, the embedded part makes sense.

Compared to the Xeon D-1500 generation, we have a higher-performance 8-core design at the cost of more power. There are other features, such as CPU feature set compatibility with Skylake CPUs and more memory capacity, which make this an intriguing option.
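As a rough illustration of why that compatibility matters, an appliance image built for Skylake-generation Xeons can gate its optimized code paths at runtime instead of shipping separate builds. The sketch below is ours, not Intel's guidance; it uses GCC/Clang's __builtin_cpu_supports() to check the AVX-512 subsets these chips share with Xeon Scalable parts.

    /* Minimal sketch: detect Skylake-class AVX-512 subsets at runtime so a
     * single appliance image can run on Xeon D-2100 and Xeon Scalable parts,
     * falling back to AVX2 elsewhere. Build with a recent GCC or Clang. */
    #include <stdio.h>

    int main(void)
    {
        int skylake_class = __builtin_cpu_supports("avx512f") &&
                            __builtin_cpu_supports("avx512vl") &&
                            __builtin_cpu_supports("avx512bw") &&
                            __builtin_cpu_supports("avx512dq");

        printf("Skylake-class AVX-512 paths: %s\n",
               skylake_class ? "enabled" : "disabled, using AVX2 fallback");
        return 0;
    }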

The other competition is the Intel Xeon Silver line. It is true that you can get similar performance, more RAM capacity, and more expandability in the Intel Xeon Silver line. At the same time, an Intel Xeon Silver platform is going to have a larger footprint. There are trade-offs to be made either way.

Intel Xeon D-2141I v. AMD EPYC

With the Intel Xeon D-2183IT, we saw the AMD EPYC 7000 series as a competitor that traded even more power consumption and a larger footprint for even more expansion options. With the 8-core Xeon D-2141I, we do not see the AMD EPYC 7251 as a viable competitor. The EPYC 7251 platform is much larger and uses more power, although it does offer a ton of PCIe lanes. The Intel Xeon D-2141I, by contrast, is made for the edge and presents a single NUMA node.
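As a quick sketch of that NUMA point (assuming a Linux system with libnuma installed), an appliance vendor can verify the topology programmatically. On the Xeon D-2141I this should report a single node, while larger multi-die platforms typically report several.

    /* Minimal sketch: report how many NUMA nodes the platform exposes.
     * Build: gcc numa_check.c -lnuma */
    #include <numa.h>
    #include <stdio.h>

    int main(void)
    {
        if (numa_available() < 0) {
            puts("libnuma reports NUMA unavailable (effectively one node)");
            return 0;
        }
        printf("configured NUMA nodes: %d\n", numa_num_configured_nodes());
        return 0;
    }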

We previewed this in the AMD EPYC 3000 series piece, but we now have test data on the AMD EPYC 3251. The AMD EPYC 3251 is only a dual memory channel design like the D-1500 series; however, it has core performance closer to the Xeon D-2100 series. It also costs only $315, meaning the Intel Xeon D-2141I costs about 76% more than the EPYC 3251. One of the Xeon D-2100's major features is Skylake feature set compatibility, but there are certainly cases where an embedded appliance manufacturer may be willing to forgo that compatibility for a decent cost savings. There are many embedded devices that use less than 32GB of RAM, so having a 512GB memory capacity is not necessarily an advantage.
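To put that premium in absolute terms: a roughly 76% uplift over the EPYC 3251's $315 works out to about $315 × 1.76 ≈ $555 for the Intel Xeon D-2141I.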

At the end of the day, Intel has been in this segment of the embedded server market consistently for years, mostly to fend off competition from Arm SoCs. Embedded systems providers are accustomed to buying Intel, and for this generation, that is likely enough to make the Intel Xeon D-2141I successful over the AMD EPYC 3251.

Final Words

We really like the Intel Xeon D-2141I for a specific application: space-constrained deployments that need CPU power, memory capacity, and Skylake instruction set compatibility, and that are not overly power constrained. Once you move outside of those parameters, there are a number of options among the Intel Xeon D-1500 series, the Intel Xeon Silver SKUs, and now even the AMD EPYC 3251.

Our position remains: the Intel Xeon D-2100 series seems to be the best AVX-512 performance-per-dollar CPU around. If it is a single-port FMA AVX-512 implementation, then there is something miraculous going on. Otherwise, Intel is sandbagging numbers, as we saw in our piece Intel Xeon D-2183IT Benchmarks and Review 16C SoC an AVX-512 Monster. If you run code that can use AVX-512, the Intel Xeon D-2141I is an amazing value these days.
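For readers who want to probe the FMA port question on their own hardware, here is a minimal, illustrative sketch (not our benchmark suite) that issues independent AVX-512 FMA chains. Dividing the measured GFLOPS by 32 times the sustained AVX-512 clock in GHz gives FMAs per cycle: roughly 1 for a single-port implementation and close to 2 for a dual-port one.

    /* Minimal sketch: estimate AVX-512 FMA throughput with independent
     * dependency chains. Skylake-era FMA latency is about 4 cycles, so 8
     * chains are enough to keep two 512-bit FMA ports busy if they exist.
     * Build: gcc -O3 -mavx512f fma_probe.c -o fma_probe */
    #include <immintrin.h>
    #include <stdio.h>
    #include <time.h>

    #define ITERS  200000000UL
    #define CHAINS 8

    int main(void)
    {
        __m512 a = _mm512_set1_ps(1.000001f);
        __m512 b = _mm512_set1_ps(0.999999f);
        __m512 acc[CHAINS];
        for (int c = 0; c < CHAINS; c++)
            acc[c] = _mm512_set1_ps((float)(c + 1));

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (unsigned long i = 0; i < ITERS; i++)
            for (int c = 0; c < CHAINS; c++)       /* independent chains */
                acc[c] = _mm512_fmadd_ps(a, b, acc[c]);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        /* CHAINS FMA instructions per iteration, 16 FP32 lanes, 2 FLOPs each. */
        double gflops = (double)CHAINS * 16.0 * 2.0 * ITERS / secs / 1e9;

        __m512 sum = acc[0];                       /* keep the results live */
        for (int c = 1; c < CHAINS; c++)
            sum = _mm512_add_ps(sum, acc[c]);
        printf("check=%f  ~%.1f GFLOPS\n",
               _mm_cvtss_f32(_mm512_castps512_ps128(sum)), gflops);
        return 0;
    }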

4 COMMENTS

  1. I always hear a lot of hype about AVX-512 on STH. You show test results with Gromacs, so I went to gromacs.org. Not to my surprise, they promote GPU acceleration. What I do find surprising is that we never see performance figures from STH on Gromacs with Deeplearning 10 and 11 (12 is so new that we can’t expect that). To me it seems a bit like STH is pushing AVX-512 and the rest of the world couldn’t care less, they just don’t give a sh*t.

  2. Misha, from what I understand, Intel is pushing AVX-512 in hopes that it will be utilized for 5G implementation, whatever that is… AVX-512 is useful for several different edge utilities, including specific networking encryption/decryption forms. However, the cost is that the AVX-512 FMA units push the processor harder resource-wise, and it has to compensate for that. It compensates by throttling the core speed of the chip.

    So the real question is this: In a real edge environment, where there is a mixed workflow of different processor intensive tasks (web serving, firewall, local storage cache, all kinds of stuff), do clients want a “feature” that will help a small percentage (today) of the workflow, but do so at the cost of slowing the rest?

    Personally, I think the AMD EPYC 3000 series is a slam dunk for what we need now. Intel is betting on future software moving most of the data to the FMA, AVX-512. We’re still waiting on everything to go multithreaded… lol

  3. Micah, I know what Intel is trying to do with AVX-512. What puzzles me is that STH never showed the price/performance between Gromacs on CPU + CUDA vs. CPU with AVX-512, and in almost every review, Gromacs CPU vs. CPU with AVX-512.
