Dell EMC PowerEdge XE7100 Review: An Intelligent 100x HDD System


Dell EMC PowerEdge XE7100 Power Consumption

The Dell EMC PowerEdge XE7100 is a very complex and configurable system; it is also a very large one.

Dell EMC PowerEdge XE7100 Power Supplies

This is going to sound strange to some of our readers, but this is an interesting system: with 100x Seagate Exos X16 12TB drives, we effectively have a power floor whenever everything is spinning. At idle, those drives use around 5W each and can hit 10W under load. With 100 drives, that is 0.5-1kW of power consumption for the drives alone, before counting the SAS infrastructure, SSDs, dual CPU nodes/ GPUs, memory, NICs, and cooling. This is a system that can hit 2kW without much difficulty in many of its configurations.
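To put a rough number on that floor, here is a minimal back-of-the-envelope sketch in Python using the approximate per-drive figures above (the ~5W idle and ~10W active values are nominal assumptions, not measurements from this exact configuration):

    # Back-of-the-envelope power floor for the 100x HDD bays.
    # Assumed per-drive figures: ~5W idle, ~10W active (nominal, not measured).
    IDLE_W = 5
    ACTIVE_W = 10
    DRIVE_COUNT = 100

    floor_idle = IDLE_W * DRIVE_COUNT      # 500W with every drive spun up but idle
    floor_active = ACTIVE_W * DRIVE_COUNT  # 1000W with every drive busy

    print(f"Drive-only power floor: {floor_idle}W idle to {floor_active}W active")
    # SAS expanders, SSDs, the CPU nodes/ GPUs, memory, NICs, and fans all stack
    # on top of this, which is how a fully configured system approaches ~2kW.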

Dell EMC PowerEdge XE7100 Rear 3

At the same time, that 2kW is for a 5U system, which works out to “only” 400W/ U. That is very manageable. Another consideration is frankly how many drives one can put in a rack. Racks have weight limits, so that may cap how many of these systems can go in each rack. Also, since they are top-loading, anything above eye height requires a step stool or ladder to service. With 100 drives, one should expect to see at least one drive failure in a typical year. The point here is that node density is one aspect, but there are also practical deployment considerations that go well beyond power consumption.
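As a quick sanity check on that failure expectation, here is a minimal sketch assuming a hypothetical ~1.5% annualized failure rate (AFR); real-world AFRs vary by drive model, workload, and environment:

    # Expected annual drive failures in a 100-drive chassis.
    AFR = 0.015        # hypothetical annualized failure rate; not a measured figure
    DRIVE_COUNT = 100

    expected_failures = AFR * DRIVE_COUNT          # ~1.5 failed drives per year
    # Probability of at least one failure in a year, treating drives as independent:
    p_at_least_one = 1 - (1 - AFR) ** DRIVE_COUNT  # ~78% at a 1.5% AFR

    print(f"Expected drive failures per year: {expected_failures:.1f}")
    print(f"P(at least one failure in a year): {p_at_least_one:.0%}")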

STH Server Spider: Dell EMC PowerEdge XE7100

In the second half of 2018, we introduced the STH Server Spider as a quick reference to where a server system’s aptitude lies. Our goal is to give a quick visual depiction of the types of workloads a server is targeted at.

STH Server Spider Dell EMC PowerEdge XE7100

As much as the two dual-socket nodes and GPU/ NVMe storage options add to flexibility, this system is not a dense way to get compute. Instead, it is a dense way to get a lot of drives into 5U of space. This is one of the more singularly focused STH Server Spiders you will see. While many systems try to be good at many things, this is a system that only makes sense if you need dense 3.5″ storage.

Final Words

Overall, this is a really great system from Dell. The engineers behind this platform did a great job going beyond the standard industry practice of sticking a low-power node into a long 4U system. They realized that with top-loading storage, there are practical density limitations for much of its customer base. Instead of designing a 4U system, they moved to a 5U chassis, which added compute flexibility.

The main issue with this system is not the hardware; it is the marketing. The company is, in the footnotes, comparing this system to the HPE Apollo 4510. That is the wrong comparison point. Having the XE7100 is not going to get a customer with many racks of HPE gear to switch to Dell. Instead, this is an important system for another reason: the real competition.

Dell EMC XE7100 Announcement Highest Density

Here is a quick look at 1Q18 to 3Q20 figures from the IDC Quarterly Server Tracker that we showed here. Dell, HPE, and Lenovo are lumped together. Trend lines are added to the view, which fairly clearly show what is happening in the industry.

IDC 3Q20 Quarterly Server Tracker Cover

As more companies embrace cloud and open standards, there is a migration away from large legacy vendors. In the SMB/ SME/ enterprise space where Dell’s customer base primarily resides, there are a number of platform capabilities that draw customers to other white box or lower-cost vendors. Some are as simple as high-end 8x GPU servers. Over the years, a significant one has been companies that want dense top-loading storage servers. Organizations that decided to build dense Ceph clusters, as an example, often looked to alternative suppliers that were building these solutions for cloud providers. Dell was in a difficult place since building open-source storage clusters is a force that puts pressure on legacy EMC revenue.

Dell EMC PowerEdge XE7100 100x Drive Bays Populated 2

The significance of the Dell EMC PowerEdge XE7100 is not that Dell has a better box than HPE. Instead, it is that the company has an absolutely excellent system with enough capacity and flexibility to meet the majority of its customers’ needs. The real power of having a well-designed solution is that the XE7100 prevents customers from needing to look beyond the Dell EMC ecosystem.

17 COMMENTS

  1. Just out of curiosity, when a drive fails, what sort of information are you given regarding the location of said drive? Does it have LEDs on the drive trays, or are you told “row x, column y” or ???

  2. On the PCB backplane and on the drive trays you can see the infrastructure for two status LEDs per drive. The system is also laid out to have all of the drive slots numbered and documented.

  3. Unless you are into Ceph or the like and you are okay with huge failure domains (the only way to make Ceph economical for capacity), I just don’t understand this product. No dual path (HA) makes it a no-go. Perhaps there are places where a PB+ outage is no biggie.

  4. Not sure what software you would run on this; most clustered file systems or object storage systems that I know of recommend going for smaller nodes, primarily for rebuild times. One vendor that I know recommends 10-15 drives, and drives with no more than 8TB.

  5. I enjoyed your review of this system, definitely cool features, and I love the flexibility of the GPU/SSD options; as you mentioned, this would not be an individual setup but rather integrated into a clustered setup. I’d also imagine it has the typical iDRAC functionality and all the Dell basics for monitoring.

    After all the fun putting it together, do you get to keep it for a while and play with it, or once you have reviewed it, do you just pack it up and ship it out?

  6. Hans Henrik Hape, even with Ceph this is too big: losing 1.2PB because one server fails means rebalancing plus network and CPU load on the remaining nodes. It’s just a bad idea, not from a data loss point of view, but from maintaining operations during a failure.

  7. Great job STH, Patrick, you never stop surprising us with these kinds of reviews. Since you configured the system as raw 1.2PB, can you check the internal bandwidth available using something like
    iozone -t 1 -i 0 -i 1 -r 1M -s 1024G -+n
    I would love to know: since there is no parity, is it possible the bandwidth will be 250MB/s x 100 HDDs, i.e. 25GB/s? I’m sure there will be bottlenecks, maybe the PCIe bus speed for the HBA controller or a CPU limitation. But since you have it, it would be nice to know.

    Again thanks for the great reviews

  8. “250MB/s x 100 HDDs, i.e. 25GB/s” Any 7200 RPM drive is capable of ~140 MB/sec sustained, so 100 of them would be ~14 gigabytes per second… Ergo, a 100Gbps connection would propel this server to hypothetical ~10 GB/sec transfers… As it is, 25Gbps would limit it quite quickly.

  9. A great review of this awesome server! I wish I had a few hundred K$ to spare and needed a couple of them!

  10. There is a typo in the conclusion section compeition -> competition.

    But otherwise a really cool review of a nice “toy” I sadly will never get to play with.

  11. James – if you look at Milan as an example, AMD does not even offer a sub-155W TDP CPU in the EPYC 7003 series. We discussed why in that launch review; it comes down to the I/O die using so much power.

  12. While nothing was mentioned about prices, Dell typically charges at least a 100% premium for hard drives with its logo (none of them made by Dell) despite a warranty shorter than the normal 5 years – so a combination of a 1U 2-CPU (AMD EPYC) server and a 102-106 bay 4U SAS JBOD from WD or Seagate will be MUCH cheaper, faster (in the right configuration), and much easier to upgrade if needed (the server part, which improves faster than SAS HDDs do).
    Color me unimpressed by this product despite the impressive engineering.

  13. I just managed to purchase the 20x M.2 DSS FE1 card for my Dell R730xd. I did not do too much testing, but it works (both in ESXi and TrueNAS). And the cost for this card is an absolute bargain. Feel free to contact me about the card.
