In-depth Dell EMC PowerEdge MX Review: Hands-on with a WowerEdge

Dell EMC PowerEdge MX Rear

In December, I made a long and arduous journey (almost 11 minutes) to Santa Clara to review the Dell EMC PowerEdge MX. This was an idea I had been working on with the Dell EMC marketing teams for many months. Getting the highest-end system from the top server vendor in the world was going to be challenging and was not going to happen quickly enough for me. We devised a plan to get into the Dell Technologies Executive Briefing and Solution Center in Santa Clara, California, where the company would set up a PowerEdge MX for STH to review and get hands-on time with.

I just wanted to say thank you to Ajit and the rest of the Dell EMC team for letting me tear apart a PowerEdge MX, while it was running, in front of a glass case in their Customer Solution Center.

Dell EMC Santa Clara Executive Briefing And Solution Center

At STH we do hands-on reviews. What we are going to cover today is an in-depth look at the Dell EMC PowerEdge MX. The easy thing to do would be to regurgitate a press release or marketing deck, but we go the extra step of getting hands-on with hardware from all vendors to gain an industry perspective.

The Dell EMC PowerEdge MX is an interesting case. On one hand, one can look at the front of the chassis and declare it a 1-8 compute blade chassis with redundant cooling and power that one can also stick storage into.

Dell EMC PowerEdge MX Front Diagram

Likewise, one can look at the rear of the chassis and opine that it has redundant networking, storage fabric, and management modules.

Dell EMC PowerEdge MX Rear Diagram

Those views, and any view limited to what is in-market today, are too narrow. Having spent hands-on time with the Dell EMC PowerEdge MX and having worked with competitive systems, one can see that the PowerEdge MX embodies a different design philosophy. It is perhaps the first chassis server designed from the ground up to address the foreseeable future of server development. Dell EMC has the roadmaps from all of the major component suppliers, and those roadmaps led the company to a new design that eschews the conventional wisdom of using a midplane. After tearing down the PowerEdge MX, our conclusion is that this was a deliberate design direction intended to allow the system to reap the benefits of networking, interconnect, and topological advances we will see in both the near and longer term.

In the process of our review, we collected frankly too many pictures and screenshots. We are going to organize this review in the following order:

  • First, we will look at the Dell EMC PowerEdge MX7000 chassis that underpins the platform. We are going to see how and why Dell EMC executed the no-midplane design. We are also going to look at the power and cooling design of the chassis.
  • We are then going to delve into an abbreviated Dell EMC PowerEdge MX management overview. This will cover both the chassis-level and component-level management options that the solution provides.
  • We will then have overviews of the Dell EMC PowerEdge MX compute sled options as they stand on the date of this review’s publication. That will be followed by a discussion around the PowerEdge MX fabric, and then storage.
  • We are going to then conclude the review with the STH Server Spider and our final thoughts.

At the end of this review, you will see why we now call the PowerEdge MX the “WowerEdge.”

16 COMMENTS

  1. Y’all are on some next level reviews. I was expecting a paragraph or two and instead got 7 pages and a zillion pictures — ok I didn’t count.

    I think Dell needs to release more modules. AMD EPYC support? I’m sure they can. Are they going to have Cascade Lake-AP? I’m sure Ice Lake, right?

  2. Can you guys do a piece on Gen-Z? I’d like to know more about that.

    It’s funny. I’d seen the PowerEdge MX design, but I hadn’t seen how it works. The connector system has another profound impact you’re overlooking. There’s nothing stopping Dell from introducing a new edge connector in that footprint that can carry more data. It can also design motherboards with high-density x16 connectors and build in a PCIe 4.0 fabric next year.

  3. I’d like to see a roadmap of at least 2019 fabric options. Infiniband? They’ll need that for parity with HPE. It’s early in the cycle and I’d want to see this review in a year.

  4. Of course STH finds fabric modules that aren’t on the product page… I thought y’all were crazy, but then you have a picture of them. Only here.

  5. Fabric B cannot be FC; it has to be Fabric C. A and B are networking (FCoE) only.

    “Each fabric can be different, for example, one can have fabric A be 25GbE while fabric B is fibre channel. Since the fabric cards are dual port, each fabric can have two I/O modules for redundancy (four in total.)”

  6. what a GREAT REVIEW as usual patrick. Of course, ONLY someone who has seen/used the cool HW that STH/pk has over the years would give this system a 9.6! (and not a 10!) ha. Still my favorite STH review of all time is the dell 740xd.

    btw, i think you may be surprised how many views/likes you would get on that raw 37min screen capture you made, posted to your sth youtube channel.
    I know i for one would watch all 37min of it! It’s a screen capture/video that many of us only dream of seeing/working on. Thanks again pk, 1st class as always.

  7. you didn’t mention that this design does not provide nearly the density of the m1000e. Going from 16 blades in 10 RU to 8 blades in 7 RU… to get 16 blades I would now be using 14 RU. Not to mention that 40Gb links have been basically standard on the m1000e for, what, 8 years? And this is 25 as the default? Come ON!

  8. Hey Salty, you misunderstood that. The external uplinks are at least 4x25Gb / 100Gb for each connector. The fabric links are even 200Gb per connector on the switches (most of them are these). ATM this is the fastest shit out there.

  9. Higher-end gear has internal cable management that allows one to pull out sleds and still maintain drive connectivity. On compute nodes, there are also traditional front hot-swap bays.

  10. 400Gbps / 25Gbps = 8 is a mistake. When NICs with dual ports are used, only one uplink from the MX7116n fabric extender is connected: 8x25G = 200G. The second 2x100G is used only for quad-port NICs.

  11. Good review, Patrick! I was just wondering if that Fabric C, the SAS sled that you have on the test unit, can be used to connect external devices such as a JBOD via SAS cables? It’s not clear on the datasheets and other information provided by Dell if that’s possible, and the Dell reps don’t seem to be very knowledgeable on this either. Thanks!
