In-depth Dell EMC PowerEdge MX Review Hands-on with a Woweredge


Dell EMC PowerEdge MX Fabric

Fabric and networking are what will set the Dell EMC PowerEdge MX apart from many of its competitors. At launch, the PowerEdge MX is designed to use 25GbE as the base, although it supports 32Gb Fibre Channel I/O as well.

Dell EMC PowerEdge MX Ethernet And Fibre Channel Modules

Internally the PowerEdge MX7000 I/O subsystem has redundant fabrics called A and B. Each sled has mezzanine slots that align with A and B fabrics. In this example, we can see the A fabric mezzanine is populated, while the B mezzanine connector is not populated.

Dell EMC PowerEdge MX740c Fabric

Each fabric can be different, for example, one can have fabric A be 25GbE while fabric B is fibre channel. Since the fabric cards are dual port, each fabric can have two I/O modules for redundancy (four in total.)
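
To make the redundancy picture concrete, here is a minimal Python sketch of how the dual-port mezzanine cards could map onto the four I/O modules, assuming the conventional arrangement where port 1 of each card lands on the first I/O module of its fabric and port 2 on the second. The sled, port, and slot labels are illustrative rather than Dell's official naming.

```python
# A minimal sketch of the redundant A/B fabric layout described above, assuming
# port 1 of each dual-port mezzanine card maps to the first I/O module of its
# fabric and port 2 to the second. Labels are illustrative, not Dell's naming.
FABRIC_IOM_SLOTS = {
    "A": ["A1", "A2"],  # two I/O modules per fabric for redundancy
    "B": ["B1", "B2"],  # four I/O modules in total across fabrics A and B
}

def mezzanine_links(sled, fabric):
    """Return (mezzanine port, I/O module slot) pairs for one sled's fabric card."""
    return [(f"sled{sled} fabric-{fabric} port{i + 1}", slot)
            for i, slot in enumerate(FABRIC_IOM_SLOTS[fabric])]

# Eight compute sleds, each with a dual-port card in each populated fabric
for sled in range(1, 9):
    for fabric in FABRIC_IOM_SLOTS:
        for port, slot in mezzanine_links(sled, fabric):
            print(f"{port} -> IOM {slot}")
```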

Our test system only had fabric A populated. Here there is a PowerEdge MX9116n fabric switching engine on top with the PowerEdge MX7116n fabric expander module below.

Dell EMC PowerEdge MX In Rack

In this configuration, each of the eight blades connects to the top MX9116n. That MX9116n handles all of the switching logic. The MX7116n is there to connect to the other Dell EMC PowerEdge MX in Dell’s Santa Clara Customer Solution Center. Using the MX7116n fabric expander module, one can uplink to a switch in a different chassis. Although there are only two ports on the MX7116n, each is a QSFP28-DD cage capable of 200Gbps operation. If you are doing the math, 200Gbps * 2 ports = 400Gbps of uplink capacity, while eight blades at 25GbE each only need 8 * 25Gbps = 200Gbps, so there is a full 25GbE connection for each blade in the chassis with room to spare.
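
For a quick sanity check of that math, here is a back-of-the-envelope calculation in Python using the figures from this review: eight blades at 25GbE and two QSFP28-DD cages at 200Gbps each. As a commenter notes below, with dual-port NICs only one of the two 200Gbps uplinks actually needs to be cabled; the second cage comes into play with quad-port cards.

```python
# Back-of-the-envelope check of the uplink math above. The inputs are the
# figures from the review; everything else is simple arithmetic.
BLADES = 8
BLADE_LINK_GBPS = 25      # 25GbE per blade port on fabric A
QSFP28_DD_PORTS = 2       # two QSFP28-DD cages on the MX7116n
QSFP28_DD_GBPS = 200      # each cage is capable of 200Gbps

blade_demand = BLADES * BLADE_LINK_GBPS             # 8 * 25  = 200 Gbps
uplink_capacity = QSFP28_DD_PORTS * QSFP28_DD_GBPS  # 2 * 200 = 400 Gbps

print(f"Blade demand on the expander: {blade_demand} Gbps")
print(f"QSFP28-DD uplink capacity:    {uplink_capacity} Gbps")
print(f"25GbE lanes the uplinks could carry: {uplink_capacity // BLADE_LINK_GBPS}")
```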

Using these fabric expander modules, one can have switchless PowerEdge MX chassis that instead use the expanders to link to the switching elements in other PowerEdge MX chassis. That is a great design that can help lower the cost and complexity of PowerEdge MX deployments.
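
As a rough illustration of that idea, here is a small sketch, and emphatically not a Dell tool, of a scaled fabric where one chassis carries the MX9116n fabric switching engine and the others carry only MX7116n fabric expander modules uplinked back to it. The chassis names and the dictionary layout are made up for the example.

```python
# Illustrative only: a toy model of a multi-chassis fabric A, with one chassis
# hosting the MX9116n fabric switching engine and the rest running switchless
# with MX7116n fabric expander modules pointing back at it. Chassis names and
# this data layout are invented for the example.
from typing import Dict, List

fabric_a: Dict[str, List[str]] = {
    "chassis-1": ["MX9116n"],               # switching engine lives here
    "chassis-2": ["MX7116n -> chassis-1"],  # expander only, no local switch
    "chassis-3": ["MX7116n -> chassis-1"],  # another switchless chassis
}

def switchless_chassis(fabric: Dict[str, List[str]]) -> List[str]:
    """Chassis whose fabric modules are all expanders uplinked elsewhere."""
    return [name for name, modules in fabric.items()
            if all(m.startswith("MX7116n") for m in modules)]

print(switchless_chassis(fabric_a))  # ['chassis-2', 'chassis-3']
```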

There Will Be More Fabric Modules

I have been a big proponent of the idea that, despite what Dell EMC publicly acknowledges, the PowerEdge MX is ready for completely new paradigms such as Gen-Z. If you contemplate a Gen-Z or 400GbE future, the PowerEdge MX makes a lot of sense. If you do not plan to support either, then the PowerEdge MX may be overkill.

From the slide above and the company’s product page as of this review’s publication, these are the officially supported modules:

  • PowerEdge MX9116n optimum performance 25G fabric switching engine
  • PowerEdge MX5108n high-performance, low-latency 25G Ethernet switch
  • PowerEdge MX7116n low latency 25G fabric expander module
  • PowerEdge MXG610s high-performance, non-blocking 32G Fibre Channel switch

Our test unit has the PowerEdge MX9116n and MX7116n modules installed. You may have also seen the PowerEdge MX at VMworld 2018 in Las Vegas, Nevada. There we were treated to this configuration:

Dell EMC PowerEdge MX Rear

The top unit looks like the PowerEdge MX9116n. The three horizontal network I/O modules each have sixteen RJ-45 connectors. Those are likely 10Gbase-T pass-through (non-switched) I/O modules. While Dell EMC may officially say there are four modules, there are certainly others in at least some stage of development. Future modules with 50GbE-400GbE, Gen-Z, and so on are possible with the PowerEdge MX design without needing to swap the midplane.

Next, we are going to look at storage for the PowerEdge MX.

16 COMMENTS

  1. Y’all are on some next level reviews. I was expecting a paragraph or two and instead got 7 pages and a zillion pictures — ok I didn’t count.

    I think Dell needs to release more modules. AMD EPYC support? I’m sure they can. Are they going to have Cascade Lake-AP? I’m sure Ice Lake right?

  2. Can you guys do a piece on Gen-Z? I’d like to know more about that.

    It’s funny. I’d seen the PowerEdge MX design, but I hadn’t seen how it works. The connector system has another profound impact you’re overlooking. There’s nothing stopping Dell from introducing a new edge connector in that footprint that can carry more data. It can also design motherboards with high-density x16 connectors and build in a PCIe 4.0 fabric next year.

  3. I’d like to see a roadmap of at least 2019 fabric options. Infiniband? They’ll need that for parity with HPE. It’s early in the cycle and I’d want to see this review in a year.

  4. Of course STH finds fabric modules that aren’t on the product page… I thought y’all were crazy, but then you have a picture of them. Only here.

  5. Fabric B cannot be FC, it has to be fabric C, A and B are networking (FCOE) only.

    “Each fabric can be different, for example, one can have fabric A be 25GbE while fabric B is fibre channel. Since the fabric cards are dual port, each fabric can have two I/O modules for redundancy (four in total.)”

  6. what a GREAT REVIEW as usual patrick. Of course, ONLY someone who has seen/used the cool HW that STH/pk has over the years would give this system a 9.6! (and not a 10!) ha. Still my favorite STH review of all time is the dell 740xd.

    btw, I think you may be surprised how many views/likes you would get on that raw 37min screen capture you made, posted to your sth youtube channel.
    I know I for one would watch all 37min of it! It’s a screen capture/video that many of us only dream of seeing/working on. Thanks again pk, 1st class as always.

  7. you didn’t mention that this design does not provide nearly the density of the m1000e. going from 16 blades in 10 RU to 8 blades in 7 RU… to get 16 blades I would now be using 14 RU. Not to mention that 40Gb links have been basically standard on the m1000e for what, 8 years? and this is 25 as the default? Come ON!

  8. Hey Salty, you misunderstood that. The external uplinks are at least 4x25Gb / 100Gb for each connector. Fabric links are even 200Gb for each connector (most of them are) on the switches. ATM this is the fastest shit out there.

  9. Higher-end gear has internal cable management that allows one to pull out sleds and still maintain drive connectivity. On compute nodes, there are also traditional front hot-swap bays.

  10. 400Gbps/ 25Gbps = 8 is a mistake. When NICs with dual ports are used, only one uplink from the MX7116n fabric extender is connected: 8x25G = 200G. The second 2x100G is used only for quad-port NICs.

  11. Good review, Patrick! I was just wondering if that Fabric C, the SAS sled that you have on the test unit, can be used to connect external devices such as a JBOD via SAS cables? It’s not clear on the datasheets and other information provided by Dell if that’s possible, and the Dell reps don’t seem to be very knowledgeable on this either. Thanks!
