Move over EDR: Mellanox Announces 200Gbps HDR InfiniBand Products

Mellanox HDR Launch

Ahead of the SC16 conference next week, Mellanox announced 200Gbps HDR InfiniBand products, effectively doubling the performance of current-generation 100Gbps EDR InfiniBand. In the high-end HPC segment and at large AI labs, organizations are willing to invest in "exotic" interconnects like InfiniBand because they can improve cluster performance by several percentage points. Mellanox has ridden the wave of InfiniBand's popularity over the past few years, and its interconnects have become the dominant networking technology for connecting supercomputers and high-end storage.

The 100Gbps market has strong competition. 100Gbps Ethernet has become significantly more available this year, and after two re-spins of the Broadcom Tomahawk platform we are just starting to see more products hit the market. Intel has Omni-Path, which has the advantage of being available, very inexpensively, on package both with the Xeon Phi X200 series "Knights Landing" processors and with the upcoming Skylake-EP platforms. Adding Omni-Path adapters to a system has a marginal cost impact. Both Ethernet and Omni-Path will be running at half the speed of HDR InfiniBand once Mellanox Quantum switches and ConnectX-6 cards hit the market.

Mellanox HDR Launch

If you were thinking that 200Gbps is fast, consider this: the ConnectX-6 adapter requires either 32 PCIe 3.0 lanes or 16 PCIe 4.0 lanes to connect to a system. That is a ton of bandwidth. The 100Gbps generation of cards already required special design considerations because many servers offer only PCIe 3.0 x8 slots. Moving to a 32-lane PCIe 3.0 implementation may require using two PCIe 3.0 x16 slots to satisfy the bandwidth requirements. PCIe 4.0 is available to the industry; however, it is not available on mainstream Intel platforms, which account for well over 95% of server volume market share.
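For a rough sketch of the math behind those lane counts, here is a quick back-of-the-envelope comparison using nominal PCIe per-lane rates (8 GT/s for Gen3, 16 GT/s for Gen4) and 128b/130b line encoding, ignoring protocol overhead, so the real usable numbers are somewhat lower:

    # Back-of-the-envelope PCIe bandwidth check for a 200Gbps HDR port.
    # Nominal figures only: PCIe 3.0 runs at 8 GT/s per lane and PCIe 4.0 at
    # 16 GT/s per lane, both with 128b/130b encoding; actual throughput is
    # lower once protocol overhead is included.

    def pcie_gbps(lanes, gt_per_s):
        """Raw bandwidth in Gbps after 128b/130b line encoding."""
        return lanes * gt_per_s * (128 / 130)

    hdr_port_gbps = 200  # one HDR InfiniBand port

    for label, lanes, rate in [("PCIe 3.0 x16", 16, 8),
                               ("PCIe 3.0 x32", 32, 8),
                               ("PCIe 4.0 x16", 16, 16)]:
        bw = pcie_gbps(lanes, rate)
        verdict = "enough" if bw >= hdr_port_gbps else "NOT enough"
        print(f"{label}: ~{bw:.0f} Gbps -> {verdict} for a {hdr_port_gbps}Gbps port")

A PCIe 3.0 x16 link tops out around 126Gbps before overhead, which is why a single Gen3 x16 slot cannot feed an HDR port at line rate and why the adapter is specified with either 32 Gen3 lanes or 16 Gen4 lanes.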

The company expects HDR availability in 2017.

From the Mellanox press release:

Mellanox Technologies, Ltd., a leading supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems, today announced the world’s first 200Gb/s data center interconnect solutions. Mellanox ConnectX-6 adapters, Quantum switches and LinkX cables and transceivers together provide a complete 200Gb/s HDR InfiniBand interconnect infrastructure for the next generation of high performance computing, machine learning, big data, cloud, web 2.0 and storage platforms. These 200Gb/s HDR InfiniBand solutions maintain Mellanox’s generation-ahead leadership while enabling customers and users to leverage an open, standards-based technology that maximizes application performance and scalability while minimizing overall data center total cost of ownership. Mellanox 200Gb/s HDR solutions will become generally available in 2017.

Key Specs for ConnectX-6 from Mellanox:

  • HDR 200Gb/s InfiniBand or 200Gb/s Ethernet per port and all lower speeds
  • Up to 200M messages/second
  • Tag Matching and Rendezvous Offloads
  • Adaptive Routing on Reliable Transport
  • Burst Buffer Offloads for Background Checkpointing
  • NVMe over Fabric (NVMf) Target Offloads
  • Back-End Switch Elimination by Host Chaining
  • Enhanced vSwitch / vRouter Offloads
  • Flexible Pipeline
  • RoCE for Overlay Networks
  • PCIe Gen3 and Gen4 Support
  • Erasure Coding offload
  • T10-DIF Signature Handover
  • IBM CAPI v2 support
  • Mellanox PeerDirect™ communication acceleration
  • Hardware offloads for NVGRE and VXLAN encapsulated traffic
  • End-to-end QoS and congestion control
  • Hardware-based I/O virtualization
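For a sense of scale on the 200M messages/second figure in the list above, a rough calculation (treating the quoted numbers as line-rate maxima and ignoring headers and protocol overhead) shows how tight the per-message budget becomes:

    # Rough scale check on the quoted ConnectX-6 numbers: 200Gb/s per port
    # and up to 200 million messages per second. Overheads are ignored, so
    # these are upper bounds, not measured figures.

    port_gbps = 200          # HDR InfiniBand port speed
    msgs_per_sec = 200e6     # quoted message rate

    ns_per_msg = 1e9 / msgs_per_sec                 # time budget per message
    bytes_per_sec = port_gbps * 1e9 / 8             # port bandwidth in bytes/s
    bytes_per_msg = bytes_per_sec / msgs_per_sec    # payload that fits per message

    print(f"Time budget per message: {ns_per_msg:.0f} ns")                   # 5 ns
    print(f"Port bandwidth: {bytes_per_sec / 1e9:.0f} GB/s")                  # 25 GB/s
    print(f"Average bytes per message at line rate: {bytes_per_msg:.0f} B")   # 125 B

In other words, sustaining the full message rate only matters for very small messages; larger transfers become bandwidth-bound long before the message-rate ceiling comes into play.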
