Intel Omni-Path – August 2015 Update

Intel Omni-Path August 2015 – Adapter Shot

At IDF 2015 we were briefed on the status of Intel Omni-Path, including a great overview of how it works and why Intel made the design decisions it did. Suffice it to say, as we put together our third and much denser, higher-spec colocation project launching next week, the topic of interconnects is top of mind. Intel is disclosing new details about Omni-Path today at Hot Interconnects. We will be taking an in-depth look at the topic as we get cards in the lab, and a Knights Landing piece is coming soon. For now, here are some of the new details we can share.

First off, what is Intel Omni-Path? Essentially, in the world of large systems (think HPC, supercomputers, storage and other clustered applications), standard 1Gb, 10Gb and 40Gb Ethernet does not cut it. Even 100Gb Ethernet, with its heavyweight network stack, does not meet the needs of low-latency transactions. InfiniBand was designed well over a decade ago to address the needs of high-speed storage, but it has since turned into the high-bandwidth, low-latency interconnect for the vast majority of supercomputers and HPC clusters. A few years ago, when QLogic sold its InfiniBand business, leaving Mellanox as the only standalone player, industry pundits said InfiniBand was dead. What actually happened is that the industry realized there was a better-than-Ethernet alternative that was readily available. A bit of work on the hardware and software stack made InfiniBand work for big MPI compute problems and even storage clusters. If you head over to the STH forums, there are many users who have turned to 40Gb/56Gb InfiniBand for lower-cost, lower-power networking.

In the meantime, Intel bought InfiniBand IP from QLogic, interconnect IP from Cray, and made a host of other acquisitions in the area. The result finally arrives with the 100Gb generation of interconnects, incarnated as Omni-Path. For those wondering, Omni-Scale was the old marketing name; the technology is now called Omni-Path. Intel is starting with 48-port switches and PCIe cards as the first products. We did get some hands-on time with Omni-Path, but unfortunately not with the card powered up and usable.

Intel Omni-Path August 2015 – Adapter Shot

The PCIe card is expected to come in at around 8W, although Intel did not disclose the process it is built on. We asked whether it uses the same 28nm TSMC process as Fortville but got a “no comment” response.

Knights Landing will soon have an on-package integrated dual-port Omni-Path controller. The bottom line is that this is coming to market rapidly, and Intel is making a very large push. After all, Xeons, Altera FPGAs and the new Knights Landing Xeon Phis can all benefit from faster interconnects.

Intel Omni-Path August 2015 – Architecture

In terms of other architecture notes, beyond Xeon Phi (Knights Landing) we also expect to see 14nm Xeon E5 chips with Omni-Path interconnects on package or on die. Intel has stated that the direction is on-die integration, which means lower power and lower latency.

Intel Omni-Path August 2015 – What is public

Moving to the new public information on Xeon Phi and Knights Landing, one can see that Omni-Path is going to be a key target for the platform. Intel is essentially taking a card in many ways analogous to a GPU accelerator for HPC applications, making it bootable with a Xeon E5 V3-compatible ISA, and giving it both high-speed local memory and access to six-channel DDR4. In other words, Intel is taking what we normally see as a PCIe accelerator and turning it into a full system that needs an interconnect. Omni-Path will be an on-package solution for lower power and easier integration, but there will also be a PCIe bus available to connect Mellanox InfiniBand or Ethernet-based adapters.

Intel Omni-Path August 2015 – New for Hot Interconnects

We will get into more of the architecture soon, but one of the most interesting aspects of Omni-Path is its “layer 1.5” transport protocol, which sends data in consistent 65-bit containers. These containers carry traditional packets. As a result, Intel is able to dynamically insert high-priority packets onto the wire even while a large, low-priority packet is mid-transmission. For applications needing low-latency transfers, this could be a game changer.
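To make the idea concrete, here is a minimal sketch (in Python, and emphatically not Intel's implementation) of flit-level preemption: a bulk packet is chopped into small fixed-size containers, and a high-priority packet that arrives mid-stream gets its containers onto the wire immediately instead of waiting for the bulk transfer to finish. The flit size, packet IDs and event format are all invented for illustration.

```python
# Illustrative sketch of flit-level preemption, the concept behind
# Omni-Path's "layer 1.5". All names and sizes here are hypothetical.
from collections import deque

FLIT_BYTES = 8  # hypothetical container payload size for this sketch


def to_flits(pkt_id, payload, priority):
    """Split a packet's payload into (pkt_id, priority, chunk) flits."""
    return deque(
        (pkt_id, priority, payload[i:i + FLIT_BYTES])
        for i in range(0, len(payload), FLIT_BYTES)
    )


def transmit(events):
    """events: list of (arrival_slot, pkt_id, payload, priority).

    Simulates a link that sends one flit per time slot, always draining
    the highest-priority queue (lowest number) that has flits pending.
    Returns the flits in the order they went onto the wire.
    """
    queues = {}               # priority -> deque of pending flits
    wire = []                 # flits actually sent, in order
    pending = sorted(events)  # future arrivals, ordered by slot
    t = 0
    while pending or any(queues.values()):
        # Enqueue any packets that have arrived by slot t.
        while pending and pending[0][0] <= t:
            _, pkt_id, payload, prio = pending.pop(0)
            queues.setdefault(prio, deque()).extend(
                to_flits(pkt_id, payload, prio))
        ready = [p for p in queues if queues[p]]
        if ready:
            # Preemption: a newly arrived high-priority packet wins the
            # slot even if a bulk packet is only partly transmitted.
            wire.append(queues[min(ready)].popleft())
        t += 1
    return wire


# A 64-byte bulk packet starts at slot 0; a 16-byte urgent packet arrives
# at slot 3 and cuts in ahead of the remaining bulk flits.
wire = transmit([(0, "bulk", b"x" * 64, 1), (3, "urgent", b"y" * 16, 0)])
print([f[0] for f in wire])
```

Running the example shows the urgent packet's two flits interleaved after only three bulk flits, rather than behind all eight, which is exactly the latency win the preemption scheme is after.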

Competitive solutions such as Oracle's Sonoma SPARC processor will have InfiniBand integrated on die, so this is clearly the direction the industry is headed. We are now at a point where there are three strong contenders for 100Gb interconnects: Ethernet, InfiniBand and Omni-Path. As the industry transitions over the next few quarters, it will be exciting to see many of the fabric latency penalties get some relief from new technologies.


Patrick has been running STH since 2009 and covers a wide variety of SME, SMB, and SOHO IT topics. Patrick is a consultant in the technology industry and has worked with numerous large hardware and storage vendors in Silicon Valley. The goal of STH is simply to help users find information about server, storage and networking building blocks. If you have any helpful information, please feel free to post on the forums.

