This year Intel began showing off the Fortville family of network adapters, which we are going to see in earnest with the Haswell-EP launch. I have been asked NOT to share specific performance figures or thermal imaging results until the official date, and that request will be honored. At the same time, Intel has information on Fortville readily available online, so it is time to start discussing why this is a game changer that will revolutionize server networking in the next few months.
First off, here are links on Intel’s publicly available sites as of 17 August 2014:
Also – rumor has it the latest Intel driver set released in August 2014 may have provisions for these new controllers.
Edit 22 August 2014 – Intel sent us a note saying they pulled down the product pages ahead of the official announcement after seeing this piece.
Why 40GbE Fortville is going to revolutionize networking
We will be releasing more information in short order around these chips. Here is why this is a game changer: 3-4x performance gains at a power envelope about 19% lower than previous generations.
The Intel X520 family offers dual 10 gigabit Ethernet ports (20Gbps total) at 8.6W. These cards use the Intel 82599ES controller, built on a 65nm process. Figure that is roughly 2.33Gbps per watt. I am not including the 10GBase-T parts since those use more power.
On the Fortville side it gets slightly more complex. One can get two 40 gigabit Ethernet ports using standard QSFP+ connections. The trick is that this is 80Gbps total, which is more than a PCIe 3.0 x8 slot can carry. PCIe 3.0 delivers an effective 7.877 gigabits per second per lane (8GT/s signaling with 128b/130b encoding), and network cards like the XL710-AM2 are limited to server standard PCIe 3.0 x8 slots. We can therefore do some simple math: 7.877 * 8 = 63.02Gbps. Of course there is overhead, so I would be happy to achieve say 88% of that.
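As a quick sanity check on that math, here is a sketch of the PCIe 3.0 bandwidth arithmetic (the 7.877 figure falls out of the 8GT/s lane rate and the 128b/130b encoding overhead):

```python
# PCIe 3.0 signals at 8GT/s per lane with 128b/130b encoding,
# so the effective data rate per lane is 8 * 128/130 Gbps.
lane_gbps = 8 * 128 / 130
slot_gbps = lane_gbps * 8          # a standard x8 slot
print(round(lane_gbps, 3))         # 7.877
print(round(slot_gbps, 2))         # 63.02
print(round(slot_gbps * 0.88, 1))  # ~55.5 Gbps usable at 88% efficiency
```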
Where things get really interesting is that despite being able to pump 3-4x what the previous generation could, Fortville has a lower TDP. According to the official ARK pages, Fortville is a 28nm part with a 7W TDP!
Doing some ridiculously simple math on efficiency:
- Spring Fountain X520-DA2 – 20/8.6 = 2.33Gbps/watt
- Fortville – 63.02/7 = 9.00Gbps/watt
That is a 3.87x improvement in efficiency with Fortville.
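The two bullets above reduce to a few lines of arithmetic, using the port and TDP figures quoted earlier:

```python
# Efficiency in Gbps per watt, from the figures quoted above.
x520 = 20 / 8.6        # dual 10GbE (20Gbps) at 8.6W
fortville = 63.02 / 7  # PCIe 3.0 x8-limited throughput at 7W
print(round(x520, 2))              # 2.33
print(round(fortville, 2))         # 9.0
print(round(fortville / x520, 2))  # 3.87
```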
QSFP+ based 40Gbps ports are nothing new. Even the STH colocation architecture has used Mellanox ConnectX-3 based 40GbE since earlier this year. Many 10GbE switches use 40GbE backhauls, so the technology is not new by any means. On the other hand, offerings from companies like Mellanox are expensive, especially the VPI Infiniband/Ethernet versions we use in the lab. We have been utilizing the ConnectX-3 cards for almost two years now in our motherboard compatibility testing. Switches are the big pain point right now, with large 40GbE switches making 10GbE versions look cheap by comparison.
Intel has a solution though. Much like other 40GbE implementations we have used, one can use 1x QSFP+ to 4x SFP+ breakout cables. Essentially this means that for 7W one can have connections to any combination of up to eight 10GbE switch or server ports. Cloud providers are certainly on the 10GbE bandwagon, and this is going to be an absolute game changer for them.
Intel has never had a 28nm process, so odds are that Intel will use TSMC or GlobalFoundries to build the chips. Given those companies' respective ties, it should be easy to deduce which one. The shrink from 65nm to 28nm means that the new Fortville controllers not only deliver more performance than previous generations, but also arrive at relatively stable pricing. ARK has the XL710-AM2 priced around $216, although we will need to see what the cards retail for. Either way, it is unlikely we will see pricing hit 3-4x previous generation parts the way performance has.
An interesting perspective: Talking with an SDN company executive
I had the opportunity to have breakfast with a networking executive this weekend. The general consensus was that this is going to lower the number of Cisco 1GbE switch ports needed. Furthermore, although we agreed that SDN at these kinds of speeds is far from optimal, the new power/performance envelope of these systems actually makes it economical to start working with more exotic SDN configurations. One can install three of these cards in a server alongside storage and directly attach 24 ports of 10GbE to a single storage and virtualization server. From a latency perspective, making a "big" virtual switch in a server is going to be slower than a physical switch, but it avoids the rack space, power consumption, cooling and extra hop that a physical switch requires.
For some perspective: in the X520 era, 24 10GbE ports would take 12 controllers (and you would run out of PCIe slots) at about 103W. There were some quad-port Broadcom based cards in the 14W range, so even using six of those expensive cards would be about 84W, maybe a bit higher. With Fortville, one can do this with 3-4 cards and 21-28W, depending on whether you use 6 ports per card or 8. In terms of physical slots, 2U servers can easily fit three Fortville controllers (four if one is onboard) and still have another 2-3 PCIe 3.0 x8 slots open for storage. Again, this may not be the ideal architecture for a number of reasons, but with Fortville, these types of architectures become practical to experiment with.
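The card-count and power comparison above can be sketched the same way, using the per-card figures quoted in this article:

```python
ports = 24
# Intel X520-DA2: 2x 10GbE per card at 8.6W each
x520_cards = ports // 2
print(x520_cards, round(x520_cards * 8.6, 1))  # 12 cards, 103.2W
# Quad-port Broadcom: 4x 10GbE per card at ~14W each
bcm_cards = ports // 4
print(bcm_cards, bcm_cards * 14)               # 6 cards, 84W
# Fortville XL710: 6 or 8 breakout 10GbE ports per 7W card
for per_card in (8, 6):
    cards = -(-ports // per_card)              # ceiling division
    print(cards, cards * 7)                    # 3 cards/21W, then 4 cards/28W
```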
Fortville is going to change networking. One of the most exciting things we are looking forward to is how it changes network topology and, hopefully, pushes 10GbE and 40GbE to higher adoption rates. For the software defined (x) teams, there frankly were existing offerings that could have filled this gap, but Intel's stamp of approval, and the large market share that goes along with it, is a game changer. Expect more coverage as we can share it.