40GbE Intel Fortville XL710 – Networking will never be the same

Intel Fortville Controller Page

This year Intel began showing off the Fortville family of network adapters, which we are going to see in earnest when Haswell-EP launches. I have been asked NOT to share specific performance figures or thermal imaging results until the official date. That request will be honored. At the same time, Intel has information on Fortville readily available online, so it is time to start discussing why this is a game changer that will revolutionize server networking in the next few months.


First off, here are links on Intel’s publicly available sites as of 17 August 2014:

Also – rumor has it the latest Intel driver set released in August 2014 may include provisions for these new controllers.

Intel Fortville Controller Page
Intel Fortville Controller Page @ Intel.com

Edit 22 August 2014 – Intel sent us a note saying they pulled down the product pages ahead of the official announcement after seeing this piece.

Why 40GbE Fortville is going to revolutionize networking

We will be releasing more information in short order around these chips. Here is why this is a game changer: 3-4x performance gains at a power envelope about 19% lower than previous generations.

The Intel X520 family offers dual 10 gigabit Ethernet ports (20gbps total) at 8.6w. These cards use the Intel 82599ES controller built on a 65nm process. Figure that is roughly 2.33gbps/watt. I am not including the 10GBase-T parts since those use more power.

On the Fortville side it gets slightly more complex. One can get two 40 gigabit Ethernet ports using standard QSFP+ connections. The trick is that is 80gbps total, which is more than the PCIe 3.0 bus can deliver. PCIe 3.0 provides 7.877 gigabits per second per lane (8 GT/s with 128b/130b encoding). Network cards like the XL710-AM2 are limited to server standard PCIe 3.0 x8 slots. We can therefore do some simple math: 7.877 * 8 = 63.02gbps. Of course there is overhead, so I would be happy to achieve say 88% of that.
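For those who want to check the math, here is a quick back-of-the-envelope sketch. The 7.877gbps/lane figure comes from PCIe 3.0's 8 GT/s signaling rate with 128b/130b encoding; the raw ceiling ignores PCIe protocol overhead:

```python
# Back-of-the-envelope: PCIe 3.0 x8 ceiling vs. dual 40GbE line rate
lane_rate_gbps = 8 * 128 / 130      # 8 GT/s with 128b/130b encoding ~= 7.877 Gbps/lane
lanes = 8                           # XL710-AM2 sits in a PCIe 3.0 x8 slot
bus_gbps = lane_rate_gbps * lanes   # raw bus ceiling before protocol overhead
line_rate_gbps = 2 * 40             # two QSFP+ 40GbE ports

print(f"PCIe 3.0 x8 ceiling: {bus_gbps:.2f} Gbps")   # ~63.02 Gbps
print(f"Dual 40GbE line rate: {line_rate_gbps} Gbps")
print(f"Bus covers {bus_gbps / line_rate_gbps:.0%} of line rate")
```

So even before TLP/DLLP overhead, the bus tops out around 79% of the dual-port line rate.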

Where things get really interesting is that despite being able to pump 3-4x what the previous generation did, Fortville has a lower TDP. According to the official ARK pages Fortville is a 28nm part with a 7w TDP!

Doing some ridiculously simple math on efficiency:

  • Spring Fountain X520-DA2 – 20 / 8.6 = 2.33gbps/watt
  • Fortville – 63.02 / 7 = 9.00gbps/watt

That is a 3.87x improvement in efficiency with Fortville.
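The same arithmetic as a quick sanity-check script, using ~63.02gbps for the PCIe 3.0 x8 ceiling (7.877gbps/lane times eight lanes) and the TDP figures from the ARK pages:

```python
# Sanity check on the gbps-per-watt comparison above
x520_gbps, x520_watts = 20.0, 8.6     # dual 10GbE, 82599ES controller
xl710_gbps, xl710_watts = 63.02, 7.0  # PCIe 3.0 x8 ceiling, Fortville TDP

x520_eff = x520_gbps / x520_watts     # ~2.33 Gbps/W
xl710_eff = xl710_gbps / xl710_watts  # ~9.00 Gbps/W

print(f"X520-DA2: {x520_eff:.2f} Gbps/W")
print(f"XL710:    {xl710_eff:.2f} Gbps/W")
print(f"Improvement: {xl710_eff / x520_eff:.2f}x")   # ~3.87x
```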

QSFP+ based 40gbps ports are nothing new. Even the STH colocation architecture has used Mellanox ConnectX-3 based 40GbE since earlier this year. Many 10GbE switches use 40GbE backhauls, so the technology is not new by any means. On the other hand, offerings from companies like Mellanox are expensive, especially the VPI Infiniband/Ethernet versions we use in the lab. We have been utilizing the ConnectX-3 cards for almost two years now in our motherboard compatibility testing. Switches are the big pain point right now, with large 40GbE switches making the 10GbE versions look as cheap as bubble gum.

Intel has a solution though. Much like other 40GbE implementations we have used, one can use 1x QSFP+ to 4x SFP+ breakout cables. Essentially this means that for 7w one can have connections to a combination of eight 10GbE switch or server ports. Cloud providers are certainly on the 10GbE bandwagon, and this is going to be an absolute game changer for them.

Intel has never had a 28nm process, so odds are that Intel will use TSMC or GlobalFoundries to build the chips. Given one of those companies’ ties, it should be easy to deduce which one. That shrink from 65nm to 28nm means the new Fortville controllers are not only able to deliver more performance than previous generations; we are also seeing relatively stable pricing. ARK has the XL710-AM2 priced around $216, although we will need to see what the controllers retail for. Even so, given the value compared to previous generations, it is unlikely we will see pricing hit 3-4x that of previous generation parts.

An interesting perspective: Talking with an SDN company executive

I had the opportunity to have breakfast with a networking executive this weekend. The general consensus was that this is going to lower the number of Cisco 1GbE switch ports needed. Furthermore, although we agreed that SDN at these kinds of speeds is far from optimal, the new power/performance envelope of these systems actually makes it economical to start working with more exotic SDN configurations. One can install three of these cards along with storage and networking in a server and directly attach 24 ports of 10GbE to a storage and virtualization server. From a latency perspective, making a “big” virtual switch in a server is going to be slower than a physical switch, but it comes without the rack space, power consumption, cooling and extra hop a physical switch requires.

For some perspective: in the X520 era, 24 10GbE ports would take 12 controllers (and you would run out of PCIe slots) at about 103 watts. There were some quad port Broadcom based cards in the 14w range, so even using six of those expensive cards that would be about 84w, maybe a bit higher. With Fortville, one can do this with 3-4 cards at 21-28w, depending on whether you use 6 or 8 ports per card. In terms of physical slots, 2U servers can easily fit three Fortville controllers (four if one is onboard) and still have another 2-3 PCIe 3.0 x8 slots open for storage. Again, this may not be the ideal architecture for a number of reasons, but with Fortville, these types of architectures become practical to experiment with.
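The card-count and power comparison is simple enough to script. This sketch assumes 2 ports per X520-DA2 at 8.6w, and up to 8x 10GbE per Fortville card via QSFP+ breakouts at a 7w TDP:

```python
# Cards and watts needed for 24 ports of 10GbE, per generation
import math

target_ports = 24

# X520-DA2: 2x 10GbE per card at 8.6 W each
x520_cards = math.ceil(target_ports / 2)    # 12 cards
x520_watts = x520_cards * 8.6               # ~103 W

# Fortville: up to 8x 10GbE per card via QSFP+ breakouts, 7 W TDP
xl710_cards = math.ceil(target_ports / 8)   # 3 cards
xl710_watts = xl710_cards * 7               # 21 W

print(f"X520:  {x520_cards} cards, {x520_watts:.0f} W")
print(f"XL710: {xl710_cards} cards, {xl710_watts} W")
```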


Fortville is going to change networking. One of the most exciting things we are looking forward to is how it changes network topology and hopefully starts moving 10GbE and 40GbE to higher adoption rates. For the software defined (x) teams, there were frankly existing offerings that could have filled this gap, but Intel’s stamp of approval, and the large market share that goes along with it, is a game changer. Expect more coverage as we can share.


  1. Once these cards and Haswell-EP are out, I am totally going to build a converged switch with SSD/ storage.

    With that much potential bandwidth and connectivity, you probably save $1000/year just in power and space by getting rid of the switch. I know switches are better, but we have a lot of applications that are really heavy (e.g. VM deploy) at times, and then a particular link cannot have much traffic for hours afterwards. That kind of stuff is exactly what we would need this for.

    It’s also going to make booting headless + HA networked shared storage that much more attractive.

  2. I’m salivating at how awesome these look.

    Imagine if Intel actually sold the XL710-AM2 cards in the $200/each range. I would start doing converged boxes in labs.

    I’m so excited about this launch. Thanks for sharing.

  3. What about a mesh network or something like that? To me, if I can get density, I want to start pushing out my Cisco gear.

    I was browsing the forums here and there seems to be a $750 40 gig/ 10 gig capable L2 switch.

  4. It seems that Intel has taken all pages about this NIC down. You can search their site for XL710 and get several hits, but all the links are basically dead.

    Note: this is as of today (2014-08-27).

  5. > “Networking will never be the same…”

    If I interpret that too literally, it fails. The oldest methods occasionally get reintroduced, and even new methods often rely on old technology; it did not leap out how this card is revolutionary beyond prior revolutions.

    How about the Intel Xeon 1500 Series SoC CPUs with 2x 10GbE Intel Ethernet ports on the chip?

    Only half as fast, you say? How about the TDP and the ‘brain power’ those ports have; next year will be faster, and you can run an OS on it now…

    This chip might ‘revolutionize networking’ as it can make a smart router and could still take a few additional PCI cards (networking or otherwise) in an mITX form factor.

    Motto: be gentle with your excitement in the tech world, as what you learned a few decades ago or purchased one decade ago is worthless; what you learned a few years ago or purchased one year ago you cannot sell (for much).

  6. According to the latest documentation, the XL710 does not support more than one QSFP+ to 4x SFP+ breakout cable. On top of that, it does not support RDMA. With these options lacking, the real revolution will come further in the future with CannonLake.

