Mellanox ConnectX-2 MHQH19B-XTR – 40gbps Windows SMB 3.0 Adapters


In Windows Server 2012 and Windows 8, Microsoft introduced SMB 3.0, which greatly improves shared storage performance. After playing with it a bit, and I think others will concur, whereas SMB 2.1 was not a strong performance option, this generation of Windows SMB 3.0 does a lot to bridge the gap between it and faster protocols such as iSCSI (given that there are still significant differences in how they work). With the new Hyper-V enhancements this time around, and the fact that there are a ton of fast storage options out there with SSD storage now pushing into the $0.50 to $0.65 per GB range, users are looking to speed up the network side of the equation. 10 gigabit Ethernet is great, especially since it is easy to plug adapters into existing Ethernet networks, but the cost per port on both switches and NICs is still very high compared to Infiniband counterparts.

After Max’s Mellanox MHEA28-XTC piece, where he looked at the performance of two dual port 10gbps Infiniband cards, there has been a healthy amount of activity on the forums looking for faster options. One example of this is that one of the forum users, ehorn, found Mellanox ConnectX-2 MHQH19B-XTR Infiniband cards on ebay for under $180, and we found that a $100 best offer bid was accepted. That is a 40gbps single port card for $100, which is amazing. One does need to get a little bit lucky, but $299 for the dual port models seems fairly regular. On the 10GbE side, the vast majority of adapters are single or dual port, with quad port 10GbE adapters such as the ATTO FastFrame NS14 costing upwards of $2,000, or about twenty times the cost of a single-slot Infiniband solution such as the Mellanox MHQH19B-XTR. That makes $100 for 40gbps fairly attractive. Just to put this into perspective, the first 3.0gbps JBOD SAS expander chassis I built some time back used a single SFF-8088 to SFF-8088 cable to push 12.0gbps over four SAS/SATA lanes. Moving to SAS II or SATA III at 6.0gbps, that figure moves to 24gbps, still well short of what one can get with Infiniband in terms of raw throughput.
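
For those who want to see the math, here is a quick back-of-the-envelope sketch in Python. The SAS figures are straight lane math from above; the Infiniband line assumes a 4X QDR link (four 10gbps lanes) with 8b/10b encoding, and none of the numbers account for protocol overhead, so treat them as raw ceilings rather than real-world results.

    # Rough throughput comparison: 4-lane SAS/SATA cable vs. a QDR Infiniband port.
    # Signaling-rate math only; real-world throughput will be lower.
    SAS_LANES = 4                          # lanes in one SFF-8088 to SFF-8088 cable

    sas_gen1 = SAS_LANES * 3.0             # 4 x 3.0gbps lanes
    sas_gen2 = SAS_LANES * 6.0             # 4 x 6.0gbps lanes (SAS II / SATA III)

    ib_qdr_signal = 4 * 10.0               # QDR 4X link: four 10gbps lanes
    ib_qdr_data = ib_qdr_signal * 8 / 10   # usable data rate after 8b/10b encoding

    print(f"4x SAS/SATA 3.0gbps lanes : {sas_gen1:.0f} gbps")
    print(f"4x SAS/SATA 6.0gbps lanes : {sas_gen2:.0f} gbps")
    print(f"QDR Infiniband 4X link    : {ib_qdr_signal:.0f} gbps signaling, "
          f"~{ib_qdr_data:.0f} gbps data rate")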

One major note here: these are models, like the MHEA28-XTC, without onboard memory. That means that while you can use them with Windows Server 2012, you cannot use these cards with Solaris derivatives in applications such as an OpenIndiana based ZFS server, so this is more of a Windows SMB 3.0 play. I wanted to bring this to the attention of folks for two reasons. First, join the forums to either contribute great finds of your own or see if anyone has found an interesting buy recently. You can use this ebay search for the Mellanox MHQH19B-XTR, but you may want to also see what else folks are finding. Second, I am going to add the Mellanox MHQH19B-XTR to my array of server oriented cards that I verify with each new motherboard I review. For those that are not aware, when I do motherboard reviews I have eight add-in cards spanning Ethernet controllers, Infiniband controllers and RAID controllers/HBAs that I test every motherboard with. I think compatibility is very important for any server or workstation application since there is generally a need to customize using add-in cards. The Mellanox MHQH19B-XTR is several generations newer than the 10gbps Mellanox MHEA28-XTC cards I have been using, so I figured it would be a good addition.

Mellanox ConnectX-2 MHQH19B-XTR

I did want to leave one final thought: the Mellanox MHQH19B-XTR is a ConnectX-2 part. There are some ConnectX (first generation) QDR Infiniband cards out there, such as the Mellanox MHQH19-XTC, but they do not appear to support SMB 3.0 from what we know right now. They would still be interesting in Windows Server 2012 plus Windows 8 or Linux setups similar to what Max showed in his How to Set up an Infiniband SRP Target on Ubuntu 12.04 guide, but I did want to point out that distinction between the two generations.

2 COMMENTS

  1. Patrick, good write up.

    “..this generation of Windows SMB 3.0 does a lot to bridge the gap between it and faster protocols such as iSCSI..”

    According to Microsoft, with SMB 3.0 that gap is now bridged.
    SMB 3.0 (with the right NIC) is faster & offers more enhancements (aka higher performance) than iSCSI

  2. Yes, you can use SMB3 in place of iSCSI, but after a LOT of disk testing, I’ve found performance falls off a cliff when you use Hyper-V over SMB. SMB 3 really works best with large sequential writes and seems to do very poorly with small blocks of data.

    Once you start testing 4K and IOPS it becomes really clear that there is a huge penalty for using disks over SMB. I’m seeing about 1/8 of the local storage performance. That’s with a Mellanox ConnectX-2 using RDMA. I can transfer data at 3GB/s (limited by the x8 PCIe slot) when using 4-8MB block sizes.

    I’m going to reach out to the SMB team and see if they can either confirm my results or explain how to tune SMB for Hyper-V.
