Featured Build Log: 2-Node Windows Server 2016 Storage Spaces Direct Cluster

Storage Spaces Direct Red S2D Node Build Parts

I have been running VMs on an older overclocked 4790K with a single SSD for quite a while and decided it was time to step up my game. I have been adding VMs regularly and pushed that server to its limits, so a higher-capacity system was in order. A single system would be simple to maintain, but after Microsoft released Windows Server 2016 with Storage Spaces Direct technology, I decided I would be an early adopter and see how much pain I could endure setting up a two-node lab. I am an MSP, and this Windows Server 2016 Storage Spaces Direct hyper-converged cluster is going to house a dozen or two VMs. Some are for fun (Plex), some are for my business (an Altaro offsite backup server). It will host a virtual firewall or two as well as all of the tools I use to manage my clients. Here is the original post of this build log on the STH Forums.

Introducing the 2-Node Windows Server 2016 Storage Spaces Direct Cluster

I’ve been in the technology game since I was about 14, when I made friends with the owner of a local computer shop. I ended up managing the shop and eventually went to school and became a full-time sysadmin. Since I’ve been rolling my own hardware for so long, that is generally my preferred way to go when it comes to personal projects. I am also space limited, so I had to figure out a way to do this without a rack. I decided to build the two-node cluster using off-the-shelf equipment so that it would look, perform, and sound the way I wanted. Did I mention I hate fan noise? If it’s not silent, it doesn’t boot in my house. A rack will definitely be in my future, and the use of standardized server components means I can migrate the two servers to rack-mountable chassis with minimal effort.

Storage Spaces Direct Red V Blue S2D Nodes

The end result is a two-node hyper-converged Windows Storage Spaces Direct cluster using Supermicro motherboards, Xeon processors, and Noctua fans. Silence is something you can’t buy from HP or Dell, unfortunately. It can only withstand a single drive or server failure, but that won’t be a problem since it is on premises and using SSDs.

Storage Spaces Direct Blue S2D Node Build Parts

The two nodes are connected on the back end at 40Gbps using HP-branded ConnectX-3 cards and a DAC, which handle the S2D (Storage Spaces Direct), heartbeat, and live migration traffic. Going direct-connect meant I did not need a 40GbE switch, which saves an enormous amount of power while still giving me 40GbE worth of performance. On the front end, each node is connected via fiber at 10Gbps to a Ubiquiti US-16-XG switch (a close relative of the Ubiquiti ES-16-XG reviewed on STH) and at 1Gbps to an unmanaged switch connected to a Comcast modem. This allows me to use a virtual firewall and migrate it between the nodes.
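If you wire it up the same way, the cluster network roles are worth setting explicitly so the direct-connect link carries the storage and migration traffic. Here is a minimal PowerShell sketch; the network names, the 192.168.100.0/24 subnet for the 40GbE link, and the role assignments are assumptions you would adjust to your own environment:

    # Rename the cluster networks and assign roles (names and subnets are placeholders)
    (Get-ClusterNetwork -Name "Cluster Network 1").Name = "Storage-40GbE"
    (Get-ClusterNetwork -Name "Storage-40GbE").Role = 1      # 1 = cluster only: S2D, heartbeat, live migration
    (Get-ClusterNetwork -Name "Cluster Network 2").Name = "LAN-10GbE"
    (Get-ClusterNetwork -Name "LAN-10GbE").Role = 3          # 3 = cluster and client traffic

    # Keep live migration on the back-to-back 40GbE subnet and move the data over SMB
    Enable-VMMigration
    Set-VMHost -VirtualMachineMigrationPerformanceOption SMB
    Add-VMMigrationNetwork "192.168.100.0/24"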

Storage Spaces Direct Red S2D Node Build Parts

Each node is running a Supermicro microATX board and an Intel Broadwell-based Xeon E5 V4 CPU. I found a great deal on a 14-core (2.5GHz) E5 V4 CPU on eBay for one of the nodes and purchased a 6-core (3.6GHz) Xeon for the other. The logic behind the two different CPUs is to have one node with higher clocks but a lower core count and the other with a high core count but lower clocks. This does impose a 12 vCPU per-VM limit (the maximum number of threads on the 6-core CPU); any more and the VM could only run on the 14-core node.
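In practice, that means any VM that should float between both hosts gets capped at 12 vCPUs. A quick sketch using the standard Hyper-V cmdlets, with "Plex01" as a placeholder VM name:

    # Cap the VM at 12 vCPUs so it can still run on the 6-core/12-thread node
    Set-VMProcessor -VMName "Plex01" -Count 12

    # Confirm what each host can offer
    Get-VMHost | Select-Object Name, LogicalProcessorCount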

Storage Spaces Direct Red S2D Node 40GbE Installed

In order to use S2D on a 2-node cluster with all-flash storage you need a minimum of 4 SSDs, two per server. I am using a total of 6 Samsung PM863 drives, 3 per server. Since a 2-node cluster can only use mirroring, I am able to utilize half of the total storage, about 1.3TB. Since this configuration can only handle a single drive failure, and even that puts the cluster at significant risk, I will be adding an additional drive to each server in the future. Having an additional drive’s worth of “unclaimed” space allows S2D to immediately repair onto the unused capacity if there is a failure, similar to a hot spare in a RAID array. Performance is snappy but not terribly fast on paper: 60K read IOPS and 10K write IOPS.
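For reference, carving a mirrored volume out of the pool looks roughly like this once S2D is enabled; the pool name follows the default "S2D on <cluster name>" pattern, and the volume name and size below are placeholders:

    # Two-way mirror is the only resiliency option on a 2-node pool, so usable space is half of raw
    New-Volume -StoragePoolFriendlyName "S2D on S2DCluster" `
               -FriendlyName "CSV01" `
               -FileSystem CSVFS_ReFS `
               -ResiliencySettingName Mirror `
               -Size 1TB

    # Check how much unallocated space the pool keeps around for automatic repairs
    Get-StoragePool -IsPrimordial $false |
        Select-Object FriendlyName, Size, AllocatedSize, HealthStatus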

Storage Spaces Direct Intel Xeon E5 2658 V4 Node

I also decided to play with virtualized routers and am currently running pfSense and Sophos XG in VMs. By creating a network dedicated to my Comcast connection, I am able to migrate the VMs between the nodes with no downtime other than a single lost packet if the move is being lazy. I will be trying out firewalls from Untangle and a few others to see which works best.
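The trick is simply an identically named external vSwitch on each node, bound to the NIC that faces the unmanaged switch and modem. A sketch follows; the switch, adapter, and VM names are all placeholders:

    # Run on each node: external switch on the 1GbE NIC facing the Comcast modem
    New-VMSwitch -Name "WAN" -NetAdapterName "Ethernet 3" -AllowManagementOS $false

    # Attach the firewall VM's WAN adapter to it
    Connect-VMNetworkAdapter -VMName "pfSense01" -Name "WAN NIC" -SwitchName "WAN"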

Storage Spaces Direct Failover Cluster Manager Nodes

The hardware build process went very smoothly thanks to the helpful people on the STH forum and the deals section, and I saved a lot of money by buying used when I could. I ran into that nifty Supermicro fluctuating-fan problem, which I managed to fix thanks to more help, and both nodes are essentially silent: the Noctua fans run around 300 RPM on average and haven’t gone above 900 RPM under Prime95.

Power consumption is right in line with what I was hoping for.

  • Node 1 idle: 50W
  • Node 2 idle: 45W
  • Node 2 Prime95: 188W

Neither node puts out enough heat to mean anything, and even under full load they are both dead silent. Due to the oddities of the Styx case, airflow is actually back to front and top to bottom. This works great with the fanless power supplies as they get constant airflow.

Storage Spaces Direct Blue S2D Node 40GbE Installed

The software configuration was a good learning experience. There are about a million steps that need to be done, and while I can do most of them in my sleep, S2D was a new experience. Since S2D automates your drive configuration, I ran into a problem I wasn’t expecting. The major gotcha involved S2D grabbing an iSCSI share the moment I added it to the machines; it tried to join it to the pool and ended up breaking the whole cluster… twice. Admittedly, I knew what would happen the second time, but I’m a glutton for punishment apparently.
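For anyone following along, the core of the build boils down to a handful of commands. This is only a rough sketch: the node names match mine, the cluster name and static address are placeholders, and the disk check is the step I wish I had done before connecting anything over iSCSI, since S2D claims any disk it considers eligible:

    # Validate the nodes and build the cluster without auto-adding storage
    Test-Cluster -Node "Blue1", "Red1" -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"
    New-Cluster -Name "S2DCluster" -Node "Blue1", "Red1" -NoStorage -StaticAddress 192.168.1.50

    # See which disks S2D considers poolable before enabling it
    Get-PhysicalDisk | Select-Object FriendlyName, BusType, CanPool, Size

    # Enable S2D; connect iSCSI targets only after the pool exists (and keep them out of it)
    Enable-ClusterStorageSpacesDirect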

Storage Spaces Direct Failover Cluster Manager Storage Disks

Other than that, everything has worked flawlessly. Rebooting a node causes a roughly 10-minute window in which the storage system is in a degraded state while the rebooted node’s drives are resynced. I can move all VMs from one node to the other in just a few seconds (over a dozen VMs at the same time, which only uses about 12Gbps of bandwidth) or patch and reboot a node without shutting anything else down.
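Patching follows the usual drain-and-resume cycle, and the resync can be watched from PowerShell; a sketch, using my node names:

    # Drain a node before patching, then bring it back and let the storage resync
    Suspend-ClusterNode -Name "Red1" -Drain -Wait
    # ...patch and reboot Red1...
    Resume-ClusterNode -Name "Red1" -Failback Immediate

    # Watch the repair jobs until the virtual disks report Healthy again
    Get-StorageJob
    Get-VirtualDisk | Select-Object FriendlyName, HealthStatus, OperationalStatus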

Storage Spaces Direct Intel Xeon E5 1650 V4 Node

Overall I think Microsoft hit a home run with Windows Server 2016 and Storage Spaces Direct. The drive configuration is one of the most flexible of all the hyper-converged solutions out there, and the implementation has been rock solid no matter what I’ve thrown at it. The biggest drawback of a 2-node setup is that it can only handle a single failure at a time, but I will be mitigating most of that in the near future.

Part List for the 2-Node Windows Server 2016 Storage Spaces Direct Cluster

If you want to build something similar, here are the parts I used. Since much of this gear was second hand, the build price was less than a quarter of what it would have cost new from Dell or HP.

Node1 (Blue1):

Node2 (Red1):

Other hardware used in build:

Additional Resources

This project came about mostly because of STH, so thank you Patrick for the great site! I also want to thank everyone who helped by answering all of my questions, especially those about the Mellanox ConnectX-3 cards; this thing would have died young without the help. I have some commands and a mostly finished outline available if anyone wants to build something similar. Here are some links I found useful while doing this project:

Final Thoughts

I wouldn’t put a 2-node cluster into production for any of my clients. The recommended minimum is a 4-node cluster, but I think 3 nodes would work well, if a little inefficient in terms of storage capacity. This is purely a low-power, screaming-fast, and inexpensive test cluster.

I’ve built a 2-node cluster using 1Gbps links before; don’t bother. It’s functional but not much more, and the recommended 10Gbps minimum is pretty accurate. 40GbE is only marginally more expensive and much faster with SSDs.

There are enough resources available to find an answer to almost anything, but it does require a bit of searching. I had issues multiple times because the pre-release versions of Server 2016 used slightly different syntax than the release version (thanks, Microsoft). S2D has a mind of its own; don’t screw with it, just let it do its job. If you fight it you will lose, ask me how I know…

Stop by the STH forums if you are looking to build something similar.

Comments

  1. S2D is definitely the killer app of Windows Server 2016. Unfortunately, requiring the Datacenter edition just killed it. I think Microsoft should allow 2-node S2D in the Essentials edition and 4-node S2D in the Standard edition (networked using dual-port 40GbE NICs in a ring topology, so no need for a pair of high-end switches), leaving 8-16 node S2D for the Datacenter edition, since more than 4 nodes would require $20K switches.

  2. Great write-up Jeff! I love the X-wing and Tie fighter co-existing in a cluster working together. You pointed out a few pains you had, I would love to chat and learn more on how we can improve S2D. Shoot me an email.
