Mini Cluster in a Box V2 – 64GB of RAM, 24 Cores, 13 NICs, 3 Systems, 2 Switches
Recently I have been working on a new mini cluster project. With the release of Intel’s Avoton platform, there is finally a solution that makes sense for building small points of presence. Industry standards allow for more or less 110-120W of power to be consumed per rack unit at just about any data center. While no company has made a perfect solution yet, the goal has been to start playing with small clusters in a box to push density using easy-to-find, off-the-shelf parts.
For the build we utilized the BitFenix Prodigy mini-ITX chassis. As far as mini-ITX chassis go, this is certainly not a small case; it is meant to be an excellent platform for higher-end mITX systems with many drives and even dual-slot GPU coolers. The chassis is still suboptimal for what we are doing, but it did work, which is good enough for this proof of concept.
Inside the chassis is the cluster. Motherboards are stacked inside the chassis. Power remains in the same compartment a standard power supply would occupy, yet takes up much less space. Networking is handled via two switches, one for IPMI and one for data. The mass of short CAT-6 Ethernet cables can be found where the BitFenix removable 3.5″ hard drive rack would otherwise go. We also moved the 120mm fan to a higher position; the Intel Atom C2750 is a very cool-running chip, but components do need at least some airflow.
Here is the stack of three Intel Atom C2750 platforms. The top two are Supermicro A1SAi-2750F platforms, with the bottom being an ASRock C2750D4I. Notably, they also represent the platforms currently available for use in this type of project. The Supermicro platforms offer four network ports, so we were able to connect them both to the switch and to the other units directly. All three motherboards had at least two data network ports connected, as well as their IPMI NICs.
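Because every node has a dedicated IPMI NIC on the management switch, out-of-band tasks like checking power state can be scripted. A minimal sketch that builds the `ipmitool` power-status command for each node; the BMC addresses and the ADMIN/ADMIN credentials are hypothetical placeholders, not the actual configuration of this build:

```python
# Build ipmitool chassis-power-status commands for each node's BMC.
# IP addresses and credentials below are hypothetical placeholders.
bmc_addresses = ["10.0.0.11", "10.0.0.12", "10.0.0.13"]

def power_status_cmd(host, user="ADMIN", password="ADMIN"):
    """Return the ipmitool invocation to query chassis power over LAN."""
    return f"ipmitool -I lanplus -H {host} -U {user} -P {password} chassis power status"

commands = [power_status_cmd(h) for h in bmc_addresses]
for cmd in commands:
    print(cmd)
```

From here the same pattern extends to `chassis power on`/`off` or `sol activate` for remote consoles, which is what makes a dedicated IPMI switch worthwhile in a headless cluster like this.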
Another change here was that we used readily available PicoPSUs to power the machines. This was the simplest way to get everything running given our space constraints. Certainly an enterprising company could potentially wire one PSU to all three motherboards, as a single PicoPSU 150XT unit can easily handle the cluster’s total power consumption.
In terms of networking, this required a bit of creativity. 16-port switches were physically too large, so a compromise was made. A 5-port Netgear GS105 v4 fits neatly in a 3.5″ drive bay and provides IPMI networking for the existing nodes plus a potential future node. An 8-port TP-LINK gigabit switch fits between the hard drive cage and the front chassis mesh with a little bit of maneuvering. One of its ports is routed to the rear of the chassis to provide connectivity to the external network.
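With two data ports per node landing on the 8-port switch, the links can be aggregated on the host side. A minimal sketch of a Debian-style `/etc/network/interfaces` fragment, assuming hypothetical interface names and addressing; `balance-alb` is chosen here because it needs no LACP support on an unmanaged switch like the ones in this build:

```
# /etc/network/interfaces fragment (requires the ifenslave package).
# Interface names and addressing are hypothetical placeholders.
auto bond0
iface bond0 inet static
    address 192.168.10.11/24
    bond-slaves eno1 eno2
    bond-mode balance-alb   # adaptive load balancing; no switch-side LACP needed
    bond-miimon 100         # link monitoring interval in ms
```

Whether bonding, or simply splitting data and storage traffic across the two ports, is the better use of the extra NICs depends on the workload.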
On the opposite side of the chassis one can see the expansion slots for each platform and some additional cable routing (not pretty, I know, but this is still a cluster being added to).
One major callout is that under the bottom motherboard there is a piece of red electrical tape. This is because the motherboards are rotated 180 degrees, so the rear of each motherboard faces the front of the chassis. That change means the mITX mounting holes align with the motherboards at only two of the four mounting points. Electrical tape was used on the two unused mounting points to provide insulation.
In the 5.25″ expansion bay we added an Icy Dock 6-in-1 2.5″ hot-swap chassis. This easily allows six drives to be split among the three nodes. Further, the chassis has on-door mounting and an additional unused 3.5″ mounting spot for more storage capacity.
Power is a mess. We are using three small power bricks in the standard PSU cutout. Alongside these three items we also have the TP-LINK and Netgear switch power adapters.
We are still looking for a clean power solution, such as a power strip, so that a single power cord and a single Ethernet plug are all that is needed to make the Mini Cluster in a Box V2 simple to set up.
This build was significantly more practical than the initial build, which utilized an Intel Atom S1260 as well as two Raspberry Pi nodes. In terms of power consumption, we are utilizing around 55-60W at normal idle. Under full load, and after time for some heat soak, we generally hit around 112W with only three SSDs for storage. Certainly in line with the design goals.
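The measured figures can be sanity-checked against the budgets mentioned earlier. A rough back-of-the-envelope sketch using the article's numbers (112W wall draw under load, the PicoPSU 150XT's 150W rating, and the roughly 120W-per-1U colocation allowance); this treats the wall figure as an upper bound on DC load and is not a PSU sizing guide:

```python
# Figures taken from the build's measurements; a rough sanity check only.
full_load_w = 112         # measured wall draw under load with three SSDs
picopsu_rating_w = 150    # PicoPSU 150XT rating
rack_unit_budget_w = 120  # ~1A at 120V per 1U

headroom_w = picopsu_rating_w - full_load_w
per_node_w = full_load_w / 3  # rough per-node share, switches included

print(f"Headroom if one 150XT fed all three boards: {headroom_w}W")
print(f"Approximate per-node draw under load: {per_node_w:.1f}W")
print(f"Whole cluster fits one 1U power budget: {full_load_w <= rack_unit_budget_w}")
```

The roughly 37W-per-node share under load is what makes the density claim work: three nodes plus two switches together stay inside the allowance a single 1U server traditionally consumed.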
In the end, we crammed three nodes (in some applications node count is more important than individual performance), up to 96GB of RAM (we used only 64GB for POC purposes), 10 data NICs, just about 1.5TB of storage capacity with 24 onboard SATA ports, an IPMI network, and two switches into a relatively small and portable form factor. The best part is that adding a fourth mITX platform is not out of the question (and is, in fact, an ongoing project).
The major thing this proof of concept showed is that 1A at 120V per 1U is now the realm of a small cluster rather than a single system. In fact, given the Intel Atom C2750’s performance, we can now fit an entire cluster of servers, networking included, into a small form factor while staying within low power consumption thresholds.
Certainly there are things that can be improved upon. The networking situation is suboptimal; realistically, an Avoton node running a pfSense VM could be used instead of an additional network switch, which would save some power and provide better management capabilities. On the power side, a right-sized power strip is desperately needed, and better still if a small battery backup unit could be utilized. The chassis is significantly larger than necessary, but was easy to work with. Still, as far as proofs of concept go, this is just about what we wanted to see.
Head over to the forum post on this cluster if you have ideas and to see more information about the build.