Mini Cluster in a Box V2 – 64GB of RAM, 24 Cores, 13 NICs, 3 Systems, 2 Switches

Posted November 13, 2013 by Patrick Kennedy in Client Tips
Mini Cluster in a Box V2 Exterior

Recently I have been working on a new mini cluster project. With the release of Intel’s Avoton platform, there is finally a solution that makes sense for building small points of presence. Just about any data center allows roughly 110-120W of power consumption per rack unit. While no company has made a perfect solution yet, the goal has been to start playing with small clusters in a box that push density using easy-to-find, off-the-shelf parts.

For the build we utilized the BitFenix Prodigy mini-ITX chassis. As far as mini-ITX chassis go, this is certainly not a small case. It is meant to be an excellent platform for higher-end mITX systems with many drives and even dual-slot GPU coolers. The chassis is still far from optimal for what we are doing, but it did work, which is good enough for this proof of concept.

Mini Cluster in a Box V2 Exterior

Inside the chassis is the cluster itself. The three motherboards are stacked vertically. Power remains in the same compartment a standard power supply would occupy, yet consumes much less space. Networking is handled by two switches, one for IPMI and one for data. The mass of short CAT-6 Ethernet cables sits where the BitFenix removable 3.5″ hard drive rack would otherwise go. We also moved the 120mm fan to a higher position; the Intel Atom C2750 is a very cool running chip, but the components still need at least some airflow.

Mini Cluster in a Box V2 Internal View

Here is the stack of three Intel Atom C2750 platforms. The top two are Supermicro A1SAi-2750F platforms, with the bottom being an ASRock C2750D4I. Notably, these represent the platforms currently available for this type of project. The Supermicro platforms offer four network ports each, so we were able to connect them both to the switch and directly to the other units. All three motherboards had at least two data network ports connected, as well as their IPMI NICs.

Another change here was that we used readily available PicoPSUs to power the machines. This was the simplest way to get everything running given our space constraints. An enterprising company could certainly wire a single PSU to all three motherboards, as one PicoPSU 150XT unit can easily handle the cluster’s total power consumption.
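As a quick sanity check on that claim, here is a back-of-envelope sizing sketch. The per-node and per-SSD wattages below are rough assumptions (not measurements from this build) chosen to be consistent with the cluster-level figures measured later in the article:

```python
# Hedged PSU sizing sketch: per-node and per-SSD draws are assumptions.
PICO_PSU_RATING_W = 150   # PicoPSU 150XT continuous rating
NODES = 3
EST_NODE_LOAD_W = 35      # assumed full-load draw per Atom C2750 board
SSD_COUNT = 3
EST_SSD_LOAD_W = 2        # assumed per-SSD draw under load

total_load_w = NODES * EST_NODE_LOAD_W + SSD_COUNT * EST_SSD_LOAD_W
headroom_w = PICO_PSU_RATING_W - total_load_w

print(total_load_w)  # 111
print(headroom_w)    # 39
```

Even with these pessimistic per-node estimates, a single 150W unit leaves comfortable headroom.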

Mini Cluster in a Box V2 Platform Stacks

In terms of networking, this build required a bit of creativity. 16-port switches were physically too large, so a compromise was made. A 5-port Netgear GS105 v4 fits neatly into a 3.5″ drive bay and provides IPMI networking for the existing nodes plus a potential future node. An 8-port TP-LINK gigabit switch fits between the hard drive cage and the front chassis mesh with a little bit of maneuvering. One of the switch ports is routed to the rear of the chassis to provide connectivity to the external network.
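The port budget on both switches works out with almost nothing to spare. A quick tally, using the node and port counts described above (the assumption here is two data ports per node on the data switch, as in this build):

```python
# Port-count tally for the two in-chassis switches.
GS105_PORTS = 5      # Netgear GS105 v4: IPMI network
TPLINK_PORTS = 8     # TP-LINK gigabit switch: data network
NODES = 3

ipmi_used = NODES + 1          # one IPMI NIC per node + one reserved for a future node
data_used = NODES * 2 + 1      # two data ports per node + the rear-panel uplink

gs105_free = GS105_PORTS - ipmi_used
tplink_free = TPLINK_PORTS - data_used

print(gs105_free)   # 1
print(tplink_free)  # 1
```

One spare port on each switch, which is why a fourth node would force some re-cabling on the data side.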

Mini Cluster in a Box V2 Networking

On the opposite side of the chassis one can see the expansion slots for each platform and some additional cable routing (not pretty, I know, but this is still a cluster being added to).

One major call out is the piece of red electrical tape under the bottom motherboard. The motherboards are rotated 180 degrees so that the rear of each board faces the front of the chassis. That change means each board aligns with only two of the four mITX mounting holes. Electrical tape covers the two unused mounting points to provide insulation.

Mini Cluster in a Box V2 Expansion Side

In the 5.25″ expansion bay we added an Icy Dock 6-in-1 2.5″ hot-swap cage. This easily provides six drives to be split among the three nodes. Further, the chassis has on-door mounting and an additional unused 3.5″ mounting spot for more storage capacity.

Mini Cluster in a Box V2 Icy Dock

Power is a mess. We are using three small power bricks in the standard PSU cutout, along with the TP-Link and Netgear switch power adapters.

Mini Cluster in a Box V2 Power

We are still looking for a clean power solution, such as a right-sized power strip, so that a single power plug and a single Ethernet plug make the Mini Cluster in a Box V2 simple to set up.


This build was significantly more practical than the initial build, which utilized an Intel Atom S1260 and two Raspberry Pi nodes. In terms of power consumption, we see around 55-60W at normal idle. Under full load, after some time for heat soak, we generally hit around 112W with three SSDs for storage. That is certainly in line with the design goals.
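The density math here is what makes the Avoton platform interesting. Using the figures measured above against the typical per-U power allotment:

```python
# Density math using this article's measured figures.
RACK_BUDGET_W = 120      # common colocation allotment: ~1A at 120V per rack unit
MEASURED_LOAD_W = 112    # whole cluster, full load, heat-soaked, three SSDs
NODES = 3

per_node_w = MEASURED_LOAD_W / NODES
fits_in_one_u_budget = MEASURED_LOAD_W <= RACK_BUDGET_W

print(round(per_node_w, 1))   # 37.3
print(fits_in_one_u_budget)   # True
```

Roughly 37W per node under load, with the entire three-node cluster plus two switches still inside a single rack unit's power budget.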

In the end, we crammed three nodes (in some applications node count matters more than individual node performance), up to 96GB of RAM (we used only 64GB for proof-of-concept purposes), 10 data NICs, just about 1.5TB of storage capacity, 24 onboard SATA ports, an IPMI network, and two switches into a relatively small and portable form factor. The best part: adding a fourth mITX platform is not out of the question (and is, in fact, a project).
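For the curious, those headline totals reconcile against the individual boards. The per-board NIC, SATA, and RAM figures below are from the vendors' published specifications as I understand them, so treat them as assumptions rather than measurements from this build:

```python
# Reconciling the headline totals against per-board vendor specs (assumed figures).
boards = {
    "Supermicro A1SAi-2750F #1": {"data_nics": 4, "sata": 6,  "max_ram_gb": 32},
    "Supermicro A1SAi-2750F #2": {"data_nics": 4, "sata": 6,  "max_ram_gb": 32},
    "ASRock C2750D4I":           {"data_nics": 2, "sata": 12, "max_ram_gb": 32},
}

data_nics = sum(b["data_nics"] for b in boards.values())
total_nics = data_nics + len(boards)   # plus one dedicated IPMI NIC per board
total_sata = sum(b["sata"] for b in boards.values())
max_ram_gb = sum(b["max_ram_gb"] for b in boards.values())

print(data_nics, total_nics, total_sata, max_ram_gb)  # 10 13 24 96
```

That is where the 13 NICs in the title come from: 10 data ports plus the three dedicated IPMI ports.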

The major thing this proof of concept showed is that 1A at 120V per 1U is now the realm of a small cluster rather than a single system. Given the Intel Atom C2750’s performance, we can now fit a cluster of servers, networking included, within a small form factor and a low power consumption threshold.

Certainly there are things that can be improved. The networking situation is sub-optimal; realistically, an Avoton node running a pfSense VM could be used instead of an additional network switch, which would save some power and provide better management capabilities. On the power side, a right-sized power strip is desperately needed, and better still would be a small battery backup unit. The chassis is significantly too large, but it was easy to work with. Still, as proofs of concept go, this is just about what we wanted to see.

Head over to the forum post on this cluster if you have ideas and to see more information about the build.

About the Author

Patrick Kennedy

Patrick has been running ServeTheHome since 2009 and covers a wide variety of home and small business IT topics. For his day job, Patrick is a management consultant focused on the technology industry and has worked with numerous large hardware and storage vendors in Silicon Valley. The goal of STH is simply to help users find information about basic server building blocks. If you have any helpful information, please feel free to post on the forums.



    Do all of the power supplies output 12V? If so, there are many compact 12V PSUs that can be had, even ones designed to fit in a standard 5.25″ drive bay. You would simply trim the power cables and attach them to the screw terminals on the PSU. (Or you could get something like this: )

    Point is, if they actually are all a standardized voltage, your power supply options really open up.

    VM admin

    You need to get this into a 1U. Easily enough space. Wonder why HP doesn’t just build this or a 2U 2A (1A EU) version???


    I am really interested in a 1U build too as it is more practical in a server environment.
    Can you do a build and blog on that? Would be great!


    There should be a cheaper and quieter way than going for a Casepro B9ITX. That is 16 mini-ITX boards in a 5U case, and with 40mm fans, which are never silent in my experience.

    I would rather go for a cheap and silent (80mm fans) setup of 4x mini-ITX in a 2U case with one power supply, with all connections on the back (so a normal switch in the rack can be used).


    Borne to do it

    Great project. That 5U linked in the comment above is way too big. May as well get a 3U micro cloud that has higher density at that point and simplify power supplies.

    Love the project. Also want to see a 1U or 2U rackmount.


    For those who stay with a desktop case, see here a tower with 14 mini itx, power and switch in 1 case:

    A 1U case for 2 mini-ITX blades is already on the market, but these still use their own PSUs and annoying 40mm fans, so there should be a cheaper (less PSU-redundant) solution.

    Besides, 2U cases are usually cheaper than 1U cases for the space you get.


    The SuperMicro boards have a separate 12V input that you can use instead of an ATX power supply. I don’t know if the ASRock does too, but assuming it does, you could always use something like this instead of the three separate bricks. With a little searching around you could probably find one that delivers both 12V and 5V so that you can power the SSDs too. Then you just have to make sure your internal switches run off either 12V or 5V supplies (unfortunately, a lot of them use 9V or 19V inputs…).


    These HDD-sized AC/DC power supplies also work; pick your own based on load:


    1. Efficient work and low temperature
    2. Comes with the functions of overvoltage protection, short circuit protection and overload protection
    3. Constant voltage output to ensure the stability of the power supply


    This is a stupid article. I’m running 96GB DDR3 1600 overclocked to 1640 on a micro ATX Rampage III Gene with a X5690 @ 3.90 GHz and it destroys this setup in everything. What a ridiculous article!

      Morten Nielsen

      You obviously don’t understand the concept. Comparing a serious low-power setup to an overclocked workstation speaks volumes. Go troll somewhere else, which I can see you also do.

    Morten Nielsen

    If you find you don’t need one of the boards + RAM anymore, you’re welcome to contact me.

Leave a Response

