Mid-range DIY Storage Server Buyer’s Guide, December 2010


Having recently published the high-end home/ small business December 2010 buyer’s guide, I received a lot of feedback requesting mid-range and low-end guides. I define the mid-range as systems with a minimum of six and a maximum of fourteen drives; beyond fourteen drives, a 4U storage chassis becomes cost effective. Furthermore, since a single add-in or onboard 8-port controller can handle drives seven through fourteen, this seemed like a natural cut-off point. A few points before starting:

  • First, I consider hot-swap drives, either tray-less or with trays/ caddies, mandatory on a storage build. No doubt, skipping the trays leads to significant cost savings, but maintaining drives without hot swap trays stinks. Quite a few studies peg drive AFR in the 5% range, and higher for the first year, so there is a good chance that several drives will fail in a fourteen drive system in the first three years, oftentimes at the worst possible time (see the sketch after this list for rough odds). Simply disengaging a locking mechanism is much easier than removing case panels and unscrewing drives.
  • Second, at fourteen drives I would start to consider a redundant PSU for home use. For businesses, I think redundant PSUs are mandatory (as well as hot swap drives).
  • Third, this guide includes a higher-end mid-range build as well as a lower-end mid-range build, both capable of connecting at least fourteen drives. The separate low-end guide will focus on 2-6 drive systems.
  • Fourth, I think a mid-range server should, at minimum, be able to run a few operating systems in virtual machines: not as many as the high-end servers, but at least one or two small test virtual machines.
  • Finally, this guide will not include drive selection. 2TB hard drive prices are now in the $60-100 range, and a generational shift is underway as 3TB drives begin shipping in quantity in early 2011.
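
Rough math backs up the point on hot swap. Below is a minimal back-of-the-envelope sketch, assuming a flat 5% AFR and independent failures; both are simplifications (first-year rates run higher and failures can cluster), so treat the output as a loose lower bound rather than a prediction.

```python
# Odds that at least one drive fails over a server's early life.
# Assumes a flat 5% AFR and independent failures; real first-year
# failure rates are higher, so these are loose lower bounds.

def p_any_failure(drives: int, years: int, afr: float = 0.05) -> float:
    """Probability that at least one of `drives` fails within `years`."""
    p_one_survives = (1.0 - afr) ** years     # one drive survives every year
    return 1.0 - p_one_survives ** drives     # complement of all drives surviving

for n in (6, 8, 14):
    print(f"{n} drives over 3 years: {p_any_failure(n, 3):.0%} chance of a failure")
# 6 drives over 3 years: 60% chance of a failure
# 8 drives over 3 years: 71% chance of a failure
# 14 drives over 3 years: 88% chance of a failure
```

Even with these generous assumptions, a fourteen drive system is far more likely than not to see at least one failure in three years, which is exactly when easy drive swaps pay off.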

CPU Selection

The Intel Xeon X3440 is currently my favorite mid-range CPU. With four cores plus hyper-threading, the X3440 is a very strong LGA 1156 CPU that sacrifices some clock speed for a significantly lower cost versus its higher clocked brethren. I also considered the slightly less expensive Intel Core i3-530 and i3-540, which will boot with unbuffered ECC DIMMs, but the Intel rep I contacted said that the i3-530 does not, in fact, support ECC functions. There are conflicting reports on this for sure; ECC UDIMMs will work in an i3-530/ i3-540 system, but without the ECC functions enabled, one may as well be using non-ECC DIMMs.

Intel Xeon X3440

AMD makes a strong showing in this area but falls short solely because socket AM3 motherboards with IPMI 2.0 and KVM-over-IP are few and hard to find. One thing I would like to see is a socket AM3 platform with server features that takes advantage of AMD’s inclusion of ECC support on many consumer level CPUs.

Motherboard Selection

Here I will bifurcate the recommendation for one major reason: FreeBSD compatibility. While the LSI SAS 2008 chipset has become a favorite, it is not supported by the current stable releases of FreeBSD (newer versions do add support). Common to both picks is the requirement for IPMI 2.0 and KVM-over-IP. Fourteen drive systems generate enough vibration and noise that they will oftentimes be locked in an equipment closet, making remote management a great feature. For the higher end build I recommend the Supermicro X8SI6-F. As the review explains, the X8SI6-F combines an LSI SAS 2008 controller, IPMI 2.0/ KVM-over-IP, and dual Intel NICs, making it a very well integrated server board.

X8SI6-F versus X8SIL-F Sizes

For the lower end mid-range build I recommend the Supermicro X8SIL-F. It was a very close call between this and an AMD-based system, but the lack of a good selection of socket AM3 motherboards with IPMI 2.0/ KVM-over-IP makes that recommendation difficult. The X8SI6-F does have some distinct advantages, mostly due to its larger size, including an additional internal USB header, onboard 6.0gbps SAS, and two extra DIMM slots that can be populated when using RDIMMs and a compatible Xeon CPU.

RAID Controller/ HBA Selection

For the high-end build, the Supermicro X8SI6-F’s onboard LSI SAS 2008 controller means no additional controller is required to reach 14 drives of total storage connectivity. The lower-end mid-range build does require a controller, and here I recommend an LSI 1068e based controller.

Intel SASUC8I installed in a Supermicro X8SIL-F and connected to a HP SAS Expander

The main reason for this recommendation is that the LSI 1068e works well with many types of systems, including FreeBSD/ FreeNAS, which is an attractive platform in the mid-range space. If FreeBSD/ FreeNAS will not be run on the system, SAS 2008 based cards like the LSI 9211-8i are probably the way to go.
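
Since the onboard-plus-HBA port math drives both the fourteen drive ceiling and the controller recommendation, here is a small sketch of it. The six onboard SATA ports are an assumption typical of boards in this class (the X8SIL-F included); check your specific board’s manual.

```python
# Port budget behind the 6-14 drive mid-range window: onboard SATA
# first, then eight-port HBAs (e.g. Intel SASUC8I or LSI 9211-8i).
# Port counts are assumptions for illustration.

ONBOARD_SATA = 6   # typical onboard SATA port count for this class
HBA_PORTS = 8      # ports per add-in SAS/SATA HBA

def hbas_needed(drives: int) -> int:
    """Add-in 8-port HBAs required once onboard ports are exhausted."""
    extra = max(0, drives - ONBOARD_SATA)
    return -(-extra // HBA_PORTS)  # ceiling division

for n in (6, 7, 14, 15):
    print(f"{n} drives: {hbas_needed(n)} add-in HBA(s)")
# 6 drives: 0 add-in HBA(s)
# 7 drives: 1 add-in HBA(s)
# 14 drives: 1 add-in HBA(s)
# 15 drives: 2 add-in HBA(s)
```

Drives seven through fourteen all fit on a single eight-port controller; the fifteenth forces a second controller or a SAS expander, which is why fourteen makes a natural ceiling for the mid-range.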

Memory Selection

In general, and especially with systems containing more than six drives, I highly recommend using ECC memory. The price premium for unregistered ECC DIMMs is relatively low compared to total component cost. I have been using Kingston KVR1333D3E9SK2/4G (4GB ECC UDIMM) kits for a while now and they work well. Both configurations should work well with 8GB of RAM, so I will use two kits in the spec; one can always opt for higher capacity kits. On the X8SI6-F, one can also use six 2GB RDIMMs for 12GB, which provides extra flexibility when paired with the X3440 and the larger ATX form factor.

Chassis Selection

Recommending a chassis for the fourteen drive space was very difficult. For the higher end mid-range system I had a hard time choosing between the Supermicro CSE-836TQ-R800B, a 16 drive 3U rackmount chassis, and the SC933T-R760B, a 15 drive 3U rackmount with a 760w triple redundant PSU. When purchasing a system with a redundant PSU, it is often cost-effective to buy the chassis with the PSU included. The 836TQ-R800B is a bit oversized for this build since it holds 16 drives, two more than the onboard controllers can handle. The Chenbro RM41416T2-B-650R, a slightly larger 4U case with sixteen hot swap drive bays and a 650w redundant power supply, is about $80 less expensive than the Supermicro CSE-836TQ-R800B but well over $150 more than the SC933T-R760B. In the end, the Supermicro SC933T-R760B got the final selection because it was the cheapest solution that met the requirements (and exceeded them with the triple redundant PSU).

Supermicro SC933T-R760B rear view with triple redundant PSU

For the non-redundant PSU version, I was admittedly a bit lost. My first inclination was to use a large full-tower case and add hot swap 4-in-3 enclosures, which would utilize twelve 5.25″ external bays. The cost ended up in the $300-350 range. A lot of users go this route on similar systems, and it is an attractive option for building a system over time: an eight drive system can add drives and racks gradually, spreading out purchase costs. On the other hand, at this point the Norco cases really provide a lot of value. I ended up recommending the RPC-4220 over the RPC-4020 because, after owning both for a long time, I can say the RPC-4220 has much easier hot swap mechanisms.

Norco RPC-4220 Front

Power Supply Selection

For the higher end mid-range build, the redundant PSU is included with the chassis. In the high-end buyer’s guide I noted that my current non-redundant preference is the Corsair AX series Gold certified PSUs, and here I think the AX750 is a good choice. A quality Seasonic single-rail PSU, such as the X-750, is a strong alternative. The 750w rating is fine for fourteen drives and an X3440, so I will not recommend any larger versions. Both the Corsair AX750 and Seasonic X-750 feature fully modular connectors, which is important in non-redundant servers: a failed unit can be unplugged at the PSU, replaced, and the cables plugged into the new unit much faster than with a non-modular PSU.

Corsair AX750
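
To show why the 750w rating is comfortable for fourteen drives and an X3440, here is a rough startup power estimate. Every wattage below is my own illustrative assumption (typical 3.5″ spin-up draw, the X3440’s 95w TDP, a rough allowance for everything else), not a measurement.

```python
# Worst-case startup draw for the fourteen drive, X3440 build.
# Drives pull the most power while spinning up. All figures are
# illustrative assumptions, not measurements.

DRIVES = 14
SPINUP_W = 30   # rough peak draw per 3.5" drive during spin-up
CPU_W = 95      # Intel Xeon X3440 TDP
BASE_W = 60     # motherboard, RAM, HBA, and fans (rough allowance)

peak = DRIVES * SPINUP_W + CPU_W + BASE_W
print(f"Estimated worst-case startup draw: ~{peak}w")   # ~575w
print(f"Headroom on a 750w unit: ~{750 - peak}w")       # ~175w
```

That margin lines up with the experience in the comments below, where 650w multi-rail units could not reliably boot fourteen or more drives.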

A consideration for a lot of people will be noise, and the Corsair PSU is much quieter than a triple redundant Supermicro PSU, which sounds like a relative of a jet engine. Combined with the Norco RPC-4220 and an optional 120mm fan bracket for that case, the non-redundant PSU build can be made decently quiet. I have a version of this build (visible in the Intel SASUC8I picture above) and the decreased volume is welcome in a home setting.

Final Configuration 1 (non-redundant PSU)

  • CPU: Intel Xeon X3440
  • Motherboard: Supermicro X8SIL-F
  • Controller: LSI 1068e based 8-port HBA (e.g. Intel SASUC8I)
  • Memory: 2x Kingston KVR1333D3E9SK2/4G (8GB ECC UDIMM total)
  • Chassis: Norco RPC-4220
  • Power Supply: Corsair AX750

Approximate Final Cost (without drives): $1180

Final Configuration 2 (redundant PSUs)

  • CPU: Intel Xeon X3440
  • Motherboard: Supermicro X8SI6-F (onboard LSI SAS 2008)
  • Memory: 2x Kingston KVR1333D3E9SK2/4G (8GB ECC UDIMM total)
  • Chassis/ Power Supply: Supermicro SC933T-R760B (760w triple redundant PSU)

Approximate Final Cost (without drives): $1400

Conclusion

Of the three guides (high-end, mid-range, and low-end), I found this one the most difficult to write by a large margin. No manufacturer caters specifically to the fourteen (max) drive market, most likely because SAS controllers come in multiples of four ports. Furthermore, an eight drive system sits very close to the realm of a lower-end system, while a fourteen drive system may be overkill for many users. As with any of these buyer’s guides, feel free to modify the build to best suit your needs. Deciding to build a bare bones mid-range server can save a lot of money here: one could use a cheap Phenom II X2 or X4 CPU with a consumer level motherboard that supports ECC functions but not IPMI 2.0, skip the hot swap drive bays, and save four hundred or more dollars over the non-redundant PSU build. Doing so sacrifices serviceability, though, which becomes important once a user has 20TB+ of data stored in a system. The bottom line is that the mid-range offers a lot of flexibility when it comes to components.

15 COMMENTS

  1. Bah, out of range/overkill for home environments. I’d really like to see an article from you, Patrick, on a mid/low-end HOME server, nothing as fancy as you’re doing here. For example:
    1. Case: Nothing rack mountable. How often do you see a home with a rack mount in the basement? (rhetorical question)
    2. Hot swap bays should really be optional, I mean how hard is it to open up a screw-less case and pop out a hard drive that’s installed on a tool-less rail?
    3. RAID cards: NOT a must. Most homes are wired with Cat5e/6 at best, but a lot of times it’s Cat5, so we don’t necessarily need to worry about speeds >110MB/s. Unless you’re hooking up more than 6 drives (or 8 on some MBs, 6x SATA2, 2x SATA3) and need more SATA ports, a shiny RAID card is overkill.
    4. No advanced features like IPMI 2.0 and KVM over IP.

    Something that’s not overkill, has quality parts and a reasonable price.

  2. Paul,

    Thanks for the feedback. I think the low-end guide will be closer to that. The reason for IPMI 2.0 and hot swap trays is because it saves a lot of time.

    I actually run most of my low-end LSI cards in IT mode (which makes them basic HBAs). If you look at the cost of a Supermicro X8SI6-F, it is basically that of the component parts combined, minus the IPMI 2.0. The option to run RAID 1 at the card level makes life easy if people are using something like Vail and want to keep redundancy. These LSI cards are by no means fancy, speed-oriented cards.

    Finally, on the rackmount issue, I have done builds using standard tower cases, and they generally work well with hot swap enclosures, but going that route generally costs more than a purpose built enclosure. I spent a lot of time looking at tower options with hot swap racks, and they either did not accommodate 14 drives or ended up being more expensive when they did.

    I think the low-end guide may be what you are looking for and I think that will address a lot of your comments. Feel free to e-mail me with your suggestions.

  3. I’ve had a Norco RPC-4020 for the last year. While it seems to be the cheapest chassis you can get that supports that many hotswap bays, I think you “get what you pay for”. In other words, it definitely feels cheap to me. To be fair, I haven’t had any problems with mine, although I do baby it, because most parts feel somewhat flimsy to me.

    If you put in a bit more effort and/or get creative, you can get something a bit more sturdy without the massive price jump of the Supermicro/Chenbro chassis. My other chassis is a Compucase/hec RA466A00 4U rackmount case. It was cheap (around $150 US IIRC), but actually fairly solid. What’s interesting is that it has NINE 5.25″ drive bays. Each group of three bays is the perfect size for a SuperMicro CSE-M35T 5-drive hotswap enclosure. I have two of these enclosures, supporting 10 drives. I could easily throw in a 3rd and support 15 drives.

    Those SuperMicro hotswap enclosures are really nice: (1) they use 92mm fans, whereas most competing products use 80mm fans or smaller. You can also easily swap out the fan for something quieter; with 5400 RPM drives, there’s no need for the Sanyo Denki jet engine fan that comes with the enclosure. Also, (2) we have the 24-bay SuperMicro SC846E1-R900B at work, and it has noticeably flimsier drive trays than the ones in my CSE-M35T enclosures at home.

    Unfortunately, that Compucase/hec RA466A00 doesn’t appear to be available any more. However, the idea is still valid: you can generally fit the CSE-M35T in any case that has three 5.25″ bays stacked in a column. My RA466A00 is nice because it has a steel “wall” separating the drive area from the motherboard area. The wall has cutouts for three 120mm fans. Many server chassis use 92mm fans or smaller in this space. Bigger fans can generally be run more slowly for a given air flow, resulting in quieter operation.

    Finally, a lot of rackmount chassis don’t actually require being mounted in a rack. Some even let you use them as a tower (aka “pedestal”). But, FWIW, I have a StarTech 25U 4-post rack (part number: 4POSTRACK25) in my basement. 🙂

    If anyone’s interested, I did a fairly extensive writeup of my server on Silent PC Review. In the “General Gallery” forums, look for a post titled “Quiet(ish) hotswap file server [lots of pics]” from 12/28/2007. Every component has now changed, but the chassis info is still accurate.

  4. IPMI is very useful when something goes wrong with Windows and it becomes impossible to connect to the server by conventional means; it has saved a lot of swearing a couple of times.

    I have Supermicro CSE-M35TQ 5-in-3 modules in my server. I would suspect that the cost of three CSE-M35TQ modules plus a case would be similar in price to a Norco, but I wanted something a lot smaller than a Norco, plus my other computers use the Supermicro 5-in-3s, making drive swapping easier.

    Having to take a case apart to change or add a drive is a pain in the bum, plus one risks catching a cable whilst inside. Modules are essential for any server with a large number of drives.

    Does a server with so few drives need such a beastie PSU?

  5. Matt: Excellent points. The RPC-4220’s tray mechanism is much less flimsy than the RPC-4020’s, to the point where I will only use the 4220 and 4224 at this point. The $50 difference in price between the 20-bay models is worth it.

    David: That is precisely why KVM-over-IP rocks. When reboots go wrong, Windows or any other OS for that matter can be recovered through the remote management capabilities. It is something I appreciate after building tons of these machines that are located all over the place, especially since I will have over 100k flight miles this year.

    The reason for the beefy PSUs is actually that I have had multi-rail PSUs in the 650w range fail to boot a similar configuration (12-13 drives were fine, 14 sometimes would not boot, 15+ would not POST). Granted, you will likely spend the vast majority of time well below the max power output, but I do like the idea of running components under less load in environments where I don’t want things to fail.

  6. Hi Patrick,

    This is the best article I have seen so far about DIY storage servers. Could you publish the low-end guide as well? I would like to do this project on a low budget, supporting at least 15TB of data with hot swappable drives.

  7. Keerthi: Thank you for the compliment. I am planning a low-end version in the next week which will be focused on 2-6 drives. It is almost done, just busy at work.

  8. I am still delaying the build of a new home storage server because of new technologies coming out soon. I am especially looking forward to UEFI, which not only makes booting much faster but also supports 2.19TB+ discs. I would love to be able to use 3TB discs without additional controllers. Asus and MSI are busy developing UEFI motherboards. Does Supermicro have UEFI plans as well? If yes, when can new Supermicro UEFI motherboards be expected? Early Q1 2011, together with the new Intel lineup?

    I am waiting for several things (new Intel CPUs + a Supermicro UEFI motherboard + lower prices for 3TB discs) without having any clue when this will actually be reality. Am I waiting for the right reasons?

  9. I have not asked Supermicro about their UEFI plans, but I would suspect they have them. The big thing with Sandy Bridge for home servers is performance per watt, but the other big piece is getting IPMI 2.0 like features. Intel’s vPro already lets you do KVM-over-IP among other things on the Q57 platform, but the add-in BMC guys have better solutions at this point. It may be different a year from now though.

    Mass 3TB shipments are expected in Jan-Feb 2011, so prices should come down after that point. Personally, I am excited for 3TB disks, but the need for storage now, plus the fact that reliability seems to have gone down from the 1TB (SATA) generation to the 2TB (SATA) generation, makes me want to hold off on buying tons of 3TB drives right away. I tend to boot from smaller drives (usually 2.5″ solid state) anyway, so the larger disks are less of an issue.

  10. Thanks for the guide!

    What sort of power would you expect the more expensive of these builds to use when idling?

    S.

  11. Steve: 50-60w is likely depending upon fan speeds and how low of a power state the drives are allowed to get to. My first inclination was on the lower side, but there are still quite a few variables out there.

  12. Patrick

    I am in the process of building a mid-range server using the Supermicro CSE-836TQ-R800B chassis. Do you have a recommendation for a rackmount cabinet, preferably on the smaller side such as 14-15U capacity? I’ve been looking for the Supermicro CSE-RACK14U mini-cabinet but availability seems to be a problem.

  13. Al: Great chassis! On the rack side, at 14-15U, if you are looking for something inexpensive, maybe try craigslist (not sure where you are, but there are quite a few listings near me). You can generally see the units on-site and decide then. I ended up with a nice enclosed 27U rack, brand new and assembled, for $275. A bit larger, but very popular, is the SR4POST25 25U rack: bigger and open frame, but under $300 with free Amazon Prime 2-day shipping. The CSE-RACK14U is a nice unit, but you will probably have to hunt for availability.
