High-End DIY Storage Server Buyer’s Guide, December 2010

A question I am constantly asked is what makes a good NAS build for various usage scenarios. Based largely on my experiences with things like The Big WHS and reviewing other components for this site, I have put together two power-user builds below that provide a strong starting point for someone looking to build a 20+ drive home or small business server. One thing I have learned is that building twice is generally more expensive than purchasing an end-state build up front, so the builds below do not represent the absolute least expensive option possible. Instead, I tried to configure two machines that are cost optimized around quality components.

This guide is dedicated to building a high-end NAS box that will:

  • Use a single CPU socket
  • Hold 20+ hard drives
  • Run OpenSolaris-based Nexenta, VMware ESX(i), Linux, Windows Home Server, Windows Server 2008 R2 with Hyper-V, and FreeBSD storage variants, using the operating system or onboard RAID 1/10 to manage storage
  • Host multiple virtual machines so one can consolidate physical servers
  • Not use egregious amounts of power (this is strongly aided by having a single CPU system)

What this guide will NOT cover is hard drives. Frankly, 2TB hard drive prices are now in the $60-100 range, and a generational shift is occurring as 3TB drives ship in quantity starting in early 2011. Realistically, a good server of this size will either be set up with a specific class of enterprise nearline drive, or will be set up to accept commodity disks.

CPU Selection

The Intel Xeon W3550 is an LGA 1366 CPU that one might overlook at first because of its lofty 130w TDP. At idle, expect this CPU to consume a full 20w+ more than an LGA 1156 based CPU. On the other hand, at approximately $300, and with support for six DDR3 UDIMMs in triple-channel mode, the Xeon W3550 has significant advantages over LGA 1156 based CPUs, where memory capacity and bandwidth are far more limited (dual-channel versus triple-channel). Furthermore, the LGA 1366 platform has more PCIe lanes available, which makes it an inherently more expandable platform. The Xeon W3550 followed its desktop sibling, the Core i7 950, in replacing the Xeon W3530 and i7 930, so price-wise the W3550 is cheaper than the W3540 and slightly faster.

Intel Xeon W3550

This build assumes that a user will need 12GB+ of RAM and a fairly quick CPU. AMD makes many strong hex-core offerings, but with Bulldozer so close, and with fewer solid motherboard choices, this round goes to Intel.

Motherboard Selection

The Supermicro X8ST3-F, as reviewed on this site, is probably one of the better motherboards in this range. The onboard LSI 1068e is compatible with ESXi, OpenSolaris, and FreeBSD. An ample number of expansion slots, alongside onboard IPMI 2.0 and KVM-over-IP, make this a solid motherboard. Frankly, if building a system where one is investing over $1,500 in disks, there is little reason to purchase a consumer motherboard. The X8ST3-F's onboard components are widely known to be compatible with almost any operating system. One thing the X8ST3-F lacks is multiple PCIe x16 slots for those looking to build a hybrid GPU compute/NAS server. Admittedly, I have seen this done a few times where rack space is at a premium, but I would strongly advise against going this route, as heavy GPU compute puts a lot of strain on the PSU and dumps a lot of heat into the chassis.

Supermicro X8ST3-F Motherboard

RAID Controllers/ SAS Expander Selection

Here I recommend two Intel SASUC8I (LSI 1068e based) controllers; however, this is an area where people will have their own preferences. One reason I suggest two Intel SASUC8I controllers instead of the newer LSI SAS 2008 based cards is that FreeBSD support for the SAS 2008 based cards is still lacking.

Intel SASUC8I and LSI SAS3081E-R

Once that compatibility issue is fixed (or if there is no intention of using a FreeBSD derivative), moving to SAS 2008 makes a lot of sense. Another option would be to purchase an HP SAS Expander and utilize the onboard LSI 1068e controller. In some storage server cases, SAS expanders are built into the case. With discrete PCIe cards, one can pass a card directly to a given VM, which makes OS management of the storage subsystem more reliable. Alternatively, using the onboard LSI 1068e controller plus the HP SAS Expander (or an in-case expander) would free up an additional PCIe slot or two. The Intel ICH10R based controller is best suited for boot drives, and with this configuration there is some flexibility regarding where the drives reside across the various controllers.

The HP SAS Expander providing more than 24 internal ports
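
To make the slot trade-off concrete, here is a quick sketch in Python. The port counts come from the parts discussed above, while treating the expander as serving all 24 bays is my simplifying assumption, not a wiring diagram from this build:

    # Rough tally of drive connections and PCIe slots for the two wiring options
    # discussed above. Port counts match the parts named in this guide; treating
    # the HP SAS Expander as serving all 24 bays is a simplifying assumption.

    def direct_attach(onboard_ports=8, cards=2, ports_per_card=8):
        """Every bay hangs straight off a controller port; each add-in card uses a slot."""
        return {"drive_ports": onboard_ports + cards * ports_per_card,
                "pcie_slots_used": cards}

    def onboard_plus_expander(bays=24):
        """Onboard LSI 1068e feeds the expander; the expander uses its slot for power only."""
        return {"drive_ports": bays, "pcie_slots_used": 1}

    print("Direct attach:      ", direct_attach())
    print("Onboard + expander: ", onboard_plus_expander())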

Memory Selection

Both registered and unbuffered ECC DIMMs are supported by the W3550, but the X8ST3-F (based on the X58 chipset) only supports UDIMMs. With DDR3 prices falling over the past six months after holding steady for quite a while, populating 24GB of RAM is not that expensive anymore (in the $600-850 range). 12GB of CL9 ECC UDIMMs can be purchased for $230 or less these days, so going to 12GB of RAM, especially when using ZFS and/or running multiple virtual machines, is fairly inexpensive. I have been purchasing quite a few Kingston KVR1333D3E9SK2/4G unbuffered ECC kits, which are 1.5v 1333MHz CL9 DDR3 DIMMs, or the KVR1333D3E9SK3/6G variant, which is the triple-channel 6GB kit. Frankly, at the current cost of 12GB of UDIMMs, there is absolutely no reason to get non-ECC memory for a server.
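
To put rough numbers on that, here is a minimal sketch of how I think about splitting RAM between guests and the ZFS ARC. The per-VM and host-overhead figures are illustrative assumptions, not measurements from this build:

    # Rough RAM budget for a box running ZFS plus a handful of VMs. The per-VM
    # and host-overhead numbers are assumptions; tune them for your own workload.

    def arc_left_over_gb(installed_gb, vm_count, gb_per_vm=2, host_overhead_gb=2):
        """RAM left for the ZFS ARC / file cache after the host and guests take their share."""
        return installed_gb - host_overhead_gb - vm_count * gb_per_vm

    for installed in (12, 24):
        print(f"{installed}GB installed, 4 VMs -> ~{arc_left_over_gb(installed, 4)}GB left for ARC")
    # 12GB installed, 4 VMs -> ~2GB left for ARC
    # 24GB installed, 4 VMs -> ~14GB left for ARC

Under those assumptions, 12GB is a reasonable floor for a light VM load, and 24GB leaves a much healthier cache once several guests are running.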

Chassis Selection

Two possible chassis define the two high-end builds. The first, for a non-redundant PSU build, is the Norco RPC-4224. This chassis is a solid evolution of the RPC-4220, although I can imagine some users opting for the RPC-4220 if they are using SSDs alongside 3.5″ drives, because the RPC-4220 has additional room for 2.5″ drives.

Installing disks into the Norco RPC-4220 DAS/ SAS Expander Enclosure

A strong alternative here is purchasing something like the Supermicro SC846E1-R900B (a 24-bay case with 900w redundant PSUs and an onboard expander), which runs around $1,200, or basically the cost of an expander, redundant PSU, and large case purchased separately.

Supermicro SC846E1-R900B Picture

This is certainly a steep price for a chassis, but on balance, it is not far out of line with purchasing everything separately. Here is a picture of the Supermicro SAS2 backplane:

Supermicro BPN-SAS2-846EL2 Front
Supermicro BPN-SAS2-846EL2 Back

As one can see, this drastically lowers the number of cables needed compared to an HP SAS Expander, or better yet, the individual SATA cables that would be used in the Norco RPC-4020.

Power Supply Selection

This is a major question for any builder. I would offer that at the 40TB+ raw storage level, one probably wants a redundant PSU, preferably a 2+1 unit. Given the expense of redundant units, for a large storage server requiring a 700w PSU the PSU alone can cost $400-600. This makes purchasing a higher-end case with a redundant PSU an attractive option. Going with a non-redundant PSU, I can say that I have started to move from Seasonic based PSUs to the Corsair Professional Series Gold certified PSUs, which come in 750w, 850w, and 1200w flavors (the 1200w version is easily overkill for a home server). The efficiency is good, and the single-rail quality and construction are equally nice. They also mount well in the Norco chassis.

Corsair AX750
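
For a ballpark on wattage, a minimal sketch follows. The per-drive spin-up and base system figures are typical values I am assuming for 7200rpm 3.5″ drives and this platform, not measurements, and the 30% headroom factor is a personal preference:

    # Ballpark PSU sizing for a 24-drive build. Per-drive spin-up draw and the
    # base system figure are assumed typical values, and the headroom factor is
    # a personal preference rather than a rule.

    def psu_estimate_w(drives, spinup_w_per_drive=25, base_system_w=250, headroom=1.3):
        """Estimate peak power-on draw, assuming every drive can spin up at once."""
        peak_w = drives * spinup_w_per_drive + base_system_w
        return round(peak_w * headroom)

    print(psu_estimate_w(24))                           # ~1105W if all drives spin up together
    print(psu_estimate_w(24, spinup_w_per_drive=10))    # ~637W if the controller staggers spin-up

Under those assumptions, a 750w-850w unit is comfortable when the controllers stagger drive spin-up, and the larger units buy margin if they do not.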

If one wants a redundant PSU, again, strong consideration should go to something like the Supermicro SC846E1-R900B, since it is often about the same price to purchase the case and redundant PSU together, and the cabling tends to be cleaner in these packaged solutions.

Required Cables

To use the onboard LSI 1068e controller with the Norco RPC-4220 or RPC-4224, one will likely need two reverse breakout cables that take four discrete ports on the motherboard and turn them into the SFF-8087 connector needed by the backplane. Utilizing the Supermicro SC846E1-R900B, one does not require additional SFF-8087 to SFF-8087 cables because the chassis has the SAS expander integrated with the backplane.
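
For the direct-attach Norco RPC-4224 build, the cable math works out as sketched below. The counts assume the RPC-4224's six backplane rows (one SFF-8087 each), eight discrete ports on the onboard LSI 1068e, and two SFF-8087 connectors per SASUC8I card:

    # Cable tally for the direct-attach Norco RPC-4224 build. Assumes six
    # backplane rows with one SFF-8087 each, eight discrete ports on the onboard
    # LSI 1068e, and two SFF-8087 connectors per Intel SASUC8I card.

    backplane_rows = 6                       # RPC-4224: 24 bays, 4 per row
    onboard_discrete_ports = 8               # onboard LSI 1068e
    sasuc8i_sff8087_ports = 2 * 2            # two cards, two connectors each

    reverse_breakout = onboard_discrete_ports // 4          # 4 discrete ports -> 1 SFF-8087
    straight_sff8087 = min(sasuc8i_sff8087_ports, backplane_rows - reverse_breakout)

    print(f"{reverse_breakout} reverse breakout + {straight_sff8087} SFF-8087 to SFF-8087 cables")
    # -> 2 reverse breakout + 4 SFF-8087 to SFF-8087 cables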

Optional Additional NIC Selection

Depending on the I/O load of the machine, additional NICs may be added. The motherboard comes with two onboard NICs and has several PCIe x4 and x8 slots that can be used to add Intel NICs. This is going to be subjective, as the number of NICs required beyond the two onboard (plus one management NIC) depends on the application. Many users do not need more than two GigE links for throughput purposes, so I am omitting extra NICs from the builds.
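
As a quick way to sanity check NIC count against a throughput target, a small sketch follows, assuming roughly 110MB/s of usable payload per gigabit link (a typical real-world figure, not a measurement from this build):

    import math

    # Estimate how many GigE links a target aggregate throughput needs, assuming
    # a typical ~110MB/s of usable payload per gigabit link.

    def gige_links_needed(target_mb_per_s, usable_mb_per_s_per_link=110):
        return math.ceil(target_mb_per_s / usable_mb_per_s_per_link)

    print(gige_links_needed(200))   # 2 -> the onboard NICs cover it
    print(gige_links_needed(450))   # 5 -> time for an add-in dual/quad port card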

Final Configuration 1 (non-redundant PSU): Intel Xeon W3550, Supermicro X8ST3-F, Kingston unbuffered ECC DDR3 UDIMMs, two Intel SASUC8I controllers, a Norco RPC-4224, a Corsair AX750, and the breakout/SFF-8087 cables noted above.

Approximate Final Cost (without drives): $1,800

Final Configuration 2 (redundant PSUs): Intel Xeon W3550, Supermicro X8ST3-F, Kingston unbuffered ECC DDR3 UDIMMs, and a Supermicro SC846E1-R900B chassis (integrated SAS expander plus redundant 900w PSUs) fed from the onboard LSI 1068e via reverse breakout cabling.

Approximate Final Cost (without drives): $2,000 US

Conclusion

As one can see, using redundant PSUs is a major cost-impacting item. Most of the substitutions, such as using an HP SAS Expander or different memory kits, have a <$50 impact on price. This is, of course, a system meant for a heavy workload, and is by no means meant to be a simple NAS box. The assumption here is that one is running multiple VMs, needs a lot of expandability, or has a major application running alongside the NAS function. Realistically, for a single CPU configuration, the above configurations do cost a lot, but at the same time building once is much cheaper than building multiple times. There is, of course, room to customize with more NICs, different size power supplies, different cases, more memory, different controllers, and so forth. With that being said, the above represents a solid base configuration.

In future guides we will look at mid-range and lower-end systems which may be more suited to a lot of users. One thing to remember with the above is that drives are inexpensive, but buying them in quantity will cost quite a bit.

16 COMMENTS

  1. Looking forward to the mid/low-end systems.

    I have a question…
    Let's say I build a 5x 2TB RAID 5 or 6 array, and down the road I need to throw in a few other disks because I'm running out of space. Can a software based RAID 5 expand without destroying the array?

  2. Paulius, that depends on the implementation. Online Capacity Expansion is what you are describing. Something like ZFS cannot easily expand a RAID-Z vdev (single parity, like RAID 5). The mid/low-end systems will be up in the next few days.

  3. I also look forward to reading your thoughts on mid-range systems. Could mid-range mean <= 14 hard drives? That is, 6 on a motherboard and 8 on an HBA.

    Why have you moved away from Seasonic power supplies?

  4. Currently thinking 6-14 drives.

    The only reason for moving away from Seasonic is that I have been buying more redundant PSUs (the primary reason), and the Corsair AX series Gold PSUs have been fairly solid. I have nothing against Seasonic PSUs as they are great; they just lack redundant options.

  5. interesting exercise – but $1800 is a lot for a diskless single socket 12G machine.

    Throw in twenty 1.5TB 7200.11 data drives, two 500G boot drives, and a 40G SSD for L2ARC for another $1900. Who knows how much to revamp a closet to attenuate the noise and provide ventilation too.

    Since we're spending that much, might as well hedge your bets and upgrade the base to an X8DT3-F (dual 1366 + 16 dimms), one E5620, and 12G (3×4) ram for $250 more.

    – Solaris 11 Express supposedly utilizes the 56xx’s AES extensions with zfs encryption – w00t

    – ZFS compression would benefit from dual processors

    – ZFS de-dupe is a non-starter with 20TiB. Seriously, try ZFS compression before de-dupe. With 10TB used, the de-dupe table is anywhere between 22G (all 128k blocks) and 640G (all 4k blocks)

    – dual sockets doubles the cores and max memory.

    On the other hand, $1900 can net you a complete 6.8TiB usable ( raidz2 ) system with 40G SSD L2ARC, AMD six core 3GHz Thuban, and 16GB ecc memory. Bank the 1800 saved and buy a 10TiB raidz2 machine next year or a 13TiB raidz3 in two years.

    Norco RPC-450 – $80
    Corsair CMPSU-750AX – $170
    Asus M4A785T-M – $85
    AMD Phenom II X6 1075T Thuban – $200
    kingston KVR1333D3E9SK2/8G – $131
    Intel EXPI9301CT – $37
    AOC-SAT2-MV8 – $105
    ST31500341AS – $80
    WD5002ABYS – $80
    li-lan BZ-525 – $25
    OCZSSD2-2AGT40G – $105
    =====
    $1879 = 90 + 170 + 85 + 200 + 131 * 2 + 37 + 105 + 80 * 8 + 80 * 2 + 105 + 25

  6. hmmm, my post seems to have gone missing…

    what would your recommendation be for a redundant PSU that would fit in a standard ATX-sized hole?

    ideally one that doesn't sound like a jet engine

    a quick google search yielded the iSTAR IS500R8PD8 and IS-550R8P

    is a redundant PSU overkill for a high-end home server user?

  7. oshato: Interesting build. The primary reason for not going that route is that if one is spending that much money, typically people want hot swap drives, IPMI/ KVM-over-IP, and dual Intel NICs. I do have a Norco RPC-470, but populating hot swap cages in that chassis is expensive. Also, the RPC-4224 uses 120mm fans which tend to be less noisy. After building quite a number of these things, spending a bit more up-front on ease-of-management features makes a big difference.
    Dual CPUs are good, but the price starts going way up when one incorporates them, since dual-socket capable CPUs are typically priced at a premium and use more power (unless, of course, one opts for costlier lower-TDP parts).

    Then again, this is really meant as a starting point for a 24-disk 4U build.

    David: It really depends. Most redundant PSUs are built for rack mount servers destined for a data center and thus are fairly loud. I still have not found a quiet one.

  8. David: the Supermicro SC743T-R760B case has a very decent triple redundant power supply with hot swap SAS/SATA trays to boot – the downside is the $600 price tag and it's heavy.

    I’ve only seen ( um, heard ) one of these cases outside of a data center environment – hopefully it is a representative sample of the model.

    Personally, I’d stick to a high quality power supply that’s twice the power you need, use a quality surge suppressor ( or ups ) @ the wall, and install a household surge suppressor on your mains ( square d makes a few ).

    common power supply failure modes:

    – excessive load – spinning up 24 drives + motherboard + cpu + sas/sata/expander cards is taxing. I'd use an AX1200 in Patrick's base unit above.

    – excessive voltage/current – lightning, brownouts, et al. if you see your lights dimming regularly, consider a beefy active dual inversion ups ( apc smart-ups 1kva ) to plug your power supply into.

    – excessive heat – dust, critters, pet hair, smoke, pollen. It doesn’t matter if your hardware is in a data center, or under your desk next to your dog’s fav blanket – inspect and clean your machines frequently.

    – condensation – water kills. If you’re moving houses or just headed to a lan party – seal your electronics in plastic before moving them and wait for them to acclimate before unwrapping them.

  9. I noticed that for the Supermicro SC846E1-R900B build you quoted 2 reverse breakout cables; I think you only need one since this case is the single-expander version. It has 3 ports, 2 of them being used for cascading. I also read that the dual-expander version (not this case) is only used for redundancy, not for bandwidth (unlike the HP SAS expander).

    One question though: would the Supermicro X8SI6-F board be a better choice for this case?

  10. “In future guides we will look at mid-range and lower-end systems which may be more suited to a lot of users.”

    when will we get the lower-end guide?

    best regards
    sebastian

  11. I just read your review of the X8SI6-F board and the IT firmware guide. Does this mean the board has no support as a simple HBA out of the box unless I flash the IT firmware? Following my previous comment, if using this board with the Supermicro case without flashing, will it not work for software RAID?

  12. Sean, you can attach drives and use LSI RAID 0, 1, and 10 in the stock IR mode of the LSI SAS 2008. The reason people use the IT firmware is that IT mode turns the RAID card into an HBA, which removes a variable when running things like RAID-Z/ ZFS. The flashing process takes a few seconds, and you can return the firmware to IR mode if you ever want to go back to LSI RAID.

  13. Hello Patrick.

    First, let me thank you for the great articles and your website, which I have found very helpful. In fact, I built a WHS based on your recommendations (Norco RPC-4220, HP SAS Expander, Supermicro X8ST3-F, and the onboard LSI flashed to a JBOD configuration).

    Unfortunately the Norco is filling up fast, with 34TB at present, so I'm looking down the road at a future build.

    Couple of questions for you

    1) The Supermicro backplane looks like it has two SAS connectors. Do both of these have to be utilized?

    2) Why do you recommend two Intel SASUC8I LSI 1068e cards? This controller looks like it has two SAS connectors. I guess one card is not adequate to supply the backplane?

    3) As I recall, you have used Areca controllers in the past. Why the Intel recommendation now? Is there some advantage?

    4) With my WHS build I cannot monitor drive health, and this makes me nervous, especially with 17 drives running. Which controller option is best in this department?

    5) What are your thoughts on an OS for someone who was planning to use Vail in their next build, i.e., an individual who will be using the server primarily for multimedia, family archives, etc.?

    thanks again for all your hard work

  14. Thanks for the compliment Roger.

    1) No, you can use only one.
    2) In build 1 there is no SAS Expander, so to get 24 SATA ports one needs the onboard controller plus two 8-port cards (3×8 = 24 total).
    3) The Areca controllers are great, but if you use OS-supported software RAID (RAID-Z2 for example) you would put the drives in passthrough mode on either the Areca or LSI based cards like the Intel ones. In passthrough mode you are not using the on-card RAID functions, so a $125 8-port card is as good as a $600 one… and the LSI cards have really good OS support.
    4) I haven't tried bare metal WHS on the LSI controllers in a long time (1yr+). IIRC you can get SMART info, but to be honest, if you have redundancy and hot spares you can usually just swap disks when they fail or start to behave in strange ways.
    5) Vail is still better than a lot of people give it credit for. Running it in a VM connected to a storage-focused VM based on FreeBSD or OpenSolaris seems to be a nice setup… but it is a LOT more involved to set up and maintain. It may be worth looking at FlexRAID, or thinking about an Areca 1880LP and using one expander for the current Norco RPC-4220 and the 1880LP's second connector for a JBOD enclosure.

    Hope that helps.
