Mid-range DIY Storage Server Buyer’s Guide, October 2011

This mid-range guide was originally intended to have both an AMD Bulldozer based build and an Intel Xeon E3 series based build. I have been playing quite a bit with the Bulldozer CPUs and frankly, it is very difficult to recommend one at this point. On the plus side, the desktop AMD FX series Bulldozer CPUs support ECC so long as the motherboard does. Frankly, I wish Intel would just do away with the Xeon E3 versus Core i5/i7 designation and allow Core i5 and i7 series CPUs to support ECC. AMD’s efforts here are commendable. Now, here is the issue: Windows Home Server 2011, Windows 7, and Windows Server 2008 R2 all use a scheduler that keeps more Bulldozer modules active than one would want. Microsoft has said that the Windows 8 generation kernel will support the Bulldozer architecture, but Windows 8 generation products are probably a year away.

AMD’s Bulldozer desktop CPUs did not make it into this round’s buyer’s guide for a few reasons:

  1. Power consumption at load is much higher than the Sandy Bridge architecture
  2. Performance is generally a bit lower than that of a comparably priced Sandy Bridge CPU
  3. Nobody is making server boards for Bulldozer

Items one and two are covered on various sites, and I will get to them here soon as well. Item number three points to a huge weakness of Bulldozer in the server market: no integrated GPU. Had AMD put something like Intel’s vPro KVM-over-IP on all of its Bulldozer CPUs, they might have made for very interesting chips in the server space. Back in the day, 4-8MB of ATI graphics was a very common server spec. With no onboard GPU, one must reside off-chip, adding another power-consuming part (see AMD’s 9-series chipset list). For the record, server boards with IPMI 2.0 generally have off-chip graphics as well, but the extra power consumption is not too great and you get things like KVM-over-IP, making it a no-brainer trade-off. Once you plug an external GPU into a desktop Bulldozer part, power consumption goes up another notch. Until the Bulldozer-based C32 and G34 server parts launch, Bulldozer will not be finding its way into the mid-range buyer’s guides.

With all of that being said, I received feedback that the servers from last year’s mid-range guide were too high end and too expensive. I hear you, readers! This year the goal was to go less expensive and get more performance. Let’s see that in action!

Processor

Get an Intel Xeon E3-1230. The E3-1220 lacks Hyper-Threading and is not that much less expensive. Moving up the E3 series range is almost futile in storage server builds, as one is very rarely limited by CPU speed; the extra money is probably better put into a rainy day fund. Compared to last year, the AES-NI instructions help with AES encryption speeds and one gets lower power consumption. I think this is THE chip to get right now unless one is building on a C206 based motherboard, in which case the E3-1235 is probably the best bet. I would strongly suggest just getting the retail kit here and using the stock fan. In a decent server chassis, it is more than enough to keep the already cool-running CPU cool enough for 24×7 operation.
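
Since AES-NI mainly matters if you plan to encrypt the storage pool, it is worth verifying the flag is actually present before counting on it. Here is a minimal sketch, assuming a Linux host that exposes CPU flags in /proc/cpuinfo; it is an illustration, not a step from this build.

```python
# Check for the AES-NI instruction set before relying on accelerated
# encryption. Assumes Linux, where CPU flags appear in /proc/cpuinfo.

def has_aes_ni(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as f:
        for line in f:
            if line.startswith("flags"):
                # The flags line lists one token per CPU feature.
                return "aes" in line.split(":", 1)[1].split()
    return False

if __name__ == "__main__":
    print("AES-NI supported" if has_aes_ni() else "AES-NI not found")
```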


Motherboard

This one was a very difficult decision. I think the best bets are the Tyan S5510 (S5510GM3NR) and the Supermicro X9SCL+-F. In this case I have to give the Tyan the edge solely because it has two Intel 82574L Gigabit LAN controllers and four PCIe slots. It is also a micro ATX form factor, making it easy to work with in a large 4U chassis since the bottom row of screws does not need to be secured against the chassis left wall. The big feature here is the remote management through IPMI 2.0. I do prefer the more heavily customized Supermicro IPMI 2.0 interface, but Tyan’s is still very functional.
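
For those new to IPMI 2.0, the BMC on boards like these answers standard commands over the network, so basics such as remote power control can be scripted. Below is a hedged sketch wrapping the stock ipmitool CLI from Python; the BMC address and credentials are placeholders, not values from this build.

```python
# Remote power status/control through an IPMI 2.0 BMC using ipmitool.
# The host, user, and password below are placeholders for illustration.
import subprocess

BMC_HOST = "192.168.1.50"   # placeholder BMC address
BMC_USER = "admin"          # placeholder login
BMC_PASS = "secret"

def ipmi(*args):
    cmd = ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
           "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, capture_output=True, text=True,
                          check=True).stdout.strip()

print(ipmi("chassis", "power", "status"))  # e.g. "Chassis Power is on"
# ipmi("chassis", "power", "on")           # uncomment for remote power-on
```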


Storage Controller(s)

For this round I am going with the IBM M1015 (eBay is probably the best source of IBM M1015s). They are eight port, LSI SAS2008 based controllers which are relatively simple pass-through devices. With ZFS providing great RAID solutions, and others such as FlexRAID and the various Windows Home Server/Small Business Server 2011 drive pooling solutions out there, I think controllers that are fast and present many drives to the system at a low cost per port are a great choice. With three controllers one can connect twenty-four drives in addition to the onboard SATA ports, more than enough for a 24-bay chassis. The best part: the IBM re-brands can be had for under $100 each! For virtualized installations, one can pass the cards through to different VMs, which many people will like.
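
To make the drive math concrete, here is a small sketch of how the 24 bays fed by three M1015s might be laid out as a single ZFS pool of raidz2 vdevs. The pool name, vdev width, and device names are illustrative assumptions; a real build should reference drives by their /dev/disk/by-id paths.

```python
# Illustrative 24-bay layout: three 8-port M1015s feeding four 6-drive
# raidz2 vdevs in one pool. Device names sdb..sdy are placeholders.
import string

drives = [f"sd{c}" for c in string.ascii_lowercase[1:25]]  # sdb..sdy, 24 drives
VDEV_WIDTH = 6  # assumed vdev size; choose to taste

cmd = ["zpool", "create", "tank"]
for i in range(0, len(drives), VDEV_WIDTH):
    cmd += ["raidz2"] + drives[i:i + VDEV_WIDTH]

print(" ".join(cmd))
# -> zpool create tank raidz2 sdb sdc sdd sde sdf sdg raidz2 sdh ...
```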

Memory

These days, memory is inexpensive. While the Xeon E3 series supports 8GB UDIMMs, there is a huge price premium moving from 4GB to 8GB DIMMs, so I would stick with 4x 4GB ECC UDIMMs. That gives one 16GB of RAM, which is still fairly solid. My advice is to always check either the board’s hardware compatibility list or that of a memory manufacturer before ordering memory for your server.

Chassis

Here I still believe the Norco RPC-4224 is the best bet for a home environment. I will say that after using many Supermicro hot-swap trays, Norco’s leave much to be desired in terms of quality. On the other hand, Norco does have a very compelling price point. While many may wonder why I do not recommend a large ATX case with 5-in-3 hot-swap mobile racks, simply put, you will spend as much if not more in the long run for something that is not as good. The RPC-4224 and the RPC-4220 have a solid center fan partition which keeps expansion cards, such as the IBM M1015s, cool. Consumer cases do not have this. My advice here is to just buy once: if you upgrade the CPU/motherboard later, the chassis is still a decent fit.

Final Configuration (“Decked-out”)

Total Cost: $1,320

Final Configuration (“Built to expand”)

Total Cost: $1,050

The “Built to expand” build sacrifices memory, CPU, and SAS/SATA controllers in order to lower the initial entry cost. If one knows they will be expanding over the next twelve months, this is a very reasonable way to build, especially when starting from an eight to twelve drive system with plans to grow.

Conclusion

OK, so this build ended up being much more expensive than the low-end build, but it represents a solid single processor server using today’s parts. I was very unhappy that I was unable to include AMD’s Bulldozer in this guide, but at the present time, the product does not merit inclusion. One has to remember that a $50 difference in a 20-24 drive system is fairly negligible: while these systems cost over $1,000, twenty-plus disks will cost one $2,000-$3,000 or more. Realistically, if one is looking for only a bare minimum installation, the entry-level DIY server can be scaled up in a larger chassis instead. Personally, I think if one does not need 20-24 disks in the next eighteen to twenty-four months, this is going to be overkill. I hope these templates help folks, and I always look forward to feedback on sizing the builds. 20+ disks may be too large for a mid-size build, so I will be looking at potentially bringing this down to a 15-16 drive chassis next time.
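
As a quick sanity check on that cost reasoning, the sketch below works the arithmetic. The system total comes from this guide, while the per-drive price is an assumed figure for a 2TB drive, used only for illustration.

```python
# Back-of-the-envelope: as drive count grows, the server itself becomes
# a small share of the build. System total is from this guide; the
# per-drive price is an assumption for illustration only.
SYSTEM_COST = 1320   # "Decked-out" configuration total
DRIVE_COST = 120     # assumed price per 2TB drive

for drives in (8, 12, 20, 24):
    total = SYSTEM_COST + drives * DRIVE_COST
    print(f"{drives:2d} drives: ${total:,} total, "
          f"server is {SYSTEM_COST / total:.0%} of the build")
```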

8 COMMENTS

  1. Dumb question… I’ve always looked at the Areca ARC-1880ix-24-4G-NC to use with WHS 2011. I want a robust RAID card that allows for the potential of two different RAID sets, although I’m just planning on implementing a 20 drive RAID 10 for now. I’m a newbie, so I’m not sure how I would accomplish that with 3 different expanders. This will be a backup and media server using either multiple HTPCs or Dunes as front ends. I will be ripping 1:1 DVD and Blu-ray ISOs. If someone could explain it, I would appreciate it as it would save close to $1000.

  2. I’ve been wondering for a while: would it be possible to pass through the onboard GPU to a VM? Would it be possible using a C202 or C204 (as opposed to the C206)? I know the physical traces aren’t routed for a video port to work on a non-C206 board, but if the Intel GPU is just another attachment to the PCIe bus…?

    I’ve been wondering this since I first learned of the E3 Xeons (and simultaneously, this website). The idea would be to run ESXi, have a Solaris Express storage VM with the LSI devices passed through, and a Windows 7 Blu-ray rip VM using QuickSync(TM) with the Blu-ray drive and HD3000 GPU passed through. I’ve been googling the hell out of the internet since March/April but still can’t find anything.

    Also, reviews on the Tyan board are claiming that there is no remote power on, and that the IPMI interface doesn’t work when the system is off. Any reason to recommend this over the Supermicro MBD-X9SCM-F-O, which is also Micro-ATX?

  3. ER_MD, for RAID 10 (realistically you do not need RAID 10) the ARC-1880 gives you the potential for BBWC, which is good for buffering writes. One thing you could do is get an expander with an ARC-1880i, which will save a bunch of cash and yield more ports. The 1880ix-24 has an onboard expander, so it is fairly similar. Feel free to ask on the forums as I am sure you will get lots of great ideas.

    DOM, it is possible to pass through a GPU; however, I have never tried the on-die GPUs. Hyper-V actually has a cool version of this for remote desktop.

    The main reason for this over the X9SCM-F is simply that the dual NICs are better supported in ESXi. The X9SCM-F (there is a review on this site) has a NIC that requires an extra ESXi driver to be installed and is a bit more painful. Remote power on works fine on the Tyan, and the IPMI turns off when there is no power supply power (e.g. you pull the power cable or turn off a power supply switch), but that is the same as with the Supermicro implementation (they are both based on the same third-party technology).

  4. Rad, thanks Patrick! It looks like I’ll need to do some experimentation of my own. I’ll definitely let you know what I find out for QuickSync and VMDirectPath. I’ve looked a little bit into Hyper-V, especially for the RDP capabilities, because I like the idea of converting all my non-gaming-application interaction to a home VDI/thin client setup. Two things have prevented me though: a) I know ESX like the back of my hand and b) I’d rather trust a hypervisor company to make a hypervisor than… well, Bill Gates to 😛

    Also, I wanted to let you know that you’re very highly regarded amongst my friends. We watch your feed, and it’s not uncommon to begin a conversation on the assumption that both parties have read an article of yours, even if it’s only been up a few hours. Keep up the good work!

  5. Have you actually used the Corsair AX850 with the Tyan S5510 MB? I have that MB and had a heck of a time finding a power supply it would POST with. Out of 5 different supplies I found 1 that actually would POST … the oldest one I have. Updating BIOS to v103 didn’t change anything. Anyone have a list of working power supplies (other than the one Tyan supplies)?

    Thanx

  6. Dom – I appreciate the kind words. I think ESX is way better, but if you are doing all Windows, Hyper-V is super easy and has way better hardware support.

    Ed – I did on the review unit as well as my Seasonic X650 and X750 with no issues.

  7. +1
