As one of my very first posts for STH, I submitted a build guide for a small business server with built-in disaster recovery capabilities. That system was DIY assembled, with a mix of consumer and enterprise-class products designed to fit a specific niche. Well, today I am back with another build; this time I am building a pair of servers, and they definitely contain some DIY flair. Also, please note this is not a sponsored post in any capacity, and in fact, none of these parts were even purchased by STH. This build was done outside of my capacity as an STH reviewer; I simply chose to document it in case any of the STH readership found it interesting or informative.
Build Design and Reasoning
This client is upgrading an older server setup with new hardware and software. The old servers are a pair of Dell PowerEdge T320 systems with Sandy Bridge generation Xeon CPUs, 16GB of RAM each, and mechanical storage.
Aside from their age, there is actually nothing wrong with the existing servers, but the client wants to migrate to the newest version of their line-of-business application, which comes with new hardware and software requirements, thus necessitating the upgrade.
The new servers were originally set to be ordered from an official Supermicro systems integrator, and would probably have been based around an Intel Xeon E-2300 series platform. This particular client has performance requirements in excess of what a Xeon D or EPYC 3000 series platform can reasonably provide, but they have neither the budget nor the need to make the jump to Xeon Scalable or EPYC 7000.
Unfortunately, lead times from my vendor were going to be 8+ weeks for system delivery, and even that was a “guesstimate” for an ETA. That amount of time is simply too long, so I was forced to explore other options. As I considered my options, the idea to do a DIY build started to seem appealing.
Bill of Materials
First up, here is the bill of materials used in this build. Please note, I purchased *two* servers, so everything here was purchased in double quantity. The combined hardware cost of both servers is a bit under $6000.
Chassis: Antec VSK 10 $200 (after PSU and fans)
The Antec VSK 10 was chosen for this server because it satisfies a small list of requirements. It had to be mATX, inexpensive, and fit the cooling setup. It also needed to not look like a “gamer” case; no tempered glass side panels please. My first choice chassis was the Fractal Design Core 1000, but that case did not end up being compatible with the chosen CPU cooler. The VSK 10 was purchased to replace the Core 1000 and is working well. Along with this case, an EVGA 550W 80+ Gold modular power supply was purchased, and I already had a few extra case fans on hand for better airflow. Nothing on this setup is hot-swap, but in the context of this particular client that was deemed acceptable.
Motherboard: ASRock Rack X470D4U2-2T $430
I have reviewed this board before and I liked it. This was once again my second choice; originally I purchased some much newer ASRock Rack B550D4U motherboards, but I ran into some compatibility problems with my add-in 10 GbE NICs; more on that later.
After running into that stumbling block, I located the X470D4U2-2T boards on eBay and they worked perfectly. I was not planning on utilizing the PCIe Gen4 capabilities of the B550D4U anyways, so there was no great loss in swapping them out. Additionally, since this board includes onboard 10 GbE networking, I was also able to return the 10 GbE add-in cards I originally purchased.
CPU: AMD Ryzen 9 5950X $585 (with HSF)
This CPU choice is simply the top-end SKU for the socket. For a modern server in 2022, 16 cores is not much, but this client is upgrading from two old Dell systems, one with a single Xeon E5-2407 installed and the other with a single Xeon E5-1410.
Compared to the old servers, this Ryzen 5950X will absolutely blow both of those systems out of the water without even breaking a sweat. A lower-performance SKU could probably have been used, something like the 5900X or 5800X, but the expense to upgrade to the 5950X was relatively minor in the context of the overall project cost and so the best CPU for the socket was selected. Since this CPU obviously needs a cooler, I picked up a Noctua NH-U12S Redux.
Memory: 4x Crucial 32GB DDR4 3200 (running at 2666) $480
This system was configured with 128GB of DDR4 memory running at 2666 MHz.
The chosen memory speed and capacity are dictated by the platform in this case. DDR4 3200 DIMMs were purchased because they were the least expensive option, but the motherboard is restricted to 2666 MHz for the operating speed. 128GB is the maximum capacity allowed on this platform as well. The two existing servers have a combined memory capacity of 32GB, so moving to 128GB is still a huge upgrade.
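For a sense of what the 2666 MHz limitation actually means, here is a quick back-of-the-envelope bandwidth calculation. Note that the dual-channel configuration is my assumption based on this being a standard four-DIMM AM4 board; it is not stated above.

```python
# Theoretical peak memory bandwidth at the board's DDR4-2666 limit,
# assuming a standard dual-channel (2x 64-bit) AM4 configuration.
transfers_per_sec = 2666 * 10**6   # DDR4-2666 = 2666 MT/s
bytes_per_transfer = 8             # 64-bit channel width
channels = 2                       # assumed dual channel

bandwidth_gbs = transfers_per_sec * bytes_per_transfer * channels / 10**9
print(f"Theoretical peak bandwidth: {bandwidth_gbs:.1f} GB/s")  # ~42.7 GB/s
```

Running the DIMMs at their rated 3200 MT/s would raise that ceiling to roughly 51 GB/s, so the platform restriction costs some headroom, but nothing this workload will notice.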
RAID Controller: Highpoint SSD6204 $336
I always prefer some kind of drive redundancy, and my favorite RAID cards are currently prohibitively expensive as well as out of stock. As a result, I went looking for alternatives and landed on the SSD6204.
This is a 4-port M.2 NVMe RAID card capable of RAID 1 and compatible with ESXi, which is all I was looking for. This card does not require bifurcation on the PCIe slot and the RAID functionality is handled in hardware. The 4-port SSD6204 was chosen over the 2-port SSD6202 to allow future storage expansion if necessary. I am not 100% happy with this purchase, but it was relatively inexpensive and combined with other forms of redundancy and backup should be more than sufficient for my needs. My biggest gripe with this card is that there is no audible beeper in the case of a drive failure, which leaves open the possibility that one of the drives could fail silently.
Data SSDs: 2x Seagate FireCuda 530 2TB $650
Anyone who read my review of the Seagate FireCuda 530 1TB knows that I came away very impressed. I chose a pair of the 2TB FireCuda 530 drives in a RAID 1 array for my primary data store. 2TB might not seem like much capacity, but right now the old servers have less than 500GB of data on them; 2TB should provide ample room to grow.
The SSD6204 is not PCIe Gen 4, which means these drives will be running well below their maximum potential, but Gen 3 performance is more than sufficient for this server's requirements. The SSD6204 has a large heatsink, which should help keep these drives cool in operation.
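To put some rough numbers on the "ample room to grow" claim, here is a small capacity-runway sketch. The 20% annual growth rate is a hypothetical figure for illustration, not something from the client.

```python
import math

# Rough runway for the 2TB RAID 1 data store, starting from the
# ~500GB currently in use on the old servers. The 20% annual data
# growth rate is a hypothetical assumption for illustration.
current_tb = 0.5
usable_tb = 2.0            # RAID 1 of 2x 2TB drives yields 2TB usable
annual_growth = 0.20       # assumed, not a client-provided figure

years = math.log(usable_tb / current_tb) / math.log(1 + annual_growth)
print(f"~{years:.1f} years until the array fills")
```

Even under that fairly aggressive growth assumption, the array would take well over seven years to fill, and the two spare M.2 ports on the SSD6204 leave an expansion path before then.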
Boot SSD: Samsung PM981A 256GB $37
I simply need something to install ESXi on, and this is what I chose. An alternative solution would have been to boot from USB, any number of other model M.2 SSDs, or even a SATA DOM. The PM981a was chosen because it was inexpensive and had same-day delivery on Amazon. That last bit mattered because, honestly speaking here, I forgot to order the boot SSDs until I had the rest of the parts already in hand; oops!
Backup HDDs: Toshiba N300 6TB 7200RPM $280 (Backup only)
The second server is intended to function as the backup. That second system includes identical hardware to the primary server so that VMs can be replicated and boot up with essentially identical performance in the case of a failure.
For longer-term backups, a pair of 6TB mechanical hard drives have been added to the second server. The presence of these disks is the only differentiating factor between the two physical servers.
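Tallying up the hardware BOM confirms the "a bit under $6000" figure from earlier. One assumption here: the BOM is read as listing the $280 Toshiba N300 per drive, with two drives in the backup server.

```python
# Combined hardware cost for both servers, using the per-server prices
# listed in the BOM above. Treating the $280 N300 line as per drive
# (two drives, backup server only) is my assumption.
per_server = {
    "Antec VSK 10 (w/ PSU and fans)": 200,
    "ASRock Rack X470D4U2-2T": 430,
    "Ryzen 9 5950X (w/ HSF)": 585,
    "4x Crucial 32GB DDR4 3200": 480,
    "Highpoint SSD6204": 336,
    "2x Seagate FireCuda 530 2TB": 650,
    "Samsung PM981A 256GB": 37,
}

total = 2 * sum(per_server.values())  # two identical servers
total += 2 * 280                      # 2x Toshiba N300 6TB, backup server only
print(f"Combined hardware cost: ${total}")  # $5996
```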
I am not going to get into the software licensing costs as part of this article. With that said, a paid copy of VMware ESXi Essentials was purchased at around $580. This will be combined with Veeam to handle backups. The largest single cost in this project is actually software, specifically the Microsoft licensing. This infrastructure project was designed around a proprietary piece of software that requires three VMs, all running Windows: a domain controller, Remote Desktop Services, and SQL services.
Adventures in DIY
As you might have gleaned from my bill of materials, not everything went smoothly in this build. Some of this is my fault and could have been caught with research ahead of time; other problems were entirely unforeseen and have me stumped even now.
The very first stumbling block I ran across was with my original chassis, the Fractal Design Core 1000.
This case was originally chosen exclusively for its exceptionally low cost. Once I had it in hand, however, I did not find myself particularly impressed and did not like the airflow setup. The mesh front gives the impression that it is well ventilated, but the top half of the mesh has solid sheet metal behind it preventing airflow. I soldiered onward, but then immediately discovered that the Noctua NH-U12S was too tall to fit into this case; the side panel would not go back on. In truth, I was somewhat relieved to have an excuse to swap this out. My second choice was the Antec VSK 10. This case had a moderately higher price tag, but everything fit inside the case much better and it did not suffer the constrained airflow problem of the Core 1000.
The next problem was the real head-scratcher of the build. As mentioned in the BOM, my original motherboard selection was the ASRock Rack B550D4U. This board was selected because it was available brand new from my vendor and would be Ryzen 5000 ready out of the box. It was also mATX; this client does not have much physical space in their server room, so space is at a premium. 10 GbE networking was on the list of requirements, but the X570D4U-2L2T was both out of stock and cost more than the B550D4U plus a 2-port 10 GbE NIC, so I chose the B550D4U.
Unfortunately, I ran into problems almost immediately. For some reason, when both the main PCIe slot (PCIE6) and the secondary slot (PCIE4) were occupied, whatever was installed into PCIE4 would not actually work. The card in PCIE4 would show up in lspci in VMware, but drivers would not be loaded nor would the device be assigned an ID. If the SSD6204 RAID card was in PCIE6 and the X550-T2 NIC was in PCIE4 then the NIC would not work; if I swapped them then the RAID card would not work. I had two motherboards and four NICs (two different models) to test with so I was sure the problem was not isolated to a single piece of equipment.
After reaching out to ASRock Rack support for assistance but not solving the problem, I pulled the trigger on an eBay listing for some brand new X470D4U2-2T boards. These fixed the problem in two ways: first, since they come with onboard 10 GbE networking, I no longer need the add-in cards. Second, I did go ahead and temporarily test the 10 GbE cards, and they do work in the X470D4U2-2T, in case I ever need that PCIe slot down the line. ASRock Rack has promised to keep me up to date if they can figure out why my 10 GbE cards were not working on the B550D4U, but for now I am happy with the X470 boards.
One additional note regarding the X470D4U2-2T is that my boards required a BIOS update to accept the Ryzen 9 5950X CPU. Thankfully I was able to perform this update via the BMC, which meant that I did not need to find a temporary Ryzen 2000/3000-class CPU to use for the update. I also took the opportunity to install the firmware update for the BMC itself, which enabled BMC fan control settings.
Once the incompatibilities were resolved and parts selection was complete, the system came together looking fairly conventional, mostly resembling a standard tower PC build.
One peculiarity users will note is the orientation of the cooler. On the X470D4U2-2T the processor socket is rotated in comparison to the standard orientation on a consumer motherboard. As a result, the Noctua NH-U12S Redux cooler ends up in a bit of an odd orientation.
I elected to mount the cooler's fan on its underside, blowing upward, with an additional 120mm fan at the top of the case also blowing up as an exhaust.
The two fans at the front are configured as intakes to provide fresh air, and in my burn-in testing this configuration has proven stable, with processor temperatures peaking at around 85C under full load. 85C is hot, but not unreasonable given that this is a 5950X at 100% load on air cooling. Just in case, I enabled a 100% fan duty cycle in the BMC, and under light to moderate load the CPU temperatures stay well below 50C.
Once physical assembly was complete, things proceeded fairly normally for a server build, and VMware ESXi was installed.
On the primary server, three VMs were created and appropriate software has been installed. These systems have not yet been delivered to the client so they are not fully configured on the software side of things, but they were built up enough to perform some basic burn-in testing.
Dual System Design
With two nearly-identical servers at my disposal, I have the opportunity to set up some fairly resilient failover scenarios. Each system has a 2TB SSD array, and on the primary server that is where the trio of VMs will live. Veeam will be used and configured to replicate those three VMs to the 2TB SSD array on the backup server. They will sit on that backup server powered off in case they are ever needed.
Additionally, a workstation VM will be installed on the backup server and allocated space from the 6TB mechanical disks. Veeam will use that disk space to store hourly backups going back at least a month.
The entire second server will exist on a separate network, separated by a VLAN and firewall, from the rest of the primary network. With that isolation in place, if a workstation or server becomes infected on the primary network it will have no path to the server where backups are stored. As a final layer of defense, the backups stored on this server will also be sent offsite.
Aside from the use of some desktop-class parts, this is a remarkably traditional, or perhaps even old-fashioned, server design. There are no Docker containers, no hybrid cloud integration, almost nothing fancy at all. In this case a traditional environment is what satisfied the requirements for the chosen line-of-business software, and so that is exactly what was delivered. I hope you enjoyed coming along with me for this pseudo build log!