AIC HA401-TU High-Availability SAS Storage Server Review

AIC HA401 TU HA Rear With PSUs And Nodes

The AIC HA401-TU is going to be one of the more unusual servers you will see on STH this year. We have seen two-node systems before. This one, however, comes with two big twists. First, it is a 4U server. Second, the system is designed for high-availability SAS storage where both nodes have access to the SAS SSDs onboard. This is going to be very cool, so let us get to it.

AIC HA401-TU External Chassis Overview

Looking at the front of the 4U chassis, we have 24x 3.5″ bays. These bays are designed not just for SATA, but for SAS as well. Something a bit different about this chassis versus many we have seen is that it is designed as a SAS-first system.

AIC HA401 TU 4U HA Server Front

The chassis itself is not particularly deep at 27.7 inches/ 705mm. While it is by no means a short-depth chassis, it will still fit in the majority of racks. Further, since the PSUs and nodes are designed to pop out of the chassis from the rear and the drives from the front, the only reason to remove the chassis once it is installed is to service the midplane.

AIC HA401 TU HA Storage Backplane

We ended up trying a variety of drives in it, and you can use 2.5″ SATA SSDs like these Micron 5400 Pro SSDs if you want. Still, for this system, we are generally going to recommend SAS3 drives.
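The reason comes down to connectivity: dual-ported SAS drives can present a path to each controller node, while SATA drives are single-ported. If you want a quick way to see which installed drives actually sit on a SAS transport, a rough sketch like the one below works on a Linux node with lsblk available. This is purely our illustration of a generic check, not something specific to the HA401-TU.

#!/usr/bin/env python3
# Rough sketch: list SCSI block devices and their transports so any
# SATA (single-ported) drives stand out in a SAS-first chassis.
# Assumes a Linux node with lsblk installed; not specific to this system.
import json
import subprocess

def list_scsi_devices():
    # lsblk -S limits output to SCSI devices; -J emits JSON for parsing.
    out = subprocess.run(
        ["lsblk", "-S", "-J", "-o", "NAME,TRAN,VENDOR,MODEL,SERIAL"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(out.stdout).get("blockdevices", [])

if __name__ == "__main__":
    for dev in list_scsi_devices():
        tran = (dev.get("tran") or "unknown").lower()
        note = "" if tran == "sas" else "  <-- not SAS, likely single-ported"
        print(f"/dev/{dev['name']:<8} {tran:<6} {dev.get('model') or ''}{note}")

Running the same check from both nodes also shows quickly whether a given drive is visible to only one side of the chassis.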

AIC HA401 TU HA 2.5 In In 3.5 In Trays

Moving to the rear of the system, we can see two sets of components. First, we have top and bottom power supplies. Then, we have top and bottom controller nodes.

AIC HA401 TU HA Rear With PSUs And Nodes

Each of the two nodes has a cover for the OCP NIC 3.0 slot we will see inside. There is only a COM port, two USB 3 ports, and an out-of-band management port on the main I/O panel. The VGA port is all the way to the left where the rear 2.5″ SSD bays are located. We are going to look at the nodes in more detail in the next section.

The power supplies are 1.2kW 80Plus Platinum units. Since this is a 3rd Generation Intel Xeon Scalable platform, not a 4th Gen, we still get plenty of performance for storage, but at a lower power level, which is why 1.2kW per supply is enough.

AIC HA401 TU HA PSUs

Taking a quick look inside the chassis, the midplane uses high-density connectors for power and data rather than cabled connections. Instead of a backplane with a Broadcom SAS expander on it and many cabled connections, AIC is doing something different that allows the nodes to be swapped without having to deal with a mess of cables.

AIC HA401 TU HA Press Fit Connectors

This is an ultra-important capability. By building the midplane in this manner, AIC ensures that even if a controller or a drive needs to be serviced, the entire chassis can stay in the rack and continue serving data. As a high-availability design, having redundant controllers, drives, and power supplies in a chassis that does not have to be moved for service is important for long-term operation.
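In practice, shared dual-ported SAS like this usually surfaces on each Linux node as multiple paths managed by dm-multipath, with cluster software deciding which node owns a given volume. As a hedged illustration only, assuming a typical Linux stack with multipath-tools rather than anything AIC ships, a small check like the one below can confirm that a node still sees its expected paths after a controller or drive swap.

#!/usr/bin/env python3
# Hedged sketch: count the device paths dm-multipath reports so an
# operator can confirm a node still reaches the shared SAS drives after
# a controller or drive swap. Assumes Linux with multipath-tools; this
# is illustrative and not AIC's HA software.
import subprocess
import sys

def active_path_count() -> int:
    # 'multipath -ll' prints the current multipath topology; each member
    # path line includes an sdX device node that we can count.
    out = subprocess.run(["multipath", "-ll"], capture_output=True, text=True)
    if out.returncode != 0:
        sys.exit(f"multipath query failed: {out.stderr.strip()}")
    return sum(1 for line in out.stdout.splitlines() if " sd" in line)

if __name__ == "__main__":
    paths = active_path_count()
    print(f"SAS paths currently visible to this node: {paths}")
    # A real HA health check would compare this against the expected
    # count for the populated bays and alert on any drop.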

AIC HA401 TU 4U HA Server Internal Node Connections

Next, we are going to take a look at the compute nodes to figure out how AIC accomplishes this.

5 COMMENTS

  1. Being somewhat naive about high availability servers, I somehow imagine they are designed to continue running in the event of certain hardware faults. Is there any way to test failover modes in a review like this?

    Somehow the “awesome board stack on the front of the node” makes me wonder whether the additional complexity improves or hinders availability.

    Are there unexpected single points of failure? How well does the vendor support the software needed for high availability?

  2. Nice to see a new take on a Storage Bridge Bay (SBB). The industry has moved towards software-defined storage on isolated servers. Here the choice is either to have huge failure domains or more servers. Good for sales, bad for customers.

    What we really need is similar NVMe solutions, especially now that NVMe is catching up on spinning rust for capacity. CXL might take us there.

  3. This reminds me of the Dell VRTX in a good way- those worked great as edge / SMB VMware hosts providing many of the redundancies of a true HA cluster but at a much lower overall platform cost, due to avoiding the cost of iSCSI or FC switching and even more significantly- the SAN vendor premiums for SSD storage.

    “What we really need is similar NVMe solutions, especially now that NVMe is catching up on spinning rust for capacity. CXL might take us there.”

    Totally agree- honestly at this point it seems that a single host with NVMe is sufficient for many organizations’ needs for remote office / SMB workloads. The biggest pain is OS updates and other planned downtime events. Dual compute modules with access to shared NVMe storage would be a dream solution.
