Perhaps one of the most interesting trends in servers over the past few years has been the move toward 1U storage appliances. Meeting this trend head-on, we have the Tyan Thunder SX GT90-B7113, a 1U server that offers capacity for twelve 3.5″ hard drives and four U.2 NVMe SSDs. Adding to the allure of this particular configuration, the system houses dual Intel Xeon Scalable (first and second generation) CPUs, with all of this storage in a chassis that is only 35.43″ or 900mm deep. It takes a lot of engineering to make a configuration like this work.
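To put that density in perspective, here is a quick back-of-the-envelope calculation of raw capacity. Note that the per-drive capacities below are our own hypothetical examples, not a configuration Tyan ships or that we tested:

```python
# Back-of-the-envelope raw capacity for this chassis layout.
# Drive capacities are hypothetical examples, not tested configurations.
HDD_BAYS = 12    # 3.5" hard drive bays in the front drawer
NVME_BAYS = 4    # front 2.5" 7mm U.2 bays
HDD_TB = 16      # assumed capacity per 3.5" HDD, in TB
NVME_TB = 7.68   # assumed capacity per U.2 NVMe SSD, in TB

raw_tb = HDD_BAYS * HDD_TB + NVME_BAYS * NVME_TB
print(f"Raw capacity: {raw_tb:.2f} TB in 1U")  # 222.72 TB
```

Even with modest drive choices, a single rack unit like this can exceed 200TB raw, which is the core appeal of the platform.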
Since this is a more complex system, we are first going to look at the system chassis. We are then going to focus our discussion on the design of the internals. We also have a video version of this review for those who prefer to listen along. Our advice is to open it in another YouTube tab and listen along while you go through the review.
Since our full review text runs thousands of words, the written review has more detail than the video. Still, we know some prefer to consume content in different ways, so we are adding this option.
Tyan Thunder SX GT90-B7113 Hardware Overview
The Tyan Thunder SX GT90-B7113 has two main variants. The model we are reviewing today is the B7113G90U12E4HR; the other is the B7113G90V12E4HR. The big difference between the two is that the “U12” version we are reviewing gets SAS connectivity. With such a robust model name, we expect plenty of features. As a result, we are going to split our hardware overview into two sections. The first is the external overview; then we will delve inside for the internal overview.
Tyan Thunder SX GT90-B7113 External Overview
The Tyan Thunder SX GT90-B7113 is a 1U server designed to fit normal depth racks. On the front of the server, we find the usual power and status buttons and LEDs. We also see four 2.5″ drive bays. These are U.2 NVMe drive bays that, in this version of the server, can also support SATA. To fit the hard drive array, these bays only support 7mm drives instead of the full 15mm drives we see on typical 2.5″ systems. For this server, that makes sense, and it is actually a great feature considering a standard 3.5″ 1U server normally has only 4x 3.5″ bays and no additional SSD bays.
On top of the system, we see a giant yellow warning sticker. This tells us that the system is designed to use an L-shaped rail kit instead of standard rails.
This sticker is important because the mechanism to access the 12x 3.5″ HDD bays is by sliding out a drawer instead of sliding the entire chassis on rails. This drawer is a key innovation of the platform.
Continuing with the rails, these are the L-shaped left and right rails. Those who work with chassis servers are familiar with this concept. Since the chassis itself remains stationary in the rack, it does not need to slide out, and therefore the rails can be a very simple design.
One of the other great features is that the drives themselves mount in tool-less carriers. These carriers snap around the drive then latch into place. We have a demonstration of this in the video above.
The rear of the system shows just how purpose-built this is. There are no traditional PCIe expansion slots, save a cutout for an OCP NIC 2.0 card to be added.
Port-wise, we get an out-of-band management NIC, which we will discuss more in our management section. There are two USB 3.0 ports as well as legacy VGA and serial ports.
Aside from the lack of PCIe expansion slots, we also do not have any standard onboard networking. This system is heavily optimized for scale-out storage. As a result, one must install an OCP NIC 2.0 which we will show in our internal overview.
Next, we are going to take a look at the inside of the system before we get further into our review.