This is going to be a fun one for many STH readers. We have reviewed 8x and 10x GPU (PCIe) servers for many years on STH. The ASUS ESC8000A-E11 is built a bit differently than other options, which makes it a fascinating design to look at. This server has space for eight double-width PCIe Gen4 accelerators, two AMD EPYC CPUs, and additional expansion. This server was also just announced this week at NVIDIA GTC 2021, so this is absolutely a brand-new server on the market (we did not cover that announcement since this review was forthcoming). Let us get to our review and start taking a look at how these systems are made.
ASUS ESC8000A-E11 Hardware Overview
For this review, we are going to take a look at the exterior of the server, then the interior. As a fun note here, I got a new lens the day before this review was published so all of these photos were re-shot on the Canon 50mm f/1.2L. Years ago STH photos were all done on a 50mm lens, and I wanted to try a review using the same focal length.
ASUS ESC8000A-E11 External Hardware Overview
The front of the server is nothing short of fascinating. This is a 4U system, and there are some distinctive ASUS features here. The top is an area designed for expansion and customization. The bottom has I/O and is designed for airflow. The middle incorporates 8x 3.5″ drive bays.
The 3.5″ drive bays can be SAS, SATA, or NVMe depending on how the backplane is configured. One can also use an adapter to fit 2.5″ drives into these 3.5″ drive trays.
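If you want to double-check how the backplane is presenting each bay from the OS side, a quick sysfs walk does the job. Here is a minimal Linux-only Python sketch; the path matching is a heuristic for illustration rather than anything ASUS-specific:

```python
import glob
import os

# Walk /sys/class/block and report how each whole disk is attached, to
# confirm which bays the backplane presents as SATA/SAS vs. NVMe.
for dev in sorted(glob.glob("/sys/class/block/*")):
    name = os.path.basename(dev)
    # Skip partitions and virtual devices; whole disks have a "device" link
    if not os.path.exists(os.path.join(dev, "device")):
        continue
    target = os.path.realpath(os.path.join(dev, "device"))
    if "/nvme/" in target:
        transport = "NVMe"
    elif "/ata" in target:
        transport = "SATA (AHCI)"
    else:
        transport = "SAS/other (via HBA)"
    print(f"{name}: {transport}")
```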
Another really interesting feature is that we have normal power and reset buttons on the front. More distinctive is the Q-Code LCD that shows POST status codes. There are also USB and VGA ports. This system is designed to be serviced by a technician through a cold aisle. In racks where there are 10 of these systems, each using 3kW+, it is usually much more pleasant to service from the cold aisle versus the hot aisle.
Another feature you may have spotted is that there are two sets of two fans. These fans are specifically there to cool the AMD EPYC CPUs that we will discuss more during our internal overview.
The top section in our test system has a PIKE II card providing SAS connectivity. One can see that the system has additional mounting points along the front for options our unit does not include, but we will show an example of an M.2 carrier in our internal overview.
The expansion slot on the front is a physical PCIe Gen4 x16 slot with an x8 electrical link. This matches well with an HBA or RAID controller like the PIKE II 3008-8i SAS solution.
This system is very rear-heavy when configured, and this shot shows why. The rear of this system is dedicated to expansion slots and power supplies.
There are four power supplies. Each is an 80Plus Titanium unit rated to put out up to 3kW on 220V-240V input. This design gives the system a great level of power redundancy.
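As a quick sanity check on that redundancy claim, here is the back-of-the-envelope math in Python. The 3kW-per-PSU figure is the nameplate number above; the 3.2kW load is just an assumed example for a heavily configured system:

```python
# Back-of-the-envelope redundancy math for the 4x 3kW PSU configuration.
# Real-world deliverable power depends on input voltage and derating.
PSU_COUNT = 4
PSU_WATTS = 3000          # per PSU at 220V-240V input (nameplate)
SYSTEM_LOAD = 3200        # assumed example load for a full configuration

for failed in range(PSU_COUNT):
    available = (PSU_COUNT - failed) * PSU_WATTS
    ok = available >= SYSTEM_LOAD
    print(f"{failed} PSU(s) failed: {available}W available -> "
          f"{'OK' if ok else 'INSUFFICIENT'}")
```

At that example load, the system could lose two of the four PSUs and keep running, which is why we call this a great level of power redundancy.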
We have recently gotten some requests to show how these PSUs are connected. Here is a look into an empty PSU slot so you can see the connector as well as the perforated backplane, designed to allow air to pass through the PCB and through the PSU.
Here is a quick shot of the power distribution board.
Aside from power, there are 10x PCIe expansion slots in the rear of the system to go along with the PIKE slot on the front. Eight of these are double-width PCIe Gen4 x16 slots for GPUs.
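Once a system like this is racked and populated, it is worth confirming that every GPU slot actually trained at Gen4 x16. A short Python sketch reading the standard Linux sysfs PCIe attributes can do that, with no vendor tooling assumed:

```python
import glob

# Read the negotiated PCIe link speed and width for every device from
# sysfs. On a Gen4 x16 GPU slot you would expect "16.0 GT/s" and x16.
for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    try:
        with open(f"{dev}/current_link_speed") as f:
            speed = f.read().strip()
        with open(f"{dev}/current_link_width") as f:
            width = f.read().strip()
    except OSError:
        continue  # root complexes and some bridges omit these attributes
    print(f"{dev.split('/')[-1]}: {speed}, x{width}")
```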
On the bottom left of this area we have a NIC in a low-profile slot along with a management port.
One can see this assembly is relatively easy to service: unscrew the thumbscrew and pull the assembly out of the chassis. The management NIC is on another PCB on this tray as well.
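With a dedicated management NIC, this class of system is normally monitored out-of-band. As a rough illustration, here is a hedged Python sketch polling PSU telemetry over standard DMTF Redfish endpoints; the BMC address and credentials are placeholders, and exact resource names vary with BMC firmware:

```python
import requests

# Query PSU telemetry out-of-band over the dedicated management NIC using
# standard DMTF Redfish endpoints. Values below are placeholders.
BMC = "https://192.0.2.10"       # management port IP (example address)
AUTH = ("admin", "password")     # replace with real BMC credentials

chassis = requests.get(f"{BMC}/redfish/v1/Chassis",
                       auth=AUTH, verify=False).json()
for member in chassis.get("Members", []):
    power = requests.get(f"{BMC}{member['@odata.id']}/Power",
                         auth=AUTH, verify=False).json()
    for supply in power.get("PowerSupplies", []):
        print(supply.get("Name"), supply.get("LineInputVoltage"),
              supply.get("LastPowerOutputWatts"))
```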
The other slot is a full-height expansion slot, giving the system another PCIe Gen4 x16 full-height slot.
Of course, the real magic happens inside, and so let us get to the internal hardware overview.