GIGABYTE G383-R80-AAP1 Internal Hardware Overview
We will start at the front of the server and work our way in.

First, we found the ASPEED AST2600 BMC underneath the NVMe drive bays.

The system uses PCIe cables to connect all of the I/O throughout the chassis to the motherboard that houses the four AMD Instinct MI300A APUs.

If you have never seen the AMD Instinct MI300A, this is what they look like.

They are socketed and look a lot like a modern AMD EPYC server CPU and socket. These are AMD SH5 sockets whereas the current EPYC processors use Socket SP5.

Four of these are in the system. We will show this a bit later, but this is effectively a 4-way server: you get four 24-core CPU sections, four 128GB HBM3 segments, and four GPU portions, one set in each socket.

Each APU uses between 550W and 750W depending on the configuration, so there are massive heatsinks that also cover the power delivery components.
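Putting the figures above together gives a quick back-of-the-envelope view of the aggregate system. This is a rough sketch using only the numbers quoted in the text (four sockets, 24 cores and 128GB of HBM3 per APU, 550-750W per APU); it is not vendor-validated data and excludes fans, drives, and NICs.

```python
# Aggregate totals for the four-socket MI300A configuration described above.
SOCKETS = 4
CORES_PER_SOCKET = 24        # CPU cores per MI300A
HBM3_GB_PER_SOCKET = 128     # unified HBM3 per APU
APU_POWER_RANGE_W = (550, 750)  # per-APU power depending on configuration

total_cores = SOCKETS * CORES_PER_SOCKET
total_hbm_gb = SOCKETS * HBM3_GB_PER_SOCKET
total_power_min = SOCKETS * APU_POWER_RANGE_W[0]
total_power_max = SOCKETS * APU_POWER_RANGE_W[1]

print(f"{total_cores} CPU cores, {total_hbm_gb}GB HBM3")
print(f"APUs alone draw {total_power_min}-{total_power_max}W")
# → 96 CPU cores, 512GB HBM3
# → APUs alone draw 2200-3000W
```

At 2.2-3kW for the APUs alone before counting fans and I/O, the dual fan partitions discussed below make sense.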

These heatsinks are absolutely massive. We showed this off earlier in This is the Massive AMD Instinct MI300A Heatsink in the Gigabyte G383-R80-AAP1.

The APUs are installed one in front of the other, so the cooling requirements are substantial.

That is why there is another set of fans in another partition.

We often see one set of fans. Sometimes we see two sets of fans cooling offset areas. Rarely do we see two sets of fans moving air through the same components.

That second fan partition has another function. It is also designed to cool the other PCIe cards at the rear of the chassis. This is where one would expect to find the high-speed NICs, but we have heard of some other neat uses.

Next, let us get to the topology and block diagram.