Gigabyte B343-C40-AAJ1 Node Overview
The chassis houses ten nodes, each of which is a single-socket server.

Here is a node with an AMD EPYC CPU, DDR5 memory, and an M.2 SSD installed.

Starting at the rear of the node, here are its two 2.5″ drive slots.

Here are two 2.5″ SATA SSDs installed.

Here is the connector side of the chassis.

After the storage, we get to the rest of the node.

First, we get an AM5 socket. This can support either AMD EPYC or Ryzen 9000 series CPUs.

We also get four DDR5 DIMM slots for unbuffered memory. This is a two-channel, 2DPC (two DIMMs per channel) design.

Here we have DDR5 ECC UDIMMs installed.
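
For a rough sense of what the two-channel, 2DPC layout means for memory bandwidth, here is a quick back-of-the-envelope sketch. The DDR5-5600 data rate is an assumption for illustration only; 2DPC unbuffered configurations often run at lower speeds, so treat the result as an upper bound.

```python
# Back-of-the-envelope peak bandwidth for a two-channel DDR5 configuration.
# The DDR5-5600 data rate used below is an assumption for illustration only.
def ddr5_peak_bandwidth_gbs(mt_per_s: int, channels: int, bus_bytes: int = 8) -> float:
    """Theoretical peak in GB/s: transfers/s x bytes per transfer x channels."""
    return mt_per_s * 1e6 * bus_bytes * channels / 1e9

if __name__ == "__main__":
    # Two channels; note that 2DPC population often forces a lower data rate.
    print(f"Peak: {ddr5_peak_bandwidth_gbs(5600, channels=2):.1f} GB/s")  # ~89.6 GB/s
```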

Here is the AMD EPYC 4585PX CPU installed. This is a 16-core CPU with 3D V-Cache. These servers can take up to 170W TDP CPUs.

Next to the ASPEED AST2600 BMC, we get an M.2 slot. This supports M.2 22110 (110mm) SSDs. That is a small but nice feature because 110mm drives often support power loss protection (PLP).
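
If you want to check what a given M.2 drive reports, here is a hedged sketch that reads the NVMe Volatile Write Cache (VWC) field via nvme-cli. Many PLP drives report no volatile write cache, but this is only a heuristic and vendors differ, so the datasheet remains the authoritative source. It assumes nvme-cli is installed, its JSON output exposes a vwc field, and the script runs with sufficient privileges.

```python
# Hedged sketch: read an NVMe drive's Volatile Write Cache (VWC) field with
# nvme-cli as a rough hint about power loss protection. Many PLP drives report
# no volatile write cache, but this is a heuristic, not a guarantee.
import json
import subprocess

def has_volatile_write_cache(dev: str = "/dev/nvme0") -> bool:
    out = subprocess.run(
        ["nvme", "id-ctrl", dev, "-o", "json"],  # requires nvme-cli and root
        check=True, capture_output=True, text=True,
    ).stdout
    # Bit 0 of the VWC field indicates a volatile write cache is present.
    return bool(json.loads(out)["vwc"] & 0x1)

if __name__ == "__main__":
    print("Volatile write cache present:", has_volatile_write_cache())
```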

Next to that, we have an OCP-style slot with the rear I/O.

This includes the Intel i350 dual port 1GbE NIC.

Here is a look at the rear of the node.

Here is the rear I/O. Instead of a VGA port, we get a mini DP port.

We then get two USB 3 Type-A ports, two 1GbE ports, and then a management port.

We also get a PCIe Gen5 x16 slot on a riser.

Next, let us get to the block diagram.



I can totally see a server like this being filled with 9800X3Ds for hosting top-performance Minecraft servers.
How do the rear OCP slots work when shared with the nodes?
I am wondering both physically and logically. If each one of those needs four x4 PCIe cables for full bandwidth, how do they get connected?
@George, these will require OCP NICs from the 3.0 standard with multi-host capabilities, like a ConnectX-6. You get one or two physical connections into the NIC, and then the NIC has vSwitch/vRouter capabilities that present a separate NIC to the internal hosts, typically four per card (that’s why you need three for 10 nodes).
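
As a quick sanity check on the card-count math in that comment, here is a trivial sketch; the four-hosts-per-card figure is taken from the comment above rather than a confirmed spec for this system.

```python
# Card-count arithmetic from the comment above: with multi-host NICs that each
# expose four host interfaces (an assumed, typical figure), ten nodes need three cards.
import math

nodes = 10
hosts_per_nic = 4
print(math.ceil(nodes / hosts_per_nic))  # -> 3
```
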
Notably, each node seems to only have a Gen4 x4 connection to the multi-host NICs, and it’s from the B650 chipset, not direct from the CPU. That means you’re capped at about 60 Gbps, and it’s shared with all the other chipset functions. I also wonder if it will impact things like SR-IOV and VFIO (because of IOMMU groups).
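
To put rough numbers on that comment, here is a hedged sketch: the first function works out the raw PCIe Gen4 x4 line rate (about 63 Gbps before protocol overhead, in line with the ~60 Gbps figure above), and the second lists IOMMU groups from the kernel’s standard sysfs path, which is one way to see how chipset-attached devices end up grouped for VFIO. The grouping itself is platform-specific, so this is illustrative rather than specific to this node.

```python
# Two illustrative checks related to the comment above.
from pathlib import Path

def pcie_gen4_bandwidth_gbps(lanes: int = 4) -> float:
    """Raw PCIe Gen4 line rate: 16 GT/s per lane with 128b/130b encoding."""
    return 16.0 * lanes * (128 / 130)

def list_iommu_groups(root: str = "/sys/kernel/iommu_groups") -> dict[str, list[str]]:
    """Map IOMMU group number -> PCI device addresses, read from sysfs."""
    groups = {}
    for grp in sorted(Path(root).glob("*"), key=lambda p: int(p.name)):
        groups[grp.name] = [dev.name for dev in (grp / "devices").iterdir()]
    return groups

if __name__ == "__main__":
    # ~63 Gbps before protocol overhead, consistent with the ~60 Gbps cap above.
    print(f"PCIe Gen4 x4 raw: {pcie_gen4_bandwidth_gbps():.1f} Gbps")
    for grp, devs in list_iommu_groups().items():
        print(f"IOMMU group {grp}: {', '.join(devs)}")
```
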
Ten separate PCs in one box. Each PC can be leased (rented?) to a user who logs in to an online cloud hosting service and remotes into their own cloud PC. Ten users at once, each with their own respective PC. The one box requires less power to operate than ten separate boxes, each with its own PSU and corresponding PDU socket, to do the same job. Makes sense to me. I like it!
Also, it’s a physical hardware PC, so there are fewer issues with setting up virtualization (although that’s possible here too). Very neat server design. I can see this being useful for businesses running their own federated cloud PC services for users who need a separate work computer but maybe can’t afford to buy a whole new system on their own dime.