The 8x PCIe GPU AI server is changing in a big way. During the NVIDIA Computex 2025 keynote, we heard about Enterprise AI Factories, and a key component landed with OEMs just before the show: the new NVIDIA MGX PCIe Switch Board with ConnectX-8. This board replaces the traditional PCIe switch board used in 8x GPU servers and bundles in NVIDIA networking.
NVIDIA MGX PCIe Switch Board with ConnectX-8
Looking at the bottom of the board, we have the optical cages as well as the four NVIDIA ConnectX-8 NICs. Each ConnectX-8 NIC has a built-in PCIe switch, which grows from 32 lanes to 48 lanes in this generation. These are also PCIe Gen6 NICs capable of 800Gbps networking.

On the top we get the MCIO connections to the host as well as the PCIe x16 slots. Two GPU slots are assigned to each ConnectX-8 NIC on the other side.
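
To make that layout a bit more concrete, here is a minimal sketch of the per-NIC lane budget implied by the description above. The exact allocation (two x16 GPU slots plus an x16 MCIO uplink per NIC) is our assumption for illustration, not a published NVIDIA block diagram:

```python
# Hypothetical lane budget for one ConnectX-8 and its integrated 48-lane
# PCIe switch, assuming two x16 GPU slots plus an x16 MCIO uplink to the host.
CX8_SWITCH_LANES = 48  # up from 32 lanes in the prior generation

downstream_gpus = {"GPU slot A": 16, "GPU slot B": 16}  # two GPUs per NIC
host_uplink_lanes = 16  # assumed x16 MCIO connection back to the CPU

used = sum(downstream_gpus.values()) + host_uplink_lanes
print(f"Lanes allocated: {used} / {CX8_SWITCH_LANES}")  # 48 / 48 in this sketch
```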

In case you were wondering, that ninth slot is labeled as being for “Management,” and we typically saw BlueField-3 DPUs installed in those slots.

You can see from the MSI server shot above that this is a big change. In the Chenbro and Chenbro-like NVIDIA MGX platforms, there were four NIC slots below the main GPU area, with two GPUs sharing each NIC. The new design instead puts the optical cages below the PCIe slots, removing the need to add NICs to the system beyond perhaps a BlueField-3 DPU in that management slot.
Final Words
Aside from changing the architecture in this space for the first time in a decade, NVIDIA gets a PCIe Gen6 switched architecture before PCIe Gen6 CPUs are out. Furthermore, it allows pairs of GPUs to communicate over PCIe Gen6 x16 links and then access high-speed networking for scale-out as well. NVIDIA does not need to wait for Intel's or AMD's next-generation parts to move to PCIe Gen6. Instead, it can move the GPUs to Gen6 and 800Gbps networking without a new generation of host processors.
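
For a rough sense of why the Gen6 link speed matters here, a quick back-of-the-envelope comparison helps (raw link rates only; FLIT and protocol overhead trim a few percent off the PCIe figures):

```python
# Back-of-the-envelope bandwidth comparison; raw rates, ignoring protocol overhead.
def pcie_raw_gb_per_s(gt_per_s: float, lanes: int) -> float:
    """Approximate one-direction PCIe bandwidth in GB/s (1 GT/s ~ 1 Gb/s per lane)."""
    return gt_per_s * lanes / 8

gen5_x16 = pcie_raw_gb_per_s(32, 16)  # ~64 GB/s per direction
gen6_x16 = pcie_raw_gb_per_s(64, 16)  # ~128 GB/s per direction
port_800g = 800 / 8                   # an 800Gbps port moves ~100 GB/s

print(f"Gen5 x16 ~{gen5_x16:.0f} GB/s, Gen6 x16 ~{gen6_x16:.0f} GB/s, 800G port ~{port_800g:.0f} GB/s")
# Only the Gen6 x16 link has the headroom to keep an 800Gbps port fully fed.
```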
We also covered some thoughts on the BOM impacts of this board in the Substack:
What does the topology look like in this situation? I assume that there would simply be no alternative to most of the traffic going directly between GPUs and NICs on the switch, not through a root complex; but is it still technically a classic single-root arrangement dangling off the management card's root, just not moving much traffic through it, or do the ConnectX-8s act as roots for the GPUs, or vice versa?
“No touch”
Proceeds to touch everything…
Do you know what type of connector the 8 ports are? E.g., OSFP 400G? If so, it is a bit odd that they went from 8 ICs of CX7 400G mapped to 4 ports of 800G for the DGX H100 systems to the current setup of 4 ports of 800G to 8 ports of 400G. The CX8s are PCIe Gen6 but need an aux kit (via a cable and card, or just a cable) to enable the full speed in PCIe Gen5 servers. I take it from the article above that the board is made with PCIe Gen6, which would alleviate the need for the aux kit. There is a lot of potential power savings by reducing the number of ports in AI environments, and this seems to reverse that. Is there any NVLink involved with the systems above? It still seems like a little bit of a bottleneck with each GPU only getting 400Gbps.
I believe they are OSFP. 800Gbps is for InfiniBand, 400Gbps is for Ethernet, IIRC.
THIS is STH :-)
What are the dimensions of this board?