AMD’s Pensando came with some codenames that I really like. The AMD Pollara 400 AI NIC is shipping, and the AMD Vulcano 800G NIC is preparing for a next-generation launch. AMD is focusing on UltraEthernet as well as UALink for its scale-out and scale-up stories, and the NICs matter for those applications.
AMD Pollara 400 AI NIC, UltraEthernet, and UALink Updates
The first announcement is that the AMD Pensando Pollara 400 UltraEthernet RDMA NIC, which was announced at the last Advancing AI event, is now shipping.

The AMD Pollara 400 is the 400G device (really for PCIe Gen5 platforms) that incorporates UltraEthernet. When using RCCL, AMD’s version of NVIDIA’s NCCL, for scale-out collective communication, AMD says the Pollara 400 is faster: around 10% faster than the NVIDIA ConnectX-7 and around 20% faster than the Broadcom Thor2. This is a big deal since inefficient communication can cause GPUs to idle, making the overall workloads run slower. NVIDIA would be quick to highlight its Spectrum-X benefits with its GPUs, but this is the AMD ecosystem.
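
To make the collective communication piece concrete, here is a minimal sketch of the kind of scale-out traffic RCCL handles and that the NIC ultimately carries between nodes: an all-reduce of a gradient-sized tensor in PyTorch. This assumes a ROCm build of PyTorch (where the "nccl" backend is backed by RCCL) launched with torchrun; the tensor size and launch details are illustrative, not AMD’s.

```python
# Minimal all-reduce sketch. On ROCm builds of PyTorch the "nccl" backend
# is backed by RCCL, so the same script runs on AMD GPUs.
# Assumed launch: torchrun --nproc_per_node=<gpus> --nnodes=<nodes> script.py
import os
import torch
import torch.distributed as dist

def main():
    dist.init_process_group(backend="nccl")      # RCCL on ROCm, NCCL on CUDA
    local_rank = int(os.environ["LOCAL_RANK"])   # set by torchrun
    torch.cuda.set_device(local_rank)

    # Stand-in for a gradient bucket produced during backprop.
    grads = torch.randn(64 * 1024 * 1024, device="cuda")

    # This is the collective whose speed the RDMA NIC largely determines
    # once the traffic has to leave the node.
    dist.all_reduce(grads, op=dist.ReduceOp.SUM)
    grads /= dist.get_world_size()

    if dist.get_rank() == 0:
        print(f"all-reduce done across {dist.get_world_size()} ranks")
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```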

AMD says that UEC features like congestion control and load balancing can yield much higher performance at the cluster level. Again, for some perspective, NVIDIA would say it already has some of the features UEC is bringing deployed at scale in Spectrum-X AI training clusters.
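
As a toy illustration of what the load-balancing piece is about (this is not UEC code, just a sketch of the general idea behind packet-level spraying versus flow-level ECMP hashing, with made-up numbers):

```python
# Toy model: a few elephant flows pinned to links by flow hashing versus
# the same packets sprayed per-packet across all links.
LINKS = 4
flows = [10_000, 9_000, 500, 400, 300]   # packets per flow; two elephants

# Flow-level ECMP: every packet of a flow sticks to one link.
per_flow_load = [0] * LINKS
for i, pkts in enumerate(flows):
    per_flow_load[i % LINKS] += pkts     # i % LINKS stands in for a 5-tuple hash

# Packet spraying: packets of the same flow are spread across all links.
total = sum(flows)
sprayed_load = [total // LINKS + (1 if j < total % LINKS else 0) for j in range(LINKS)]

print("flow hashing   :", per_flow_load, "-> worst link", max(per_flow_load))
print("packet spraying:", sprayed_load, "-> worst link", max(sprayed_load))
```

The worst-case link carries roughly half the load in the sprayed case here, which is the basic intuition behind packet-level multipathing; congestion control then keeps those paths from overfilling their buffers.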

At the cluster scale, GPUs and HBM are often failure points, but the network is another point of failure. As a result, networking reliability is a key feature of cluster-level design. Modern AI clusters go well beyond single-node capabilities and reliability.

AMD also highlights that it can use lower-cost “Generic” UEC switches and operate clusters at larger scales.

Here is the partnerships slide. UEC is a huge force in the industry. As we go forward, expect that UEC will be adopted by all or almost all high-end Ethernet products, especially as we look at 800G server and accelerator infrastructure.

Next-gen, however, is where things get more exciting, with the AMD “Helios” Rack-Scale Architecture.

AMD is going to use UALink 1.0 to handle its scale-up. This is the open alternative to NVIDIA NVLink 5.0, and AMD says that it can scale to almost twice what NVIDIA can while also integrating components from multiple vendors.

One other neat announcement is that AMD has a vision of bringing the Fabric Manager into ROCm. That may seem like a small feature, but if you think of scaling to hundreds of thousands of accelerators with 800G connections, it involves moving more data at a higher rate than the entire public Internet. Managing the high-performance fabric is a big deal, so it is cool that this is coming to ROCm.
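
A quick back-of-envelope shows why. The numbers below are my own assumptions (100,000 accelerators, one 800Gbps link each, and a ballpark of roughly 1.5Pbps for aggregate public Internet bandwidth, a figure that varies a lot by source), not AMD’s:

```python
# Back-of-envelope on the "more data than the public Internet" point.
accelerators = 100_000          # assumption: "hundreds of thousands" scale
link_gbps = 800                 # one 800G connection per accelerator

cluster_tbps = accelerators * link_gbps / 1_000   # Gb/s -> Tb/s
internet_tbps = 1_500                             # ~1.5 Pb/s ballpark, estimates vary

print(f"cluster fabric : {cluster_tbps:,.0f} Tb/s")   # 80,000 Tb/s (80 Pb/s)
print(f"public Internet: {internet_tbps:,.0f} Tb/s (rough)")
print(f"ratio          : ~{cluster_tbps / internet_tbps:.0f}x")
```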

Still, there is more for 2026, including the Vulcano.
AMD Pensando Vulcano 800G AI NIC
The AMD Pensando Vulcano is an 800G NIC for next-generation PCIe Gen6 clusters and both the UALink and UltraEthernet eras.
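
For a rough sense of why an 800G NIC lines up with PCIe Gen6 the way the Pollara 400 lines up with Gen5, here is the raw per-direction x16 math, ignoring encoding and protocol overhead:

```python
# Raw per-direction PCIe x16 link rates (GT/s per lane * 16 lanes ~= Gb/s),
# before encoding and protocol overhead shave a bit off.
LANES = 16
gen5_gbps = 32 * LANES    # 512 Gb/s raw: fine for a 400G NIC, not enough for 800G
gen6_gbps = 64 * LANES    # 1,024 Gb/s raw: headroom for an 800G NIC

print(f"PCIe Gen5 x16 ~ {gen5_gbps} Gb/s, PCIe Gen6 x16 ~ {gen6_gbps} Gb/s")
```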

NVIDIA is already shipping ConnectX-8, but AMD having an alternative focused on an open ecosystem is a big deal. We recently covered how NVIDIA is working to lock non-NVIDIA NICs and Broadcom PCIe switches out of its ecosystem in our Substack.
Final Words
At the end of the day, if you want to play in 2026 AI clusters, you need not just AI chips, but also the ability to scale up and scale out. AMD having its own NIC may sound a lot like NVIDIA’s playbook because that is what is needed. On the other hand, by leaning into multi-vendor and open standards, AMD is doing something very different from NVIDIA.