AMD Vulcano 800G NIC Coming As AMD Outlines its UALink and UEC Scale Plans

AMD Pensando Vulcano 800G Networking

AMD’s Pensando team came up with some codenames that I really like. The AMD Pollara 400 AI NIC is shipping, and the AMD Vulcano 800G NIC is being prepared for a next-generation launch. AMD is focusing on UltraEthernet for its scale-out story and UALink for its scale-up story, and the NICs matter for both.

AMD Pollara 400 AI NIC, UltraEthernet, and UALink Updates

The first announcement is that the AMD Pensando Pollara 400 UltraEthernet RDMA NIC, introduced at the last Advancing AI event, is now shipping.

AMD Pollara 400 AI NIC Shipping

The AMD Pollara 400 is the 400G device (really for PCIe Gen5 platforms) that incorporates UltraEthernet. When using AMD’s counterpart to NVIDIA’s NCCL, called RCCL, for scale-out collective communication, AMD says that it is faster: around 10% faster than the NVIDIA ConnectX-7 and around 20% faster than the Broadcom Thor2. This is a big deal since inefficient collective communication can leave GPUs idling, making overall workloads run slower. NVIDIA would be quick to highlight its Spectrum-X benefits with its GPUs, but this is the AMD ecosystem.

AMD Pollara 400 AI NIC RCCL Versus NVIDIA ConnectX 7 And Broadcom Thor2
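
Since RCCL exposes the same collective API as NCCL, framework-level code largely carries over unchanged between the two ecosystems. As a rough illustration only, here is a minimal single-process, single-node all-reduce sketch against the RCCL/HIP API; the header path, buffer size, and error handling are assumptions that will vary with the ROCm install, and a real scale-out job would run one rank per node over the NIC fabric via MPI or a framework launcher.

```cpp
// Minimal sketch of an in-place all-reduce with RCCL (AMD's NCCL-compatible
// collective library). Illustrative only: single process, all visible GPUs.
#include <hip/hip_runtime.h>
#include <rccl/rccl.h>   // on some ROCm versions the header is just <rccl.h>
#include <vector>
#include <cstdio>

#define HIP_CHECK(x)  do { if ((x) != hipSuccess)  { std::printf("HIP error\n");  return 1; } } while (0)
#define RCCL_CHECK(x) do { if ((x) != ncclSuccess) { std::printf("RCCL error\n"); return 1; } } while (0)

int main() {
  int ndev = 0;
  HIP_CHECK(hipGetDeviceCount(&ndev));

  const size_t count = 1 << 20;  // 1M floats per GPU (arbitrary for the sketch)
  std::vector<ncclComm_t> comms(ndev);
  std::vector<hipStream_t> streams(ndev);
  std::vector<float*> bufs(ndev);

  // One communicator per visible GPU; RCCL keeps NCCL's function names.
  RCCL_CHECK(ncclCommInitAll(comms.data(), ndev, nullptr));

  for (int i = 0; i < ndev; ++i) {
    HIP_CHECK(hipSetDevice(i));
    HIP_CHECK(hipStreamCreate(&streams[i]));
    HIP_CHECK(hipMalloc((void**)&bufs[i], count * sizeof(float)));
    HIP_CHECK(hipMemset(bufs[i], 0, count * sizeof(float)));  // real code would fill gradients here
  }

  // Group the per-GPU calls so the collective launches across all devices at once.
  RCCL_CHECK(ncclGroupStart());
  for (int i = 0; i < ndev; ++i) {
    RCCL_CHECK(ncclAllReduce(bufs[i], bufs[i], count, ncclFloat, ncclSum,
                             comms[i], streams[i]));
  }
  RCCL_CHECK(ncclGroupEnd());

  for (int i = 0; i < ndev; ++i) {
    HIP_CHECK(hipSetDevice(i));
    HIP_CHECK(hipStreamSynchronize(streams[i]));
    HIP_CHECK(hipFree(bufs[i]));
    HIP_CHECK(hipStreamDestroy(streams[i]));
    RCCL_CHECK(ncclCommDestroy(comms[i]));
  }
  return 0;
}
```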

AMD says that UEC features like congestion control and load balancing can yield much higher performance at the cluster level than RoCEv2. Again, for perspective, NVIDIA would say it already has some of the features UEC is bringing deployed at scale in Spectrum-X AI training clusters.

AMD UEC Versus RoCEv2 Performance

At the cluster scale, GPUs and HBM are often failure points, but the network is another one. As a result, networking reliability is a key part of cluster-level design. Modern AI clusters go well beyond single-node capabilities and reliability.

AMD AI NIC For Cluster Uptime

AMD also highlights that it can use lower-cost “Generic” UEC switches and operate clusters at larger scales.

AMD Higher Scale At Lower Cost Using UEC

Here is the partnerships slide. UEC is a huge force in the industry. Going forward, expect UEC to be adopted by all or almost all high-end Ethernet products, especially as we look at 800G server and accelerator infrastructure.

AMD UEC Partners

Next-gen, however, is where things get more exciting, with the AMD “Helios” Rack-Scale Architecture.

AMD Helios Rack Scale Architecture Coming 2026

AMD is going to use UALink 1.0 to handle its scale-up networking. This is the open alternative to NVIDIA NVLink 5.0, and AMD says that it can not only scale to nearly twice as many accelerators as NVIDIA, but also integrate components from multiple vendors.

AMD UALink 1.0 Versus NVIDIA NVLink 5.0 Advancing AI 2025

One other neat announcement is that AMD has a vision of bringing the Fabric Manager into ROCm. That may seem like a small feature, but if you think about scaling to hundreds of thousands of accelerators with 800G connections, that involves moving data at a higher aggregate rate than the entire public Internet. Managing the high-performance fabric is a big deal, so it is cool that it is coming to ROCm.

AMD ROCm AI Lifecycle Management Includes A Fabric Manager In 2026
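
To put a rough number on that claim, here is a quick back-of-envelope calculation. The 100,000-NIC cluster size and the roughly 1.5 Pbps figure used for total public Internet capacity are my own illustrative assumptions, not AMD numbers, but even with generous error bars the fabric is moving far more data than the Internet does.

```cpp
// Back-of-envelope check on the aggregate fabric bandwidth claim above.
// Both input figures are illustrative assumptions, not measured values.
#include <cstdio>

int main() {
  const double nic_gbps      = 800.0;     // one 800G NIC port
  const double nic_count     = 100000.0;  // hypothetical accelerator/NIC count
  const double internet_pbps = 1.5;       // rough assumption for public Internet capacity

  const double cluster_pbps = nic_gbps * nic_count / 1.0e6;  // Gbps -> Pbps
  std::printf("Aggregate NIC bandwidth: %.0f Pbps, roughly %.0fx the assumed Internet capacity\n",
              cluster_pbps, cluster_pbps / internet_pbps);
  return 0;
}
```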

Still, there is more for 2026 including the Vulcano.

AMD Pensando Vulcano 800G AI NIC

The AMD Pensando Vulcano is an 800G NIC for next-generation PCIe Gen6 clusters and for both the UALink and UltraEthernet eras.

AMD Pensando Vulcano 800G Networking

NVIDIA is already shipping the ConnectX-8, but AMD having an alternative focused on an open ecosystem is a big deal. We recently covered how NVIDIA is working to lock non-NVIDIA NICs and Broadcom PCIe switches out of its ecosystem on our Substack.

NVIDIA Just Ended Broadcom’s Chances in a Traditional AI Segment with This ($4000+ of BOM Opportunity Locked Out) by Patrick Kennedy, on Substack

Final Words

At the end of the day, if you want to play in 2026 AI clusters, you need not just AI chips, but also the ability to scale up and scale out. AMD having its own NIC may sound a lot like NVIDIA’s playbook because that is what is needed. On the other hand, by leaning into multi-vendor and open standards, AMD is taking a very different path from NVIDIA.
