Something that we have been working on in the lab is the Mellanox NVIDIA BlueField-2 DPU series. We have several models, and there is a misconception that we keep hearing in the market: that these NICs are very similar to the typical ConnectX-6 offload NICs one may use for Ethernet or InfiniBand. (NVIDIA has VPI DPUs that can run as Ethernet and/or InfiniBand, like some of its offload adapters.) We wanted to show a different view of the NIC to highlight a key differentiator.
Logging Into a Mellanox NVIDIA BlueField-2 DPU
First off, the 25GbE and 100GbE BlueField-2 DPUs that we have do not just have high-speed ports. There is also a 1GbE port. A useful mental model is a standard Xeon D or Atom-based server, where we have primary network ports plus an out-of-band management port. The 1GbE port is that management port.
When we plug the NIC into a system, we can see the SoC management interface on the NIC enumerated. Aside from the OOB management port, we can also access the NIC via an rshim driver over PCIe, as well as over a serial console interface.
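As a quick sketch of those two access paths from the host side (the rshim device name and the 192.168.100.x tmfifo addressing are the defaults we saw on our setup; they may differ elsewhere):

```shell
# Option 1: serial console to the DPU over PCIe, via the rshim driver
sudo screen /dev/rshim0/console 115200

# Option 2: SSH over the tmfifo virtual network interface
# (the host side is typically 192.168.100.1 and the DPU answers on 192.168.100.2)
ssh ubuntu@192.168.100.2
```

Either path lands you at the same Ubuntu login on the DPU's Arm cores, independent of the high-speed ports.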
In the host system, we have tools to flash different OSes onto the NIC, but since the default OS is Ubuntu 20.04 LTS, that is what we wanted. Here is a quick look (the default login is ubuntu/ubuntu on the NICs):
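For reference, re-imaging from the host goes over the same rshim interface. A hedged sketch of how that looks with NVIDIA's bfb-install tool (the .bfb image filename here is a placeholder, not a file from this article):

```shell
# Push a BlueField boot stream (.bfb image) to the DPU over PCIe/rshim
# (the image filename is a placeholder for whichever OS image you download)
sudo bfb-install --bfb your_image.bfb --rshim rshim0
```

We did not need this step since the card shipped with Ubuntu 20.04 LTS already installed.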
As you can see, we have 16GB of memory on the 100GbE card and 8x Arm Cortex-A72 cores.
Something that we were not expecting, but that makes sense, is that docker was already installed on the base image, so we could use sudo docker ps right away. We can also see the various network interfaces.
Here we can see the high-speed ports (down in this screenshot) but also two other ports that are important:
- oob_net0 is the out-of-band management port. There is no web GUI, but this is similar to an out-of-band IPMI port where one can access the SoC outside of the main NIC port data path.
- tmfifo_net0 is the port we use to connect to the DPU over the host's PCIe link.
- docker0 is the network for the DPU's docker installation, not the host's docker installation.
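For anyone following along on their own card, the checks behind that screenshot are just two commands run on the DPU itself (output will of course vary by unit):

```shell
# Confirm docker is present and running on the DPU's Arm cores
sudo docker ps

# Brief one-line-per-interface view, which shows the high-speed ports
# alongside oob_net0, tmfifo_net0, and docker0
ip -br addr show
```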
What is interesting here is that the higher-speed networking ports are being shared with the host, but the NIC has its own OS and networking stack.
Overall, the key misconception some have is that the BlueField-2 is a re-brand of a ConnectX-6 NIC. Hopefully these features show that it is effectively its own server. There is high-speed networking, out-of-band management, an 8-core CPU, 16GB of RAM, 16GB of on-device storage, and ways to get to a console outside of the OOB/host ports. These cards are low-power CPU/low-RAM servers themselves. That is why one of the key applications for the BlueField-2's early adopters is as a firewall solution.
More on DPUs in the very near future, but we have many of these so there will be more coming over the next few weeks on STH.