In 2022 we are going to see a bigger push towards 100GbE networking. 25GbE is still strong, but as 400GbE and faster switches, such as the Marvell/Innovium Teralynx 7-based 32x 400GbE switch, and the 800GbE generation of switches arrive, it will become more commonplace to see 100GbE going to individual servers. With that trend starting, we wanted to get ahead of it and look at a new 100GbE option from Intel, the E810-CQDA2.
Intel E810-CQDA2 Dual-Port 100GbE NIC
The Intel E810-CQDA2 is a low-profile card, and one can tell this is a newer generation part from the massive attention paid to heat dissipation. There is a large heatsink along with small fins on the QSFP28 cages.
The other side of the card is relatively barren. We can, however, see that this is a PCIe Gen4 x16 card.
We are showing the full-height bracket in these photos, but the NIC also has a low-profile bracket option. One can see the two QSFP28 cages on the bracket.
The physical attributes are only part of the story with this NIC. Having looked at the hardware, let us get into some of the big changes with these new NICs. Specifically, this card is part of Intel’s foundational NIC series, and that definition is changing as the series evolves.
As you can see here, Intel is showing a higher-level feature set than it did in previous generations. This is not a DPU/IPU like the Intel Mount Evans DPU IPU. Instead, it is a more basic connectivity option for adding networking to a server.
Intel Ethernet 810 Series Functionality
As we move into the 100GbE generation, NICs require more offload functionality. Dual 100GbE is not too far off from a PCIe Gen4 x16 slot’s bandwidth, so offloads are important to keep CPU cores free. Without them, one can see eight cores in a modern system consumed just pushing network traffic, wasting valuable resources. With the Ethernet 800 series, Intel needed to increase feature sets to allow systems to handle higher speeds and also to stay competitive. To do so, there are primarily three new technologies: ADQ, NVMeoF, and DDP. We are going to discuss each.
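To put that "not too far off" statement in perspective, here is a quick back-of-the-envelope calculation of our own (raw link rates, ignoring protocol overhead):

```python
# Rough bandwidth math for a PCIe Gen4 x16 slot versus a dual 100GbE NIC
# (illustrative figures only, not vendor specifications).
PCIE_GEN4_PER_LANE_GTS = 16.0          # GT/s per lane
PCIE_ENCODING_EFFICIENCY = 128 / 130   # 128b/130b encoding
LANES = 16

pcie_gen4_x16_gbps = PCIE_GEN4_PER_LANE_GTS * PCIE_ENCODING_EFFICIENCY * LANES  # ~252 Gbps
dual_100gbe_gbps = 2 * 100

print(f"PCIe Gen4 x16 raw bandwidth: ~{pcie_gen4_x16_gbps:.0f} Gbps per direction")
print(f"Dual 100GbE line rate:        {dual_100gbe_gbps} Gbps")
print(f"NIC can demand ~{dual_100gbe_gbps / pcie_gen4_x16_gbps:.0%} of the slot before overhead")
```

At line rate, the two ports can ask for roughly 80% of what the slot delivers in each direction, which is why per-packet CPU work becomes the bottleneck without offloads.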
Application Device Queues (ADQ) are important on 100Gbps links. At 100GbE speeds, there are likely different types of traffic on the link. For example, there may be an internal management UI application that is OK with a 1ms delay every so often, but there can be a critical sensor network or web front-end application that needs a predictable SLA. That is the differentiated treatment that ADQ is trying to address.
Effectively, with ADQ, Intel NICs are able to prioritize network traffic based on the application.
When we looked into ADQ, one of the important aspects is that prioritization needs to be defined. That is an extra step, so this is not necessarily a “free” feature since there is likely some development work involved. Intel has some great examples with Memcached, but in one server Memcached may be the primary application, and in another, it may be an ancillary function, which means that prioritization needs to happen at the customer/solution level. Intel is making this relatively easy, but it is an extra step.
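ADQ configuration itself happens through the ice driver and Linux traffic control (tc), which is outside the scope of this piece, but the application side generally needs to mark its own traffic so it can be steered. As a minimal, hedged sketch (the priority value and the endpoint are hypothetical; the mapping of priorities to queues comes from whatever tc/ADQ configuration an administrator applies), an application could tag its sockets like this on Linux:

```python
import socket

# Hypothetical priority value; the meaningful mapping of priority -> queue set
# is defined by the administrator's tc/ADQ configuration, not by this code.
LATENCY_SENSITIVE_PRIORITY = 3

def open_prioritized_socket(host: str, port: int) -> socket.socket:
    """Open a TCP socket and mark its traffic with a Linux socket priority."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # SO_PRIORITY sets the priority used by Linux queuing disciplines to steer traffic.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_PRIORITY, LATENCY_SENSITIVE_PRIORITY)
    sock.connect((host, port))
    return sock

if __name__ == "__main__":
    # Example usage with a hypothetical endpoint, e.g. a latency-sensitive
    # Memcached client opening its connections with a marked priority.
    client = open_prioritized_socket("198.51.100.10", 11211)
    client.close()
```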
NVMeoF is another area where there is a huge upgrade. In the Intel Ethernet 700 series, Intel focused on iWARP for its NVMeoF efforts. At the same time, some of its competitors bet on RoCE. Today, RoCEv2 has become extremely popular. Intel is supporting both iWARP and RoCEv2 in the Ethernet 800 series.
The NVMeoF feature is important since that is a major application area for 100GbE NICs. A PCIe Gen3 x4 NVMe SSD is roughly equivalent to a 25GbE port’s worth of bandwidth (as we saw with the Kioxia EM6 25GbE NVMe-oF SSD), so a dual 100GbE NIC provides roughly as much bandwidth as 8x Gen3-era NVMe SSDs. By adding support for NVMeoF, the Intel 800 series Ethernet NICs such as the E810 series become more useful.
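As a rough sanity check on that comparison (our own math, assuming a practical Gen3 x4 SSD throughput of about 3.2GB/s rather than the raw link rate):

```python
# Rough equivalence between NVMe SSD throughput and Ethernet port speed.
# A PCIe Gen3 x4 link is ~31.5 Gbps raw, but real-world Gen3 SSDs tend to
# top out near ~3.2 GB/s, i.e. close to a 25GbE port.
gen3_ssd_gbps = 3.2 * 8        # assumed practical SSD throughput in Gbps
port_25gbe_gbps = 25
dual_100gbe_gbps = 200

print(f"One Gen3 x4 SSD: ~{gen3_ssd_gbps:.1f} Gbps vs. a 25GbE port at {port_25gbe_gbps} Gbps")
print(f"Dual 100GbE covers roughly {dual_100gbe_gbps / port_25gbe_gbps:.0f} such SSDs")
```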
What is more, one can combine NVMe/TCP and ADQ to get closer to some of the iWARP and RoCEv2 performance figures.
Dynamic Device Personalization, or DDP, is perhaps the other big feature of this NIC. Part of Intel’s vision for its foundational NIC series is that the costs stay relatively low. As such, there is only so large an ASIC one can build while keeping costs reasonable. While Mellanox tends to simply add more acceleration/offload in each generation, Intel built some logic that is customizable.
This is not a new technology. The Ethernet 700 series “Fortville” adapters had the feature; however, it was limited in scope. Not only were there fewer options, but the customization was effectively limited to adding a single DDP protocol given the limited ASIC capacity.
With the Intel Ethernet 800 series, we get more capacity to load custom protocol packages into the NIC. Aside from the default package, the DDP package for communications (Comms) was one of the first to be made freely available.
Here is a table of what one gets for protocols and packet types both by default, and added by the Comms DDP package:
As you can see, we get features such as MPLS processing added with the Comms package. These DDP portions can be customized as well so one can use the set of protocols that matter and have them load at boot time while trimming extraneous functionality.
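For those curious how a package gets applied: with the Linux ice driver, the DDP package is read from the firmware search path at driver load time, and the driver drops to a reduced-functionality Safe Mode if no package can be loaded. The paths below are the conventional locations described in the ice driver documentation, so treat them as assumptions to verify on your distribution; here is a small sketch for checking what is staged on a host:

```python
from pathlib import Path

# Conventional firmware locations for ice DDP packages on Linux (assumption;
# check your distribution's ice driver documentation for the exact paths).
DDP_DIRS = [
    Path("/lib/firmware/updates/intel/ice/ddp"),
    Path("/lib/firmware/intel/ice/ddp"),
]

def list_ddp_packages() -> list[Path]:
    """Return any staged DDP package files (*.pkg) found in the usual paths."""
    packages = []
    for directory in DDP_DIRS:
        if directory.is_dir():
            packages.extend(sorted(directory.glob("*.pkg")))
    return packages

if __name__ == "__main__":
    found = list_ddp_packages()
    if found:
        for pkg in found:
            print(f"Staged DDP package: {pkg}")
    else:
        print("No DDP packages found; the driver would fall back to Safe Mode.")
```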
Next, we are going to take a quick look at some of our experience with the NIC including driver, performance, and power consumption before getting to our final words.