At SC24, we found a module we have discussed on STH but had never photographed. The NVIDIA Cedar module is perhaps best known as the quad NVIDIA NIC module found in NVIDIA DGX H100 systems. Most OEM HGX H100 platforms we have seen use slotted NICs instead. Finally, at SC24, we found the quad NIC modules.
Eviden Shows the Quad NVIDIA ConnectX-7 Cedar Module
We walked by the Eviden BullSequana XH3515-HMQ blade at SC24. This is a liquid-cooled 8-node NVIDIA GH200 server, so there are eight NVIDIA Grace 72-core CPUs and eight Hopper GPUs per module. Eviden says there is 480GB of LPDDR memory and 360GB of HBM usable per node. One has to wonder if these systems use the 120GB/ 96GB GH200 models designed for higher CPU memory bandwidth.
On the left side of the above picture, there are two NVIDIA Cedar modules. The top one is covered by a liquid cooling block. The bottom one, however, shows us the four ConnectX-7 NICs on the module.
Here is another angle of the four NICs on the module.
Final Words
A funny story here is that a prominent industry analyst read our NVIDIA Cedar Fever 1.6Tbps Modules Used in the DGX H100 piece and started calling the module “Cedar Fever.” There are folks in the industry who still say Cedar Fever today. The advantage of this module is massive space savings. The disadvantage is that it limits one to NVIDIA ConnectX-7 NICs instead of offering options for other NICs like the BlueField-3 SuperNIC or one from Broadcom or AMD. Still, it is cool to see the custom hardware. We recently found a new dual NIC NVIDIA module that looks like it is going to be delayed. Cedar, in contrast, has been shipping for some time.