For a fun weekend piece, we thought we would show folks what a higher-end passive DAC looks like. This is the massive NVIDIA 800G OSFP to two 400G QSFP112 passive splitter DAC cable that we need to connect our NVIDIA SN5610 64-port 800GbE switch to ConnectX-8 NICs (the C8240 versions).
NVIDIA 800G OSFP to 2x 400G QSFP112 Passive Splitter DAC Cable
The NVIDIA part number 980-9I80Q-00N02A (legacy MCP7Y10-N02A) is a 2.5m DAC, and it is huge because it carries 112G PAM4 signaling.

On one end, we have the 800G OSFP (Octal Small Form Factor Pluggable) connector, which is finned. A few weeks ago, we covered OSFP Finned and Flat Top: The 400G and 800G Experience, and this is an example of a finned OSFP connector. You can see the cooling fins on the top of the connector. Instead of the switch having cooling on the OSFP cages, the cooling is integrated into the DAC/optic connector.

This is important because we have eight lanes of 112G PAM4, for effectively 8x 100Gbps channels. Using 112G SerDes means that 51.2Tbps switches with 64 ports of 800GbE are practical.
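To make that concrete, here is a quick back-of-the-envelope sketch in Python. The constant names are ours, and we are using the common simplification that a 112G PAM4 lane carries roughly 100Gbps of usable payload after encoding overhead:

```python
# Back-of-the-envelope lane math for an 800GbE OSFP port and a 51.2Tbps switch.
# A 112G PAM4 electrical lane carries roughly 100Gbps of usable payload.

LANE_PAYLOAD_GBPS = 100   # ~100Gbps usable per 112G PAM4 lane
OSFP_LANES = 8            # OSFP = Octal, eight lanes per port
SWITCH_PORTS = 64         # NVIDIA SN5610 is a 64-port 800GbE switch

port_gbe = OSFP_LANES * LANE_PAYLOAD_GBPS       # 8 x 100G = 800GbE per port
switch_tbps = SWITCH_PORTS * port_gbe / 1000    # 64 x 800G = 51.2Tbps

print(f"Per OSFP port: {port_gbe}GbE")
print(f"Switch capacity: {switch_tbps}Tbps")
```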

Another use of those ports is to split an 800G port out to two 400G ports. Since 400G also uses 112G PAM4, this is straightforward: it is effectively splitting 8x 112G into 2x 4x 112G. As you can see on the other end of the cable, we have dual QSFP112 connectors. Each has four 112G PAM4 lanes, giving us 4x 100Gbps communication channels. They are red and blue on this cable to make it easier to see which connector is which.
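If it helps to visualize the split, here is a minimal sketch of the lane mapping. The red/blue assignment of lane groups below is illustrative on our part, not taken from a pinout:

```python
# Sketch of the 800G OSFP to 2x 400G QSFP112 breakout: eight 112G PAM4
# lanes on the OSFP end are split into two groups of four. Which group
# lands on the red vs. blue connector is illustrative here.

osfp_lanes = list(range(8))        # lanes 0..7 on the OSFP end

qsfp112_red = osfp_lanes[:4]       # first four lanes to one QSFP112
qsfp112_blue = osfp_lanes[4:]      # last four lanes to the other QSFP112

for name, lanes in (("red QSFP112", qsfp112_red), ("blue QSFP112", qsfp112_blue)):
    print(f"{name}: lanes {lanes} -> {len(lanes) * 100}GbE")  # 4x 100G = 400GbE
```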

The other difference is that these are QSFP112. That may not seem like a big deal, but QSFP28 is 4x 28G NRZ for 4x 25Gbps links, or 100GbE. QSFP56, like we saw in our recent MikroTik CRS812-8DS-2DQ-2DDQ-RM Review, is 4x 56G PAM4 for 4x 50Gbps links, or 200GbE. QSFP112 is 4x 112G PAM4 for 4x 100Gbps links, or 400GbE.

This matters because with four channels, we are able to drive 400GbE. On that MikroTik CRS812, and on other QSFP56-DD switches, we only have 56G PAM4, so 400GbE requires eight channels.
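Here is the same comparison as a few lines of Python. The per-lane payload rates are the usual 25/50/100Gbps figures after encoding overhead; the dictionary layout is ours:

```python
# QSFP family lane math. Each entry: (lanes, usable Gbps per lane, signaling).
form_factors = {
    "QSFP28":    (4, 25,  "28G NRZ"),
    "QSFP56":    (4, 50,  "56G PAM4"),
    "QSFP112":   (4, 100, "112G PAM4"),
    "QSFP56-DD": (8, 50,  "56G PAM4"),  # double-density: eight lanes for 400GbE
}

for name, (lanes, payload_gbps, signaling) in form_factors.items():
    print(f"{name}: {lanes}x {signaling} -> {lanes * payload_gbps}GbE")
```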
Final Words
While DACs are commonly used, and we have been using breakout DACs for years, the 800Gbps/51.2Tbps switch generation is a lot more complex. Since we also showed Cheap QSFP56-DD 400G DR4 Intel Silicon Photonics Optics this week, we thought it would be worth highlighting that at 400GbE speeds, it is common to see OSFP (finned and flat top), QSFP56-DD, and QSFP112 optics and DACs. While it may seem like 400GbE is a straightforward progression from 100GbE, there is a bit more complexity there. This NVIDIA cable is a good example of having to align the 112G PAM4 signaling and then match the connectors to ensure you can connect an NVIDIA ConnectX-8 NIC to an NVIDIA Spectrum-X 51.2T switch. We received a few questions after yesterday's cheap used optics piece, and the reason those are inexpensive is that they were used more in the previous generation of 400GbE components at hyperscalers like Meta.

STH team, in the future, if you have the capability to measure or get information through stats/telemetry, it would be nice to have a power consumption comparison of different standard DACs. For optical modules it is more or less clear, because most of the power is used by the optical components, but for DACs it is a bit of a gray area.
Thank you very much for educating us about the different standards (QSFP56-DD, QSFP112, QSFP56, etc.); it is very useful. An article about backward compatibility and the availability of breakout combinations (if they exist) is another topic :)
Nothing personal, but I really dislike the photos on this site. I can never figure out what I am looking at, and there are too many garbage photos of the sides of devices that have nothing of interest. My suggestion is a lot fewer photos, with annotations on the ones you use to point out the areas of interest.
“Flat-top” is officially called OSFP-RHS. RHS means Riding Heat Sink. It’s important to use the correct names for these components in case your readers would like to learn more about them from other sources.
Link to standard: https://osfpmsa.org/assets/pdf/OSFP_Module_Specification_Rev5_22.pdf
@Mike glad to see I am not the only one. The topics are cool, but most articles are infuriatingly short on details.
Ummm, this is a DAC, so you've got three connectors and a cable in the middle. It's not that hard to know what's going on since there are two on one side and one on the other. Maybe you'd want to see cm-by-cm photos of the cable? I'd like to see inside, but these types of cables are hundreds of dollars, so if they're going to use them again, I'd understand why they don't show the inside.
@Bill, if you're in the NVIDIA world, they actually call them finned and flat. You might not know this, but that's what they call them, so that's what everyone in that ecosystem calls them. Here's a great link where this is a Fin to Flat cable. https://docs.nvidia.com/networking/display/bluefield2firmwarev24421000/validated+and+supported+cables+and+modules
NVIDIA’s so big they’re calling stuff whatever they want.
@Neal, thanks, good to know. I don't operate in the networking space that directly connects to the GPUs, so my knowledge gap there makes sense. It appears QSFP and OSFP-RHS are both being referred to as “flat”. I'm sure that causes confusion for no one /s