Times are changing. With the new generation of servers that support PCIe Gen5, it is now possible to have 400GbE NICs serviced by a single PCIe slot. As a result, demands for server and storage bandwidth are going to increase, even if only to multiple 200GbE links per slot. That is why we are taking a look at the FS 400Gbase-SR8 optical transceiver today. Even if your network stack is not designed for 400Gbps nodes, aggregating more traffic onto a single switch port is going to be a key theme in 2023.
FS 400Gbase-SR8 QSFP-DD 400GbE Overview
These new 400Gbps optics come in a new form factor: QSFP-DD. Here are the modules.
The modules utilize the Inphi 850nm chip. At 400Gbps speeds, the DSPs being used become a big deal. Inphi, a company known for its awesome optical PHYs, was acquired by Marvell.
These are the “lower-cost” 400GbE optics. Currently, they sell for around $499 each. That is more than the 100Gbps and 200Gbps generation optics. At the same time, these offer 2-4x the density, which helps explain why they cost more and use more power.
Using MPO/MTP-16, these transceivers have a range of 70m over OM3 and 100m over OM4 multi-mode fiber.
We were able to get 400Gbps speeds over these optics, but it was a bit of a challenge. Perhaps the biggest challenge was having test gear capable of pushing 400Gbps through a pair of these modules. For that, FS let us borrow the N9510-64D, the company’s 64-port 400GbE switch.
Something that is really striking about the N9510-64D platform with these optics is just how much power a modern switch uses. With 64 of even these short-range optics, at roughly 10W per 400Gbase-SR8 module, that is 640W for the optics alone, and that does not include the switch itself.
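As a quick sketch of that power math (using the approximate 10W per-module figure above, with the caveat that real-world draw varies by module and temperature):

```python
# Rough optics power-budget sketch for a fully populated 400GbE switch.
# The 10W figure is the approximate per-module 400Gbase-SR8 draw cited above.
PORTS = 64
WATTS_PER_MODULE = 10  # approximate short-range 400Gbase-SR8 power draw

optics_power_w = PORTS * WATTS_PER_MODULE
print(f"Optics alone: {optics_power_w}W")  # 640W, before the switch itself
```

The same back-of-the-envelope approach works for sizing rack power and cooling before deploying a fully loaded switch.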
We will have more on the N9510-64D in the next few weeks when we have our review of that platform. Stay tuned!