Why We Use 100GbE Switches and QSFP28 to 4x SFP28 DACs for 25GbE

FS DACs QSFP28 To 4x SFP28

As a quick weekend piece, we wanted to address a question we sometimes get when doing the 32x 100GbE QSFP28 switch reviews: why do we use 100GbE port switches even for our 25GbE networking in the lab? The reason is fairly simple: most 100GbE switches accept optics, DACs, and even breakout DACs, which lets us service both 25GbE and 100GbE gear from 100GbE ports.


A few months ago, we published our What is a Direct Attach Copper (DAC) Cable? article. One of the points we made was that using DACs instead of optics is often less expensive and uses less power. The trade-off is reach. Many switches are designed with this in mind. One can see on the FS S5860-20SQ switch we reviewed almost a year ago that the QSFP+ ports are labeled as “40G Breakout” ports, specifically to note that those ports can operate in 4x 10GbE mode. The Q in QSFP+ (the 10/40Gbps era) and QSFP28 (the 25/100Gbps era) stands for quad, or four.
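To make the quad idea concrete, here is a minimal Python sketch of what breakout mode does logically: one physical quad port fans out into four sub-interfaces, each at the lane speed. The port names here are hypothetical; the exact breakout naming and configuration vary by switch operating system.

```python
# Minimal sketch of breakout: one quad port exposes four logical links.
# Port naming is hypothetical; real naming varies by switch OS.

def breakout(port: str, lane_gbps: int, lanes: int = 4):
    """Return the logical sub-interfaces a quad port exposes in breakout mode."""
    return [(f"{port}/{lane}", lane_gbps) for lane in range(1, lanes + 1)]

# QSFP+ era: a 40GbE port becomes 4x 10GbE
print(breakout("Port49", 10))
# QSFP28 era: a 100GbE port becomes 4x 25GbE
print(breakout("Port49", 25))
```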

FS S5860 20SQ FS DACs And Optics 1

We had a few cables in the lab from recent FS.com pieces, so we figured we would show what some of these look like. Here is a QSFP28 to QSFP28 cable that can handle 100GbE between two 100GbE QSFP28 ports.

FS DACs QSFP28 To QSFP28

These DACs are, in some ways, the copper cable equivalent of MPO/MTP cables. When we did the recent FS QSFP28-100G-SR4 v. QSFP28-100G-IR4 Differences piece, we noted that the 100G-SR4 optics use 8 of the 12 fibers in MPO/MTP cables. Those 8 fibers form four pairs, one transmit and one receive fiber per lane. That contrasts with the 100G-IR4, which multiplexes four wavelengths onto each of two fibers using CWDM. The same four-lane structure that lets the 100G-SR4 run over four fiber pairs is why we can break a QSFP28 port out to 25GbE links using QSFP28 DACs.
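As a quick sketch of that difference, here is how the two optics use the cable plant, using the standard 100GbE per-lane signaling rate of 25.78125Gbps (25Gbps payload plus 64b/66b encoding overhead); the dictionary layout is just for illustration.

```python
# Sketch: how 100G-SR4 and 100G-IR4 carry the same four 25G lanes.
LANE_GBPS = 25.78125  # standard 100GbE lane rate (25G payload + 64b/66b)

optics = {
    # SR4: one lane per fiber, 4 Tx + 4 Rx fibers (8 of the 12 in MPO/MTP)
    "100G-SR4": {"fibers_used": 8, "lanes_per_fiber": 1},
    # IR4: four CWDM wavelengths on each of 2 fibers (1 Tx, 1 Rx)
    "100G-IR4": {"fibers_used": 2, "lanes_per_fiber": 4},
}

for name, o in optics.items():
    lanes_per_direction = (o["fibers_used"] // 2) * o["lanes_per_fiber"]
    print(f"{name}: {o['fibers_used']} fibers, "
          f"{lanes_per_direction} lanes/direction, "
          f"{lanes_per_direction * LANE_GBPS:.3f}Gbps signaling")
```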

FS DACs QSFP28 DAC MTP 12 Pro Fiber On Top

As the name suggests, the QSFP28 (remember, “Q” is for quad) combines four SFP28 lanes. The 28 refers to the roughly 28Gbps maximum signaling rate of each lane, which manifests as 25GbE once encoding overhead is accounted for. As such, one can think of 4x SFP28 connectors combining into one QSFP28 port, or 4x 25GbE going into 1x 100GbE. This is not a coincidence, and we use these cables for many of the 25GbE (and backward compatible 10GbE) devices in the lab.
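The arithmetic behind the naming is straightforward; here is a quick sketch (the 64b/66b figure is the standard 25GbE line encoding, and the ~28Gbps ceiling is what SFP28 is rated for):

```python
# Quick arithmetic behind SFP28/QSFP28 naming (a sketch, not a spec).
payload_gbps = 25                   # 25GbE payload rate per lane
lane_gbps = payload_gbps * 66 / 64  # 64b/66b encoding -> 25.78125Gbps on the wire
lanes = 4                           # "Q" is for quad

# SFP28 is rated for lanes up to roughly 28Gbps, leaving headroom over 25.78125.
print(f"per-lane signaling: {lane_gbps}Gbps")
print(f"{lanes} lanes: {lanes * lane_gbps}Gbps signaling "
      f"for {lanes * payload_gbps}GbE of payload")
```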

FS DACs QSFP28 To 4x SFP28

The reason we usually have SFP+/SFP28 DACs, QSFP28 breakout DACs, and QSFP28 DACs is that they tend to cost much less than fiber. DACs also tend to be a bit more permissive with vendor coding, since vendors know both ends of a DAC are fixed and cannot be mixed and matched the way optics can. Between the lower power, lower cost, and the ease of moving cables between devices, we use a lot of DACs in the lab.

FS DACs SFP28 DAC 4x SFP28 To QSFP28 Breakout QSFP28 DAC

The other key benefit is that DACs use less power, which helps a bit with cooling in racks. The downside, of course, is that we tend to only be able to use DACs within racks due to the shorter reach. With the QSFP28 100Gbps generation, DACs have become much thicker, and they are on track to get even thicker as we move to the 400Gbps generation and beyond.
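As a rule-of-thumb helper, the reach trade-off looks something like the sketch below. The thresholds are typical figures for the 25/100GbE generation (passive DACs around 3m, AOCs to roughly 100m), not hard limits, and they vary by vendor and speed.

```python
# Rule-of-thumb interconnect picker. Thresholds are typical figures for
# the 25/100GbE generation, not hard limits; they vary by vendor and speed.

def pick_interconnect(distance_m: float) -> str:
    if distance_m <= 3:    # passive DAC: in-rack, lowest cost and power
        return "passive DAC"
    if distance_m <= 100:  # AOC: rack-to-rack, still a fixed two-ended cable
        return "AOC"
    return "pluggable optics + fiber"  # longer runs need real optics

for d in (1, 2.5, 15, 300):
    print(f"{d}m -> {pick_interconnect(d)}")
```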

Final Words

There are many benefits to using optics for networking, and we still use optical networking, either Active Optical Cables (AOCs) or traditional pluggable optics, to go from rack to rack. For the 100GbE generation, since about 2018 we have been using mostly 32x 100GbE switches in our lab, then using in-rack DACs to service 100GbE or 25GbE gear. We do get some oddball 50GbE devices, but most gear is 25GbE or 100GbE these days.

One of the biggest challenges moving forward to the 400GbE and 800GbE generations is that noise and signal integrity on DACs become harder to manage, which means thicker cables. The benefit, of course, is lower power and cooling needs. That is why we expect to use DACs until we can no longer do so. While we have reviewed one 400GbE switch already, we are excited about the prospect of getting a 32x 400GbE switch in the lab and breaking out to 100GbE links for each node.
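For a sense of why that 32x 400GbE breakout is appealing, the fan-out math follows the same pattern we already use today (a sketch assuming the common 4x breakout mode in each generation):

```python
# Breakout fan-out by generation on a 32-port switch (a sketch assuming
# the common 4x breakout mode in each generation).
SWITCH_PORTS = 32
generations = {
    "QSFP+ 40GbE":   (40, 10),    # 40GbE port -> 4x 10GbE
    "QSFP28 100GbE": (100, 25),   # 100GbE port -> 4x 25GbE
    "400GbE":        (400, 100),  # 400GbE port -> 4x 100GbE
}
for name, (port_gbe, lane_gbe) in generations.items():
    fanout = port_gbe // lane_gbe
    print(f"{name}: {SWITCH_PORTS} ports -> "
          f"{SWITCH_PORTS * fanout}x {lane_gbe}GbE links")
```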

2 COMMENTS

  1. Been a DAC person for quite some time for work and homelab. On top of lower heat production, DACs also have lower latency and are good for short-distance interconnects. Some will argue for optics, but unless you need the distance, a DAC works fine and it just works. Had a CRS309 in my homelab, and the unit is as cool as a cucumber with all ports connected using DACs.

  2. We have been using DACs for quite a while to connect our servers to the 10/25Gbit/s ToR switches. Failures are very rare, especially with the Flexoptics ones.
