Today we are taking a look at a card that we are putting to good use in a high-end system. The NVIDIA ConnectX-7 quad-port 50GbE card offers a lot of flexibility in a familiar form factor. This is not the fastest ConnectX-7 out there, but it is easy to connect and ready for the coming generations of SFP56 50GbE networking.
NVIDIA ConnectX-7 Quad Port 50GbE SFP56 NIC Overview
The model we are looking at is the NVIDIA ConnectX-7 (MCX713104AS-ADAT). There are a number of different variants of the ConnectX-7 NIC; this is just one. The ConnectX-7 silicon can handle up to 400Gbps of throughput, so with four 50GbE ports totaling 200Gbps, this card uses only half of what the ASIC is capable of.

Here is a quick look at the back of the card.

The big feature is the interface, with four SFP56 cages. These will also operate at lower speeds, but the reason to get a card like this is the new 50GbE SFP56 interface. Instead of needing larger QSFP optics or DACs, 50Gbps now fits in the smaller SFP form factor, so we can get four ports even on a low-profile card.
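As a side note, once the card is installed in a Linux host it is easy to confirm what rate each port actually negotiated. Below is a minimal Python sketch that reads the kernel's reported link speed from sysfs; the interface names are hypothetical placeholders since the real names depend on the slot and the host's naming scheme, and ethtool will report the same information.

#!/usr/bin/env python3
# Minimal sketch: print the negotiated link speed of each NIC port.
# The interface names below are hypothetical examples; list the real
# ports on your system first (e.g. with `ls /sys/class/net`).
from pathlib import Path

PORTS = ["enp65s0f0np0", "enp65s0f1np1", "enp65s0f2np2", "enp65s0f3np3"]

for port in PORTS:
    speed_path = Path(f"/sys/class/net/{port}/speed")
    try:
        # sysfs reports speed in Mb/s; 50GbE shows up as 50000.
        speed_mbps = int(speed_path.read_text().strip())
    except OSError:
        print(f"{port}: interface not present or link down")
        continue
    if speed_mbps <= 0:
        print(f"{port}: link speed unknown")
    else:
        print(f"{port}: {speed_mbps // 1000} Gb/s")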

Here is a quick look at the airflow view of the card.

Next let us get the card installed and take a look.

I really like what they’ve done with the cooling here; minimize fin count and surface area to minimize heat dissipation!
Seriously? No power draw numbers at idle and load in the “review”?
I don’t think most cards other than DC GPUs support power monitoring onboard. Servers don’t provide power readings on a per-slot basis. So you’d have to put the card into a power-testing rig, but that’d change the data connection between the CPU and the card, so it’d introduce that inaccuracy. You might be able to do power deltas for the server as a whole, but then it’d only be relevant for that server since you’d pick up cooling too. That’s why nobody does power on PCIe cards. I thought that was obvious?
Oh, maybe I just got trolled by this James feller. I fell for it.
Does it ship with a low-profile PCIe bracket? Curious if this can be used in a 2U server.
@Farhad R
A power measurement interposer’s effect on the data lanes is negligible because the additional distance is small. If that mattered you’d see different performance between the slots closest to and furthest from the CPU, and that’s definitely not the case. Nor are PCIe traces length-matched between slots on motherboards.
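For what it’s worth, the whole-server power-delta approach mentioned above is easy to script. Here is a minimal sketch; every wattage value is a hypothetical placeholder, not measured data, and as noted above the delta also bundles in whatever extra fan power the chassis adds.

# Approximate a NIC's draw from whole-server wall-power measurements.
# All readings below are hypothetical placeholders, not measured data.
readings_watts = {
    ("no card", "idle"):   180.0,
    ("card",    "idle"):   195.0,
    ("card",    "loaded"): 220.0,  # all four ports pushing traffic
}

card_idle_delta = readings_watts[("card", "idle")] - readings_watts[("no card", "idle")]
card_load_delta = readings_watts[("card", "loaded")] - readings_watts[("no card", "idle")]

# These deltas include any extra chassis fan power, so they are
# per-server figures rather than true per-card numbers.
print(f"Approx. card draw at idle: {card_idle_delta:.1f} W")
print(f"Approx. card draw under load: {card_load_delta:.1f} W")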
Does this mean that CX-7 is targeting mere mortals with the 50GbE stuff and that prices will be reasonable?
The small cooler does imply minuscule power, even at datacenter airflows…
Why do all these things in the ConnectX family default to slower PCIe on the lower-speed models (25GbE/50GbE/100GbE)?
If they didn’t, they’d be awesome for smaller hosts, since they wouldn’t waste precious PCIe5 lanes.
What’s the point of paying for PCIe5 if you so rarely get to actually use it?
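For anyone who wants to sanity-check the lane math behind that complaint, here is a rough back-of-the-envelope sketch. The per-lane rates and the 128b/130b encoding factor are standard PCIe figures; the sketch ignores PCIe protocol overhead, so real usable throughput is somewhat lower.

# Back-of-the-envelope: does a quad-port 50GbE card even need a Gen5 x16 link?
PORTS = 4
PORT_SPEED_GBPS = 50                      # 50GbE line rate per SFP56 port
aggregate_gbps = PORTS * PORT_SPEED_GBPS  # 200 Gb/s total

# Raw per-lane rate (GT/s) and lane count for a few link configurations.
links = {"PCIe 4.0 x16": (16, 16), "PCIe 4.0 x8": (16, 8), "PCIe 5.0 x8": (32, 8)}
ENCODING = 128 / 130  # 128b/130b line coding used by PCIe Gen3 and later

print(f"NIC aggregate line rate: {aggregate_gbps} Gb/s")
for name, (gt_per_lane, lanes) in links.items():
    usable_gbps = gt_per_lane * ENCODING * lanes
    verdict = "covers it" if usable_gbps >= aggregate_gbps else "falls short"
    print(f"{name}: ~{usable_gbps:.0f} Gb/s, {verdict}")

In other words, a Gen4 x16 link (~252Gb/s usable) already covers the card’s 200Gb/s aggregate, and a Gen5 x8 link would do the same with half the lanes, which is exactly the point raised above.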