At HPE Discover 2025, I was walking through the Intel booth and stopped cold when I saw a few products. One series shown was Intel's newest 10GbE and 2.5GbE NICs based on the latest Intel E610 chipset. This is a new lower-end networking solution that brings lower power consumption to some key markets.
Intel E610 NICs Shown for Low Power 10Gbase-T and 2.5GbE
The first adapter did not look like anything too exciting, until I realized what it was. This is a PCIe Gen4 x4 adapter for dual 10Gbase-T connectivity. It is based on the new Intel E610 chipset.

The dual ports support 10GbE, 5GbE, 2.5GbE, and 1GbE. At full dual 10Gbase-T speeds, this card is rated at only 5.1W! That is awesome.

Also at the show, we saw the Intel E610-IT4 in an OCP NIC 3.0 form factor (SFF with a pull tab, in this case).

This card has quad 2.5GbE ports. Interestingly, this can use up to 8.6W with 7.2W typical power. For an OCP NIC 3.0 SFF slot, this is not much to cool, but it is also just a neat solution.

A lot of folks who might have used the Intel i350 4-port 1GbE NICs are probably looking for a 2.5GbE solution these days, and Intel now has one. That is pretty neat.
Final Words
The somewhat crazy part about this is that these cards were launched by Intel with very little fanfare. I spoke to an Intel product rep at the booth who is an STH reader (hello if you are reading this). I simply said it is a bit strange that Intel would release new chips and cards in this segment without me hearing about it. There is a lot of change at Intel these days. At the same time, I know a lot of folks would be interested in these cards. I certainly am. Hopefully we can get some to show off soon.
Hopefully the E610 will escape the teething issues that the i225/i226 had. I’d be delighted to see the de facto entry-level speeds of Intel NICs drift up a bit; but anything aiming for the role of a basic copper NIC is not something you want to be fighting with. That’s the port you want to be able to assume will Just Work with more or less whatever you throw at it.
What’s the use of putting a PCIe Gen4 x4 interface on the E610-XT2? The Ethernet interfaces generate 20Gbps max, which even with overhead is way less than the ~64Gbps provided by PCIe 4.0 x4. PCIe 4.0 x2 or PCIe 5.0 x1 would be sufficient, saving lanes for other uses. When integrated on mainboards, this also reduces the number of traces that need to be routed.
Wait…What? What do you mean they released new cards and chips and you didn’t hear about it? STH reported on them in March.
@Hans: dual 10GbE will generate up to 40Gbps since it’s full-duplex so PCIe 4.0 x2 is not enough, not to mention that x2 slots are almost unheard of in the wild. At least in the current mainstream generation PCIe 5.0 x1 is also very rare. Neither Intel nor AMD chipsets support 5.0 so it has to come from the CPU.
@fuzzyfuzzyfungus: the i225/i226 was so bad that it seems like Intel abandoned development for the series. There’s no 5GbE chip available from them, so their Intel Killer E5000 NIC is using… a Realtek 8126.
Personally I’ve had problems with every modern Intel NIC generation.
I know this is crazy, but if I do not physically see parts, nobody puts them in a server we review, and I am not doing the piece, I do not remember everything that gets announced.
Also, have been knocked out (COVID) all week so that is not helping. :-)
@Patrick Kennedy: Okay, that is fair, and I respect that. As the newest STH member is still extremely young, that would also take up a lot of attention :)
Does this card support Energy Efficient Ethernet?
What is the lowest supported connection speed: 100Base-TX? 10Base-T?
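For what it’s worth, once a card is in hand, both questions are easy to answer on Linux with ethtool (a sketch; the interface name `eth0` is a placeholder for whatever name the E610 enumerates as):

```shell
# Show Energy Efficient Ethernet support and current EEE status.
# Replace eth0 with the actual interface name (see `ip link`).
ethtool --show-eee eth0

# The advertised/supported link modes reveal the lowest supported speed.
ethtool eth0 | grep -A 6 "Supported link modes"
```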
This should work at full speed at PCIe 3.0 x4. It does not need PCIe 4.0. For instance, the X550-T2 from Intel is PCIe 3.0 x4 and has two ports that will both run at 10GbE.
The PCIe 4.0 choice was probably just the metal catching up with the spec.
Another Intel example is the X710-T2L and T4L. The x8 is needed for the 4-port version because it is PCIe 3.0. It is not needed for the T2L, as shown by the X550-T2 config. Those X710 cards probably used the same board because it is cheaper to manufacture.
I have had a QNAP branded Intel based 4 port 2.5Gbps card for several years. Yes, they do exist. I even emailed a firewall vendor to update their kernel driver for it and a patch came out in 2 weeks. Has worked great ever since. Great for multi-WAN setups.
My new firewall uses the QNAP branded 4 port 5Gbps card with Marvell/Aquantia. I had the card for several years but finally put it into production after I gave up on real FreeBSD support.
The E610 apparently uses Aquantia PHYs with an Intel MAC.
These NICs are already being integrated into Dell’s 17th generation servers like the PowerEdge R770 and R7725.
Unfortunately, driver maturity seems to be an issue at this stage: we’re seeing critical limitations under Hyper-V 2025, such as a 4 queue pair per vPort cap (Event ID 280) and poor communication with the vSwitch stack (Event ID 285).
@Kyle
PCIe is full duplex (or dual simplex to be precise), just like Ethernet. Thus my argument stands.
Bradley is probably correct in his assumption. x4 is also more robust mechanically vs. x2.
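For anyone checking the lane math this sub-thread is debating, the per-direction numbers work out roughly like this (a back-of-envelope sketch with integer division; PCIe 4.0 runs 16 GT/s per lane with 128b/130b encoding, and both PCIe and Ethernet are full duplex):

```shell
# Dual 10GBASE-T: each direction carries up to 2 x 10 Gbps.
echo "Ethernet per direction: $((2 * 10)) Gbps"

# PCIe 4.0 usable throughput per lane, per direction: ~16 * 128/130 Gbps.
echo "PCIe 4.0 x2 per direction: ~$((2 * 16 * 128 / 130)) Gbps"
echo "PCIe 4.0 x4 per direction: ~$((4 * 16 * 128 / 130)) Gbps"
```

Per direction, x2 (~31 Gbps) would cover the raw line rate (20 Gbps); the practical objections in the thread are about x2 slot availability and mechanical robustness rather than bandwidth.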
I would very much like to see:
1) Solid, robust drivers across OSes (looking at you, FreeBSD).
2) This chipset on every motherboard around, for NAS/firewall appliance creation, once step 1 is done properly.
I’m still riding an old 11th gen Intel NUC with dual 2.5GbE in my firewall, because anything even remotely close to reasonable on power draw these days with 10GbE uses the Marvell/Aquantia chipsets, and those are dead in the FreeBSD-based OPNsense/pfSense firewalls.
Would love to upgrade and finish that particular step of my 10GbE migration in my home network.
Seriously, I would love to see a return of a stable Intel NIC that has universal support.
Any recommendations on a small low-power board where I can use one of these dual 10GbE cards and a quad NVMe card?
@Hans:
NICs always overprovision PCIe bandwidth so that they can also be used on older PCIe generations.
Also, they probably dimensioned it for the 4-port version to have one design for all variants and save on R&D costs.
I like both branches – E610 and E830, each in its own right.
Since 10GbE doesn’t make sense for main networking anymore (25GbE looks far better), its place could be for legacy networks and ISP connections, as it offers the maximum of what RJ-45 can do.
As ISP speeds get higher, it is nice to have a NIC that is future-proof.
And the E610 delivers that on a modern PCIe 4.0 interface, with great power efficiency and plenty of other goodies.
I hear it has DDP, so it could do on-card firewalling etc.
The E830 is great for similar reasons: PCIe 4.0 (PCIe 5.0 on the faster version), low power, up to 200GbE speeds, fixed quirks of the E810, etc.
Wow, only 10 years late to the NBase-T party, congratulations Intel!
Been running Aquantia for about as long now. I hate Intel creeping back in when they clearly have nothing special to offer except driver chaos.
Just tried an older Aquantia TB3 adapter from Sabrent on an AMD Hawk Point APU system via USB4: it just works at full speed and saves a slot. Pricey for an adapter, though; I got mine for nearly half of what they ask today.
Now that a single PCIe 4.0 lane should do just fine for 10Gbit/s, all those x1 slots are gone from mainboards! Nor does anyone sell x1 NICs: crazy!
I hear they have 10Gbit fiber to the home in Switzerland; in the next country north, 2.5Gbit is still the residential upper limit, while I cheap out at 1Gbit/s. So the BSD guys boycotting Aquantia isn’t an issue for me. I’m thinking about moving my physical pfSense to a Proxmox VM with a passthrough Intel NIC, the only way to make those Intel 2.5GbE ports useful.
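For anyone considering the same move, the Proxmox side of NIC passthrough is only a few steps (a sketch, assuming an Intel host with VT-d enabled in firmware; the VM ID 100 and the PCI address are placeholders):

```shell
# 1) Enable the IOMMU at boot on Intel systems: add
#      intel_iommu=on iommu=pt
#    to GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then:
update-grub   # and reboot

# 2) Find the NIC's PCI address.
lspci | grep -i ethernet

# 3) Hand the device to the VM (VM ID and address are examples).
qm set 100 -hostpci0 0000:03:00.0
```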
@abufrejoval
You don’t seem to have a clue. Aquantia is pedestrian stuff compared to this.
Aquantia doesn’t have DDP, amongst other things.
And you can always use it in PCIe 4.0 x1, if you want to.
@Lini Ban, perhaps you should care to elaborate beyond throwing out an opinion?
I remember VMware getting really tired of all those 10Gbit ASICs with terrible drivers for all those offloading features, bypassing all of that and going fully software for their virtual firewalls: they quoted low single-digit CPU overheads a decade ago, and that was long before CPUs had 192 cores.
At 400Gbit, SmartNICs or true DPUs have their uses, but at 10Gbit, a primitive 1Gbit equivalent will satisfy 99% of the market.
Intel, nobody needs your stuff in this space.
Better than nothing and likely will be widely available in the server market.
Power consumption wise, the AQC113 is already lower and the upcoming RTL8127 will be a fraction of that.
Is Realtek actually considered an acceptable solution for 10GbE? Every Realtek “consumer” grade 1GbE and 2.5GbE NIC onboard the motherboards I’ve tried has been extremely unstable. Intel 1GbE and 2.5GbE NICs have been rock solid.
That’s what I’m hoping for. Intel stability, but at nBaseT speeds. (10/5/2.5/1/100)
Surprised to read in one of the previous comments that Intel is just using the Aquantia PHYs, though.
Once again, when I try to write a comment here, it is eaten and not displayed, even with a message about waiting for moderation, so I posted it with the other half-dozen comments in the forum for this article.