HPE ProLiant MicroServer Gen10 Plus NIC Options
As standard, the HPE ProLiant MicroServer Gen10 Plus comes with a high-quality quad-port Intel i350-based 1GbE NIC. While this is a high-end 1GbE solution, sometimes one needs higher network speeds. We are going to discuss the 2.5GbE, 10GbE, 25GbE, and some of the higher-speed options beyond that.
Inexpensive 2.5GbE Solutions
If you simply want a low-cost higher-speed port, 2.5GbE networking is low power, uses existing cabling, and is very inexpensive. We reviewed a number of NICs such as the Syba Dual 2.5 Gigabit Ethernet Adapter and the TRENDnet 2.5Gbase-T PCIe Adapter as internal options.
We also specifically tried the inexpensive CableCreation USB 3 Type-A 2.5GbE Adapter with the MicroServer Gen10 Plus.
We also tried the Plugable 2.5GbE adapter. You can read our full review here, but we would advise not to get this for the MSG10+ if it is more expensive than the CableCreation unit since it requires a Type-C to Type-A adapter.
Overall, we wish that the MicroServer Gen10 Plus had 2.5GbE built-in, but the upgrade solutions in this class are extremely inexpensive.
10GbE Options
If you are going 10Gbase-T, there are a few items to keep in mind. The first is power and heat. If you are using the VMware HCL as your guide, we suggest looking at an actively cooled 10Gbase-T NIC. If you are more flexible and using Windows or Linux, the Aquantia-based cards may be the best option as they offer relatively low power and heat operation. Do not get anything older than an Intel X540-T2 NIC for 10Gbase-T.
When it comes to SFP+, there are tons of great options with cards that use under 10W each. Cards in that sub-10-12W range tend to fare better with the airflow provided by the HPE ProLiant MSG10+.
One can also add the HPE 867707-B21 dual-port SFP+ solution, which is an official option. Beyond that, most 10GbE network solutions work well. For quad-port NICs, we like the Intel X710-DA4 for its low heat and compatibility with a number of solutions. Also, adding a quad-port NIC allows one to do direct networking of up to four nodes in some edge clustering cases, avoiding the need for a switch.
We suggest getting SFP+ solutions here as they offer a low-cost and low-power path to 10GbE speeds. If you need 10Gbase-T, you can utilize SFP+ to 10Gbase-T adapter modules as necessary to accomplish the media conversion.
25GbE Options
25GbE is more interesting. One gets a balance of performance and power. While 25GbE is the current data center trend, we must remember that the MSG10+ is designed as an edge device. Frankly, in a server designed to utilize four rotating hard drives, 25GbE is likely too much network bandwidth. Perhaps the most used option here is the Mellanox ConnectX-4 Lx card. These are everywhere. You will want the low-profile bracket with these cards.
We also tested compatibility a bit more broadly, including Broadcom-based 25GbE adapters.
The Intel XXV710 25GbE solutions worked great as well.
We also tried an HPE ProLiant Gen8/Gen9 quad-port 25GbE solution based on QLogic NIC IP. This is a really interesting solution since it offsets the higher power of the NIC with an active fan to aid in cooling.
25GbE is completely doable in the HPE ProLiant MicroServer Gen10 Plus. Realistically, however, most installations are not going to have the disk throughput to surpass 10GbE speeds, so it may make sense to use lower-power and lower-cost NICs.
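To put that disk-throughput claim in perspective, here is a quick back-of-envelope check. The ~250 MB/s per-drive sequential figure is an assumed, optimistic number for modern hard drives, not a measured result from the MSG10+:

```python
# Back-of-envelope check: can four hard drives saturate a 10GbE link?
HDD_SEQ_MBPS = 250   # assumed optimistic sequential throughput per modern HDD

drives = 4
array_mbps = drives * HDD_SEQ_MBPS   # best-case striped sequential: 1000 MB/s
array_gbps = array_mbps * 8 / 1000   # convert MB/s to Gbps

print(f"4-drive array: ~{array_mbps} MB/s (~{array_gbps:.0f} Gbps)")
print(f"10GbE line rate: 10 Gbps -> drives fall short even in the best case")
```

Even with everything streaming sequentially, a four-drive array lands around 8 Gbps, under the 10GbE line rate, which is why 25GbE bandwidth is hard to use here.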
A Word on 40/100GbE
For those wondering whether 40GbE and 100GbE are possible, we tried a number of NICs. Heading to the higher-end NICs often means 15W or higher power consumption, and these cards can heat up. From our older Intel Fortville 40GbE Lower Power Consumption and Heat article, the XL710 NICs are likely one of the better options for 40GbE, but they can generate a lot of heat.
At 100GbE, using Mellanox ConnectX-5 VPI 100GbE and EDR InfiniBand cards, we saw the chassis fan hit higher-power modes struggling to keep the NIC cool, even without passing traffic. Our advice is that while 100GbE can be done, this is probably not the best platform for passively cooled NICs. One of the advantages of the HPE / QLogic quad-port 25GbE NIC is that it offers active cooling.
Getting Out There: QNAP QM2 10Gbase-T Plus M.2 SSD
Perhaps the complete wildcard here is the QNAP QM2-2P10G1TA. This is a card designed for QNAP NAS units, but it can be utilized in other servers as well. It has a single RJ45 NIC port for 10Gbase-T via an Aquantia chipset. It also has two M.2 NVMe slots onboard. This allows one to install two NVMe drives and get a 10Gbase-T port on a single PCIe slot.
There are some major drawbacks to this design. First off, it is a PCIe 2.0 x4 card. That means that the entire card has about half of the available bandwidth of a single M.2 NVMe x4 drive.
Onboard, this card utilizes a PCIe switch architecture. What that means is that the 10Gbase-T NIC and the two M.2 NVMe SSDs all connect to the switch chip and then share a PCIe Gen2 x4 backhaul to the system, even though the PCIe Gen3 x16 slot in the MSG10+ has plenty of bandwidth.
Where it gets even more restrictive is that each M.2 NVMe SSD slot gets only a PCIe 2.0 x2 connection to the switch chip. As a result, a modern PCIe Gen3 x4 SSD in that slot yields around a quarter of the performance one would get from a native PCIe 3.0 x4 slot, and that is the best-case scenario. In the worst case of transferring data to and from an SSD over the network, data needs to be copied from the SSD to memory and then back out through the NIC. Doing both directions at once demands more bandwidth than a PCIe 2.0 x4 slot can supply.
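The lane math above can be sketched quickly. The per-lane figures are theoretical maxima after encoding overhead (8b/10b for PCIe 2.0, 128b/130b for PCIe 3.0); real-world throughput is lower:

```python
# Rough PCIe throughput math for the QM2-2P10G1TA layout (theoretical maxima).
GEN2_LANE_MBPS = 500   # PCIe 2.0: 5 GT/s with 8b/10b encoding ~= 500 MB/s/lane
GEN3_LANE_MBPS = 985   # PCIe 3.0: 8 GT/s with 128b/130b encoding ~= 985 MB/s/lane

uplink = 4 * GEN2_LANE_MBPS           # card's Gen2 x4 host link: 2000 MB/s
per_ssd_slot = 2 * GEN2_LANE_MBPS     # each M.2 slot: Gen2 x2 = 1000 MB/s
native_gen3_x4 = 4 * GEN3_LANE_MBPS   # a Gen3 x4 NVMe SSD's native slot: ~3940 MB/s
nic_10gbe = 10_000 / 8                # 10GbE line rate ~= 1250 MB/s

print(f"Card uplink:       {uplink} MB/s")
print(f"Per M.2 slot:      {per_ssd_slot} MB/s "
      f"(~{per_ssd_slot / native_gen3_x4:.0%} of a native Gen3 x4 slot)")
# A network copy crosses the uplink twice (SSD -> host memory -> NIC):
print(f"Worst-case demand: {per_ssd_slot + nic_10gbe:.0f} MB/s vs {uplink} MB/s uplink")
```

The worst-case demand (an SSD stream plus the NIC stream) exceeds the card's 2000 MB/s uplink, which is where the bottleneck described above comes from.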
This card also costs over $230, which means that by the time you fill it with drives, it is more expensive than purchasing a dual-port SFP+ NIC, a USB 3.2 Gen1 boot drive, and a Gen2 data SSD.
As a simple, yet somewhat exotic solution, this works, but it is neither the lowest-cost nor the highest-performing option.
Final Words
In this guide, we have covered a lot of ground. Hopefully, this gives you some sense of what is possible with the HPE ProLiant MicroServer Gen10 Plus. The external 180W power supply and lack of internal storage connectivity limit what one can do with the machine, but there are plenty of ways to work around these limitations and customize these small-footprint servers exactly how you need them.
With a bit of creative work, we have been able to test just how far one can push the MSG10+. For example, an Intel Xeon E-2288G will work in the system, but it draws too much power to be practical. This took an enormous amount of testing, and we hope the community is able to take the work we have done and expand on it. Since we mean for this to be a resource, if we find new hardware recommendations in the future, we will update this piece.