HPE ProLiant MicroServer Gen10 Plus Review: This is Super


HPE ProLiant MicroServer Gen10 Plus Internal Hardware Overview

One area where HPE saved costs on the MicroServer Gen10 Plus is the drive backplane. Instead of a true backplane, the drive bays are direct-cabled: HPE screws the cable ends into place to create a functional pseudo-backplane. There is one major drawback: these are not hot-swap bays, and hot-plug functionality is not enabled. To install a new SATA drive, one needs to power the unit off and then back on. This is one feature we wish HPE provided, especially for edge virtualization, since servicing drives requires downtime.

HPE ProLiant MicroServer Gen10 Plus Hard Drive Connectivity Rear
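Since the bays are not hot-swap, it helps to record which serial number sits behind which device name before powering down to swap a disk. A minimal Linux sketch, assuming lsblk is available (the /dev/sd? names are examples):

```shell
# Print device name, size, model, and serial for each SATA disk so the
# right physical drive can be pulled after shutdown. The block-device
# guard skips glob patterns that matched nothing.
for d in /dev/sd?; do
  [ -b "$d" ] || continue
  printf '%s  ' "$d"
  lsblk -dn -o SIZE,MODEL,SERIAL "$d"
done
```

With only four bays and no backplane LEDs, matching serials to bays before a shutdown avoids pulling the wrong drive.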

On the motherboard, we de-populated the risers and CPU heatsink to get a look at the system. One can see this is a very compact and dense motherboard. A key feature is the Intel LGA1151 CPU socket, which allows HPE to create one motherboard that supports both the lower-cost Pentium CPU and the performance-optimized Xeon E-2224. We are not going into alternative CPU options here; that is the focus of a subsequent piece. We will cover the performance of the two HPE SKUs later in this review.

HPE ProLiant MicroServer Gen10 Plus Socketed

There are two DDR4 DIMM slots. These accept ECC UDIMMs, and HPE officially supports up to 2x 16GB for 32GB of memory. With the Xeon E-2224 SKU, we get DDR4-2666 operation; with the Pentium Gold G5420, DDR4-2400. Both SKUs support two DIMMs and ECC memory. The platform itself supports up to two DIMMs per channel, or four DIMMs total in dual-channel mode, but HPE does not have the space for that in such a compact form factor.

HPE ProLiant MicroServer Gen10 Plus ECC UDIMM
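One can verify the configured memory speed and ECC type from a running OS. A quick sketch using dmidecode (requires root; the exact field name varies by BIOS version, so the pattern matches both common forms):

```shell
# Show DIMM size, configured speed, and ECC type as reported by SMBIOS.
# Some BIOS versions report "Configured Memory Speed", others
# "Configured Clock Speed"; the regex covers both.
sudo dmidecode -t memory \
  | grep -E 'Size|Configured (Memory|Clock) Speed|Error Correction Type'
```

On the Xeon E-2224 SKU one would expect to see 2666 MT/s reported; on the Pentium Gold G5420, 2400 MT/s.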

There is still one USB 2.0 port on the platform and this is an internal USB Type-A header. It would have been nice if this was USB 3.0 as that would make it more practical for internal boot media.

HPE ProLiant MicroServer Gen10 Plus Internal USB Type A

The heatsink may look substantial, and it is. It is a passively cooled unit with a heat pipe that carries heat into the airflow of the chassis fan. This is a very nice design.

HPE ProLiant MicroServer Gen10 Plus Heatsink

We mentioned the four 1GbE ports previously. These are powered by the Intel i350-AM4 NIC chip. This is a high-end 1GbE NIC that has been around since 2011 and is expected to be supported through 2029, making it a very long-life part. Here is the NIC on Intel Ark. You will note the $36.37 list price. For some context, the Intel i210-AT, a lower-end single-port NIC, is $3.20. Inexpensive consumer Realtek NICs cost well under a dollar. As you can see, this is an area where HPE designed in a premium network controller where there was ample opportunity to cut costs. At the same time, by using the i350-AM4, HPE gets the excellent OS support that we will see in our OS testing section of this review.

HPE ProLiant MicroServer Gen10 Plus Intel I350 Am4 NIC
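From a running Linux system, one can confirm the i350 ports enumerate and are bound to Intel's long-supported igb driver. A rough sketch (interface names vary by distribution; nothing here is specific to HPE's firmware):

```shell
# List PCI Ethernet controllers, then show which kernel driver
# each network interface is bound to via its sysfs driver symlink.
lspci -nn | grep -i ethernet
for drv in /sys/class/net/*/device/driver; do
  iface=$(basename "$(dirname "$(dirname "$drv")")")
  printf '%s -> %s\n' "$iface" "$(basename "$(readlink "$drv")")"
done
```

On this platform the four ports should each resolve to the igb driver, which is part of why OS support is so broad.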

Another big feature is the addition of the HPE iLO 5 BMC. This adds cost to the unit, but also makes this system a first-class iLO manageable server, just like other servers like the HPE ProLiant ML110 Gen10 and HPE ProLiant ML350 Gen10 tower servers and the HPE ProLiant DL20 Gen10 and HPE ProLiant DL325 Gen10 rack servers. Competitive systems, such as the Dell EMC PowerEdge T40 (review almost complete) do not have BMCs to save costs at the expense of manageability.

HPE ProLiant MicroServer Gen10 Plus ILO5 Controller
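Beyond the web UI, iLO 5 exposes the standard Redfish REST API. A minimal sketch of pulling power state and health over the network (the IP address and credentials are placeholders for your own; -k skips certificate validation for iLO's default self-signed certificate):

```shell
# Query basic system state from iLO 5's Redfish endpoint.
# 192.0.2.10 and admin:password are placeholders, not real defaults.
ILO=192.0.2.10
curl -sk -u admin:password "https://${ILO}/redfish/v1/Systems/1/" \
  | python3 -c 'import json,sys; d=json.load(sys.stdin); print(d.get("PowerState"), d.get("Status", {}).get("Health"))'
```

This is the same out-of-band management surface as HPE's larger ProLiant servers, which is what makes the MicroServer a first-class citizen in an iLO-managed fleet.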

PCIe expansion is via a PCIe Gen3 x16 slot. Given the power and cooling of this system that we will discuss on the final page of this review, we suggest that this is not suitable for higher-end SmartNICs and GPUs, even the NVIDIA Tesla T4. HPE has a low power AMD Radeon GPU option for those who need display outputs.

HPE ProLiant MicroServer Gen10 Plus PCIe And ILO Riser Installed

The top slot may look like a PCIe x1 slot at first, but it is designed for the iLO Enablement Kit option. For STH readers, we highly recommend adding this option, and we will go into more depth on that later in this review.

The SATA connectivity is powered by the Intel PCH. The connector may look like an SFF-8087 SAS connector, but it is being used here for four SATA ports. There are no additional SATA ports or SATADOM headers onboard, which means there is no extra port for a boot drive. We wish there were a powered SATADOM header onboard, as that would allow all four drive bays to be used solely for storage.

HPE ProLiant MicroServer Gen10 Plus SFF 8087

The motherboard connectors are greatly reduced from previous generations. HPE is using custom connectors to deliver power and data to the rest of the chassis, except for the SFF-8087 cable.

HPE ProLiant MicroServer Gen10 Plus Motherboard Connection Cables

Overall, this is a great hardware package with the HPE ProLiant MicroServer Gen10 Plus. There are a few areas where HPE has the opportunity to innovate and make a category-killing product. We like the hardware direction HPE took.

Next, we are going to look at the system topology then the management stack before getting to our OS and performance testing.

Patrick has been running STH since 2009 and covers a wide variety of SME, SMB, and SOHO IT topics. Patrick is a consultant in the technology industry and has worked with numerous large hardware and storage vendors in Silicon Valley. The goal of STH is simply to help users find information about server, storage, and networking building blocks. If you have any helpful information, please feel free to post on the forums.


  1. I’ve skimmed this and wow. This is another STH Magnum Opus. I’ll read the full thing later today and pass it along to our IT team that manages branch offices.

  2. I made it to page 4 before I ordered one. That iLO enablement kit isn’t stocked in the channel so watch out. I’m now excited beyond compare for this.

  3. A really nice review, thanks a lot. Impressed with the Xeon performance in this kind of low-power system. I should/really want to get one, replacing my old Gen7 MicroServer home server.

  4. I like seeing bloggers and other guys review stuff, but STH, y'all are in a different league. It's like someone who understands both the technical and market aspects doing reviews. I think this format is even better than the GPU server review you did earlier this week.

    I’d like to know your thoughts about two or three of these versus a single ML110 or ML350. Is it worth going smaller and getting HA even if you’ve got 3 servers? I know that’s not part of this review. Maybe it’s a future guide.

  5. Your Windows 10 testing is genius but you missed why. What you’ve created is a Windows 10 Pro remote desktop system that can be managed using iLO, is small and compact, and it’s got 4 internal 3.5″ bays.

    If you plug RDP in, it’s a high-storage compact desktop when others this small in the market have shunned 3.5″.

  6. gentle suggestion: perhaps when taking photos of “small” items like this, have another human hold a ruler to give perspective of size (more helpful than a banana 🙂

    Thanks for mentioning the price within the article. Good info all around.

  7. Not impressed by this product nor this review; need more info on thermal performance.

    Review lacks any discussion of thermal performance other than showing us the pretty picture of the iLO page and a brief mention of thermal limits on the PCIe Gen3 slot with certain add-in cards.

    Complete lack of discussion of thermal performance of horizontally mounted HDD in this device where the review already admits to possible thermal issues with the design.

    For me this review looks like a Youtube “unboxing” article for HPE products and not a serious product performance review.

    Patrick, you can do better than this. Srsly.

  8. Sleepy – we used up to 7.2k RPM 10TB WD/HGST HDDs and did not see an issue. We also discussed that maximum headroom for drives + PCIe + USB-powered devices is around 70W given the 180W PSU, and how the fan ramps after around 10 minutes at ~110W.

    In the next piece, we have more on adding CPUs/ PCIe cards and we have touched the 180W PSU limit without thermal issues. Having done that, the thermal performance/ issue you mention is not present. If the unit can handle thermals up to the PSU’s maximum power rating, then it is essentially a non-issue.

  9. A random question, if I may : will the Gen10Plus physically stack on top of / below a Gen10 or Gen8 Microserver cleanly? It looks like it should but confirmation would be appreciated 🙂

  10. In the “comparison” article (between the MSG10 and the MSG10+), you wrote about the “missing” extra fifth internal SATA port: “[…] I think we have a solution that we will show in the full review we will publish for the MicroServer Gen10+.”
    I really had hoped to read about this solution! Or did I just miss it?

    Also, I’d like to know more about the integrated graphics: If I’m understanding it correctly, the display connectors on the back (VGA and DisplayPort, both marked blue) are for management only; meaning that even if you have a CPU with integrated GPU, that is not going to do much for you. (This is in line with the Gen8, but a definite difference with respect to the MSG10!) So … what GPU is it? A Matrox G200 like on the Gen8? Or something with a little more oomph?
    Personally, I’m saddened to see that HPE skimped on making the iGPU unusable. 🙁

  11. TomH – the Gen10 Plus is slightly wider if you look at dimensions. You can probably stack a Gen10 atop a Gen10 Plus but not the other way around.

    Nic – great point. As mentioned in the article, we ended up splitting this piece into a review of the unit for sale, and some of the customizations you can do beyond HPE’s offerings. It was already over 6K words. For this, we ended up buying 2 more MSG10+ units to test in parallel and get the next article out faster.

  12. Thanks Patrick – had hoped the “indent” on the top might be the same size as previous models, despite the overall dimensional differences, but guess not!

  13. Patrick – sounds great! Btw, next to the CMOS battery, there is an undocumented 60-pin connector. Do you have any idea what this is for?

  14. Does the iLO Enablement Kit allow you to use the server after OS boot, or is this the same as the big servers where an iLO Advanced license is needed?

  15. Nikolas Skytter -> 4* WD40EFRX -> About 32C in idle (ambient around 20-21C), max 36C when all disks testing with badblocks. Fan speed 8% (idle).

  16. Patrick – I have found that undocumented connector exists on several Supermicro motherboards as well... and guess what, it is undocumented in their manuals too. Starting to get really curious...

  17. Lucky you, how were you able to install the latest Proxmox VE 6.1 on this server?
    As soon as the OS loads, the Intel Ethernet Controller I350-AM4 turns off completely :\

  18. Hi, could you please test if this unit can boot from nvme/m.2 disk in pcie slot without problem? There are some settings in bios that points to it, even there is no m.2 slot. Thanks!

  19. Having skipped the GEN10 and still owning a GEN7 and GEN8 Microserver this Plus version looks like a worthy replacement. Although I would have liked to see that HPE switched to an internal PSU, ditched the 3.5 HDD bays for 6 or 8 2.5 SSD bays (the controller can handle 12 lanes) and used 4x SODIMMS sockets to give 4 memory lanes. I also agree with Kennedy that 10Gbit would be a nice option (for at least 2 ports).

  20. How did you manage to connect to the iLO interface? My enablement board did not have the usual tag with the factory-set password on it. Is there some default password for those models?

  21. Has anyone else had / having issues when running VMs? I have the E-2224 Xeon model with 16GB RAM, but keep having performance issues. Namely storage.

    Current setup: 1x Evo 850 500GB SSD, 2x Seagate Barracuda 7.2k 2TB spindle disks.

    Installing the hypervisor works fine. Tried ESXi 6.5, 6.7, and 7 and used the HPE images. All installed to USB, then tried to SSD; all install and run ok, but when setting up a VM it becomes slow: 1.5hrs to install a Windows 10 image, and then the image is unusable.

    Installed Windows Server 2019 Eval onto bare metal. It installs ok, but then goes super sluggish when running Hyper-V to the point of being unusable. Updated to the latest BIOS etc. using the SPP iso.

    Example: copy a 38GB file from my NAS to local storage under 2k19, get the full 1Gbps. Start a Hyper-V VM and it slows to a few kbps, even copying from USB on the Windows 2019 server (not the VM). The mouse becomes jumpy and unresponsive.

    Dropped the VM vCPU to 2, then one, still no difference.

    Tried 2 other SSDs.

    BIOS settings were set to General Compute performance, and Virtualization Max performance.

    Beginning to think I have a faulty unit.

  22. Hi! Do you think it could be possible to add a SAS RAID controller in the PCIe slot and use it with the provided SAS connector?

    It would look a little Frankenstein, but with an NVMe drive on the mini PCIe and a proper RAID controller this would be a perfect microserver for ESXi.

