In our HPE MicroServer Gen10 Plus Review we are going to cover a lot of ground, so get ready. This compact server is designed to be smaller and higher performing than the previous generation. What we have found over a few weeks of working with the system in various configurations is that this is an excellent platform.
Taking a moment to see the roadmap, upon announcement of the new server, we dissected the spec sheet of the HPE ProLiant MicroServer Gen10 Plus. We then did a piece on the MicroServer Gen10 Plus (or Gen10+) versus the older Gen10 revision. We are going to discuss that briefly below, but we wanted to show what this mid-generation change offers in terms of differences. Some have called that piece a review, which is something we disagree with. This piece will be our formal review of the MicroServer Gen10 Plus including common HPE options. Next in this series will be a more expansive view of what is possible. We have just shy of 20 CPUs we are testing in our MicroServer Gen10 Plus, and that simply takes time. We also have various options to give you ideas regarding how you can take the server’s base and turn it into something truly unique to fit your, or your client’s, needs.
In this review, we are going to focus on what HPE delivers to its customers with the ProLiant MicroServer Gen10 Plus so you know what you can expect. We are going to go in-depth into the hardware that makes up this server. We are going to look at the system topology and management. After that, we are going to test 10 different OSes and the out-of-box experience, including popular Linux, Windows, and even FreeBSD distributions. Next, we will delve into the performance of both CPU SKU options, the Intel Xeon E-2224 and Pentium Gold G5420, and compare them to the previous generation’s performance. Finally, we will end with power consumption, noise, the STH Server Spider, and our final thoughts. This will be an extremely thorough piece on the new HPE ProLiant MSG10+.
HPE ProLiant MicroServer Gen10 Plus v. Gen10
The first hands-on piece in this series is our HPE ProLiant MicroServer Gen10 Plus v Gen10 Hardware Overview. You can read the piece at that link, and also check out the short video summary.
Key changes in this generation are moving to a smaller physical footprint with an external power supply. Internally, changes were made to remove the optical drive bay, add iLO 5 management, and alter the PCIe slot configuration. We also witnessed a move from the AMD Opteron X3400 series to the newest generation’s LGA1151 Intel Xeon E-2224 and Pentium Gold G5420 processors that offer new features and more performance. There is a lot to cover, so if you were thinking about the HPE MicroServer Gen10 Plus and are familiar with the Gen10, that piece is worth going through.
In the rest of this piece, we are going to go in-depth into what you can expect from the new MicroServer including the hardware, software, performance, management, and operational aspects.
HPE ProLiant MicroServer Gen10 Plus Hardware Overview
We are going into a lot of depth here. As a result, we are going to split this section into an external hardware overview which is what one will see if they do not care about how the system works. We will then go into detail around the internal components and features before moving on to other sections of this review.
HPE ProLiant MicroServer Gen10 Plus External Hardware Overview
The HPE ProLiant MicroServer Gen10 Plus is small. It measures 4.68″ x 9.65″ x 9.65″ (11.89 x 24.5 x 24.5cm). It is also one attractive box to have around, especially if you have excellent lighting to make the HPE logo pop. On the front, one can see a power button as well as status and activity lights. The two USB ports are USB 3.2 Gen2 ports, which means they are capable of 10Gbps operation.
On the rear of the unit, we can see all of the ports and I/O surrounding a central fan. The fan is the only moving part in this generation. It is, therefore, non-redundant, but a single fan is also roughly half as likely to experience a fan failure as a two-fan unit.
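That single-fan reliability point can be sanity-checked with basic probability. As a quick sketch, assuming independent fans with identical failure rates (the 2% annual rate below is purely illustrative, not a measured figure), a two-fan chassis is roughly twice as likely to see at least one fan fail:

```python
# Illustrative sketch: probability of at least one fan failing,
# assuming independent fans with an identical, hypothetical failure rate.
def any_fan_fails(p_single: float, n_fans: int) -> float:
    """Probability that at least one of n_fans fails in a given period."""
    return 1 - (1 - p_single) ** n_fans

p = 0.02  # hypothetical 2% annual failure rate per fan
print(any_fan_fails(p, 1))  # ~0.02 for this single-fan design
print(any_fan_fails(p, 2))  # ~0.0396, roughly double, for a two-fan unit
```

Of course, a two-fan unit that keeps running on one fan is still more resilient overall; the halved odds only apply when any fan failure is a service event.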
To the left of the fan, we have two low-profile slots for expansion. We will discuss those later in this hardware overview. Below them, one can find the DC input. With this generation, we have an external power supply so there is a power input on the rear of the unit.
External DC power supplies can be pulled out of the chassis accidentally so HPE included a retention clip for the DC plug. You can actually see that in the packaging, HPE had to make a cutout for this clip.
The power supply is a 180W LiteOn unit which looks like it could power an enormous laptop.
On the right rear of the unit, we find the primary system I/O. This includes four 1GbE NICs, a VGA and DisplayPort (for management) and four USB 3.2 Gen1 ports.
We wish HPE had found some way to provide SFP+ 10GbE networking here. The cost of SFP+ to 10Gbase-T and Nbase-T has fallen in just a few years from thousands of dollars per module to $35 to $60 per module. Adding 10GbE would have made this server immensely more interesting and freed the expansion slot for other duties.
Removing the top cover is required to access the internals. It also allows one to access the latching mechanisms that keep the front cover locked. Having these locks is important to keep hard drives safe especially in edge locations.
The two black screws on the system’s rear hold the motherboard tray in place. Once these are removed, one can simply slide the tray out, detach a few cables and get access to the internal components.
Hard drives are installed in a 2×2 matrix. 3.5″ drives utilize four pegs that are screwed into the standard hard drive mounting holes. HPE goes the extra step here and stores spare pegs in rows below the drive bays so one can keep them safe and easily access them when needed. This is technically a tray-less but not tool-less design and is carried over from the original Gen10 latching mechanism.
For 2.5″ drives, such as SSDs, you will need a 2.5″ to 3.5″ adapter to mount them in the MSG10+. This is not a huge deal, but it does add cost to the system. It would have been extremely cool if HPE could have found a way to add two 2.5″ SATA bays just below the 3.5″ bays, even if they were for 7mm SATA drives only.
3.5″ drives are supremely important in this market. They offer high capacity and low cost for local backups, surveillance video storage, as well as content sharing. While we received a lot of feedback on our earlier Gen10+ v. Gen10 content that an 8x 2.5″ chassis would be welcome for all-flash, and we agree to some extent, there is more at play here. HPE is using the Intel C242 PCH, which means it only has 6x SATA III ports onboard. In order to support 8x 2.5″ bays, HPE would need to move up to the C246 PCH. While this is doable, it also adds cost. A good compromise would be simply exposing two more SATA ports and bays if possible.
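The port budget behind that compromise is easy to lay out. As a sketch (PCH SATA counts are from Intel's published specs; the four front bays are as reviewed), the C242 leaves exactly two ports unused, while an 8x 2.5″ all-flash layout would need every port on a C246:

```python
# Sketch of the SATA port budget discussed above.
# Port counts per Intel's published PCH specs; bay counts as reviewed.
PCH_SATA_PORTS = {"C242": 6, "C246": 8}

def spare_sata_ports(pch: str, bays_wired: int) -> int:
    """SATA ports left over after wiring the drive bays."""
    return PCH_SATA_PORTS[pch] - bays_wired

print(spare_sata_ports("C242", 4))  # 2: enough for two extra 2.5" bays
print(spare_sata_ports("C246", 8))  # 0: an 8-bay layout uses every port
```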
Continuing to add features such as a higher-end PCH and doubling bays may sound great, but then feature creep starts to bring it into HPE ProLiant ML30 Gen10 positioning. When we reviewed the ProLiant ML30 Gen10 we used an 8x 2.5″ model and that is based on a Xeon E platform as well. Our thought is that HPE is trying to service a specific market with the MSG10+ in the context of the company’s broader portfolio. HPE has an option for all-flash, it is simply a different server.
Next, we are going to look inside the system at what makes the package so effective.
I’ve skimmed this and wow. This is another STH Magnum Opus. I’ll read the full thing later today and pass it along to our IT team that manages branch offices.
I made it to page 4 before I ordered one. That iLO enablement kit isn’t stocked in the channel so watch out. I’m now excited beyond compare for this.
Excellent review – would love to see one with an Intel Xeon D processor-based model as well!
A really nice review, thanks a lot. Impressed with the Xeon performance in this kind of low-power system. I should, and really want to, get one, replacing my old Gen7 MicroServer home server.
Too bad they didn’t go EPYC 3000
I like seeing bloggers and other guys review stuff, but STH, y’all are in a different league. It’s like someone who understands both the technical and market aspects doing reviews. I think this format is even better than the GPU server review you did earlier this week.
I’d like to know your thoughts about two or three of these versus a single ML110 or ML350. Is it worth going smaller and getting HA even if you’ve got 3 servers? I know that’s not part of this review. Maybe it’s a future guide.
@Teddy1974 Can you let me know more about the iLO enablement kit comment please so I can investigate? This is a shipping product.
I would use this for backup repository and perhaps as an SVN repository too
Your Windows 10 testing is genius, but you missed why. What you’ve created is a Windows 10 Pro remote desktop system that can be managed using iLO, is small and compact, and has 4 internal 3.5″ bays.
If you plug RDP in, it’s a high-storage compact desktop, when others this small in the market have shunned 3.5″.
gentle suggestion: perhaps when taking photos of “small” items like this, have another human hold a ruler to give perspective of size (more helpful than a banana 🙂
Thanks for mentioning the price within the article. Good info all around.
Not impressed by this product nor this review; need more info on thermal performance.
The review lacks any discussion of thermal performance other than showing us the pretty picture of the iLO page and a brief mention of thermal limits on the PCIe Gen3 slot with certain add-in cards.
There is a complete lack of discussion of the thermal performance of horizontally mounted HDDs in this device, where the review already admits to possible thermal issues with the design.
For me this review looks like a YouTube “unboxing” article for HPE products and not a serious product performance review.
Patrick, you can do better than this. Srsly.
Sleepy – we used up to 7.2k RPM 10TB WD/HGST HDDs and did not see an issue. We also discussed that the maximum headroom for drives + PCIe + USB-powered devices is around 70W given the 180W PSU, and how the fan ramps after around 10 minutes at ~110W.
In the next piece, we have more on adding CPUs/PCIe cards, and we have touched the 180W PSU limit without thermal issues. Having done that, the thermal issue you mention is not present. If the unit can handle thermals up to the PSU’s maximum power rating, then it is essentially a non-issue.
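In rough numbers, the budget works out like this (the 110W base-system figure is our estimate from when the fan ramps, not an official HPE spec):

```python
# Rough power-budget sketch for the MicroServer Gen10 Plus.
# The base-system figure is an estimate, not an official HPE number.
PSU_WATTS = 180          # external power supply rating
BASE_SYSTEM_WATTS = 110  # estimated CPU + board + memory + fan draw

headroom = PSU_WATTS - BASE_SYSTEM_WATTS
print(headroom)  # ~70W left for drives, PCIe cards, and USB devices
```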
A random question, if I may : will the Gen10Plus physically stack on top of / below a Gen10 or Gen8 Microserver cleanly? It looks like it should but confirmation would be appreciated 🙂
In the “comparison” article (between the MSG10 and the MSG10+), you wrote about the “missing” extra fifth internal SATA port: “[…] I think we have a solution that we will show in the full review we will publish for the MicroServer Gen10+.”
I really had hoped to read about this solution! Or did I just miss it?
Also, I’d like to know more about the integrated graphics: If I’m understanding it correctly, the display connectors on the back (VGA and DisplayPort, both marked blue) are for management only; meaning that even if you have a CPU with integrated GPU, that is not going to do much for you. (This is in line with the Gen8, but a definite difference with respect to the MSG10!) So … what GPU is it? A Matrox G200 like on the Gen8? Or something with a little more oomph?
Personally, I’m saddened to see that HPE skimped here, leaving the iGPU unusable. 🙁
TomH – the Gen10 Plus is slightly wider if you look at dimensions. You can probably stack a Gen10 atop a Gen10 Plus but not the other way around.
Nic – great point. As mentioned in the article, we ended up splitting this piece into a review of the unit for sale, and some of the customizations you can do beyond HPE’s offerings. It was already over 6K words. For this, we ended up buying 2 more MSG10+ units to test in parallel and get the next article out faster.
Patrick – Thanks for the update: I’ll be eagerly awaiting the follow-up article! 🙂
Thanks Patrick – had hoped the “indent” on the top might be the same size as previous models, despite the overall dimensional differences, but guess not!
Patrick – when can we expect the second part of the article? I am keen to order this machine right now!
In the next 2 weeks. It is about half-written. We have 2 more inbound MSG10+ units to help get testing more parallel.
Patrick – sounds great! Btw, next to the CMOS battery, there is an undocumented 60-pin connector. Do you have any idea what this is for?
HCX – not yet.
On the Win10 setup, was the embedded RAID controller used?
run24josh – we did not install on a RAID array in this instance, but HPE has great documentation on how to use the S100i with Windows such as https://support.hpe.com/hpesc/public/docDisplay?docId=a00036381en_us
Does the iLO Enablement Kit allow you to use the server after OS boot, or is this the same as the big servers where an iLO Advanced license is needed?
Disk temperatures when loaded with, say, 4x WD Reds would be interesting =)
Nikolas Skytter -> 4x WD40EFRX -> about 32C at idle (ambient around 20-21C), max 36C when all disks are being tested with badblocks. Fan speed 8% (idle).
Patrick – I have found that this undocumented connector exists on several Supermicro motherboards as well, and guess what, it is undocumented in their manuals too. Starting to get really curious.
OK, sadly it looks like it’s really only a debug connector, at least on the other boards.
Lucky you, how were you able to install the latest Proxmox VE 6.1 on this server?
As soon as the OS loads, the Intel Ethernet Controller I350-AM4 turns off completely :\
Hi, could you please test whether this unit can boot from an NVMe/M.2 disk in the PCIe slot without problems? There are some settings in the BIOS that point to it, even though there is no M.2 slot. Thanks!
This boots NVMe no problem.
Wonder if there would be a clever way to power this thing with redundant external power supplies
Having skipped the Gen10 and still owning a Gen7 and Gen8 MicroServer, this Plus version looks like a worthy replacement. Although I would have liked to see HPE switch to an internal PSU, ditch the 3.5″ HDD bays for 6 or 8 2.5″ SSD bays (the controller can handle 12 lanes), and use 4x SODIMM sockets to give 4 memory slots. I also agree with Kennedy that 10Gbit would be a nice option (for at least 2 ports).
How did you manage to connect to the iLO interface? My enablement board did not have the usual tag with the factory-set password on it. Is there some default password for those models?
@Raphaël The password is on the bottom side of the case, together with some other tags.
Has anyone else had, or is having, issues when running VMs? I have the E-2224 Xeon model with 16GB RAM, but keep having performance issues, namely storage.
Current setup: 1x Evo 850 500GB SSD, 2x Seagate Barracuda 7.2k 2TB spindle disks.
Installing the hypervisor works fine. I tried ESXi 6.5, 6.7, and 7 using the HPE images. I installed to USB and then tried installing to SSD; all install and run OK, but when setting up a VM, it becomes slow – 1.5hrs to install a Windows 10 image, and then the image is unusable.
I installed Windows Server 2019 Eval onto bare metal. It installs OK, but then goes super sluggish when running Hyper-V to the point of being unusable. I updated to the latest BIOS etc. using the SPP ISO.
Example: copying a 38GB file from my NAS to local storage under 2K19, I get the full 1Gbps. Start a Hyper-V VM and it slows to a few Kbps. Even copying from USB on the Windows Server 2019 host (not a VM), the mouse becomes jumpy and unresponsive.
I dropped the VM vCPU count to 2, then to one, still no difference.
Tried 2 other SSDs.
BIOS settings were set to General Compute performance, and Virtualization Max performance.
Beginning to think I have a faulty unit.
Hi! Do you think it would be possible to add a SAS RAID controller in the PCIe slot and use it with the provided SAS connector?
It would look a little Frankenstein, but with an NVMe drive on the mini-PCIe and a proper RAID controller, this would be a perfect MicroServer for ESXi.
Hi Patrick, thx for the amazing review!
After reading this review I decided to buy this amazing device with the Xeon CPU and 2x Crucial 32GB. So far, so good. But I have a strange problem. The SATA disk (1TB) is performing very badly (10-20 MB/s) in VMware, Xen, and Hyper-V. After replacing it, I still had the same problem. An SSD performs better, 200-300 MB/s, but still not optimal.
After connecting a 6TB SATA disk via RDM in VMware it was performing better, and when I install clean Ubuntu it works as it is supposed to. But virtualization is a disaster. I hope I do not have a faulty device.
@Chris: You are describing exactly the same problem I have!
Nice to see a half-size MicroServer. It would have been a tough choice if this had been available at the time I bought the full-size MicroServer, especially for the noise. The full MicroServer just can’t be tweaked for noise; it spins a non-standard fan on a non-standard connector with non-standard PWM ramping up and down.
There are even 4 aggregable NICs and 1 slot for 10Gbit.
The only issue is the CPU choice this time: 3226 PassMark / 54W or 7651 PassMark / 71W. That’s far above what I’d expect for a MicroServer, in the era of almost-zero-watt i7 NUCs. Otherwise, nice build.
Hi, I was googling HPE MicroServer Gen10 Plus Windows 10 and got to this article. I have been trying to install Windows 10 Pro on it. First, it would not install when the S100i SW RAID array was enabled and two drives were set up as RAID 1; W10 did not recognize the array. I reset the controller to SATA and was able to finish the installation. However, the display driver was not installed (it uses the Microsoft Basic Display Adapter driver) and Device Manager shows unknown devices: 2x Base System Device and a PCI Simple Communications Controller.
I updated the BIOS to the latest one, U48 2.16. Windows Update did not update any drivers. I’ve checked the HPE support site and found no drivers for Windows 10 (unlike the Gen10). I am wondering how you installed Windows 10 on this machine. Thank you for any feedback.
Is it possible to remove this riser and add another one to add an NVMe drive and a GPU?
@Jonh Peng: Hi John, I found this article due to the same reason as you. Have you solved the troubles with Windows 10 yet? Have you tried to install drivers which are available for HPE MicroServer Gen 10 (not plus)?
Btw, the price difference between the Pentium and Xeon in Europe is massive. It’s almost double.
What is the VGA “management” port? Can it connect to a display or not? I’m confused about the secretiveness around it. I don’t care about transcoding, but I assume I can log in to a console via the VGA/DP port.
VGA is connected to iLO so it is more like a traditional server console display output.
Ouch. The price has gone up by over $100 since March! The HPE site lists the P16006-001 at $676.99 now.
Thomas – check resellers. They are often less expensive.
Hi there. Thanks for posting this article, there’s lots of great information.
One thing I am trying to find out is the maximum resolution available from either the VGA port or the DisplayPort under Windows Server 2016.
We would like to run them at 1920×1080
HPE were unable to help (not sure why, they make them but ???)
Any help would be appreciated.
Has anyone tested 64 GB RAM?
Can you please disclose which reseller? Cannot find iLO Enablement Kit below $90. Thanks
Do the USB 3.2 Gen2 ports have DisplayPort 1.4a support?
I.e., can they emit 4K 60Hz HDR?
Any BIOS updates needed if I need to get an E-2236 for better performance?
My i5-6500 won’t work on it.
I was wondering if I could get more specs on the power supply? I’ve just ordered a few microservers but I’d like to have a spare PS on hand. Looking up the official part number has turned up nothing relevant.
I am wondering if anyone has had issues with fan noise when running Debian or Proxmox? I’ve found that even when the system is idle, the fan after a while will spike to 100% and sit there indefinitely till I restart the system.
Just curious if there is a solution to the problem, perhaps additional tools to install or compatible drivers, or something I can change in the BIOS to stop this happening?
@Jonh Peng & @Michal, did you have any success with Windows 10 in the end?
Can I install and run Windows Server 2019 from an M.2 drive in a PCIe board without need for additional drivers? If so, do I need a specific type of card or M.2 SSD?
@SomebodyInGNV, I have W10 running using a Silverstone ECM22 PCIe expansion card with a WD Black 500GB SN750 M.2 2280 NVMe PCIe Gen3 solid state drive (WDS500G3X0C).
Do you think a RAID card (an LSI 9267-8i for example) could be used, keeping the power connector on the MB and using the card’s data port?
Very useful and enjoyable review, one of the best I’ve read in years 🙂
I wonder if I could seek your thoughts…. I have an Adaptec 3405 which I’ve used for years. I know it’s an old card now (purchased about 2007) but works great and is super solid with every SATA and SAS drive configuration I’ve used with it. It has never let me down and I know how it ticks.
I appreciate the drives can be SATA only in the HPE Gen 10+, because of the back-plane to the drives, but would you say there is a reasonable chance this card will work with this HP’s motherboard as long as I continue to use SATA drives only? I have a low profile card bracket for it and it’s not a particularly wide card (so I’m pretty sure it will physically fit) but I’m most concerned the HP’s firmware won’t like it. The server it is currently in was built only in 2019 and works fine. Do you have any experience of using older RAID cards that are from non-HP vendors?
Many thanks in advance for even just your thoughts – alternatively whether you can recommend any hardware RAID cards that are suitable for this unit if you think my Adaptec re-use might be unwise would also be appreciated 🙂
Great review! I actually bought a MicroServer Gen10 Plus on the recommendation from this article. Unfortunately, when I received it, I could not access the BIOS. It seems like the F9-F11 keys do not work. iLO 5 is disabled (per the message on the boot screen). I am at a loss. I know the keyboard works (I can navigate an Ubuntu installation with it and Ctrl+Alt+Del works). I’ve tried to clear the NVRAM using the #6 jumper, to no avail.
Any ideas on how I can get into the BIOS? I’ve tried every USB port including the internal one. Any help would be greatly appreciated.
Hi, I’ve stumbled on this guide while looking for ways to properly install Windows 10 Pro on the Gen10 Plus. So far, while Windows runs, the driver situation is just horrible: so many unrecognized devices, and the display is basically locked to a single resolution. No drivers are available anywhere on HPE’s site, and Windows Update doesn’t pull in any either. Just wondering if anyone has been able to resolve the driver situation. Thanks.
Jati Indrapramasto, it is interesting to see someone with the problem of not being able to find desktop OS (Windows 10) drivers for a box instead of the other way around of being able to find all of the desktop drivers you could need but no server OS (Server 2019) drivers.
I guess this nice little box is a server at heart.
It is a real pleasure not having to manipulate driver files to work with Server 2019 on this sweet little server.
Jati Indrapramasto, I forgot to wish you luck with finding drivers for Windows 10. Good luck hope you are successful.
Seems that, at the moment at least, the G5420 model is fairly hard to find. Given that this box is now nearly 3 years old, do we have any idea if HPE is going to be replacing it soon, or is this purely a victim of the semiconductor shortage?