The redundant power supplies in the server are 0.75kW, 80Plus Platinum rated units. At this point, most power supplies we see in this class of server are Platinum rated; almost none are 80Plus Gold, and a few are now Titanium rated.
For this review, we wanted to get a sense of how much power the system uses with a 64-core AMD EPYC 7702P CPU and 256GB of memory. We thought it would be important to give a range.
- Idle: 0.13kW
- STH 70% CPU Load: 0.35kW
- 100% Load: 0.38kW
- Maximum Recorded: 0.49kW
There is room to expand the configuration further, which would push these numbers higher than what we recorded, but the 750W PSUs seem reasonably sized.
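To put those readings in context against the 750W supplies, a quick back-of-the-envelope check shows how much of a single PSU's capacity each measurement represents. This is just a sketch using the wall-power figures from this review; the percentages are of nameplate capacity, not of what the PSU delivers after conversion losses.

```python
# Wall-power readings from this review, in watts, checked against one 750 W PSU.
PSU_WATTS = 750

readings = {
    "Idle": 130,
    "STH 70% CPU Load": 350,
    "100% Load": 380,
    "Maximum Recorded": 490,
}

for label, watts in readings.items():
    pct = 100 * watts / PSU_WATTS
    print(f"{label}: {watts} W -> {pct:.1f}% of one PSU")
```

Even the maximum recorded draw lands around 65% of a single supply, so the system can ride through the loss of one PSU in the redundant pair with headroom to spare.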
Note these results were taken using a 208V Schneider Electric / APC PDU at 17.5C and 71% RH. Our testing window shown here had a +/- 0.3C and +/- 2% RH variance.
STH Server Spider: ASRock Rack 1U10E-ROME/2T
In the second half of 2018, we introduced the STH Server Spider as a quick reference to where a server system's aptitude lies. Our goal is to give a quick visual depiction of the types of deployments a server is targeted at.
Although the ten front panel bays are configured for SATA as well, the ability to move to NVMe SSDs across the front means that we get a lot of storage performance. The single AMD EPYC CPU in the 1U10E-ROME/2T means that it is not as dense as dual-socket or 2U4N platforms. Still, it is a nice balance that ASRock Rack managed to achieve with this server.
When we first saw the 1U10E-ROME/2T was heading our way, we thought that this was going to be a heavily cost-optimized platform. Against that backdrop, it both met and exceeded our expectations.
There are aspects of the server, such as the PCIe riser design and the use of a standard form factor motherboard, that are clear nods to cost optimization. Another example is that there is no printed service guide sticker on the chassis or inside the cover, something we are seeing more vendors adopt. The front and rear chassis designs do not have the fancy ornamentation that high-end servers have. OCP NIC 2.0 designs are plentiful and lower-cost, but we know OCP NIC 3.0 will be the future. Using an OCP NIC 2.0 slot here lowers total system costs. Frankly, that is the point of a server like this in the market.
Where the server exceeded our expectations was in terms of features. 10Gbase-T networking is something we really like on this server since it adds deployment flexibility. One does not need to add an extra NIC to get 10GbE, which lowers costs. Make no mistake, the Intel X550 is more expensive than putting dual Intel i210 NICs in the system, so this is a performance and feature versus cost trade-off. We also really like the front panel storage configuration and that we saw PCIe Gen4 speeds from the front bays that can also support SATA. It is nice to not have to re-wire the system to add NVMe or SATA functionality.
Overall, the key impression this left us with is that this would be just about the ideal server for us to use in the STH hosting infrastructure. We tend to use Optane SSDs for databases and NVMe SSDs for other VMs. The ASRock Rack 1U10E-ROME/2T has the right storage configuration while also having the onboard and OCP networking that we would use immediately. Most of our new hosting nodes are AMD EPYC 7002 based with 8x 32GB DIMMs per CPU, so this system profile fits well. Perhaps it is a high compliment from a site like STH that a server we review is one we are willing to use in our hosting infrastructure.