As our first motherboard reviewed for the Intel Xeon Scalable Processor generation, the Supermicro X11SPH-nCTF certainly feels like a next-generation platform. We are reviewing it from the perspective of forming the foundation of a storage server because it has so many storage-related features, such as an onboard Broadcom SAS3 controller. This is also our first platform using the Intel C622 PCH, which you can read more about in our Burgeoning Intel Xeon SP Lewisburg PCH Options Overview piece. The key difference between the Intel C622 and the lower-end C621 is the inclusion of 10GbE networking, which adds to the appeal of this platform over previous generations. Let us take a look at the board.
Here is our basic test configuration for this motherboard:
- Motherboard: Supermicro X11SPH-nCTF
- CPU: Intel Xeon Silver 4114
- RAM: 6x 16GB DDR4-2400 RDIMMs (Micron)
- SSD: Intel DC S3710 400GB
- SATADOM: Supermicro 32GB SATADOM
We will note quickly that this motherboard has now hosted the following Intel Xeon CPUs in our labs:
- Gold 6132
- Silver 4114
- Silver 4112
- Silver 4108
- Bronze 3106
- Bronze 3104
The impressive part is that the mechanicals, including the heat sink and socket, have held up well through over 20 CPU installations, which is more than we expect the vast majority of deployments to ever experience. The motherboard is listed as being able to handle 205W TDP CPUs; however, our Intel Xeon Platinum 8180s are being used elsewhere, so we have not yet had the opportunity to try them.
The Intel Xeon Bronze 3104 enabled all of the system’s storage and networking controllers along with the various expansion slots and ports. If you are configuring a storage system, however, we do recommend looking slightly higher in the stack, to at least the Intel Xeon Silver 4108.
Supermicro X11SPH-nCTF Overview
Before delving into the details, the Supermicro X11SPH-nCTF is particularly interesting as an ATX motherboard (12″ x 9.6″). While motherboard sizing is normally less of a concern in servers, the standard form factor means that the X11SPH-nCTF can fit in a wide variety of chassis, potentially including short-depth, tower, and standard rack mount cases.
With the new series of CPUs, it is important to look at how the PCH is connected. While the Intel PCH solution has a standard DMI3 connection, designers like Supermicro can allocate additional PCIe lanes to the PCH. That is exactly what the company did on the X11SPH-nCTF: there is an additional PCIe x8 link to the PCH. That is important because it allows Supermicro to provide more available bandwidth for the 10x SATA lanes, the 10GbE ports, and the two PCH-connected PCIe slots (PCIe x4 and m.2).
While running every slot and every device at full speed would still leave the PCH with some bottleneck, in reality it is a very unlikely scenario where your boot drives, both 10GbE links, all eight data SATA devices, the m.2 SSD, and a PCIe x4 device are all running at 100% at the same time. Had Supermicro used only a DMI3 link to the PCH, the potential bottleneck would have been a cause for concern; the additional PCIe 3.0 x8 link alleviates it. On balance, this is an appropriate design choice.
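To put rough numbers on that, here is a back-of-the-envelope sketch. It assumes DMI3 is roughly equivalent to a PCIe 3.0 x4 link and uses usable per-lane throughput after 128b/130b encoding; the device figures are theoretical maximums, not measurements:

```python
# Back-of-the-envelope PCH bandwidth check (all figures are theoretical maximums).
# PCIe 3.0: 8 GT/s per lane with 128b/130b encoding -> ~0.985 GB/s usable per lane.
PCIE3_LANE_GBPS = 8 * 128 / 130 / 8

# Uplink to the CPU: DMI3 (treated here as ~PCIe 3.0 x4) plus the extra x8 link.
uplink = (4 + 8) * PCIE3_LANE_GBPS

# Worst-case simultaneous demand from PCH-attached devices, in GB/s.
demand = {
    "10x SATA III": 10 * 0.6,             # 600 MB/s per port
    "2x 10GbE": 2 * 10 / 8,               # 10 Gbps per link
    "m.2 NVMe (x4)": 4 * PCIE3_LANE_GBPS,
    "PCIe x4 slot": 4 * PCIE3_LANE_GBPS,
}
total_demand = sum(demand.values())

print(f"uplink ~{uplink:.1f} GB/s vs. worst-case demand ~{total_demand:.1f} GB/s")
```

The worst case is oversubscribed, but far less so than a DMI3-only design (~3.9 GB/s of uplink) would be, which is the point of the extra x8 link.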
The CPU socket is flanked by eight DDR4 DIMM slots, four on each side. In our Intel Xeon Scalable launch coverage, we showed that the new “Skylake-SP” generation of CPUs supports six channels of memory with up to two DIMMs per channel. The black DIMM slots are additional DDR4 DIMM slots that bring the total RAM capacity in line with the previous generation's four-channel, two-DIMMs-per-channel configurations (totaling eight).
The LGA3647 socket has held up extremely well in our testing. After a dozen installs we now think the new socket is easier, faster, and less risky to service than previous generations.
With six DDR4 DIMMs installed and a 2U heatsink, the front to back airflow should work easily in most cases.
The Supermicro X11SPH-nCTF is a special platform as it offers much more than a bare minimum feature set. It is a storage-focused platform, and we therefore see a multitude of features designed to excel in that role. One great example is the onboard Broadcom SAS 3008 controller, a SAS3 design from the company’s LSI/ Avago acquisitions. This gives the motherboard eight 12Gbps SAS3 channels and the ability to connect to more advanced topologies such as SAS expanders and external disk shelves.
Next to the SFF-8643 SAS3 ports are two connectors that many of our readers may be less familiar with. Those are OCuLink connectors, which provide connectivity to NVMe SSDs and backplanes. In the near future, we will review a platform that takes advantage of these connectors.
Moving to the PCH-enabled storage, there are two gold/ orange 7-pin SATA ports. Each can power and support a SATADOM module without requiring an external power cable. We use SATA DOMs heavily in our hosting and lab infrastructure as boot disks, and the SATA-powered DOMs are significantly easier to work with.
There are two SFF-8087 connectors, each carrying four SATA III 6.0gbps ports for eight ports between the two. Add to this the two standard SATA ports and one sees ten SATA devices total. For those looking to build a lower-cost storage server, this means there are a total of eighteen storage ports available on the platform: ten SATA 6.0gbps ports plus eight SAS3 12gbps ports capable of using expanders. That is a lot of storage connectivity.
Next to the SFF-8087 connectors is a front panel USB 3.0 header. This is an industry standard connector. Beyond this, Supermicro also adds a USB 3.0 Type A header for an internal USB drive. While we prefer the SATA DOM boot drive, some organizations like using USB drives with an embedded OS.
In terms of PCIe expansion, the Supermicro X11SPH-nCTF has a surprising amount. There are four PCIe slots along with one m.2 PCIe slot for an NVMe drive. A pair of slots, x16 and x8 physical, share CPU lanes and can be used in either x16/x0 or x8/x8 mode. Outside of GPUs and 100GbE networking cards, PCIe x16 cards are less common, so we like the option to run both slots at x8. There is also a dedicated PCIe 3.0 x8 slot in an x8 connector. The fourth slot is an x8 physical connector that is PCIe 3.0 x4 electrical and connects via the PCH.
We appreciate that these new platforms support m.2 NVMe devices out of the box. In previous generations of Supermicro motherboards, m.2 support was achieved via an add-in card on many platforms. Now that capability is built-in, further enhancing the storage capabilities.
One can also see the new ASPEED AST2500 BMC. This is the latest-generation BMC powering the out-of-band management interface. Out-of-band management has its own dedicated NIC, which sits between the serial port/ USB 3.0 blocks and atop the USB 2.0 block.
The two stacked RJ-45 ports are 10Gbase-T ports provided by the Intel C622 PCH and use the Intel i40e driver.
Rounding out the I/O is a legacy VGA port to connect KVM carts in the data center.
These days, out of band management is a standard feature on servers. Supermicro offers an industry standard solution for traditional management, including a WebGUI. The company is also supporting the Redfish management standard.
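As a quick illustration of what Redfish support means in practice, the standard exposes a REST tree under `/redfish/v1/`. The sketch below uses only the Python standard library; the BMC address and the Systems member ID (`"1"`) are illustrative assumptions, since member IDs vary by vendor and firmware, and a real BMC will also require authentication:

```python
import json
import urllib.request

# Hypothetical BMC address -- substitute your own management IP.
BMC = "https://192.0.2.10"

def redfish_url(base, *segments):
    """Build a Redfish resource URL from path segments."""
    return base.rstrip("/") + "/redfish/v1/" + "/".join(segments)

def get_power_state(base):
    """Fetch PowerState from a Systems collection member.

    Note: a real BMC requires authentication (e.g. a session token or
    basic-auth header) and the member ID is vendor-specific.
    """
    with urllib.request.urlopen(redfish_url(base, "Systems", "1")) as resp:
        return json.load(resp)["PowerState"]

# Building the URL needs no network access:
print(redfish_url(BMC, "Systems", "1"))
# -> https://192.0.2.10/redfish/v1/Systems/1
```

The same tree also exposes chassis thermals, power readings, and firmware inventory, which is why Redfish is displacing raw IPMI for fleet automation.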
In the latest generation of Supermicro IPMI there is an HTML5 iKVM. One no longer needs to use a Java console to get remote KVM access to their server.
Currently, Supermicro allows users to utilize Serial-over-LAN, Java, or HTML5 consoles from before a system is turned on all the way into the OS. Other vendors such as HPE, Dell EMC, and Lenovo charge an additional license upgrade for this capability (among others bundled with their higher license levels). That is an extremely popular feature. One can also perform BIOS updates using the Web GUI, but that feature does require a relatively low-cost license (around $20 street price). That is a feature we wish Supermicro would include with their systems across product lines.
At STH, we do all of our testing in remote data centers. Having the ability to remote console into the machines means we do not need to make trips to the data center to service the lab even if BIOS changes or manual OS installs are required.
In terms of power consumption, we measured using our APC PDUs on 208V data center power at 17.1 °C and 70% relative humidity. We are using the Intel Xeon Silver 4114 for these power tests:
- Power off BMC only: 6.1W
- OS Idle: 63.2W
- GROMACS AVX-512 Load: 110.3W
These numbers are great. The Intel Xeon Silver 4114 is an 85W TDP CPU (note that TDP does not equal power consumption), and AVX-512 stresses the CPU considerably. The system idle figure is about 9W lower than similar configurations with SAS 3008 controllers and two active 10Gbase-T links (via add-in X550). When comparing to previous-generation systems, one must remember that this configuration also includes two SSDs and six DDR4-2400 DIMMs.
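For context, here is a quick sketch of what those measured figures mean over a year of continuous operation. The $0.10/kWh rate is an illustrative assumption, not a measured cost; substitute your facility's rate:

```python
# Annual energy estimate from the measured wall-power figures, assuming
# 24/7 operation and an illustrative electricity rate.
HOURS_PER_YEAR = 24 * 365
RATE_USD_PER_KWH = 0.10  # assumption -- adjust for your facility

for label, watts in (("OS idle", 63.2), ("AVX-512 load", 110.3)):
    kwh = watts * HOURS_PER_YEAR / 1000
    print(f"{label}: ~{kwh:.0f} kWh/year, ~${kwh * RATE_USD_PER_KWH:.0f}/year")
```

Even under sustained AVX-512 load, the platform's annual energy draw is modest for a 10GbE storage server, which matters at scale in colocation environments billed per watt.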
When we first saw the Supermicro X11SPH-nCTF, it was due to one of our readers sharing their excitement about the platform. Upon reviewing the unit, and running it through almost two dozen CPU configurations, we can say it has held up extremely well. Many of our readers are using SFP+ DACs or optics for their 10GbE networking. There is a closely related variant of this platform, the Supermicro X11SPH-nCTPF (note the P), which trades the RJ-45 ports for SFP+ cages. We also asked about OCuLink backplanes and will be publishing a review of a system using this platform's two OCuLink ports to connect U.2 NVMe drives.
Current street prices are hovering just over $500 USD. Combine this with an Intel Xeon Bronze 3104 and for around $750 you have a 10Gbase-T enabled, SAS3-capable server platform. When you look at the cost of adding a Broadcom SAS 3008 controller and 10Gbase-T networking to a platform, it is considerably less expensive to get those features built-in.
If you are looking for a single socket 10Gbase-T storage server, this is going to be a solid platform.