Inspur Systems NF8260M5 4P Intel Xeon OCP Server Review

Inspur Systems NF8260M5 Internal Overview

The Inspur Systems NF8260M5 is a four-socket Intel Xeon server with a unique twist. Inspur and Intel jointly developed the “Crane Mountain” platform specifically for the cloud service provider (CSP) market. While many of the four-socket systems we review are developed for enterprises, this is the first 4P server we have seen specifically designed for CSPs. Inspur and Intel are going a step further and contributing this design to the OCP community so others can benefit from the design.

Our introduction to the Inspur Systems NF8260M5 came from Intel’s Jason Waxman, who held a system up on stage during Intel’s keynote at OCP Summit 2019. He noted at the time that it would support 2nd Gen Intel Xeon Scalable processors and Intel Optane DC Persistent Memory.

Jason Waxman 4 Socket Cascade Lake Server

After the show, we immediately got one in our test lab and our review here today shows what this OCP contribution has to offer.

Inspur Systems NF8260M5 Overview

At the top level, this is a 2U server with 24x 2.5″ hot-swap bays up front. We are going to discuss storage in a bit; however, this form factor is a big deal. Previous-generation Intel Xeon E7 series quad-socket servers were often 4U designs. Newer quad-socket designs like the Inspur Systems NF8260M5 are 2U, effectively doubling socket density for these scale-up platforms.

Inspur Systems NF8260M5 Front

Inside, we find the heart of the system: four Intel Xeon Scalable CPUs. These can be either first or second generation Intel Xeon Scalable processors. In our test system, we utilized a range of second generation options. Higher-end SKUs top out at 28 cores and 56 threads, meaning the entire system can handle up to 112 cores and 224 threads.

Inspur Systems NF8260M5 Internal Overview

Each CPU is flanked by a maximum set of twelve DIMM slots, making 48 DIMM slots total. One can use 128GB LRDIMMs for up to 6TB of memory, or reach 12TB using the 256GB LRDIMMs now coming to market. One can also use Intel Optane DC Persistent Memory Modules, or Optane DCPMM.

Intel Optane DCPMM 6TB Capacity

These modules combine the persistence of NVMe SSDs with higher speeds and lower latency, since they sit directly on the DRAM channels. Our system, for example, has 24x 32GB DDR4 RDIMMs along with 24x 256GB Optane DCPMMs for a combined 6.75TB of memory. That is absolutely massive.
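The capacity figures in this section are straightforward arithmetic over the 48 DIMM slots. A quick sketch (slot counts and module sizes taken from this review; variable names are our own) confirms the 6TB, 12TB, and 6.75TB numbers:

```python
# Sanity-checking the NF8260M5 memory capacity figures from this review.
DIMM_SLOTS_PER_CPU = 12
CPUS = 4
TOTAL_SLOTS = DIMM_SLOTS_PER_CPU * CPUS          # 48 DIMM slots total

# Maximum DRAM-only configurations
max_128gb_lrdimm = TOTAL_SLOTS * 128             # GB with 128GB LRDIMMs
max_256gb_lrdimm = TOTAL_SLOTS * 256             # GB with 256GB LRDIMMs

# The review system: half the slots hold DRAM, half hold Optane DCPMM
dram_gb = 24 * 32                                # 24x 32GB DDR4 RDIMMs
dcpmm_gb = 24 * 256                              # 24x 256GB Optane DCPMMs
combined_tb = (dram_gb + dcpmm_gb) / 1024

print(max_128gb_lrdimm // 1024, "TB")            # 6 TB
print(max_256gb_lrdimm // 1024, "TB")            # 12 TB
print(combined_tb, "TB")                         # 6.75 TB
```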

Inspur Systems NF8260M5 DDR4 And Optane DCPMM Support

In the middle of the chassis, we find six hot-swap fans cooling this massive system.

Inspur Systems NF8260M5 Hot Swap Fan

The rear I/O is handled mainly via risers; there are three sets of risers across the chassis. The Inspur Systems NF8260M5 also has an OCP mezzanine slot for networking without using the risers. Our single-riser configuration was enough to handle the full storage configuration for our system.

Inspur Systems NF8260M5 Middle Riser Populated

Storage is segmented into three sets of eight hot-swap bays, with three PCBs each servicing one set. Using this method, the NF8260M5 can utilize NVMe, SAS/SATA, or mixed backplanes depending on configuration needs.

Inspur Systems NF8260M5 Storage Backplane

Power is supplied via two power supplies. The PSUs in our test unit were 800W units, which is not enough for redundancy with our configuration. Inspur offers 1.3kW, 1.6kW, and 2kW versions, which we would recommend when configuring a similar system.

Inspur Systems NF8260M5 800W PSUs

Rear I/O without expansion risers and the OCP mezzanine slot is limited to a management RJ-45 network port, two USB 3.0 ports, and legacy VGA plus serial ports.

Inspur Systems NF8260M5 Rear

On a quick usability note, the Inspur NF8260M5 was serviceable on its rails. Some lower-end units require the chassis to be completely removed for service.

Inspur Systems NF8260M5 Service Out Of Rack

The top cover has a nice latching mechanism and good documentation of the system’s main features printed inside. This is a feature we now expect from top-tier servers.

Inspur Systems NF8260M5 Cover Diagram

Overall, this streamlined hardware design worked well for us in testing.

Next, we are going to take a look at the Inspur Systems NF8260M5 test configuration and management, before continuing with our review.

REVIEW OVERVIEW
Design & Aesthetics: 9.2
Performance: 9.6
Feature Set: 9.6
Value: 9.5
Patrick has been running STH since 2009 and covers a wide variety of SME, SMB, and SOHO IT topics. Patrick is a consultant in the technology industry and has worked with numerous large hardware and storage vendors in Silicon Valley. The goal of STH is simply to help users find information about server, storage, and networking building blocks. If you have any helpful information, please feel free to post on the forums.

7 COMMENTS

  1. Good review but a huge market for Optane-equipped systems is databases.

    Can you at least put up some Redis numbers or other DB-centric numbers?

  2. From what I read, that’s what they’re doing using Memory Mode and Redis in VMs on the last chart. Maybe App Direct next time?

  3. Can you guys do more OCP server reviews? Nobody else is doing anything with them outside of hyperscale. I’d like to know when they’re ready for more than super 7 deployment

  4. It’s hard. What if we wanted to buy 2 of these or 10? I can’t here with Inspur.

    I’m also for having STH do OCP server reviews. Y’all do the most server reviews, and OCP is growing to the point that maybe we’d have to deploy in 36 months and start planning in the next year or two.
