Notes from our latest enterprise PCIe and NVMe SSD efforts

Intel DC P3700 400GB PCB Open

Tom’s IT Pro published a review of the Supermicro SuperServer 6028U-TR4T+ that included a 2.5″ Intel DC P3700 hot swap NVMe SSD alongside SATA SSDs (Intel S3500) and SAS3 hard drives. The review (found here) goes over the system specs in some detail. I did have a few lab notes on the NVMe experience from writing that piece that I wanted to share.

First off, enterprise NVMe drives really are significantly faster than their consumer counterparts. Since the Samsung SM951 was not out yet, we had the Samsung XP941 512GB in the lab in a system next to the Supermicro/ Intel NVMe platform. In the longer IOmeter tests we are using to build our enterprise dataset, the Intel DC P3700 400GB simply blew away the XP941 in terms of performance. We now have two of the PCIe based 2.5″ Micron p320h 350GB SLC SSDs and four Samsung XS1715 800GB SSDs for use once our new NVMe test system is up and running.

NVMe: A new performance storage tier

The bottom line here is that SAS and SATA SSDs are moving toward a lower power and performance tier sitting between PCIe/ NVMe SSDs and spinning media. In practice, PCIe/ NVMe drives are limited by the number of PCIe lanes in a system. For platforms like the Intel Xeon E3 series, there is a very real limit to the number of NVMe SSDs one can install, especially if a PCIe x16 GPU is present. SATA and SAS SSDs still allow for larger, lower performance arrays in a given system and retain a significant speed advantage over spindle disks. Whether one is using NVMe drives or PCIe drives with a custom stack (like the Micron p320h 350GB SLC PCIe drives in our lab awaiting review), the PCIe bus is quickly becoming the limiting factor.
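
To make the lane math concrete, here is a minimal sketch. The figures are assumptions for illustration (a Xeon E3 class CPU exposing 16 PCIe 3.0 lanes, an x16 GPU, x4 per 2.5″ NVMe drive), not vendor specifications:

```python
# Rough PCIe lane budget sketch. All figures are illustrative assumptions.
CPU_LANES = 16            # assumed: Xeon E3 class CPU with 16 PCIe 3.0 lanes
GPU_LANES = 16            # assumed: one x16 GPU installed
NVME_LANES_PER_DRIVE = 4  # typical 2.5" NVMe drive link width

def max_nvme_drives(cpu_lanes: int, gpu_installed: bool) -> int:
    """How many x4 NVMe drives fit in the remaining CPU lane budget."""
    remaining = cpu_lanes - (GPU_LANES if gpu_installed else 0)
    return max(remaining, 0) // NVME_LANES_PER_DRIVE

print(max_nvme_drives(CPU_LANES, gpu_installed=False))  # 4 drives
print(max_nvme_drives(CPU_LANES, gpu_installed=True))   # 0 drives left over
```

With a single x16 GPU in a 16-lane platform, the drive budget goes to zero, which is exactly why larger lane counts (or PCIe switches) matter for NVMe-heavy builds.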

Micron p320h 350GB x2

Power and Heat

The second aspect to the 2.5″ NVMe SSD form factor is heat. What we are seeing is that drive manufacturers are using a 25W performance mode. Having eight of these drives in the front of a chassis means potentially 200W of heat dissipated, primarily in front of the CPUs, memory and other components. Compared to the performance, the power usage is going to be acceptable in most cases, but NVMe drives do use more power than spindle disks. Cooling is going to be a primary concern and will take up an increasingly large share of the overall power budget. There is a good reason we are seeing 2.5″ NVMe drives with thermal pads and heatsink casings.
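
As a quick back-of-the-envelope check on that figure, here is a small sketch assuming the 25W per-drive performance mode mentioned above; actual draw varies by model and workload:

```python
# Front-of-chassis heat estimate for hot swap NVMe bays (sketch).
DRIVE_POWER_W = 25  # assumed per-drive "performance mode" ceiling

for drives in (4, 8, 24):
    watts = drives * DRIVE_POWER_W
    print(f"{drives} drives -> ~{watts} W dissipated ahead of CPUs and memory")
# 8 drives -> ~200 W, the figure used in the text
```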

Intel DC P3700 400GB Internal View

NVMe/ PCIe SSDs and Networking

Networking is going to become a very large bottleneck. With 12Gbps SAS SSDs like the Toshiba PX02 and PX03 series, we started to see the ability to saturate a 10GbE link with large sequential reads. NVMe raises the stakes: with four NVMe PCIe 3.0 x4 drives pushing 3GB/s each, that is 12GB/s of outbound bandwidth potentially required. The bottom line is that for every PCIe lane one can saturate with NVMe data, that data still needs to go somewhere. If that data needs to leave the system, then it is almost a 1:1 storage:networking need on the available PCIe lanes.
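
Here is a minimal sketch of that oversubscription math, using decimal units and ignoring protocol overhead; the 3GB/s per drive figure is the assumption from the paragraph above:

```python
# Storage vs. network bandwidth sketch (decimal units, no protocol overhead).
DRIVE_GB_PER_S = 3.0  # assumed sequential read rate per PCIe 3.0 x4 NVMe drive
drives = 4
storage_gbits = drives * DRIVE_GB_PER_S * 8  # ~96 Gbit/s (~12 GB/s)

for link_gbits, name in ((10, "10GbE"), (40, "40GbE"), (100, "100GbE")):
    ratio = storage_gbits / link_gbits
    print(f"{name}: {ratio:.1f}x oversubscribed by {drives} NVMe drives")
```

Even a 100GbE link is roughly matched by just four such drives, which is the 1:1 storage:networking point above.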

Wrapping it up (for now)

PCIe and NVMe drives are going to be everywhere in the next few months. Every major manufacturer is going to have consumer NVMe offerings that will drive down prices in the segment substantially. We are going to see a new performance tier, with PCIe/ NVMe SSDs becoming the performance mainstream while SAS and SATA SSDs become a higher capacity tier that still offers much faster access times than hard drives. Hard drives will be relegated to the third tier of non-volatile storage (NVMe SSD -> SAS/ SATA SSD -> hard drive) even more than they already are.

2 COMMENTS

  1. I’m in agreement with what was posted here. To me, the problem is that these drives cost so much more than SAS drives. It doesn’t make sense. The majority of the cost is raw NAND on larger drives, so the premiums are hard to justify.

    The other issue is that you can’t really get many systems with them. I can’t even buy a backplane to add these drives into an existing 5.25″ bay to upgrade a current gen system.

    Networking – 10g can’t even get rolled out everywhere. Let alone 40g or 100g.

  2. That’s why you only use two of these NVMe drives as mirrored ZILs to boost synced writes. (Also, if PMC is smart, they would price their new NVMe NVRAM nicely, under $1,000, and make a 2.5″ version.) The actual storage can still rest on 8x regular SATA SSDs. That would solve the cost problem.

    As to networking, you are right, we desperately need 40GigE L2 switches to become commodity stuff. The only way to accomplish that is if Intel integrates Fortville XL710 into Xeons.
