We recently had the opportunity to test 24x 2TB Intel DC P3320 NVMe SSDs in a single system. While 24x 2TB NVMe SSDs may sound extremely expensive, we went the opposite route: we are using the (currently) lowest cost data center NVMe SSDs around, the Intel DC P3320 2TB drives. As we first discussed in March 2016, the Intel DC P3320 2TB drives are priced around $0.50/GB, or around the price of low-end DC S3500 SSDs. While you trade some performance for the excellent pricing, you still get much higher performance than SATA.
We had to do a bit of picking through the lab to come up with this NVMe test bed. We used a Supermicro 2U “Ultra” server with 24x NVMe hot swap drive bays and 2x 2.5″ SATA rear hot swap drive bays. We also used a pair of Intel Xeon E5-2698 V4 CPUs and 512GB of DDR4-2400 RAM from the STH lab to outfit this system.
- CPU: 2x Intel Xeon E5-2698 V4
- System: Supermicro 2U “Ultra” NVMe server (SYS-2028U-TN24RT+)
- RAM: 16x 32GB SK Hynix DDR4-2400 RDIMMs
- OS SSD: Intel DC S3700 400GB
- NVMe SSDs: 24x Intel DC P3320 2TB
This particular server has 32x PCIe 3.0 lanes between the drives and the host. Other PCIe lanes are dedicated to the 4x 10Gbase-T ports and the PCIe slots. This is a very common configuration; Microsemi, while briefing us for our recent PCIe switch piece, told us that 32 lanes dedicated to the host is frequently requested by ODMs and customers. Remember, the biggest PCIe switches from PLX and Microsemi are now 96 lanes. The rationale is that one may want more storage but will ultimately be limited by network bandwidth. As you will see, our low cost 24x NVMe array is easily able to saturate a 100GbE link (or two).
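The "limited by network bandwidth" rationale is easy to sanity check. A minimal sketch, using standard PCIe 3.0 and Ethernet spec figures (not measurements from this article), shows why 32 host lanes roughly match two 100GbE links:

```python
# Back-of-the-envelope check: can 32 PCIe 3.0 lanes feed two 100GbE links?
# All figures below are spec values, not numbers measured on this system.

PCIE3_GT_PER_LANE = 8e9          # PCIe 3.0 runs at 8 GT/s per lane
ENCODING_EFFICIENCY = 128 / 130  # 128b/130b line encoding overhead

# Usable bytes per second per lane (~985 MB/s), then the full 32-lane uplink
lane_bytes_per_s = PCIE3_GT_PER_LANE * ENCODING_EFFICIENCY / 8
uplink_gbs = 32 * lane_bytes_per_s / 1e9      # ~31.5 GB/s

# Two 100GbE ports, ignoring protocol overhead
nic_gbs = 2 * 100e9 / 8 / 1e9                 # 25.0 GB/s

print(f"32-lane PCIe 3.0 uplink: {uplink_gbs:.1f} GB/s")
print(f"2x 100GbE: {nic_gbs:.1f} GB/s")
print("uplink can saturate both NICs:", uplink_gbs > nic_gbs)
```

So even before the 24 drives become a bottleneck, the 32-lane uplink has enough headroom to keep a pair of 100GbE ports full.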
Another note is that this is a shipping production system. For those thinking that NVMe is still far away, Silicon Valley startups are actively testing these NVMe systems for their storage appliances. Here is the system in the data center.
Going Fast with a 48TB NVMe System
We are going to have a formal benchmark piece on the system and array soon, but we did want to give you a taste of what low cost NVMe can do. We have a short video of what you can see with iometer and these drives. Caution: if you just bought a SATA SSD array for low cost storage, this is going to make you (extremely) jealous:
On 128KB sequential reads we hit 25GB/s. We also tried a hyperconverged-style workload using KVM Hadoop VMs and hit around 20GB/s before we ran out of time. The performance is clearly there and well beyond what SATA offers.
As you will notice, we are using Intel Xeon E5-2698 V4 processors. To generate this much 4K random read load with iometer, we even had to change our worker configuration slightly. The fact remains: this array is fast, providing over 3 million random read IOPS with a setup time of under 15 minutes. Compare this to our recent read-optimized 24x SATA SSD piece, where we saw only 1.6 million random read IOPS and 12GB/s sequential reads. One can now get roughly twice the performance at a relatively modest price increase.
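To put the headline numbers in perspective, here is a quick hypothetical breakdown (derived from the figures above, not from additional test logs) of what 3 million 4K random read IOPS implies per drive and in bandwidth:

```python
# What does 3 million 4K random read IOPS across 24 drives imply?
# Derived arithmetic only; the 3M figure is the article's measured result.

total_iops = 3_000_000
drives = 24
block_bytes = 4096  # 4K random reads

iops_per_drive = total_iops / drives             # 125,000 IOPS per drive
bandwidth_gbs = total_iops * block_bytes / 1e9   # ~12.3 GB/s of 4K reads

print(f"{iops_per_drive:,.0f} IOPS per drive")
print(f"{bandwidth_gbs:.1f} GB/s aggregate at 4K")
```

At 125K random read IOPS per drive, each low cost P3320 only has to deliver a fraction of what high-end NVMe drives are rated for, which is why a big array of cheap drives adds up so well.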
We also tested the array with a pair of Intel Xeon E5-2698 V3 CPUs and 512GB of DDR4-2133 memory. While the older/slower platform works well for sets of hard drives, SATA SSDs, and even SAS3 SSDs, with NVMe we did see a difference: using the Intel Xeon E5 V4 platform with faster memory gave us about 7% better read performance. In our longer benchmarks the write performance does suffer; however, these arrays are not meant for heavy writes. As we found in our real world data center SSD write usage sampling, many workloads write well under 0.3 DWPD.
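For a sense of scale, 0.3 DWPD still allows a substantial daily write volume on drives this size. A rough sketch (assumed figures, using the 2TB capacity and the 0.3 DWPD threshold mentioned above):

```python
# Illustrative DWPD math: what 0.3 drive-writes-per-day means for this array.
# Capacity and DWPD values are from the article; the math is just arithmetic.

drive_tb = 2.0
dwpd = 0.3
drives = 24

per_drive_gb_day = drive_tb * 1000 * dwpd         # 600 GB written per drive per day
array_tb_day = per_drive_gb_day * drives / 1000   # 14.4 TB/day across the array

print(f"{per_drive_gb_day:.0f} GB/day per drive")
print(f"{array_tb_day:.1f} TB/day array-wide")
```

In other words, a workload could rewrite over 14TB of this 48TB array every day and still sit at the low-endurance end of our sampling, which is why read-optimized drives make sense here.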
On the drives, our key recommendation is that the Intel DC P3320s seem to be the go-to drives at the moment for low cost 2TB storage. We have seen companies looking to house big data sets for analytics opt for the Intel DC P3320s in their appliances, as they offer a huge performance boost over SATA SSDs. While they are not as fast as higher-performance drives like the Intel DC P3700, they offer a much better $/GB ratio, and in larger array-type systems with PCIe switches their performance is more than adequate.
Price parity with SATA on the low end of NVMe drives is going to be the new normal. While the Intel DC P3320s are not individually the fastest drives, in these larger arrays they can provide great performance for read intensive workloads and clear benefits over SATA for low cost storage. The other side of this is that for companies operating write-infrequently/read-often applications (web servers, CDNs, streaming video servers and the like), the Intel DC P3320s are great SSDs. Software startups in Silicon Valley have taken note, and we expect to see more NVMe based storage arrays in the future. Starting in October, the DemoEval lab will be hosting clusters for Silicon Valley startups using all NVMe SSDs. A year ago, these were SAS/SATA clusters, so the change is clearly upon us.
Were the IOPS against a single logical volume or multiple volumes?
What RAID controller did you use?
What’s the cost for HW & drives at time of article?
What’s the cost of the whole set up?
Is there any chance to see a smaller SME version (for the office, not rack-based) with 8-16 NVMe M.2 SSDs and 4x Thunderbolt 3 ports? That would be a lot more useful for office clients like MacBooks and Intel NUCs with Thunderbolt 3 ports.
Also, Samsung M.2 SSDs get cheaper each year, so it would be quite acceptable price-wise to buy 8x 1TB SSDs and add 8 more a year later for a fast network backup NAS that can sync off-hours with the larger capacity HDD based storage systems in the office.
Thanks in advance
Which RAID controller were you using?
Freddie, these are directly attached to PCIe lanes.