At Computex 2017 in Taipei, Taiwan, we saw a new data center SSD form factor being championed by Samsung: the M.3 form factor. For the past few years, M.2 SSDs have been gaining significant market share. While visiting with server vendors at the show, the continued trend toward supporting M.2 SSD form factors either on motherboard PCBs or via risers was apparent. Without going into this summer’s new platforms in great depth, “gum stick” SSDs are set to make a splash. Samsung is championing a modified server form factor, M.3, which it expects will allow 1 petabyte (PB) per U of storage.
The Challenge With M.2 SSDs in the Data Center
M.2 SSDs were primarily designed for consumer devices. Eschewing the traditional 2.5″ form factor for 22mm x 80mm (or smaller) M.2 sticks helped usher in thinner, lighter laptops than could be achieved with 2.5″ drives.
Since the 2280 M.2 form factor was largely driven by the portable computing business, capacities were targeted at that PCB size. On the server side, this presents challenges. Typical data center SSDs require additional PCB space for the capacitors that provide power loss protection (PLP). While vendors have greatly reduced the footprint required for PLP components from just five years ago, these components still use considerable space.
One way server SSD manufacturers are solving the challenge is with 22110 (110mm long) M.2 SSDs. The extra 30mm over 2280 drives gives SSD vendors space to add PLP components that are not common in notebook drives. With next-generation platforms sporting higher RAM capacities (e.g. AMD EPYC with up to 16 DIMMs per CPU / 32 DIMMs per system), motherboard PCBs are growing and space is again at a premium. That makes 110mm long M.2 drives too large to fit in many current designs built around 80mm slots.
A Larger Drive, the “up to” 16TB M.3 SSD and 1PB per U
When you are looking to expand capacities for servers, you need to fit more NAND packages and PLP components. PCIe lanes in servers are valuable commodities, with next-generation servers maxing out at around 88-128 PCIe lanes in dual-socket configurations. At the same time, we are seeing more designs that use PCIe switches to build 1U shelves with 64x gum stick SSDs.
Samsung’s answer: the 16TB M.3 SSD.
The M.3 SSD uses the standard M.2 PCIe 3.0 x4 connector. The actual PCB is much wider, leaving more room for PLP circuits and NAND packages.
Samsung is claiming that with 16TB M.3 SSDs it will be possible to hit ~1PB of storage in a 1U form factor! Each drive is also capable of 500K read IOPS, so the bottleneck will be the PCIe switch back-haul links to the servers. At 16W each, 1PB of raw storage across 64 drives will consume only about 1kW.
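As a quick sanity check, the claimed figures work out from the article's numbers (64 drives per 1U shelf, 16TB and 16W per drive); this is back-of-the-envelope arithmetic, not a Samsung spec sheet:

```python
# Back-of-the-envelope check of the 1PB-per-U and ~1kW claims.
# Drive count and per-drive figures are from the article, not a datasheet.
drives_per_1u = 64
capacity_tb_per_drive = 16
power_w_per_drive = 16

total_capacity_tb = drives_per_1u * capacity_tb_per_drive  # 1024 TB, i.e. ~1 PB
total_power_w = drives_per_1u * power_w_per_drive          # 1024 W, i.e. ~1 kW

print(f"Raw capacity: {total_capacity_tb} TB (~1 PB)")
print(f"Power draw:   {total_power_w} W (~1 kW)")
```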
What this means is that we are going to see “shelves” of M.3 SSDs connected via PCIe switches, each providing bandwidth equal to the PCIe connection and up to 1PB per U. An M.3 version of the OCP Lightning JBOF platform is one possible example.
Of course, there is one major obstacle to this: the NAND shortage. We are still hearing of SSD allocation situations due to the global NAND shortage. There are some very large and well-known vendors struggling to keep up with demand at given prices. For these 1PB 1U systems to make sense, the cost premium to achieve this density needs to be outweighed by the cost of adding more rack space.
Even at $0.25/GB, we would expect 1PB per U of storage leveraging M.3 to cost more than $250,000. Still, it is clearly a direction we can see the industry heading. While not all applications need 1PB per U, the storage industry is finding ways to squeeze more density into each server. By comparison, 4U 102-drive (and larger) hard drive chassis that can weigh upwards of 300lbs are still less than a third as dense as the M.3 form factor will allow in its first iteration.
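The cost and density figures above can be checked the same way. Note the per-disk hard drive capacity below (12TB) is an assumption for illustration, not a number from the article:

```python
# Cost of 1PB of raw NAND at the article's $0.25/GB figure.
price_per_gb = 0.25
pb_in_gb = 1_000_000
m3_cost_per_pb = price_per_gb * pb_in_gb  # $250,000

# Density comparison: 4U, 102-drive HDD chassis vs 1PB-per-U M.3.
# 12TB per disk is an assumed figure, not from the article.
hdd_drives = 102
hdd_tb_per_drive = 12
hdd_pb_per_u = hdd_drives * hdd_tb_per_drive / 1000 / 4  # ~0.31 PB per U

print(f"M.3 cost per PB:  ${m3_cost_per_pb:,.0f}")
print(f"HDD density:      {hdd_pb_per_u:.2f} PB per U vs 1 PB per U for M.3")
```

At those assumptions the HDD chassis lands under a third of the M.3 density, consistent with the comparison above.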
There is not much detail about the M.3 form factor yet, but the new 30.5mm wide SSD form factor may be coming to a server near you.