Perhaps one of the technologies we are most excited about is the Ethernet SSD. Toshiba (now Kioxia) and Marvell have been showing concept devices for years. Now, the Kioxia Ethernet SSD is becoming a reality and is sampling to certain customers and partners. That may not be the general availability announcement we are eagerly awaiting, but it is very exciting as a next step. Kioxia has seen enough interest in the product to continue development toward commercialization.
Kioxia Ethernet SSD Background
The concept is relatively simple and is something that we are seeing multiple companies attack from different angles. As technologies such as NVMeoF become both viable and popular, there is a growing question in the industry of whether flash storage needs to be directly attached to x86 nodes. Some SmartNICs have the capability to run NVMeoF stacks on the NIC, since the stack is relatively lightweight, and can use PCIe peer-to-peer transfers to directly access storage in the same system. We see devices such as the Mellanox BlueField-2 line, Fungible F1 DPU, and Pensando Distributed Services Architecture SmartNIC offer ways to connect multiple SSDs directly to an Arm/MIPS-based device and expose them to the network, even without an x86 host.
Kioxia and Marvell have been working on a different strategy. Instead of going through an intermediary processor, the host interface on each drive can be Ethernet instead of PCIe. Indeed, PCIe Gen3 x4 is roughly equivalent in bandwidth to a 25GbE link, so the ratios make sense. The key to making this work is Marvell’s 88SN2400 controller, which allows SSDs to be placed on the network directly as NVMeoF targets.
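As a rough sanity check on that ratio, here is a back-of-the-envelope comparison of usable bandwidth. This is our own arithmetic, not a figure from the announcement:

```python
# Compare usable bandwidth of PCIe Gen3 x4 (the typical NVMe SSD host
# interface) against a single 25GbE link.

# PCIe Gen3 runs at 8 GT/s per lane with 128b/130b encoding.
pcie_gen3_lane_gbps = 8 * 128 / 130          # ~7.88 Gbps usable per lane
pcie_gen3_x4_gbps = 4 * pcie_gen3_lane_gbps  # ~31.5 Gbps for an x4 link

# 25GbE runs at 25 Gbps on the wire with 64b/66b encoding.
eth_25g_gbps = 25 * 64 / 66                  # ~24.2 Gbps usable

print(f"PCIe Gen3 x4: {pcie_gen3_x4_gbps:.1f} Gbps")
print(f"25GbE link:   {eth_25g_gbps:.1f} Gbps")
```

A single 25GbE link lands in the same ballpark as a Gen3 x4 connection, and the dual-25GbE option on these drives comfortably covers it.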
As you can see, each SSD gets two paths to two different switches allowing high-availability connectivity to each device, similar to what we would get in a traditional SAS storage array. Marvell (and Kioxia) have shown off the solution with Marvell network switches to handle the links. One can see the eight QSFP28 connectors on the rear of the chassis shown below. This allows full bandwidth outside of the chassis.
In 2018, this product concept was an SSD with an external board attached to the standard SSD’s 2.5″ connector. By FMS 2019, the controller was integrated directly into the SSD itself. Here we can see the Kioxia Ethernet SSD (formerly Toshiba) with an integrated controller and output directly to pluggable cages.
Either built into a high-availability chassis or in a more direct-to-network design, each SSD can be added as a network endpoint and provide NVMeoF targets.
The fact that this solution is moving to customer/partner sampling is a good sign.
Highlights From the Press Release
Here are some of the Ethernet Bunch of Flash (EBOF) and Kioxia Ethernet SSD highlights from the press release:
EBOF System Highlights:
- Simpler EBOF design with KIOXIA Ethernet SSD connecting directly to the embedded Ethernet switch inside EBOF
- Available EBOF 2U systems can connect up to 24 KIOXIA Ethernet SSDs with up to 600 gigabits per second (Gbps) of total storage throughput
- Each system supports 2.4 terabits per second (Tb/s) of connectivity throughput which can be split between network connectivity and daisy-chaining additional EBOFs
- High performance: 670K IOPS per drive, over 16M IOPS per 24 bay EBOF (@4KB random read)
- Runs Marvell EBOF SDK, leveraging the SONiC network operating system and enabling advanced discovery and management functions
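The headline numbers in those bullets are internally consistent; here is a quick sketch of the arithmetic (our own check, assuming one 25GbE link per drive carries the throughput figure):

```python
# Sanity-check the EBOF headline numbers from the press release.
drives_per_ebof = 24
per_drive_link_gbps = 25        # one 25GbE link per drive
per_drive_4k_iops = 670_000     # 4KB random read, per drive

storage_throughput_gbps = drives_per_ebof * per_drive_link_gbps
total_iops = drives_per_ebof * per_drive_4k_iops

print(f"Storage throughput: {storage_throughput_gbps} Gbps")  # 600 Gbps
print(f"4KB random read:    {total_iops / 1e6:.2f}M IOPS")    # 16.08M IOPS
```

That works out to 600 Gbps of storage throughput and just over 16M IOPS, matching the "over 16M IOPS per 24 bay EBOF" claim.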
KIOXIA’s new drives incorporate the Marvell® NVMe-oF™ Ethernet SSD converter controller into its enterprise NVMe™ SSDs, bypassing the requirement for an external Ethernet-to-NVMe bridge adapter.
KIOXIA Ethernet SSD Features:
- Support for single or dual 25Gb Ethernet and RoCEv2 RDMA connection
- NVMe-oF 1.1 and NVMe 1.4 compliance
- 2.5-inch, 15-millimeter (mm) Z-height form factor
- Capacities supported: 1920/3840/7680 GB
- Two-die failure recovery and other reliability features
- Support for Redfish® and NVMe-MI™ storage management specifications
- Support for IPv4 and IPv6 architecture
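Since each drive presents itself as a standard NVMe-oF 1.1 target over RoCEv2, it should be reachable with stock tooling such as nvme-cli. A sketch of what that might look like; the IP address and NQN below are placeholders, not values from the announcement:

```shell
# Discover NVMe-oF subsystems on an Ethernet SSD over RDMA (RoCEv2).
# 192.0.2.10 is a placeholder address; 4420 is the standard NVMe-oF port.
nvme discover -t rdma -a 192.0.2.10 -s 4420

# Connect to a discovered subsystem by its NQN (placeholder shown).
nvme connect -t rdma -a 192.0.2.10 -s 4420 \
    -n nqn.2019-01.example.com:ethernet-ssd0

# The namespace then shows up as a regular local block device.
nvme list
```

The point is that no vendor-specific initiator is needed on the host side; the drive looks like any other NVMe-oF target on the network.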
If we go beyond today’s announcement and look to the future, this is very powerful. Instead of an NVMe SSD going through an x86 server’s PCIe lanes (and potentially PCIe switches) to reach the network, or a SAS SSD traversing SAS switches and an x86 server’s PCIe lanes to reach the network, the NVMe SSD itself can have an IPv6 address. Multiple services can then use RoCEv2 to get high-speed access over the network using standard Ethernet gear. This greatly simplifies the design since an Ethernet SSD solution traverses fewer hops and fewer different interfaces.
In an NVMeoF solution like this, one part is the hardware, but the other is the software enablement. Kioxia has been working on its Kumoscale NVMeoF software solution to address that side of the equation.
We still have not been able to get one of these SSDs, or even the Marvell PCIe to 25GbE adapter, but we are holding out hope that we can show this off one day.