Kioxia FL6 800GB Performance by CPU Architecture
If you saw our recent More Cores More Better AMD Arm and Intel Server CPUs in 2022-2023 piece, or pieces like the Supermicro ARS-210ME-FNR Ampere Altra Max Arm Server Review and our Huawei HiSilicon Kunpeng 920 Arm Server piece, you may have noticed that we have been expanding our testbeds to include more architectures. This is in addition to the Ampere Altra 80-core CPUs, which are from the family used by Oracle Cloud, Microsoft Azure, and Google Cloud. We also managed to test these drives on the newest-generation AMD EPYC Bergamo and Genoa-X SKUs.
Since that is hard to read, we have a zoomed-in view below where the X-axis does not start at 0.
Generally, this drive performed well on the newer PCIe Gen4 and Gen5 x86 controllers. The Arm and IBM POWER9 controllers are generally slower, and that is exactly what we saw here. There are no surprises at this point.
In this edition, we had up to the 5th Gen Intel Xeon, codenamed “Emerald Rapids.” Simply due to the testing window, we did not get to test these drives in the Intel Xeon 6 Sierra Forest systems. We did, however, get them into the newer AMD EPYC platforms.
It is fun to see that not all PCIe controllers are created equal.
Final Words
The Kioxia FL6 is not exactly the company’s newest drive. This SSD was announced in 2021, and we first saw it in Q3 2022. Still, this class of drives tends to move slowly over time, with far less frequent releases than we see in the read-focused capacity segment. We managed to snag a great deal on these, so we figured we would at least share what we saw from them.
60 DWPD is a lot. At FMS 2024 this coming week, we are going to have a discussion on whether 60 DWPD is even the right metric for modern SSDs, especially capacity-focused ones. We are also going to have an update to a fun project we started in 2013 in the SSD space. Still, there are logging devices, cache devices, and so forth that really focus on heavy write workloads. If you truly have a write-heavy workload, then the Kioxia FL6 is designed to meet those needs with consistent writes to NAND.
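For a sense of scale, here is a minimal back-of-the-envelope sketch of the write volume 60 DWPD implies for an 800GB drive, assuming a five-year warranty period (our assumption for illustration, not a quoted spec):

# Hypothetical endurance arithmetic, not a Kioxia spec sheet figure
capacity_bytes = 800e9          # 800 GB usable capacity (decimal)
dwpd = 60                       # rated drive writes per day
warranty_years = 5              # assumed enterprise warranty period

bytes_per_day = capacity_bytes * dwpd
total_bytes = bytes_per_day * 365 * warranty_years

print(f"{bytes_per_day / 1e12:.0f} TB written per day")                  # -> 48 TB/day
print(f"{total_bytes / 1e15:.1f} PB over {warranty_years} years")        # -> 87.6 PB

That is roughly 48TB of writes every day, or on the order of 87PB over a five-year life, which is why this class of drive sits in its own segment.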
Stay tuned for FMS 2024, when we are going to have an update to our Used enterprise SSDs: Dissecting our production SSD population piece, which is exactly what we had to do to get these drives (albeit these were sealed and new when they arrived).
I wonder how much actual NAND they have inside, as it would be nice to see how it’s split up between the usable and the spare area.
I’d say 1 TiB = 1.1 TB, but that’s a pure guess. That would be your standard “write-intensive” 27% spare, but given it’s SLC, this might be enough to do 60 DWPD.
Anyhow, this is an important piece of information I’d also like to see mentioned in the review (in *all* SSD reviews, actually): actual NAND capacity and number of packages.
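To put a number on that guess, here is a quick sketch of the spare area implied if the raw NAND really is 1 TiB behind 800 GB of usable space (the 1 TiB figure is the commenter’s guess above, not a confirmed spec):

# Overprovisioning implied by a guessed 1 TiB raw / 800 GB usable split
raw_bytes = 2**40        # 1 TiB of raw NAND (guess), ~1099.5 GB
usable_bytes = 800e9     # 800 GB advertised usable capacity

spare_fraction = (raw_bytes - usable_bytes) / raw_bytes
print(f"{spare_fraction:.1%} of the raw NAND held as spare")   # -> ~27.2%

That works out to roughly 27% of the raw NAND held back, matching the “write-intensive” overprovisioning figure mentioned above.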
@Robert & @G., TechPowerUp says:
Name: BiCS4 XL-Flash
Part Number: TH58LJT0SA4BA8H
Type: SLC
Technology: 96-layer
Speed: 800 MT/s
Capacity: 8 chips @ 1 Tbit
Topology: Charge Trap
Die Size: 96 mm² (1.3 Gbit/mm²)
Dies per Chip: 8 dies @ 128 Gbit
Planes per Die: 16
Decks per Die: 1
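Working from those TechPowerUp figures, the raw NAND comes out to the 1 TiB guessed above. A minimal sketch of that arithmetic, treating Gbit as binary gigabits (the usual NAND convention):

# Raw NAND capacity from the package/die breakdown listed above
packages = 8                  # 8 chips on the drive
dies_per_package = 8          # 8 dies per chip
gbit_per_die = 128            # 128 Gbit per die (binary, i.e. 2**30 bits)

raw_gbit = packages * dies_per_package * gbit_per_die    # 8192 Gbit
raw_gib = raw_gbit // 8                                   # 1024 GiB = 1 TiB
raw_bytes = raw_gib * 2**30

print(f"{raw_gbit} Gbit raw = {raw_gib} GiB = {raw_bytes / 1e9:.1f} GB")
# -> 8192 Gbit raw = 1024 GiB = 1099.5 GB, vs 800 GB usable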