Seagate 10TB drives now shipping in volume

Seagate Enterprise 10TB

Seagate announced that it is now shipping 10TB helium-filled drives in volume. Here is a link to the news release. This has some big implications for those building high-density storage arrays. With these drives, 900TB in 4U is possible with JBODs and 600TB+ with servers onboard, for up to 9PB per rack. Suffice it to say, this kind of capacity is awesome.
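To put those density figures in perspective, here is a quick back-of-envelope sketch in Python. The chassis sizes (a 90-bay 4U JBOD, a 60-bay 4U server, ten shelves in a rack) are our assumptions for illustration, not Seagate's figures:

```python
# Back-of-envelope rack density math. Chassis sizes below are our
# assumptions (90-bay 4U JBOD, 60-bay 4U server, ~40U of shelves
# per rack); only the 10TB drive capacity comes from Seagate.
DRIVE_TB = 10

jbod_bays = 90        # dense 4U JBOD
server_bays = 60      # dense 4U storage server with compute onboard
jbods_per_rack = 10   # 10 x 4U = 40U of shelves in a 42U rack

print(f"4U JBOD:   {jbod_bays * DRIVE_TB} TB")                          # 900 TB
print(f"4U server: {server_bays * DRIVE_TB} TB")                        # 600 TB
print(f"Full rack: {jbods_per_rack * jbod_bays * DRIVE_TB / 1000} PB")  # 9.0 PB
```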

Some of the Seagate Enterprise Capacity 3.5″ HDD 10TB helium-filled drive specs:

  • 10TB Capacity
  • 7200rpm
  • SATA III 6.0Gbps interface
  • Helium filled
  • 2.5M hour MTBF
  • 0.35% AFR
  • 512e sector format
  • 5 year warranty
  • 8W max
  • Non-recoverable read errors: 1 per 10^15 bits read

Seagate also claims a maximum sequential transfer rate of 254MB/s, so these drives can send data faster than two 1GbE links can handle (in most real-world networks). It also means that, under sustained sequential transfers, we would expect a full read or fill of the drive to take more than 12 hours. The drives are also “limited” to a 550TB/year workload rating, like the recently released WD Gold drives. We have also seen WD 8TB helium drives reach around $225 each.
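As a rough sanity check on that figure (the 200MB/s average sustained rate below is our assumption; 254MB/s is the quoted maximum, and transfer rates fall on the inner tracks):

```python
# Rough full-drive transfer-time math. 254MB/s is Seagate's quoted
# maximum; the 200MB/s average sustained rate is our assumption.
capacity = 10e12      # 10 TB in bytes (decimal)
max_rate = 254e6      # bytes/s, quoted maximum
avg_rate = 200e6      # bytes/s, assumed sustained average

print(f"At the 254MB/s max:   {capacity / max_rate / 3600:.1f} hours")  # ~10.9
print(f"At a 200MB/s average: {capacity / avg_rate / 3600:.1f} hours")  # ~13.9
```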

We should note that at a 10^-15 non-recoverable read error rate, the odds of hitting an error while reading drives this large are becoming substantial. That is likely why we have seen a transition away from traditional RAID arrays toward distributed storage systems.
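To see why, here is a simple probability sketch. The independent-error model and the 8-drive array size are our simplifying assumptions; only the 10^-15 rate comes from the spec sheet:

```python
import math

# One non-recoverable read error per 10^15 bits read, per the spec sheet.
URE_RATE = 1e-15
BITS_PER_TB = 8e12    # decimal terabytes

def p_at_least_one_ure(tb_read):
    """Probability of hitting at least one URE while reading tb_read
    terabytes, modeling errors as independent (a simplification)."""
    expected_errors = tb_read * BITS_PER_TB * URE_RATE
    return 1 - math.exp(-expected_errors)

# Reading one full drive vs. rebuilding an assumed 8-drive array
# (7 surviving 10TB drives must be read end to end).
print(f"Full read of one 10TB drive:  {p_at_least_one_ure(10):.1%}")  # ~7.7%
print(f"Rebuild reading 7 x 10TB:     {p_at_least_one_ure(70):.1%}")  # ~42.9%
```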

Seagate Enterprise 10TB

And yes, we do realize that these drives also mean we are going to see 8-bay desktop systems from the usual NAS vendors reach 80TB, which is borderline insanity. For perspective, the NetApp FAS2020 was a filer sold between 2007 and 2012 with a $10,000 base price and a 68TB maximum capacity. It is easy to see that in applications such as large backup systems the new 10TB drives are going to be a game changer, even if that game changing is simply pushing down the prices of 6TB and 8TB drives.

2 COMMENTS

  1. “… That is likely why we have seen a transition away from RAID arrays to distributed storage systems …”

    Patrick, I’m not sure I fully understand your point here. Could you elaborate a bit? TIA.

  2. In a traditional RAID setup, a 10^-15 non-recoverable read error rate begins to be a problem with large drives, because the risk of another drive failing during a rebuild is getting too high.
