With all the recent controversy regarding WD, Toshiba, and Seagate slipping SMR drives into retail channels without disclosing the use of their slower technology, we thought it would be interesting to dive into the actual impact of using an SMR drive. We can hypothesize that there is a negative impact, but it is better to show it. To that end, today we will be comparing a WD Red 4TB SMR drive to its CMR predecessor, as well as to CMR drives from other manufacturers.
With this piece, we have a companion video:
While our YouTube presence is still small compared to the STH main site, we thought this was an important enough finding that we should try to reach those who may be impacted. Feel free to listen along while you read.
SMR vs CMR – A quick primer
Many of our readers may already be familiar with the differences between SMR and CMR, but a quick refresher never hurt anyone!
First up is CMR, which stands for conventional magnetic recording. This has been the standard technology behind hard drive data storage since the mid-2000s. Data is written on magnetic tracks that sit side-by-side without overlapping, so write operations on one track do not affect its neighbors.
The newer contender is SMR, or shingled magnetic recording. It is called shingled because the data tracks can be visualized like roofing shingles; they partially overlap each other. Because of this overlap, the resulting tracks are thinner, allowing more of them to fit into a given area and achieving better overall data density. The WD Red is a device-managed SMR drive, which presents itself to the operating system as a normal hard drive.
The overlapping arrangement of SMR tracks complicates drive operations when it comes time to write data to the disk. When data is written to an SMR drive, the write disturbs the data on the overlapping neighboring tracks, which must therefore be rewritten as part of the operation. This read-modify-write cycle takes extra time to perform.
As a mitigation against this penalty, writes can be cached to a segment of the drive that operates with CMR technology, and during idle time the drive will spool those writes out to the SMR area. Obviously this CMR cache will have a limited capacity, and with enough write operations can be exhausted.
When that happens, the drive has no choice but to write directly to the SMR area and incur the performance penalty. WD has not provided specifics on how its drives mitigate the performance impact of SMR, so we are operating on guesswork as to the size, or even the existence, of a CMR cache area in the WD Red.
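To make the cache-exhaustion behavior concrete, here is a minimal sketch of the model described above. The cache size and transfer speeds are invented round numbers for illustration, not measured WD40EFAX characteristics:

```python
# Hypothetical model of a drive-managed SMR drive with a CMR write cache.
# All numbers are illustrative assumptions, not measured WD40EFAX figures.

CACHE_GB = 40    # assumed CMR cache capacity
CMR_MBPS = 180   # assumed speed while the cache absorbs writes
SMR_MBPS = 15    # assumed speed once writes land directly on shingled tracks

def write_time_seconds(total_gb: float) -> float:
    """Time to ingest total_gb of sustained writes, with no idle time to flush."""
    cached = min(total_gb, CACHE_GB)   # portion absorbed by the CMR cache
    direct = total_gb - cached         # portion forced straight to SMR tracks
    return cached * 1024 / CMR_MBPS + direct * 1024 / SMR_MBPS

# A 20 GB burst fits in the cache and looks like a normal CMR drive;
# a sustained 500 GB write exhausts the cache and slows dramatically.
print(f"20 GB burst:  {write_time_seconds(20) / 60:.1f} minutes")
print(f"500 GB write: {write_time_seconds(500) / 3600:.1f} hours")
```

Under these assumed numbers, short bursts look indistinguishable from CMR, which is exactly why light benchmarks can miss the penalty entirely.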
If you want a third-party description of this, see the great 2015 paper by Toshiba: Shingled Magnetic Recording Technologies for Large-Capacity Hard Disk Drives.
After the previous article on STH, the question we received was how this impacts arrays. Specifically, RAID arrays our readers use. We utilize a lot of ZFS at STH, so in mid-April 2020 we started a project to see if, indeed, there was a difference. In short, there was, and in a big way.
For the test configuration, we wanted to take CPU performance out of the equation and focus on drive performance. Here is what we utilized:
- System: ASRock X470D4U
- CPU: AMD Ryzen 5 3600
- Memory: 2x Crucial 16GB ECC UDIMMs
- OS SSD: Samsung 840 Pro 256 GB
- OS: Windows 10 Pro 1909 64-bit / FreeNAS 11.3-U2 (note, the latest is FreeNAS 11.3-U3.1 but it was released well after we started the project)
- RAIDZ array disks: 4x Toshiba 7200RPM CMR HDDs
Here is our list of drive contenders:
- HGST 4TB 0F26902 (7200 RPM)
- Seagate Ironwolf NAS 4TB ST4000VN008
- WD Red 4TB WD40EFRX (CMR)
- WD Red 4TB WD40EFAX (SMR)
The WD40EFAX is the only SMR drive in the comparison and is the focus of the testing.
Testing the WD Red 4TB SMR WD40EFAX Drive
We had two main areas of testing. First up, the new SMR drive has been put through a handful of standard benchmarks just to see how it performs in the context of a larger pool of drives. After that, some more targeted tests were run, pitting the WD40EFAX against three other CMR 4TB drives in a standard ZFS RAIDZ operation: rebuilding an array with a new drive after a drive has failed.
Prior to beginning this sequence of tests, the drives were prepped by writing 3TB of data to them and then deleting 1TB of that data. Testing commenced immediately after the prep was completed. First came a simple 125GB file copy to test sequential write speeds outside the context of a benchmark utility. Following that, CrystalDiskMark was used to see if the large sequential write from the first test had a lasting impact on drive performance. These tests were performed as rapidly as possible to minimize drive idle time between them. Finally, a FreeNAS RAIDZ resilver was performed.
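For readers who want to reproduce a rough version of the first test, a timed file copy like the one below is all that is needed to derive MB/s. This sketch uses a small stand-in file rather than the 125GB file from our testing; the file names and size here are arbitrary:

```python
import os
import shutil
import time

def timed_copy(src: str, dst: str) -> float:
    """Copy src to dst and return the observed throughput in MB/s."""
    size_mb = os.path.getsize(src) / (1024 * 1024)
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    return size_mb / (time.perf_counter() - start)

# Small stand-in file; our actual test used a single 125GB copy so the
# write would outlast any drive-side caching.
with open("scratch.bin", "wb") as f:
    f.write(os.urandom(16 * 1024 * 1024))  # 16 MB of incompressible data
print(f"{timed_copy('scratch.bin', 'scratch_copy.bin'):.0f} MB/s")
```

Note that a file this small mostly measures OS and drive caches; the copy must be far larger than any cache to expose sustained SMR behavior.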
These targeted tests are not designed to be comprehensive, but rather to illuminate any obvious differences between the SMR drive and its CMR competitors.
The RAIDZ resilver test is of particular interest, since the WD Red is marketed as a NAS drive suitable for arrays of up to 8 disks. A resilver, or RAID rebuild, involves an enormous amount of data being read and written, and has the potential to be heavily impacted by the performance penalties of SMR technology.
The test array is a 4-drive RAIDZ volume filled to around 60% capacity. A drive is removed from the array, each test drive is inserted in its place, and the resilver is timed. The other three drives in the array remain constant. To add additional stress to the scenario, load is placed on the array during the resilver: 1MB files are copied to the array over the network, and 2TB of data is read from the array and copied over the network to a secondary device. This is a significant workload, but we wanted to stress the drives enough to get clear separation. NAS units and RAID arrays are, after all, designed to continue serving applications and users while in degraded states.
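As a back-of-envelope illustration of why sustained write speed dominates a resilver, the following sketch estimates the rebuild time for one disk's share of a pool like ours. The sustained speeds are assumed values chosen for illustration, not measurements from our testing:

```python
# Rough resilver-time model for a 4-drive RAIDZ pool of 4TB disks at ~60% full.
# The replacement disk must rewrite roughly its share of the pool's data.
DISK_TB = 4
POOL_FILL = 0.60
data_per_disk_gb = DISK_TB * 1024 * POOL_FILL  # ~2.4 TB to reconstruct

def resilver_hours(sustained_mbps: float) -> float:
    """Hours to rewrite one disk's share at a given sustained write speed."""
    return data_per_disk_gb * 1024 / sustained_mbps / 3600

# Assumed speeds: a healthy CMR drive vs. an SMR drive with its cache exhausted.
print(f"CMR-class @ 150 MB/s: {resilver_hours(150):.1f} hours")
print(f"SMR worst case @ 30 MB/s: {resilver_hours(30):.1f} hours")
```

Even this simplistic model, which ignores fragmentation and the concurrent network load we applied, shows how a sustained-write slowdown multiplies directly into extra hours in a degraded state.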
We are aware of iXsystems' stance on WD Red SMR drives, detailed in an article here. The short version is that they advise against using these drives. That blog post was published after we had already embarked upon this adventure; it directs readers to WD for more information, and WD has not, over the course of the ensuing month, provided an update. Even after it came out, we thought the experiment worthwhile, since the users who have read the iXsystems blog are likely a minority, even among STH readers.
We are also testing a common use case that many may not think of. Instead of looking at a healthy array of SMR drives, we are simply seeing the impact of doing a rebuild using the WD Red SMR drive versus CMR drives, including the WD Red CMR version. This is important because it covers not just those buying a new set of drives for an array, but also the common case where a user must purchase a replacement drive quickly to return a NAS to a healthy state as soon as possible. Think of it this way: you know you have WD Red 4TB drives (likely CMR) in your FreeNAS array, a drive fails, so you go to Best Buy. They have a WD Red in stock, so you buy it and install it without doing a day's worth of online research.
Next, we will go through our test results before getting to our final words.