If you look at how data is written to the tracks, you might notice an issue. Because of the width of the write head, writing to the current track, Track N, also writes to the overlapping track, Track N+1. What if there is already data on Track N+1 where the write head needs to write? This is shown in Figure 5 below.
Figure 5: SMR hard drive track layout
The write head is in yellow, and it needs to write some data. The post-trimmed data is shown as "New Data," and the data it will overwrite, shown in red, is labeled "Disturbed Data." This data already exists and will be destroyed if the "New Data" is written.
To get around the problem of data over-write, the drive has to read the data that is about to be overwritten and write it somewhere else on the drive. After that, it can write the new data in the original location. Therefore, a simple write becomes a read-write-write (one read and two writes). I'm not counting the "seeks" that might have to be performed, but you can see how a simple data update can lead to a large amount of activity.
For sequential writes, SMR drives are great because it's just a pure write operation (no reading and no seeking). However, as soon as an application wants to overwrite a piece of data or perform a data update, the write suddenly becomes a read-write-write operation.
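To make the penalty concrete, here is a minimal sketch of a toy track model in Python (the names and the simplified layout are illustrative, not a real drive interface). An update that disturbs a neighboring track costs one read and two writes; a write with nothing downstream is just a write:

```python
def update_track(tracks, n, new_data):
    """Update track n; the wide write head also clobbers track n + 1."""
    ops = []
    disturbed = tracks[n + 1] if n + 1 < len(tracks) else None
    if disturbed is not None:
        ops.append("read")            # save the data about to be disturbed
    tracks[n] = new_data
    if n + 1 < len(tracks):
        tracks[n + 1] = None          # the overlap wipes the neighboring track
    ops.append("write")               # write the new data
    if disturbed is not None:
        tracks[n + 1] = disturbed     # relocate the disturbed data (simplified:
        ops.append("write")           # written back in place rather than elsewhere)
    return ops

tracks = ["A", "B", "C"]
print(update_track(tracks, 0, "A2"))  # ['read', 'write', 'write']
print(update_track(tracks, 2, "C2"))  # ['write']
```

The last track in a shingled region has no neighbor to disturb, which is why purely sequential writes stay cheap.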
In another scenario, if a block is erased, you suddenly have a "hole" in the drive that a file system will likely use at some point. There may be data on the tracks neighboring the hole. A typical file system doesn't know whether an unused block has neighboring tracks with data, so it may decide to write into that block. The drive then experiences a read-write-write situation again.
IO elevators within the kernel try to combine different writes to make fewer sequential writes rather than a large number of smaller writes that are possibly random. This can help SMR drives, but only to a certain degree. To make writing efficient, the elevator needs to store a lot of data and then look for sequential combinations. This results in increased memory usage, increased CPU usage and increased write latency. The result is that the write speed can be degraded for both sequential write IO and random write IO, depending upon where the data is to be written.
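A rough sketch of the merging idea (illustrative Python, not kernel code): the elevator sorts queued writes by block address and coalesces adjacent or overlapping requests into longer sequential runs:

```python
def merge_writes(requests):
    """requests: list of (start_block, length) tuples.
    Returns fewer, longer sequential runs after sorting and coalescing."""
    merged = []
    for start, length in sorted(requests):
        if merged and merged[-1][0] + merged[-1][1] >= start:
            # Adjacent or overlapping with the previous run: extend it.
            prev_start, prev_len = merged[-1]
            merged[-1] = (prev_start, max(prev_len, start + length - prev_start))
        else:
            merged.append((start, length))
    return merged

print(merge_writes([(40, 8), (0, 8), (8, 8), (100, 4)]))
# [(0, 16), (40, 8), (100, 4)]
```

The cost the article mentions is visible even here: the elevator must buffer requests and sort them before it can find the combinations, which is where the extra memory, CPU, and latency come from.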
Therefore, you will often see SMR drives referred to as "archive drives." That is, once the data is written, it is read much more than it is rewritten, which sounds a lot like archive storage. However, the appeal of vastly increased drive capacity has many people working on ways to overcome the write issues of SMR drives.
Improving the Situation
How to "work around" the idiosyncrasies of SMR drives has been a field of study for several years. There have been many approaches to "fixing" the overwrite issue in SMR drives, but I'm only going to cover a few of them in this article starting with drive banding.
As data is written to the SMR drive, the probability of having to rewrite neighboring data increases. In the pathological case, a simple rewrite may cause all of the data on the drive to be read and written again. To help avoid this, SMR drive manufacturers have created "bands." Inside a band there are a fixed number of shingled tracks. Separating bands is a large "guard space" so that the data on an outside track of one band won't over write data on a track in a neighboring band. This increased guard space between bands contains all overwrites due to data updates to the particular band. If small amounts of data need to be re-written, the possible cascade of read-write-write cycles to move data to prevent it from being over-written is contained to the band.
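A toy calculation, with made-up numbers, showing why bands bound the damage: without bands, a rewrite could in the pathological case cascade to the end of the drive, while with bands the cascade stops at the band's guard space:

```python
# Illustrative band layout; TRACKS_PER_BAND is a made-up figure,
# not taken from any real drive specification.
TRACKS_PER_BAND = 20

def band_of(track):
    return track // TRACKS_PER_BAND

def worst_case_rewrite_tracks(track, total_tracks):
    """Worst-case number of tracks a rewrite at `track` could touch,
    without bands vs. with bands."""
    unbanded = total_tracks - track                      # cascade to drive's end
    band_end = (band_of(track) + 1) * TRACKS_PER_BAND
    banded = band_end - track                            # cascade stops at guard space
    return unbanded, banded

print(worst_case_rewrite_tracks(5, 1_000_000))  # (999995, 15)
```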
Using bands has only a small impact on overall areal drive density. Each band contains a reasonable number of tracks relative to the size of the guard spaces, so the density loss is minimal. The trade-off is between areal density (fewer guard spaces and a very large number of SMR tracks per band) and performance (more guard spaces and fewer tracks per band, limiting the read-write-write cascades). So far, the verdict is that SMR drive manufacturers are using bands in their newer drives.
You may have noticed that SMR drives are somewhat similar to SSDs. When you update a small amount of data on an SSD, you have to read the entire block, update the data, and then write the block back. SMR drives are very similar: if you want to update a small bit of data that is surrounded by existing data, you may have to read that other data and then write it back to the drive. This is also referred to as write amplification. Therefore, researchers and engineers have been trying to adapt SSD concepts to SMR drives.
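Write amplification is commonly expressed as the ratio of bytes physically written to bytes the user asked to write. A quick illustration, with made-up numbers:

```python
def write_amplification(user_bytes, relocated_bytes):
    """Ratio of total bytes physically written to bytes the user requested."""
    return (user_bytes + relocated_bytes) / user_bytes

# Hypothetical case: updating 4 KiB forces 1 MiB of neighboring
# track data to be read and rewritten.
print(write_amplification(4096, 1_048_576))  # 257.0
```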
One of the first techniques adapted from SSDs to SMR drives is over-provisioning. Over-provisioning is the technique of reserving space on the drive that is not accessible by the user for use by the drive for storing data. The drive can use this space for anything it wants or needs. In the case of SMR drives this space can be used to "park" data for blocks that are to be overwritten because of a write-update to an existing block. Once the data has been updated, the drive can copy the data from the "parking area" to the final location. This can also be used in an attempt to lay out files in a more sequential manner to reduce the fragmentation of the SMR drive.
Over-provisioning is usually expressed as a percentage with the following definition:
Percentage Over-provisioning = (Physical Capacity - User Capacity) / (User Capacity)
As the equation shows, and as you might intuitively expect, over-provisioning takes away user-addressable capacity from the drive. For SSDs, trading some capacity for a boost in performance is usually worthwhile. For SMR drives, the trade-off may not be as clear cut; that remains to be seen.
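Plugging illustrative numbers into the definition above (the capacities here are hypothetical):

```python
def overprovisioning_pct(physical_gb, user_gb):
    """Percentage over-provisioning per the definition in the text."""
    return 100.0 * (physical_gb - user_gb) / user_gb

# Hypothetical example: an 8 TB drive exposing 7.5 TB to the user.
print(round(overprovisioning_pct(8000, 7500), 2))  # 6.67
```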