More Solutions for Shingled Drives

File System Integration

One option for improving the performance of SMR drives is to use a file system better suited to their sequential nature. Many people have discussed using log-based file systems on SMR drives to improve performance.

Log-structured file systems are a bit different from other file systems, with both good points and bad points. Rather than writing to a tree structure such as a b-tree or an h-tree, with or without a journal, a log-structured file system writes all data and metadata sequentially in a continuous stream called a log (actually a circular log).


The concept of a log-structured file system was developed by John Ousterhout and Fred Douglis. The motivation is that typical file systems lay out data based on spatial locality for rotating media (hard drives), but rotating media tends to have slow seek times, limiting write performance. In addition, it was presumed that most IO would become write-dominated, since growing main-memory caches would absorb an increasing share of reads. A log-structured file system takes a different approach: it treats the file system as a circular log and writes sequentially to the "head" of the log (the beginning), never overwriting the existing log. Seeks are kept to a minimum because everything is sequential, improving write performance. Log-structured file systems seem like a good option for SSDs or SMR drives because of their emphasis on sequential writes rather than random writes, but they have other advantages as well. I haven't seen a comparison of a log-structured file system to a tree-based one on SMR drives, but the general thought is that they are much better suited to SMR drives than tree-based file systems.
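To make the idea concrete, here is a minimal, purely illustrative Python sketch of a log-structured store (my own toy model, not NILFS or F2FS code): every write, whether data or metadata, is appended at the head of a circular log, and a small map tracks where the newest copy of each block lives.

```python
# Minimal sketch of the log-structured idea (illustrative only, not real file system code).
# Every update -- data or metadata -- is appended at the head of a circular log;
# an in-memory map records where the latest copy of each block lives.

class LogStructuredStore:
    def __init__(self, capacity_blocks):
        self.log = [None] * capacity_blocks   # the circular log (one slot per block)
        self.head = 0                         # next write position (the "head" of the log)
        self.block_map = {}                   # logical block id -> log position of newest copy

    def write(self, block_id, payload):
        """Append the new version at the head; never overwrite the old copy in place."""
        self.log[self.head] = (block_id, payload)
        self.block_map[block_id] = self.head
        self.head = (self.head + 1) % len(self.log)   # wrap around: circular log

    def read(self, block_id):
        """Follow the map to the newest copy; stale copies await garbage collection."""
        pos = self.block_map[block_id]
        return self.log[pos][1]

store = LogStructuredStore(capacity_blocks=8)
store.write("inode:42", b"metadata v1")
store.write("data:42/0", b"file contents")
store.write("inode:42", b"metadata v2")   # an update is another append, not an in-place write
print(store.read("inode:42"))             # b"metadata v2"
```

Notice that the update to "inode:42" never touches the original copy, which is exactly the behavior that avoids rewriting existing SMR tracks.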

There are a couple of notable log-structured file systems in Linux. One is NILFS. It was included in the 2.6.30 kernel and has been shown to have great performance on SSDs. It is also designed for very large file systems (exabytes), so it's suitable for SMR drives.

Another example is F2FS, a log-structured file system designed for NAND-based storage devices, particularly embedded devices such as cell phones and tablets. It is currently limited to 16TB file systems (3.94TB for an individual file); however, it has been shown to be quite fast in some benchmarks.

We can build on the idea of SMR-appropriate file systems by combining drive layout modifications with a file system that understands the drive layout. This concept comes from the observation that metadata potentially changes more often than the data itself. For the best performance, the drive needs to handle small writes very quickly (write IOPS) for the metadata. SMR drives can't really do this very well, but conventional magnetic drives can do it much better. What if we could combine SMR tracks and "regular" drive tracks within the same drive and have the file system write the metadata to one track type and the data to the other? This is exactly what has been proposed.

In this new type of drive there are two track types. One track type retains the current guard space between tracks, eliminating the overwrite issue of SMR drives, and the other consists of SMR bands. The first track type is sometimes referred to as a "Random Access Zone" (RAZ) and stores the file system metadata. The second track type, which uses SMR tracks, stores the data itself. Putting the file metadata on the RAZ tracks allows it to be updated easily without the read-write-write scenario associated with SMR tracks. The file data is placed on the SMR tracks in as close to a sequential manner as possible to reduce any data overwrites that might need to take place.
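A rough sketch of how such a placement policy might look, assuming a hypothetical drive that exposes its RAZ and SMR regions to the file system (the class and method names here are illustrative assumptions, not an actual drive interface):

```python
# Hypothetical placement policy for a mixed RAZ/SMR drive. The names and layout
# are illustrative assumptions, not a real drive interface. Metadata goes to the
# randomly writable RAZ tracks; file data is appended to the SMR region sequentially.

RAZ, SMR = "RAZ", "SMR"

class MixedZoneDrive:
    def __init__(self, raz_tracks, smr_tracks):
        self.free_raz = list(range(raz_tracks))   # RAZ tracks can be rewritten in place
        self.smr_tracks = smr_tracks
        self.smr_write_pointer = 0                # SMR tracks are filled strictly in order

    def allocate(self, is_metadata):
        """Return (zone, track) for the next write."""
        if is_metadata:
            return (RAZ, self.free_raz.pop(0))    # small, frequently updated metadata
        if self.smr_write_pointer >= self.smr_tracks:
            raise RuntimeError("SMR region full")
        track = self.smr_write_pointer            # bulk file data, written sequentially
        self.smr_write_pointer += 1
        return (SMR, track)

drive = MixedZoneDrive(raz_tracks=100, smr_tracks=10000)
print(drive.allocate(is_metadata=True))    # ('RAZ', 0)  -- file system metadata
print(drive.allocate(is_metadata=False))   # ('SMR', 0)  -- file data
```

The key design point is simply that small, update-heavy writes land on tracks that tolerate in-place rewrites, while large sequential writes land on the shingled region.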

However, mixing track types in a single drive reduces the areal density of the drive, and improving areal density is the whole point of SMR. In a study by Garth Gibson and his students at Carnegie Mellon University, the researchers showed what happens to the areal density when SMR tracks alone are used and when mixed RAZ/SMR tracks are used. They computed the increase in areal density relative to a non-SMR drive using a simplified model.

When only SMR tracks are used, the theoretical increase in areal density was 2.25. When 1 percent of the tracks are devoted to RAZ, the areal density is barely affected, particularly when the number of tracks per band is large. Even devoting 10 percent of the tracks to RAZ only slightly impacts the overall areal density: for large numbers of tracks per band, the gain drops from 2.25 to about 2.1.
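As a rough sanity check of those numbers, here is a deliberately simplified model of my own (not the CMU model): assume a shingled track takes 1/2.25 of a conventional track's width, a RAZ track takes full width, and band guard tracks are ignored.

```python
# Back-of-the-envelope areal density estimate (a simplifying assumption of mine,
# not the exact CMU model): shingled tracks take 1/2.25 of a conventional track's
# width, RAZ tracks take full width, and band guard tracks are ignored.

SMR_GAIN = 2.25   # density gain of an all-SMR layout over a conventional drive

def density_gain(raz_fraction):
    """Average density gain when a fraction of the tracks stay conventional (RAZ)."""
    avg_track_width = raz_fraction * 1.0 + (1 - raz_fraction) * (1.0 / SMR_GAIN)
    return 1.0 / avg_track_width

for f in (0.0, 0.01, 0.10):
    print(f"{f:>4.0%} RAZ tracks -> gain ~{density_gain(f):.2f}x")
# 0% -> 2.25x, 1% -> ~2.22x, 10% -> ~2.00x: the same ballpark as the study's figures.
```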

For track type combinations to be effective, the file system has to know where the RAZ tracks and the SMR tracks are located on the drive. Or, conversely, the file system has to tell the drive which part of the data is metadata and which is just data, and the drive uses the appropriate track type. Either approach requires both file system work and work on the interface to the drive. One area where this can be used very effectively is object-based storage, since it naturally separates metadata and data.

Renumber Tracks

A recent research paper suggested that a novel renumbering of the tracks, that is, a different track usage pattern, could preserve SMR performance until a large percentage of the tracks are in use. Conventionally, a drive uses tracks in increasing order. For example, let's say a drive has four tracks. Track 1 is used first, then tracks 2, 3 and 4. On an SMR drive, you could potentially get a significant number of read-write-write sequences if data is written in this manner. But if you write the tracks in the order 4, 1, 2, 3, then you can write up to 50 percent of the tracks before performance decreases (read-write-write operations start happening instead of simple writes).

Changing the order in which tracks are written fits well with SMR drives. Since SMR drives typically use bands, the order in which the bands are written can also be controlled. For example, if the ordering is done in a round-robin fashion, you avoid writing to one band more than the others. In the case of the 4, 1, 2, 3 track pattern written round-robin, the fourth track of every band is written before you move on to the first tracks.
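Here is a toy simulation of the renumbering idea, under a simplifying assumption of my own (not necessarily the paper's exact model): rewriting a track in place disturbs only the next track downstream, so a band can keep taking plain writes only while every filled track's downstream neighbor is still empty. Under that assumption, the conventional 1, 2, 3, 4 order hits a read-write-write after filling just one of the four tracks, while the 4, 1, 2, 3 order makes it to 50 percent.

```python
# Toy model of the track-renumbering idea. Assumption (mine, not necessarily the
# paper's exact model): rewriting track t in place disturbs track t+1, so a track
# can only be updated with a plain write while its downstream neighbour is empty.

def safe_fill_fraction(order, band_size):
    """Largest fraction of the band that can be filled, in the given order, while
    every filled track can still be rewritten in place without a read-write-write."""
    filled = set()
    for track in order:
        candidate = filled | {track}
        # unsafe once any filled track's downstream neighbour also holds data
        if any(t + 1 in candidate for t in candidate if t + 1 <= band_size):
            break
        filled = candidate
    return len(filled) / band_size

print(safe_fill_fraction([1, 2, 3, 4], band_size=4))   # 0.25 -- conventional order
print(safe_fill_fraction([4, 1, 2, 3], band_size=4))   # 0.50 -- renumbered order
```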

The paper examined four re-ordering schemes, two of which performed very well relative to a conventional hard drive. The two schemes show a great deal of promise and require only a small change to the drive firmware. I hope this gets implemented in production drives in the near future.

IO Patterns

All of these "fixes" or "adaptations" to SMR drives can have an impact on performance, areal density (the point of SMR), and overall capacity. But ultimately, the real impact depends upon the specific IO pattern hitting the drive. Do your application(s) do a great deal of "in-place" data updates? Do they do a great deal of IOPS?

SMR Drive Examples

SMR drives are popping up all the time. Recently, Western Digital announced a 10TB drive with a 12Gb/s SAS interface that uses SMR. The drive uses seven platters, or about 1.43TB per platter. It has a 128MB cache, a five-year warranty, and a two-million-hour mean time between failures (MTBF) rating.

Seagate also announced an 8TB SMR drive. It too has a 128MB cache and spins at 5,900 RPM. It has an average read/write throughput of 150MB/sec (190MB/sec max), comes with a three-year warranty and has an MTBF of 800,000 hours. The drive is priced at about $260, or just over 3 cents/GB. On Amazon you can order a Seagate SMR drive with 6TB of capacity, a 128MB cache, and a three-year warranty for $281.00.

All of these drives are being labeled as archive drives because of the use of SMR tracks. But given the price, the areal density and the insatiable user desire for storage space, drive manufacturers are likely to develop technology to improve the performance of SMR drives.

Summary

In the never-ending quest for increased storage capacity, drive manufacturers have created a new type of hard drive called a Shingled Magnetic Recording (SMR) drive. SMR drives use tracks that overlap one another like shingles on a roof. Each newly written track partially overlaps the one written before it, trimming the earlier track down to roughly the width of the drive's read head. This reduces the width of the track storing the data, improving the areal density of the drive. However, it also introduces a problem when updating data in place or writing data next to existing data that must not be disturbed.

Updating data in place means writing or overwriting data that is already on a track. Since the tracks are so close together, the write also overwrites the neighboring track. If there is any data there, the drive needs to first read it, put it somewhere else on the drive, and then update the original data. This means a simple write becomes a read-write-write operation, resulting in lower throughput and higher latency.

This article presents some possible solutions to the read-write-write issue that people have discussed. Some of these focus on the drive itself, some on the file system, and others on a combination of the two. One of the best and simplest options appears to be a simple re-ordering of how the tracks are used.

Whether the read-write-write problem affects you and your applications really depends upon the IO pattern. I've talked about the importance of knowing your IO patterns before, but they become even more critical when using SMR drives.

Right now, SMR drives are best suited to data that is read much more often than it is written, such as an archive, because of the data overwrite issue. But the ideas mentioned here show great promise in restoring SMR drive performance, allowing the drives to be used in a greater variety of situations.


