About a year and a half ago, I wrote about the problem of tape wind quality, a serious issue that needed to be addressed from an architectural perspective to ensure long-term data integrity.
Much of the reason that wind quality had become a problem was that, almost overnight, drive technology and compression buffers meant tape was consuming a much higher percentage of an HBA's bandwidth than in previous generations of the technology. Well, times have since changed, and I wanted to review some of the changes that have occurred and some of their architectural effects.
The Times They Are A-Changin’
In a nutshell, tape wind quality became a problem when tapes could not stream at the full data compression rate, resulting in improper winding that raised the risk of data corruption. This problem has plagued the tape industry for many years, but became critical when tape drives started to use a high percentage of the host bandwidth (see table at bottom).
I think many companies came to recognize tape wind issues and felt it was necessary to address them in the design of new tape drives. In recent years, LTO drive vendors have added multiple speeds to their tape drives to better match the speed of the incoming data.
Before this, judging by the design of the tape drives, the expectation appears to have been that users would write the tape as fast as it could be written with compression; otherwise, vendors should have offered multi-speed drives long ago.
The first definitive study on wind quality issues came from a U.S. government research group called the National Technology Alliance (NTA). The report on wind quality came out in the mid-1990s, so it took several years for the problems to be addressed with new technology.
So has the problem completely gone away? I see two reasons why tape wind problems may still be with us. First, most of the tapes out there are the old type of tapes and drives that still have the wind quality problem. And second, even with multi-speed drives, you still have to speed match and understand the speed at which you can read from storage for backup, or in the worst case scenario, write to storage for restoration.
Let’s assume the vendors are right and they have fixed the tape wind quality issue for current and future generations of tape drives. I have no reason to doubt that multi-speed drives will fix the problem in most cases. The problem now is how many bytes of data have been written using the older technology, and how many of those bytes have to be re-read to migrate to the new technology? No matter how you look at it, the data written on previous generations of tape drives from almost every vendor must be re-read, and it potentially has a wind quality problem.
I am not suggesting that every site go out and buy new tape drives and begin a massive migration project, but for your critical data, it might be a good idea to begin the process. Tape drive vendors and media manufacturers aren’t paying me to say this. The internal people who deal with low-level media issues at a few of these companies strongly favor the idea, but they can’t state it publicly, since that would put them in the position of admitting they have fixed a problem they haven’t told you about for the last 10 years.
To be fair, you might have had the problem 10 years ago, but it was not until recently that the tape speed started to approach the channel speed of the interface. So while NTA reported on the problem 10 years ago, it was not until the early 2000s that the problem started to be consistent across vendors and industries.
Can Problems Still Happen?
With all of these new multi-speed tape drives, can the problem still happen? Finding out about the internal workings of all of the tape drives requires non-disclosure agreements (NDAs) with each of the vendors, but the first question you need to ask yourself is: how fast can I read from storage to write to tape?
Many enterprise RAID systems, and even some midrange systems, are configured to use RAID-1 to improve performance for small-block random I/Os. Well, the fastest drive today from Seagate can only sustain 96 MB/sec on average, which is far slower than the compressed speed of all of today's tape drives and slower than the native speed of many. So the question becomes how many speeds the tape drive has, and whether those speeds will match your data rate. Say you have a controller that can only read streaming at 20 MB/sec, perhaps because of the controller itself or other load factors within the RAID, data path or server. What I do not know and have not figured out yet (and even if I had, I would be under NDA) is what the drive's multiple speeds are, and what happens if my data rate from disk is slower than the drive's slowest speed. Do we still have the same problem the NTA described almost ten years ago?
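The speed-matching question above can be sketched in a few lines. The drive speeds here are purely illustrative placeholders (the real speed sets are exactly the NDA material I mentioned); the point is the logic: the drive steps down to the fastest speed your storage can feed, and if your storage is slower than even the slowest speed, you are back in start-stop territory.

```python
# Hypothetical speed-matching check for a multi-speed tape drive.
# The speed list is an illustrative assumption, NOT any vendor's
# actual speed set -- those are behind NDAs, as noted in the text.
DRIVE_SPEEDS_MB_S = [30.0, 45.0, 60.0, 80.0]  # slowest to fastest

def match_speed(disk_rate_mb_s, drive_speeds=DRIVE_SPEEDS_MB_S):
    """Return the highest drive speed that the disk rate can sustain,
    or None if the disk cannot keep up with even the slowest speed
    (the start-stop case the NTA warned about)."""
    usable = [s for s in sorted(drive_speeds) if s <= disk_rate_mb_s]
    return usable[-1] if usable else None

print(match_speed(70.0))  # -> 60.0  (drive steps down and still streams)
print(match_speed(20.0))  # -> None  (slower than slowest speed: wind-quality risk)
```

With the hypothetical speeds above, a 20 MB/sec streaming read from the RAID falls below the drive's 30 MB/sec floor, which is precisely the open question: what does the drive do then?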
I honestly don’t know, but my guess is that we may still have the same problem, though not to the same degree, since the tape is moving across the rollers at a slower speed and the air pressure changes will not be as large. Start-stopping a drive when the tape is moving at five meters per second will have far more impact on air pressure than start-stopping one when the tape is moving at 1.75 meters per second.
Newer LTO tape drives have addressed the problems first identified by some smart people at the NTA (most of whom were at 3M at the time and are now at Imation), who saw a future in which tape drive speeds would approach connection speeds and cause problems.
Personally, I suspect I could come up with a scenario where wind quality problems could still happen. I think the likelihood is much reduced, but I would still want to understand how fast I can read data from storage, how many speeds the tape drive has, and what its slowest speed is.
As always, you have to know your data path.
|Vendor||Drive||Introduced||Peak Xfer Rate MB/sec uncompressed||Peak Xfer Rate MB/sec compressed||Interface in MB/sec||% of interface||% of interface with compression on|
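The last two table columns are simple arithmetic: peak transfer rate divided by interface bandwidth. As a worked example, take an LTO-3-class drive (80 MB/sec native, roughly 160 MB/sec at 2:1 compression) on a 320 MB/sec Ultra-320 SCSI interface; these are representative published figures, not values from the table above.

```python
# Fraction of the host interface a tape drive consumes.
# Example numbers are illustrative (LTO-3 class on Ultra-320 SCSI),
# not rows reproduced from the article's table.
def pct_of_interface(xfer_mb_s, interface_mb_s):
    """Percent of interface bandwidth consumed at a given transfer rate."""
    return 100.0 * xfer_mb_s / interface_mb_s

print(pct_of_interface(80.0, 320.0))   # native: 25.0 (%)
print(pct_of_interface(160.0, 320.0))  # 2:1 compressed: 50.0 (%)
```

Once the compressed figure climbs toward 100 percent of the interface, the drive can no longer assume the channel will keep it streaming, which is exactly the condition under which wind quality became an issue.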
Henry Newman, a regular Enterprise Storage Forum contributor, is an industry consultant with 25 years experience in high-performance computing and storage.