Many large IT organizations have a strategic planning process that tries to figure out which technologies they will need in three to five years, and long-term budgets are set in the process. The problem, of course, is that it's tricky business indeed trying to predict where technology will go over the course of a few years, and there's almost certainly going to be some new technology that comes along to throw a wrench into those plans. Spotting disruptive or innovative technology early on, or at least identifying the places where it's likely to occur, can go a long way toward making the process smoother, or at least minimizing the surprises. This planning process got me thinking about why technology gets developed and where the surprises might be for the storage networking market in the years ahead.
“Disruptive technology” is a significantly overused term, and I believe there are few truly disruptive enterprise technologies, but that doesn’t mean there isn’t significant innovation. Let’s look at a couple of examples — tape storage and Fibre Channel — where technological problems inspired solutions that then radically changed the entire enterprise data storage environment. The backup and Fibre Channel markets both evolved as a direct response to the slow evolution of Ethernet, offering a case study in the causes of technology innovation and market disruption. Necessity may be the mother of invention, but invention itself may have unforeseen consequences, and the disruption in the data storage market is just beginning to work itself out.
Tape Meets Dedupe
No market has changed more in the last decade than data backup and recovery. A mere 10 years ago, backup meant going directly from disk to tape. Networks were slow, maybe 80 MB/sec peak on a good day with a tailwind. LTO-1 tape had just been introduced with a top speed of 14 MB/sec uncompressed, while peak disk performance was 67 MB/sec. Within two years, tape speeds increased to 35 MB/sec native, and compressed speeds exceeded network performance; two tape drives could saturate a network, especially with compression. The storage and networking worlds were out of balance. Given the latency of tape, data had to be streamed to the drive for good performance, and at the time there were no multi-speed tape drives. Because network speeds were now painfully slow compared to tape performance, three innovations came about in response to customer complaints:
- Disk-to-disk-to-tape backups were developed by backup software vendors
- Virtual tape libraries (VTLs) were developed by tape library vendors
- Multi-speed tape drives were developed by tape drive vendors
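The imbalance described above comes down to simple arithmetic. The sketch below works through it using the article's figures; the 2:1 compression ratio is my own assumption (a common rule of thumb for tape hardware compression, not a number from the article):

```python
# Back-of-the-envelope check: why compressed tape drives outran 1 Gb Ethernet.
# Figures from the article; the 2:1 compression ratio is an assumed rule of thumb.

GBE_NET_MBPS = 80          # realistic peak for 1 Gb Ethernet, per the article
TAPE_NATIVE_MBPS = 35      # native tape speed two years after LTO-1
COMPRESSION_RATIO = 2.0    # assumed 2:1 hardware compression

# Effective rate at which one drive can absorb host data when it compresses 2:1
drive_effective = TAPE_NATIVE_MBPS * COMPRESSION_RATIO

print(f"One compressed drive absorbs ~{drive_effective:.0f} MB/s of host data")
print(f"Two drives need {2 * drive_effective:.0f} MB/s, network delivers {GBE_NET_MBPS}")
```

Under these assumptions a single drive already consumes most of the network's peak, and two drives demand far more than the network can deliver, which is why the drives could not be kept streaming.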
These innovations addressed the impact of stagnant network performance on backup and recovery. Vendors throughout the backup/restore path, both hardware and software, developed technology to address customer complaints. You might assume the problem was solved, but network performance still did not improve, and by 2005, many customers weren't able to complete backups within the required window of time. This helped inspire the adoption and development of data deduplication, which reduced the amount of data that needed to be backed up and restored. Some might call dedupe a disruptive technology, but it seems to me that it was just a very innovative technology that was the next logical step after customers could no longer complete their backups on time. Dedupe may be innovative, but it is not much different from standard file compression, except that you are compressing across files rather than within a single file. In fact, when dedupe pioneer Data Domain (now part of EMC) emerged from stealth mode, it called its technology "Global Compression." It didn't begin calling it deduplication until three years later.
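The "compression across files" idea can be sketched in a few lines: split each stream into blocks, fingerprint each block, and store a block only the first time its fingerprint appears. This is a deliberately minimal fixed-size-block version; commercial products use variable-size, content-defined chunking, and the names here are my own, not any vendor's API:

```python
import hashlib

CHUNK_SIZE = 4096  # illustrative fixed block size

def dedupe(streams):
    """Store each unique block once; return the store and per-stream recipes."""
    store = {}    # fingerprint -> block bytes, kept only once
    recipes = []  # per-stream list of fingerprints, enough to reassemble it
    for data in streams:
        recipe = []
        for i in range(0, len(data), CHUNK_SIZE):
            chunk = data[i:i + CHUNK_SIZE]
            fp = hashlib.sha256(chunk).hexdigest()
            store.setdefault(fp, chunk)  # only new blocks consume storage
            recipe.append(fp)
        recipes.append(recipe)
    return store, recipes

# Two nightly "backups" that differ in only one block dedupe to half the raw size.
night1 = b"A" * 8192 + b"B" * 4096
night2 = b"A" * 8192 + b"C" * 4096  # only the last block changed
store, recipes = dedupe([night1, night2])
raw = len(night1) + len(night2)
stored = sum(len(c) for c in store.values())
print(f"raw={raw} bytes, stored={stored} bytes")
```

Because successive backups of the same systems repeat most of their blocks, the stored footprint grows far more slowly than the raw backup volume, which is what let disk pricing approach tape.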
Dedupe might not have been developed if IT environments had been able to complete their backups in the required timeframe. If 10 GbE had been available at reasonable prices in 2005, we might not have seen the significant investment in dedupe hardware and software, because streaming tape would have worked just fine. But the development of dedupe had another consequence: By lowering the cost of disk so it approached that of tape, dedupe has relegated tape more and more to a deep archiving role, and that may come with its own unforeseen consequences. If tape sales continue to drop, what happens to the backup market segment that still needs tape, and what happens to the huge archiving market that requires tape, where most, if not all, of the data cannot be deduped? And don't think of this as a small business problem: some of the biggest organizations on the planet are heavy tape users for archiving. There is a problem when the market for a technology slows, and as strategic planners, figuring out what will fill that market void becomes the guessing part of the planning process.
If vendors fill market voids based on requirements, what are the next steps for tape? The slow evolution of networking opened the door for disk and deduplication in the backup market, but now the networking standards groups are working overtime. 10 Gb Ethernet is taking off, and 40 GbE and 100 GbE standards are not far behind. It is too late for tape to retain the backup market now that the dedupe/disk genie is out of the bottle. This means there will be continued pressure on the tape market, with some consolidation likely, because a significant portion of the tape market is still for backup and restore. The percentage of tape shipments going to the archive part of the market has increased dramatically over the last few years, but even though archival requirements have grown, they have not grown enough to make up for the loss of tape units in the backup market.
This could have profound implications for the archiving market. It is hard to innovate without significant profits, and with a diminished market, profit is always a challenge. Perhaps the market may be reduced to a few strong players in the coming years, or perhaps the pace of innovation may slow. It would be nice if something like holographic storage came along to solve the problem, but we’ve been waiting decades for that to happen.
Fibre Channel Meets 10-Gig Ethernet
Fibre Channel product development has an interesting parallel with tape, as both have been heavily dependent on the pace of Ethernet innovation. The difference is that Fibre Channel has benefited from the lag in Ethernet performance, but that may be about to change.
Fibre Channel has come a long way since 1996, when I first used it. It ran at 1 Gb back then and was loop-based rather than fabric-based. By the time 1 Gb Ethernet was available at reasonable cost, 2 Gb Fibre Channel was about to enter the market. Fibre Channel went happily from 2 Gb to 4 Gb, but then the rug got pulled out from under the technology for two reasons: SATA took off as a technology in the enterprise, and the Ethernet community decided it needed to make a dramatic change.
The SATA interface for disk drives is less reliable and has higher error-recovery latency, among other characteristics that affect reliability and performance, but it was and is cheaper. The disk drive vendors used Fibre Channel as the enterprise interface, but the cost was high and it was incompatible with SATA. The result has been the combination of SAS and SATA: the two use similar connectors, and drive vendors can use a single chipset for both, significantly reducing costs. We keep hearing about 16 Gb FC, but I doubt it will capture much market share, as FCoE over 10 Gb Ethernet will become an increasingly attractive option. Fibre Channel had a long run, thanks in part to the slow evolution of Ethernet, but that may finally be about to change. Tape and Fibre Channel may have been affected differently by the long lag in Ethernet performance, but their fates may be similar.
IT Innovation Comes from Need, Commoditization
Technology markets can be driven as much by a lack of innovation as they can by innovation (1 Gb Ethernet lasted far too long, opening the door for disk backup and dedupe). The commoditization of technology is another enduring trend contributing to the tenuous state of some technologies. What this means to you depends on your window for technology planning. I didn’t see all the changes coming as a result of 1 Gb Ethernet overstaying its welcome, but I did recognize Fibre Channel’s limitations when it failed to get placed on the motherboard despite the big “Fibre-On” push in the early 2000s. Once that happened, it was clear that Fibre Channel would someday be relegated to the back burner; the only surprise was how long it took the Ethernet folks to make that happen.
Trying to predict the IT market in your strategic planning process is as much of an art as it is a science, and the opinions of analysts often reflect the views of the vendors they’re closest to. The best tools for figuring out and planning for the future are your own eyes and ears, or at least finding independent help if you need to do long-term strategic planning.
Henry Newman, CTO of Instrumental Inc. and a regular Enterprise Storage Forum contributor, is an industry consultant with 28 years' experience in high-performance computing and storage.