A lot of claims have been made lately about disruptive storage technologies, but calling a particular company disruptive is a long way from Clayton Christensen's original definition. Very few individual companies have changed the industry, and one big reason is that everyone wants a standards-based product, and standards require multiple companies to create them. Once a product that might be a disruptive technology is created, lots of other players jump into the mix.
Clearly, disruptive technologies are not an everyday event, nor are they easy to predict. Let's examine some technologies that might significantly change enterprise storage and disrupt the market. I won't adhere to the strict definition, but I am going to suggest some technologies that, if adopted, could change the enterprise storage market. As I said, I think very few companies will be able to create a new market from a technology without a standard that others can use. Even Microsoft, for example, supports all types of standards, from SATA (T13) and FC/SCSI (T11) to IETF standards. No company can be an island today.
So without further ado, here are three things that I think will be truly disruptive to the enterprise storage market.
Fibre Channel over Ethernet (FCoE) is my number one pick for a technology that could change enterprise storage in dramatic ways.
Today, most higher-performance, higher-reliability storage data moves over Fibre Channel, which has been the de facto enterprise storage interconnect for 10 years or so. iSCSI, in my opinion, has never taken a reasonable market share because of its overhead, both for the CPU and for packetization: the TCP/IP encapsulation consumes a significant part of the packet for small I/O requests.
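The packetization point is easy to see with some back-of-the-envelope arithmetic. The sketch below tallies the fixed per-PDU encapsulation costs of iSCSI (Ethernet framing, minimal IPv4 and TCP headers, and the 48-byte iSCSI Basic Header Segment) and shows how the overhead fraction shrinks as the I/O gets larger; it deliberately ignores TCP/IP options, digests, and PDUs split across multiple frames.

```python
# Illustrative arithmetic only: the fixed per-PDU cost of carrying a SCSI
# command over TCP/IP (iSCSI), ignoring TCP/IP options, digests, and
# PDUs that span multiple Ethernet frames.
ETHERNET = 14 + 4   # header + frame check sequence, bytes
IPV4 = 20           # minimal IPv4 header, bytes
TCP = 20            # minimal TCP header, bytes
ISCSI_BHS = 48      # iSCSI Basic Header Segment, bytes

HEADERS = ETHERNET + IPV4 + TCP + ISCSI_BHS  # 106 bytes of encapsulation

def overhead_fraction(payload_bytes: int) -> float:
    """Fraction of the bytes on the wire spent on encapsulation."""
    return HEADERS / (HEADERS + payload_bytes)

for size in (512, 4096, 65536):
    print(f"{size:>6}-byte I/O: {overhead_fraction(size):.1%} overhead")
```

For a 512-byte request the headers eat roughly a sixth of the wire bytes, which is exactly the small-I/O penalty described above; at 64 KB the same headers are noise.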
If FCoE happens, Fibre Channel connectivity to storage will be a thing of the past, and we will have one network fabric for both communications and storage. Already this year, as FC interface-based disk drives are replaced by SAS, shipments of FC chipsets are declining. FC chipsets never achieved the cost structure that Ethernet chipsets achieved, because FC was never a commodity technology; it was always a higher-priced storage interconnect. Every computer from your laptop to a large SMP server has Ethernet built in. That is not true, and has never been true, of Fibre Channel.
FCoE will reduce costs in a number of ways:
- Cost per port: Although 10GbE likely costs a bit more than 4Gbit FC per GB/sec of bandwidth, that trend will not last long. I suspect that by the end of the year this will be changing, as does most of the industry.
- Personnel: Today you have a storage networking group and an IP networking group in most large organizations. They are separated, as the people must deal with different technologies, training, patches, pricing, and so on. Having a single group of people that can do the same things will save money.
- In my opinion, much of the Fibre Channel community sees the writing on the wall, otherwise they would not have such broad participation in the FCoE community and standards.
I have been writing about object-based storage for several years now (see Let's Bid Adieu to Block Devices and SCSI), and I am a big proponent of T10 OSD, given the problems I see regularly with fragmentation.
OSD has a long way to go before it could be disruptive, and there is not as much momentum behind it as there is behind FCoE. Part of the problem, I think, is that the problems OSD solves are not as easily understood as those FCoE solves, and because OSD tackles bigger, more complex problems, it requires a larger infrastructure change: changes to file systems, storage controllers and disk drives. I still believe that OSD solves many of the bigger problems most sites face in managing the life of data from creation through backup/archiving, restoration and deletion, and everything in between, including data protection and security. I believe OSD is coming to a system near you, but it is going to take some time.
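The core difference is easiest to see in the shape of the read command itself. The sketch below contrasts a conventional SCSI block read with a T10 OSD-style object read; the field names are illustrative only, not the actual T10 CDB encodings. Because the object command names what is being read, the device itself can own allocation, fragmentation, and per-object security policy instead of blindly serving sectors.

```python
from dataclasses import dataclass

# Illustrative command shapes only -- not the actual T10 CDB encodings.

@dataclass
class BlockRead:
    # A SCSI block READ addresses raw sectors; the drive has no idea
    # which file those sectors belong to, so it cannot manage placement.
    lba: int       # logical block address
    blocks: int    # transfer length in sectors

@dataclass
class OsdRead:
    # A T10 OSD READ addresses a named object, so the device can handle
    # allocation, fragmentation, and per-object security on its own.
    partition_id: int
    object_id: int
    offset: int    # byte offset within the object
    length: int    # bytes to read

# The same logical 8 KB request, expressed both ways (512-byte sectors):
blk = BlockRead(lba=2_048_000, blocks=16)
osd = OsdRead(partition_id=1, object_id=0x1234, offset=0, length=8192)
print(blk)
print(osd)
```

Everything the block command cannot say (whose data this is, how it relates to the rest of the file) is exactly what forces the host file system to manage layout, and exactly what OSD pushes down into the device.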
In today's world we have SAN storage and NAS storage. Everyone knows that SAN-based storage is faster than NAS for lots of good reasons, not the least of which is that the NFS protocol was not really designed to deliver high-performance streaming or I/O. NFS was designed to solve a different problem.
When NFSv4.1 is implemented and released, SAN performance on NAS equipment could become a reality: its parallel NFS (pNFS) extension separates the metadata path from the data path, so clients can move data directly to and from the storage rather than funneling it through a single server. Of course, NAS equipment would need to be redesigned to deliver SAN performance (most of it is not designed that way today, since NFS itself has been the bottleneck), but this would allow the technologies to merge. In addition, many environments are moving to shared file systems for clusters of systems. NFSv4.1, if it lives up to its billing, would allow high-performance access to a file system from many nodes.
Of course, you will need a high-performance file system to support the high-speed access, and that could be a problem for some vendors, but the tools are there. I believe NFSv4.1 will be disruptive, as it will merge the SAN and NAS worlds over time (yet another argument in favor of an IP-based storage world). NAS vendors are going to have to build faster hardware and better file systems, and SAN vendors are going to have to team with file system vendors to develop joint products. This will all be very interesting, and I believe it could also help OSD, as larger, higher-performance file systems likely will have more of the issues that OSD solves.
I am very skeptical of claims by vendors that their technology is disruptive, as I have seen far too many such claims never pan out, but we've covered a few technologies here that could turn out to be genuinely disruptive, and the implications for storage networks are very interesting.
Henry Newman, a regular Enterprise Storage Forum contributor, is an industry consultant with 28 years' experience in high-performance computing and storage.
See more articles by Henry Newman.