Another year has gone by, so it’s time once again for another set of predictions for the storage market and a look back at how I fared on last year’s predictions. I probably get as much out of this exercise as readers do, if not more, because it forces me to take a hard look at which storage technologies will work, which won’t, and how soon I need to be ready for them.
As always, I’ll start with a review of what I said last year. Given the state of the economy and cutbacks across all industries, you might think that the changes I predicted in the storage industry were delayed. Let’s see how I did.
Last Year’s Storage Software Predictions
By 2009, I said at least one additional vendor would support T10 OSD file systems. Having OSD file systems will likely allow better scalability over most current block-based file systems. I got this one right: Sun (NASDAQ: JAVA) has announced T10 OSD QFS for OpenSolaris and it will be released in early 2009.
Multiple implementations of NFSv4.1 will be available by early 2009 with pNFS support, I predicted. This is a standard to watch, as it has the potential to change a significant portion of what happens in large environments. The economy has hampered development, so I’m putting this one off to mid-to-late 2009.
No significant changes will be available in error management in large system configurations, I said. Though we need a big change in error management, there will be none forthcoming. An easy prediction, unfortunately.
I said that undetectable errors would dictate more software changes in the data path beyond T10 DIF and Sun ZFS. People are starting to realize that undetectable errors are a bigger issue than they thought. Good news on that one: at least one vendor now offers end-to-end parity, many papers have been published on undetectable errors — what some call silent data corruption — and there have been public reports of it happening in very large enterprise storage architectures such as NetApp’s (for more, see this paper: An Analysis of Data Corruption in the Storage Stack).
Last Year’s Storage Hardware Predictions
PCIe 2.0 (5 GT/s per lane) will become available in servers with AMD (NYSE: AMD) and Intel (NASDAQ: INTC) CPUs, I predicted. It is here, and you can buy it today on higher-end home PCs.
By the end of 2008, I said SAS drive shipments would exceed Fibre Channel drive shipments for new systems, and that SAS 2.5-inch drives would become the standard for enterprise drives by early 2009, with shipments exceeding 3.5-inch drives. I said it was a stretch, and it was; I got that prediction wrong.
I said that 8 Gbit Fibre Channel would enter the market for both HBAs and switches late in 2008, with a limited impact until 2009. I got that right: You can buy HBAs and switch ports from leading vendors.
Tape density would finally hit 1 TB uncompressed in 2008, I predicted. Well, if you predict it every year, it will eventually come true. Both IBM (NYSE: IBM) TS1130 and Sun T10000B are at 1TB native.
Disk drive density will continue to grow, but the growth will continue to slow. Correct. Enterprise drives are now at 450 GB and consumer drives at 1.5 TB. And the rate of increase continues to slow.
Flash technology (SSD) will begin to be integrated into enterprise environments to address the IOPS per watt issue. That didn’t take long — EMC (NYSE: EMC) was first just weeks later, and other vendors have since followed suit.
Tallying up the score, I count eight out of ten; I did far better this year than last year, even with significant economic headwinds. I was off on pNFS by a few months, and I missed on the transition from FC to SAS for enterprise drives. I think what I failed to appreciate was the complexity RAID vendors faced in building SAS-enabled disk trays; I looked only at what would be best from the disk drive vendor’s perspective, since the SAS and SATA interface chipsets are the same. The transition from FC to SAS will likely take longer than I thought.
2009 And Beyond
Given the state of the economy, 2009 is going to be tough to predict. Some things are too easy to predict, as they are broad industry trends. One of these is FCoE (Fibre Channel over Ethernet), which will begin to have some impact on the storage market, but the limiting factor is going to be that the RAID vendors will need to redesign the front end of their controllers. On to the predictions for 2009, starting with hardware.
As development takes time, many of these products and features have been in development for a number of years and could come to fruition soon.
Ethernet: With the expected ratification of the 40 GbE and 100 GbE standards, storage vendors will start moving quickly to enhanced Ethernet and FCoE, along with these new standards, for high-performance storage interfaces.
Fibre Channel: Development of FC technology will essentially end at 8 Gb. Yes, there will be 10 Gb for interconnects on switches, but 12 Gb and 16 Gb HBA discussion will end.
PCIe 3.0: This new standard will likely be ratified in late 2009 or early 2010. The performance will not double, as was the case with PCIe 1.0 to PCIe 2.0. My guess is at best a 60 percent performance increase, and more likely 40 percent.
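The gap between signaling rate and usable bandwidth comes from line encoding: PCIe 1.x and 2.0 use 8b/10b encoding, so only 80 percent of the raw bit rate carries data. A short sketch of the arithmetic (the 2.5 GT/s and 5 GT/s rates and the 8b/10b overhead are from the published 1.x/2.0 specs; the 3.0 encoding was still undecided as of this writing, which is why the increase is a guess):

```python
def per_lane_mb_per_s(transfers_per_s: float, payload_bits: int, encoded_bits: int) -> float:
    """Usable per-lane bandwidth in MB/s after line-encoding overhead."""
    usable_bits_per_s = transfers_per_s * payload_bits / encoded_bits
    return usable_bits_per_s / 8 / 1e6  # bits -> bytes -> megabytes

# PCIe 1.x: 2.5 GT/s with 8b/10b encoding
print(per_lane_mb_per_s(2.5e9, 8, 10))  # 250.0 MB/s per lane
# PCIe 2.0: 5 GT/s, same encoding -- the signaling rate doubled, so bandwidth doubled
print(per_lane_mb_per_s(5e9, 8, 10))    # 500.0 MB/s per lane
```

If PCIe 3.0 keeps the signaling-rate increase modest or changes the encoding, the net gain can land well short of a clean doubling, which is what the 40 to 60 percent guess above reflects.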
Disk Drive Density: Any increase in enterprise disk drive density will be in SAS drives, not FC drives. Densities will likely increase about 50 percent. It is possible that we might see a 2 TB disk drive on the SATA side by the end of the year. The problem will be RAID.
RAID: A fair number of people in the research community, a few bloggers and some in the HPC community believe that RAID as we know it is a dead-end technology. The issues are pretty simple: Density has gone up far faster than performance or reliability, sending rebuild times through the roof. These people claim that RAID-6 is the equivalent of placing a Band-Aid on a gaping wound. Some have addressed the RAID rebuild issue by a variety of tricks, but the real problem is that RAID devices know nothing about the topology of the data. So here is my prediction: In 2009, the industry will realize that RAID has serious reliability problems, and a number of vendors other than just the handful today will have solutions to address the problem. This might include support for T10 OSD.
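The rebuild arithmetic behind that claim is easy to sketch. On a RAID-5 rebuild, every surviving drive must be read in full, so as capacities outrun transfer rates, both the rebuild window and the odds of hitting an unrecoverable read error (URE) mid-rebuild climb. A back-of-the-envelope sketch (the drive size, 50 MB/s rebuild rate, and 1-in-10^14 bit error rate are illustrative assumptions, not measurements):

```python
import math

def rebuild_hours(capacity_gb: float, rebuild_mb_per_s: float) -> float:
    """Best-case time to rewrite one failed drive at a sustained rate."""
    return capacity_gb * 1000 / rebuild_mb_per_s / 3600

def rebuild_failure_prob(bytes_read: float, bit_error_rate: float) -> float:
    """Chance of at least one unrecoverable read error while reading bytes_read."""
    bits = bytes_read * 8
    return 1 - math.exp(bits * math.log1p(-bit_error_rate))

# Illustrative 8-drive RAID-5 of 1 TB SATA drives
print(f"{rebuild_hours(1000, 50):.1f} hours to rebuild")  # 5.6 hours, best case
surviving_bytes = 7 * 1e12  # all seven surviving drives read in full
p = rebuild_failure_prob(surviving_bytes, 1e-14)
print(f"{p:.0%} chance of a URE during the rebuild")  # roughly 43% with these assumptions
```

The point of the sketch is the trend, not the exact numbers: double the capacity without doubling the transfer rate and both figures get worse, which is why RAID-6 is called a Band-Aid rather than a cure.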
T10 Data Integrity Field Products: In 2009, you will see products from vendors that support this standard end-to-end. You can purchase HBAs today that support this functionality, but nothing beyond the HBA is currently in storage controllers.
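For readers unfamiliar with the standard: T10 DIF appends eight bytes of protection information to each 512-byte block — a 16-bit CRC guard over the data, a 16-bit application tag, and a 32-bit reference tag (typically the low bits of the LBA) — so any hop in the path can verify the block. A minimal sketch of how the tuple is formed (the 0x8BB7 polynomial is the one the standard specifies; the rest is a simplified illustration, not a driver implementation):

```python
import struct

def crc16_t10dif(data: bytes) -> int:
    """CRC-16 with the T10 DIF polynomial 0x8BB7 (no reflection, init 0)."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x8BB7) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def protection_info(block: bytes, app_tag: int, lba: int) -> bytes:
    """Build the 8-byte DIF tuple: guard CRC, application tag, reference tag."""
    assert len(block) == 512
    return struct.pack(">HHI", crc16_t10dif(block), app_tag, lba & 0xFFFFFFFF)

block = bytes(512)
pi = protection_info(block, app_tag=0, lba=1234)
print(len(pi))  # 8 -- each block travels as 520 bytes end to end
```

End-to-end means every layer — HBA, switch, controller, drive — recomputes and checks the guard, which is exactly the part that stops today at the HBA.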
The software horizon is just more of the same. I don’t expect much leadership from SNIA, given the difficulty of getting vendors to work together, and a bunch of vendors are charting their own course for ILM and federated archives, but here is what I think might happen next year.
ILM: ILM is a hot topic, as it allows you to characterize your data. I believe that major vendors will offer products in 2009 that address some of the ILM problem. These products will be aimed at the business continuity market (Sarbanes-Oxley compliance, HIPAA and e-discovery regulations).
File Systems: There will be nothing new on the file system front. It will be the same problems that we have today and the same problems that we have had for 20 years. Though some file systems address things like reliability, they do so at the expense of performance. We have seen a few file systems move to T10 OSD-based implementations, but nothing is really going to change here.
Error Management: Once again, no major changes, and this is not necessarily a good thing. With FCoE merging storage and TCP/IP networking, there are some challenges that need to be addressed. They won’t be.
POSIX Standards: I am going to go out on a limb and suggest that people will begin to discuss the limitations of POSIX for things like ILM, direct I/O, support for T10 DIF and a number of other areas. This is not a prediction of change, but a prediction that some people other than me are going to discuss the problem publicly. I’ll write more on this early next year. Until then, Happy Holidays, and best wishes for a prosperous 2009.
Henry Newman, a regular Enterprise Storage Forum contributor, is an industry consultant with 28 years of experience in high-performance computing and storage.