4 Storage Technologies Lost to the Recession


With the end of the recession arguably in sight, now is a good time to examine some of the promising technologies that were killed, shelved or delayed because of economic woe. Some of these technologies were merely delayed; for others, development simply stopped and roadmaps changed. Whether they are storage or merely storage-related, the world would be a different place had they become available when vendors said they would, and at the expected price. The “dead” technologies may be revived if the economy recovers, or maybe the world will move on to something else.

1. Object Storage Device Disk Drives

If you are a long-time reader you might remember that I have been a big proponent of the T10 OSD standard for some time and wish it had gotten far better traction in the market. Back in 2004, I wrote about the T10 standard, and I was very hopeful for it for many years. It never came to fruition, however, and I think much of the problem was that no disk drive vendor ever released a disk drive that supported the T10 OSD standard.

We are now down to only two major disk drive vendors (Seagate and Western Digital) and one vendor with a very small market share (Toshiba). With so little competition in the market, I am not sure we will ever see an OSD disk drive. OSD would have changed things in so many ways. For example, file systems could have offloaded most of their space allocation, since the disk drive itself would handle it. Some of the concepts of RAID as we know them could have changed: protection could be set per object, so small objects could be RAID-1, large objects could use another RAID method, and objects could migrate between small and large allocations as they changed.
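To make that division of labor concrete, here is a minimal, purely illustrative Python sketch of the difference between a block interface and an object interface. The class and method names are hypothetical and are not the actual T10 OSD command set; the point is only that an object-aware drive can do its own allocation and apply per-object policy.

```python
# Illustrative sketch only: contrasts what a block device knows about data
# with what a T10 OSD-style device could know. Names are hypothetical, not
# the real T10 OSD command set.

class BlockDevice:
    """All the drive sees is an LBA plus data; allocation lives in the file system."""
    def __init__(self, block_size=512):
        self.blocks = {}                 # LBA -> data
        self.block_size = block_size

    def write_block(self, lba, data):
        self.blocks[lba] = data          # no idea what file or object this belongs to

    def read_block(self, lba):
        return self.blocks.get(lba)


class ObjectStorageDevice:
    """The drive allocates space itself and can apply policy per object."""
    def __init__(self):
        self.objects = {}                # object_id -> bytearray

    def create(self, object_id):
        self.objects[object_id] = bytearray()

    def write(self, object_id, offset, data):
        obj = self.objects[object_id]
        if len(obj) < offset:
            obj.extend(b"\x00" * (offset - len(obj)))   # allocation handled by the device
        obj[offset:offset + len(data)] = data

    def policy_for(self, object_id):
        # The device can choose protection per object, e.g. mirror small
        # objects and use a parity scheme for large ones, as described above.
        size = len(self.objects[object_id])
        return "RAID-1" if size < 64 * 1024 else "RAID-6"
```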

So many things might have been possible with T10 OSD disks, but I see nothing on the horizon. The problem is that we really need T10 OSD now: at least one vendor has developed a hybrid disk drive combining NAND flash and spinning disk. How do you decide what goes in the flash buffer and what stays on the hard drive portion? With standard file systems, the disk drive knows nothing about the data. All it knows is that a block was read or written, and it can track how many times for each. Maybe it could track accesses by time after power-up. That does not help much if you are trying to boot fast, because figuring out which files must be in the flash section to speed up booting is really hard when a lot of data is being moved around, which we all do. I am sure there are specialized algorithms to help, but something like a big set of patches could really mess them up, and the algorithm would have to relearn what is accessed at device power-up.

This is very inefficient, and all of it could have been solved with an OSD disk and a file system that spoke OSD, so the file system could tell the drive what was important enough to place in flash. So much promise, so much lost.
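For contrast, the sketch below shows roughly the only kind of heuristic a hybrid drive can apply when it sees nothing but block addresses: count accesses per block and promote the hottest ones to flash. This is an assumption-laden illustration, not any vendor's actual caching algorithm.

```python
# Minimal sketch of a block-level heuristic a hybrid drive is stuck with when
# it knows nothing about files: count accesses per LBA and keep the hottest
# blocks in flash. Thresholds and capacities are illustrative only.

from collections import Counter

class HybridDriveCache:
    def __init__(self, flash_capacity_blocks):
        self.access_counts = Counter()            # LBA -> accesses since power-up
        self.flash_capacity = flash_capacity_blocks

    def record_access(self, lba):
        self.access_counts[lba] += 1

    def blocks_to_keep_in_flash(self):
        # Promote the most frequently touched blocks. The drive cannot tell a
        # boot-critical file from a patch run that just rewrote half the disk,
        # which is exactly the weakness described above.
        hottest = self.access_counts.most_common(self.flash_capacity)
        return {lba for lba, _count in hottest}
```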

2. PCIe 3.0 Arrived, but It Was Late

PCIe 1.0a was introduced in 2003 with 250 MB/sec per lane. In 2005, the standard was updated to fix some compatibility issues, which is common with a new standard. In 2007, version 2.0 was released; vendors shipped 2.0 products that year and per-lane performance doubled. In November 2010, the PCI-SIG stated that PCIe 3.0 was delayed. The final specification was set to be released in 2011, and it was finally released near the end of the year. PCIe is critical for networking and storage performance. For example, with PCIe 2.0 and eight lanes, you get at best 4 GB/sec of bandwidth. With 6.0 Gb/sec SAS, you can sustain only a little more than five SAS lanes at full rate. With 10 Gb/sec Ethernet, you can build a four-port card, but you cannot run all four ports at full data rate.
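The arithmetic behind those numbers is easy to check. The sketch below reproduces it using assumed figures for 8b/10b line coding and PCIe protocol efficiency; treat the percentages as ballpark assumptions rather than measured values.

```python
# Back-of-the-envelope check of the bandwidth figures above. The encoding and
# protocol-overhead numbers are rough assumptions, not measurements.

PCIE2_GT_PER_LANE = 5.0           # GT/s per PCIe 2.0 lane
ENCODING_8B10B    = 8 / 10        # 8b/10b line coding on PCIe 2.0 and 6G SAS
PCIE_PROTOCOL_EFF = 0.80          # assumed packet/protocol efficiency

lanes = 8
pcie2_raw_gbps = lanes * PCIE2_GT_PER_LANE * ENCODING_8B10B    # payload Gb/s across x8
pcie2_raw_GBs  = pcie2_raw_gbps / 8                            # = 4.0 GB/s, the "at best" figure
pcie2_real_GBs = pcie2_raw_GBs * PCIE_PROTOCOL_EFF             # ~3.2 GB/s after overhead

sas6_lane_GBs = 6.0 * ENCODING_8B10B / 8                       # ~0.6 GB/s per 6 Gb/s SAS lane
sas_lanes_sustained = pcie2_real_GBs / sas6_lane_GBs           # ~5.3 lanes at full rate

tengbe_port_GBs = 10.0 / 8                                     # 1.25 GB/s per 10 GbE port
four_port_need  = 4 * tengbe_port_GBs                          # 5.0 GB/s, more than x8 PCIe 2.0 delivers

print(f"PCIe 2.0 x8: {pcie2_raw_GBs:.1f} GB/s raw, ~{pcie2_real_GBs:.1f} GB/s realistic")
print(f"6 Gb/s SAS lanes sustainable: ~{sas_lanes_sustained:.1f}")
print(f"Four 10 GbE ports need: {four_port_need:.1f} GB/s")
```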

PCIe 3.0 is critical to the storage and network infrastructure, as it is the way we get information in and out of systems. As data sizes grow, we must move more and more information off of systems, and PCIe 3.0 was needed sooner, not later. Its late arrival, in my opinion, delayed the development of 40 and 100 Gb Ethernet, 12 Gb SAS, 16 Gb Fibre Channel, FDR InfiniBand and faster GPU connectivity. Because PCIe 3.0 was late, PCIe 4.0 was also delayed, and the latest information says it will be ready in 2015. I am not holding my breath, given that it is now almost mid-2012. PCIe 3.0 support only arrived from Intel in March, and it is just now making its way into the market. This delay has significantly hurt the rate at which we can move and process data.

3. 10 Gb Ethernet Market Acceptance

Many of us predicted in 2008 that 10 Gb Ethernet would dominate the market. I was hopeful that we would see significant price drops and broad market acceptance. That is only happening now. The delay of 10 Gb Ethernet, combined with the delay in PCIe 3.0, means that 40 Gb Ethernet and 100 Gb Ethernet get pushed out. Vendors need volume to reach the economies of scale that bring chipset prices down. That did not happen either.

4. Migration to 2.5-Inch Disk Drives

These smaller drives, used in laptops and, at the other end of the spectrum, in enterprise arrays, have been slow to move into the near-line RAID and workstation markets. They deliver far better IOPS per watt and streaming bandwidth per watt than 3.5-inch drives. I expected, as did others, far greater market acceptance of these drives. What I think happened is that the RAID vendors did not want to re-engineer their disk drive trays during the recession, and who can blame them? It was a big cost with limited return on investment in the short term. That, of course, kept volumes down, the drive vendors were not going to manufacture lots of drives without a market, and you know the rest of the story. I am not yet seeing the market move to 2.5-inch drives, but I have hopes that it will happen during the next few years.
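For readers who want to run the per-watt comparison themselves, a trivial helper follows. No representative drive figures are assumed; plug in the IOPS, streaming bandwidth and power numbers from the datasheets of the drives being compared.

```python
# Simple helper for the per-watt comparison described above. Supply your own
# datasheet figures; no representative drive numbers are assumed here.

def per_watt_metrics(iops, stream_mb_s, watts):
    return {
        "iops_per_watt": iops / watts,
        "mb_per_sec_per_watt": stream_mb_s / watts,
    }

# Example: call per_watt_metrics(...) once for a 2.5-inch drive and once for a
# 3.5-inch drive, then compare the two dictionaries side by side.
```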

Final Thoughts

This is by no means an exhaustive list, and the causes differ for each of the examples I gave. Without support from the disk drive vendors, OSD disk drives never appeared, and there was no reason for other vendors in the stack to build on OSD. It is sad that a standard with so much promise went nowhere. With PCIe 3.0, it is hard to point to a single reason. Intel has significant influence on the PCIe 3.0 standard. Could it be that the release of the standard was timed to coincide with the release of a new chip? Only the cynical Henry Newman would think that.

With 10 Gb Ethernet, the delay was directly caused by the market not purchasing enough product, so in some ways we have only ourselves to blame for the price not dropping: the early adopters did not buy enough to push prices down. For the 2.5-inch disk drive market, the blame clearly lies with the RAID controller vendors, who for profit and margin reasons could not afford to redesign their back-end disk trays to accommodate the new technology. Who can really fault that business decision?

While there is no single party to blame for any of what happened, there are long-term implications. The data path is not scaling at the rate CPUs are scaling, or even at the rate memory bandwidth is scaling. We are way behind. These are examples of major impacts on our ability to process data and create information, and they will be felt for a long time. We keep marching along, and no one is going to leapfrog to make up for lost time.

Henry Newman is CEO and CTO of Instrumental Inc. and has worked in HPC and large storage environments for 29 years. The outspoken Mr. Newman initially went to school to become a diplomat, but was firmly told during his first year that he might be better suited for a career that didn’t require diplomatic skills. Diplomacy’s loss was HPC’s gain.


