The 'Dumbing Down' of Data Storage



These days, people are using the word "dumb" regularly to describe everything from sequestration to movies. I figured I should jump into the mix with a bold statement: I think OEM storage vendors are being forced to "dumb down" storage because we do not have the storage talent to manage the complexity in our industry.

The question is, is dumb storage good or bad for most of us?

It is certainly good for the customers. It was not that long ago that we had SAN file systems using the VERITAS volume manager and file system (VxVM and VxFS) for many commercial sites and a wide variety of applications. Today, the world is completely different and much simpler. In my opinion, it all started with NFS and NAS storage.

So is the current change to more simplified storage part of a cycle? Or is this the way things will be for the long term?

The History of Storage Simplification

If you are a regular reader, you know my old saying: there are no new engineering problems in IT—just new engineers solving old problems. The current storage trend is a movement to appliances. I suspect this is happening because there is a lack of storage administrators and architects. Things need to be simpler in order to sell.

The trend could also be due to other market factors, such as a lack of standards. We have the IETF for Internet standards, but for storage on the server side we have little to no leadership. We have The Open Group and SNIA, neither of which has been very successful in developing a wide set of standards for management. (Note that SNIA came out with the Storage Management Initiative Specification, SMI-S, but I think it was too little, too late.) There is an agreed-upon common framework for network management, but nothing comparable spanning file systems, from local ones like XFS, ext4 and NTFS all the way to the biggest parallel file systems like GPFS and Lustre.

Honestly, in my opinion, it is a shame that the vendors did not get together in the 1990s when they had the opportunity. But that lack of cooperation spurred innovation, which is why I think that NAS took off in the early 2000s. It was easy to use and easy to configure, manage and upgrade.

I remember the late 1990s and early 2000s: SAN administrators and architects were in extremely high demand and could command large salaries. Even after the dot-com bust, SAN administrators and architects were still getting higher-than-average salaries compared to others in the IT industry because there were just not enough good people.

Companies like EMC, HP, IBM, Sun, Veritas and many others tried to get more SAN talent by promoting certification and education programs. But certification cost the customers time and money, and classes were often required with each new release or each year. Worst of all, a Sun certification did not help much with EMC; the only common overlap might be the Fibre Channel switch. So if a customer wanted or needed a mixed environment, they had to have people spending a lot of time in training.

In the 2000s, the SAN vendors started to get a clue—likely because of pressure from customers. Consolidation among SAN companies also began during this period, which reduced the number of training classes needed, and the vendors tried to develop a common SAN management framework, SMI-S.

Too Little Too Late

During this same period, the NAS market was growing fast. Management was easy. Provisioning was easy. Upgrades were easy. Training was simple. The interface was NFS.

But there were two things lacking:

  1. The performance could not come close to SAN for streaming I/O. However, many found that most I/O was not streaming but IOPS, which the NAS vendors addressed by adding read caches.
  2. Scaling NAS beyond a single box was an issue because the performance did not scale, which limited a file system to a single NAS frame. That covered a good percentage of the market, but not the upper end.
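The first point—a read cache absorbing repeated small-block reads so that IOPS-heavy workloads rarely hit the disks—can be sketched in a few lines. This is a minimal LRU cache simulation, not any vendor's actual implementation; the 4 KB block size and the hot set of eight blocks are illustrative assumptions.

```python
from collections import OrderedDict

class ReadCache:
    """Minimal LRU read cache: a sketch of how a NAS head can serve
    repeated small reads from memory instead of the backend disks."""

    def __init__(self, capacity, backend_read):
        self.capacity = capacity
        self.backend_read = backend_read  # function: block_id -> data
        self.cache = OrderedDict()
        self.backend_reads = 0            # how often we had to touch disk

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)   # refresh LRU position
            return self.cache[block_id]
        self.backend_reads += 1                # cache miss: go to disk
        data = self.backend_read(block_id)
        self.cache[block_id] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)     # evict least recently used
        return data

# Simulated workload: 10,000 small reads over a hot set of 8 blocks.
cache = ReadCache(capacity=16, backend_read=lambda b: b"x" * 4096)
for i in range(10_000):
    cache.read(i % 8)
print(cache.backend_reads)  # 8 -- every other read is a cache hit
```

With a working set that fits in cache, only the first touch of each block reaches the backend; the other 9,992 reads are served from memory, which is exactly the IOPS relief the NAS vendors were after.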

With a few exceptions, the large SAN file system vendors lost significant market share to the NAS vendors. Today the SAN file system market is quickly disappearing and being replaced. If you want a multi-petabyte namespace, you have only a few choices in the market with a POSIX file system, but a number of choices with REST/SOAP-based interfaces. However, becoming an expert in today's file systems still requires significant training, given the complexity of the hosts, networks and storage devices and the hundreds of file system tunable parameters that map onto them.
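The POSIX-versus-REST distinction above is worth making concrete. A POSIX file system gives applications a hierarchical namespace and byte-addressable files that can be rewritten in place; a REST-style object store typically exposes whole-object PUT/GET instead. The sketch below shows the POSIX side using only the standard library; the object-store comparison is in the comments, since any endpoint or bucket name here would be a made-up assumption.

```python
import os
import tempfile

# POSIX interface: hierarchical namespace, byte-addressable files.
d = tempfile.mkdtemp()
path = os.path.join(d, "example.dat")

with open(path, "wb") as f:
    f.write(b"hello storage")

# In-place partial update: seek to an offset and overwrite bytes.
# A REST/SOAP object interface generally has no equivalent -- you
# would GET the whole object, modify it, and PUT it back in full.
with open(path, "r+b") as f:
    f.seek(6)
    f.write(b"WORLD!!")

with open(path, "rb") as f:
    print(f.read())  # b'hello WORLD!!'
```

That in-place rewrite is exactly the semantic that keeps POSIX file systems complex to build and tune at multi-petabyte scale, and it is what the simpler whole-object interfaces trade away.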
