The Evolution of Stupidity: File Systems


The storage industry continues to make the same mistakes over and over again, and enterprises continue to take vendors’ bold statements as facts. Previously, we introduced our two-part series, “The Evolution of Stupidity,” explaining how issues seemingly resolved more than 20 years ago are again rearing their heads. Clearly, the more things change, the more they stay the same.

This time I ask: why do we continue to believe that the current evolutionary file system path will meet our needs today and in the future, and cost nothing? Let’s go back and review a bit of history for both free and non-free file systems.

Time Machine — Back to the Early 1980s

My experience goes back only to the early 1980s, but we have repeated history a few times since then. Why can we not seem to remember history, learn from it, or even learn about it? It never ceases to amaze me. I talk to younger people, and more often than not, they say that they do not want to hear about history, just about the present and how they are going to make the future better. I coined a saying (at least I think I coined it) in the late 1990s: there are no new engineering problems, just new engineers solving old problems. I said this while helping someone develop a new file system using technology and ideas whose design I had helped optimize about 10 years earlier.

In the mid-1980s, most open systems file systems came as part of a standard Unix release from USL. A few vendors, such as Cray and Amdahl, wrote their own file systems, generally because the standard UNIX file system did not meet the requirements of the day. UFS on Solaris came from another operating system, written in the 1960s, called Multics. That brings us to the late 1980s, by which time we had a number of high-performance file systems from companies such as Convex, MultiFlow and Thinking Machines. Everyone who had larger systems had their own file system, and everyone was trying to address many, if not all, of the same issues. In my opinion, they were the scalability of:

  1. Metadata performance
  2. Recovery performance
  3. Small block performance
  4. Large block performance
  5. Storage management

The key word here is scalability. Remember, during this time disk drive density was growing very rapidly and performance was scaling far better than it is today. Some vendors began looking at parallel systems, and some began charging for file systems that were once free. Does any of this sound like what I said in a recent blog post: “It’s like déjà vu all over again” (Yogi Berra)? But since this article is about stupidity, let’s also remember the quote from another Yogi, the cartoon character Yogi Bear, “I’m smarter than the average bear!” and ask the question: is the industry any smarter?

Around 1990, Veritas released VxFS, the first commercial UNIX file system. It tried to address all of the bullet points above except storage management, which Veritas added later with VxVM. VxFS was revolutionary for commercial UNIX implementations at the time. Most of the major vendors used the product in some fashion, either supporting it or OEMing it. Soon Veritas added things like the DB edition, which removed some of the POSIX-required write lock restrictions.

While Veritas was taking over the commercial world in the 1990s and making money on its file system, Silicon Graphics (SGI) decided to write its own, called XFS, which was released in the mid-1990s. It was later open sourced and had some characteristics similar to VxFS (imagine that), given that some of the developers were the same people. By the late 1990s and early 2000s, a number of vendors had shared file systems, but in the HPC community you had to pay for most of them. Most were implemented with a single metadata server and clients. Meanwhile, a smaller number of vendors were trying to solve large shared-data problems with a shared name space and distributed allocation of space.

Guess what? None of these file systems were free, and all of them were trying to address the five areas noted above. From about 2004 until Sun Microsystems purchased CFS in 2007, there was one exception, a free parallel file system: Lustre. But “free” is relative, because for much of that time significant funding was coming from the U.S. government. It was not long after the funding ran out that Sun Microsystems purchased the company that developed the Lustre file system, hoping to recoup the purchase cost by developing hardware around it.

At the same time, on the commercial front, the move to Linux was in full swing. Enter the XFS file system, which came with many standard Linux distributions and met many requirements. Appliance-based storage from the NAS vendors also met many of the performance requirements and was far easier to manage than provisioning file systems from the crop of vendors selling them.

Now everyone is moving to free file systems, not from vendors as in the 1980s, but from Linux distributions or from NAS appliance vendors. Storage is purchased with a built-in file system.

This is all well and good, but now I am seeing the beginnings of a change back to the early 1990s. Remember the saying that railroad executives in the 1920s and 1930s did not realize they were in the transportation business? Rather, they saw themselves as being only in the railroad business and thus did not embrace the airline industry. Similarly, NAS vendors do not seem to realize they are in the scalable storage business, and large shared file system vendors are now building appliances to better address many of the five areas listed above.

Why Are We Going Around in Circles?

It seems to me that we are going around in circles. The 1980s are much like the early 2000s in the file system world, the early 1990s are like the mid-2000s, and the mid-1990s are similar to what we are heading into again. The same is likely true for other areas of computing, as I showed for storage in the previous article. If we all thought about it, the same could be said for computational design with scalar processors, vector processors, GPUs and FPGAs, today and yesteryear.

So everything is new again every 20 years or so, and the solutions are not really that different. Why? Is it because no one remembers the past? Is it because everyone thinks they are smarter than their manager was when he or she was doing the work 20 years ago? Or is it something else entirely: does the market simply mimic other cycles in life, like fashion, food preparation and myriad other things?

Almost 20 years ago, some friends of mine at Cray Research had the idea of separating file system metadata and data onto different storage technologies, since the two have different access patterns. Off and on over the past 20 years, file systems have done this, but the concept has never really caught on as a must-have. I am now hearing rumblings that lots of people are talking about doing this with xyz file system. Was this NIH? I think in some cases, yes. The more I think about it, there is not a single answer to explain what happens, and if I ever do figure it out, I will be playing the futures market rather than doing what I am doing. We all need to learn from the past if we are going to break this cycle and make dramatic changes to technology.
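To make the access-pattern argument concrete, here is a minimal sketch in Python. It is not the Cray design, and the paths and sizes are purely hypothetical; it simply contrasts a metadata-heavy workload (many tiny, latency-bound operations) with a data-heavy one (a few large, bandwidth-bound transfers).

```python
# Minimal sketch (not the Cray design): contrast a metadata-heavy workload
# with a data-heavy one. Directory paths and sizes below are hypothetical.
import os
import time

def metadata_walk(root: str) -> float:
    """Stat every entry under root: thousands of tiny, latency-bound operations."""
    start = time.perf_counter()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            os.stat(os.path.join(dirpath, name))
    return time.perf_counter() - start

def data_stream(path: str, block_size: int = 4 * 1024 * 1024) -> float:
    """Read one big file sequentially: a few large, bandwidth-bound transfers."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block_size):
            pass
    return time.perf_counter() - start

if __name__ == "__main__":
    # Hypothetical paths: a tree of many small files and one large file.
    print("metadata walk:", metadata_walk("/scratch/many_small_files"), "s")
    print("data stream:  ", data_stream("/scratch/one_large_file.dat"), "s")
```

The first loop is dominated by the IOPS and latency of whatever device holds the inodes and directories; the second by sustained bandwidth. That difference is the whole argument for putting metadata on low-latency media and bulk data on high-capacity, high-bandwidth media.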

It has now been about 20 years since the last real changes were made to POSIX. I am now hearing from a number of circles that POSIX limitations are constraining the five areas listed above. If we change POSIX to support parallel I/O, I hope we look beyond today and think about the future.
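As a side note on what those limitations look like in practice, here is a minimal sketch in Python, with a made-up path and stripe size: four processes each write a non-overlapping stripe of one shared file. Even though the regions never overlap, POSIX consistency semantics still oblige a shared file system to coordinate the writers, and middleware such as MPI-IO exists largely to manage exactly this pattern.

```python
# Minimal sketch (not a proposal for a POSIX change): several processes write
# non-overlapping stripes of one shared file. The path and sizes are made up.
import os
from multiprocessing import Process

PATH = "/scratch/shared_output.dat"   # hypothetical shared-file-system path
STRIPE = 1 * 1024 * 1024              # 1 MiB per writer

def write_stripe(rank: int) -> None:
    """Each rank writes its own 1 MiB region at a rank-specific offset."""
    fd = os.open(PATH, os.O_WRONLY)
    try:
        os.pwrite(fd, bytes([rank % 256]) * STRIPE, rank * STRIPE)
    finally:
        os.close(fd)

if __name__ == "__main__":
    nprocs = 4
    # Pre-size the file so every offset is valid before the writers start.
    with open(PATH, "wb") as f:
        f.truncate(nprocs * STRIPE)
    workers = [Process(target=write_stripe, args=(r,)) for r in range(nprocs)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
```

Relaxing such guarantees for sets of cooperating writers like this is roughly what the calls for parallel I/O support in POSIX are about.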

Henry Newman is CEO and CTO of Instrumental Inc. and has worked in HPC and large storage environments for 29 years. The outspoken Mr. Newman initially went to school to become a diplomat, but was firmly told during his first year that he might be better suited for a career that didn’t require diplomatic skills. Diplomacy’s loss was HPC’s gain.

