The Slow Climb Out of Storage Management Hell


Henry Newman

Most people I speak with at network operations centers (NOCs) rate today’s network management tools as very good or excellent. However, the same cannot be said for storage management tools. This is not entirely the fault of the vendors, given the complexity of storage management compared to networking. So why are storage management and monitoring so much more difficult than network management and monitoring? And what do the next few years hold for storage management? In my opinion, we are moving from storage management hell to storage management purgatory.

The Current Landscape

In the current environment, there is a very big difference between what can be accomplished within the standards process in network management and storage management. Much, if not most, of the network management world is controlled by the Internet Engineering Task Force (IETF) – a standards body with worldwide participation from many communities, not just vendors. The IETF has had, and will continue to have, participation from a wide variety of sources including the research and user communities. Additionally, network management and monitoring are much simpler given that the number of vendors making networking equipment is small in comparison to the number of vendors making storage equipment. The seven OSI layers, and the complexity of networking in general, are far more straightforward than storage management. The OSI layers are well defined:

  • (7) Application Layer
  • (6) Presentation Layer
  • (5) Session Layer
  • (4) Transport Layer
  • (3) Network Layer
  • (2) Data Link Layer
  • (1) Physical Layer

Take a simple example like using telnet. If you use telnet on a system, it takes a well-defined and well-documented standards path from start to finish. Each piece of hardware that you traverse is standards-based, both for interfaces and management. What happens end-to-end is controlled by a single standards body – the IETF.
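That single-standards-body point is easy to see in practice: every IETF-compliant device, from any vendor, answers queries against the same MIB-II management namespace (RFC 1213). A minimal sketch of why a NOC tool can poll any router or switch the same way – the object names and OIDs below come straight from the standard, not from any vendor:

```python
# MIB-II (RFC 1213) defines one management namespace that every
# IETF-compliant network device exposes, regardless of vendor.
MIB2 = "1.3.6.1.2.1"  # iso.org.dod.internet.mgmt.mib-2

# A few universal objects: the same OID works on any vendor's gear.
STANDARD_OBJECTS = {
    "sysDescr":   MIB2 + ".1.1.0",     # device description string
    "sysUpTime":  MIB2 + ".1.3.0",     # time since the device restarted
    "ifInOctets": MIB2 + ".2.2.1.10",  # per-interface inbound byte counter
}

def oid_for(name: str) -> str:
    """Return the standards-defined OID for a MIB-II object name."""
    return STANDARD_OBJECTS[name]

# An SNMP manager can issue the identical query to any device, e.g.:
#   snmpget -v2c -c public <any-switch-or-router> 1.3.6.1.2.1.1.1.0
print(oid_for("sysDescr"))  # -> 1.3.6.1.2.1.1.1.0
```

There is no storage-side equivalent of this table: the interesting counters on a RAID controller or tape drive live behind vendor-specific interfaces.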

The storage stack is far more complex than the networking stack. One of the reasons is the standards process. There are multiple standards bodies that control the storage equivalent of the seven OSI layers. The application layer is controlled by The Open Group, as is, to some degree, the presentation layer. The session layer is controlled by individual vendors, in my opinion, as the vnode layer does not necessarily have to be completely documented; it just has to control things like NFS. The data link, network, and physical layers are controlled by the T10 (SAS/SCSI), T11 (Fibre Channel) and T13 (SATA) committees. That’s a handful of different standards bodies to deal with, plus one layer that is not well standardized at all.

The other big issues center on hardware in the path, and the lack of available documented details. For example, if you query a RAID device or tape drive you can get some standardized and simple information back. On the other hand, if you want significant details about that device, you need to talk with the vendor to find out things like configuration options, tunables, error counters, details about the version and model information, and firmware. In addition, as many of you know, each of the host-side vendors has minor differences in its SCSI stack. Most RAID systems have settings that define the host-side communication (Linux, Windows, AIX, Solaris, etc.), which gets confusing for the IT staff. This does not happen with Ethernet and IP traffic, as everything is well defined and well known. If you want things to get even more confusing, add the standards from the Storage Networking Industry Association (SNIA), and there you have it.
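The "standardized and simple information" part is real but shallow. The standard INQUIRY response defined in the SCSI Primary Commands (SPC) standard gives you fixed-position vendor, product, and firmware-revision fields – and little more; everything beyond that is vendor-specific. A minimal sketch of parsing those standard fields (the device name and response bytes below are hypothetical, invented for illustration):

```python
def parse_std_inquiry(data: bytes) -> dict:
    """Parse the vendor-independent fields of a standard SCSI INQUIRY
    response. Per SPC: byte 0 carries the peripheral device type,
    bytes 8-15 the vendor ID, 16-31 the product ID, 32-35 the
    firmware revision (all ASCII, space-padded)."""
    return {
        "device_type": data[0] & 0x1F,  # 0x00 = disk, 0x01 = tape, ...
        "vendor":      data[8:16].decode("ascii").strip(),
        "product":     data[16:32].decode("ascii").strip(),
        "revision":    data[32:36].decode("ascii").strip(),
    }

# Hypothetical 36-byte response from a disk array (names made up):
resp = bytes(8) + b"ACME    " + b"SuperRAID 9000  " + b"1.23"
info = parse_std_inquiry(resp)
print(info["vendor"], info["product"], info["revision"])
# -> ACME SuperRAID 9000 1.23
```

Everything past these 36 bytes – error counters, tunables, configuration state – lives in vendor-specific mode and log pages, which is exactly where the documentation runs out.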

Today, storage management tools have less functionality than networking tools across different vendor platforms and are far more costly than equivalent network management tools. I will be the first to admit that configuring and managing storage is far more complex than network management and configuration, given the number of different storage devices and their lack of well-defined management interfaces. This situation will change a bit over the next few years, but dramatic changes are not in the cards.

The Likely Outcome

The ability to monitor and control network systems from a single location has become a simple process for even the largest networks. All large organizations have a NOC with the ability to manage and control networks around the world from a single location. When storage networks begin to use Ethernet, the protocol used on this commonly managed network will be a foreign one (Fibre Channel). Surely, it will not take much to track and manage Fibre Channel in a way similar to IP networks. The underlying problem will be the same as we have today in terms of device and file system management. Storage devices are not very open in terms of management (RAID devices, tapes and disk drives), and file systems can be unmanageable beasts unless you use a tool designed for each vendor's file system. The file system problem and the host-side management are not going to change because there is no economic reason for them to change. Why would vendors want to get together and have common interfaces for file system management? The vendors would not want that, given that most file systems have many different tunables and configuration options that are specific to that file system's design and implementation. For vendors, it makes no sense to have a single common interface.

The bottom line is that I do not think we will ever get to the same level of management with storage as is done with networking, even if we are using the same network interface cards (NICs), switches and routers. IP and TCP/IP are well-defined standards that are universally accepted, universally followed and universally integrated. There are no such standards available today for end-to-end storage management, and I do not believe that there ever will be. When the U.S. Government funded, participated in, and sometimes ran, the standards process back in the 1970s, it was either outside the purview of the vendor community or done as part of research funding. The vendors did not control the process then, as they do today. That is not to say that anyone (government, vendors and/or research institutions) could now develop a management standard for total storage management. The industry has gone too long without one, and at this point imposing standards would be next to impossible. And if there had been a standard, would there have been the same innovation we have today?

Trying to add management frameworks into file systems and/or storage devices today would be – as the old saying goes – closing the barn door after the cows have left. Management of storage devices will continue to be the purview of the vendors. We have no choice. Management of the underlying networks will become more integrated as Ethernet becomes the hardware transport of choice for storage.

Henry Newman, CEO and CTO of Instrumental, Inc., and a regular Enterprise Storage Forum contributor, is an industry consultant with 29 years of experience in high-performance computing and storage.


Henry Newman
Henry Newman has been a contributor to TechnologyAdvice websites for more than 20 years. His career in high-performance computing, storage and security dates to the early 1980s, when Cray was the name of a supercomputing company rather than an entry in Urban Dictionary. After nearly four decades of architecting IT systems, he recently retired as CTO of a storage company’s Federal group, but he rather quickly lost a bet that he wouldn't be able to stay retired by taking a consulting gig in his first month of retirement.
