Easing the Storage Management Burden

Some call it On Demand Storage, some call it the On Demand Enterprise. Other terms are Utility Computing, N1, Autonomic Storage, and the Adaptive Enterprise. Whichever label prevails, the basic idea is to offer storage as a service much the same way you deal with your utilities. You use the service, pay for what you use, and leave the supplier to deal with the behind-the-scenes technology. If the service isn’t there when you want it, you scream or change suppliers.

“Organizations should have one bill for storage infrastructure,” said Mark Barrenechea, senior vice president of product development at Computer Associates. “Instead of wasting money by retaining poorly utilized systems, a better model is to only pay for what you use.”

CA’s solution is the On Demand Enterprise. Whether IT is outsourced or delivered by an in-house IT department, storage and computing resources would be made available on an as-needed basis and billed accordingly. Such a vision, though, requires a complete rethinking of business processes and a high degree of automation.
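
As a back-of-the-envelope illustration of the pay-for-what-you-use model, the sketch below meters consumed capacity over time and rolls it up into a single charge per department. The rate and sample figures are purely hypothetical and are not drawn from CA's actual offering.

```python
from dataclasses import dataclass

# Hypothetical utility-style chargeback: bill each department for the
# gigabyte-hours of storage it actually consumed, not for allocated capacity.
RATE_PER_GB_HOUR = 0.0004  # illustrative rate, not a real price

@dataclass
class UsageSample:
    department: str
    gb_used: float   # capacity in use when the sample was taken
    hours: float     # length of the sampling interval

def monthly_bill(samples: list[UsageSample]) -> dict[str, float]:
    """Aggregate metered samples into one charge per department."""
    bill: dict[str, float] = {}
    for s in samples:
        bill[s.department] = bill.get(s.department, 0.0) + s.gb_used * s.hours * RATE_PER_GB_HOUR
    return bill

if __name__ == "__main__":
    samples = [
        UsageSample("finance", gb_used=1200, hours=24),
        UsageSample("finance", gb_used=1500, hours=24),
        UsageSample("engineering", gb_used=800, hours=48),
    ]
    for dept, charge in monthly_bill(samples).items():
        print(f"{dept}: ${charge:.2f}")
```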

“Whatever you call it, the general idea is to increase the value of the work done by your storage personnel by eliminating all the manual entry they must endure today,” said Mike Karp, an analyst at Enterprise Management Associates.

Last year alone, more than five exabytes of data were stored worldwide, roughly 500,000 times the amount of data held in the Library of Congress. Thus, even in enterprises where manual storage processes aren’t already breaking down, steady growth in capacity will make them a severe problem very soon.

Management Demands Must Be Curtailed

Jens Tiedeman, IBM’s vice president of storage software, believes we are at a crossroads. Despite all the grandiose vendor plans and announcements, we still can’t easily manage and build a heterogeneous SAN. Maximizing the utilization of physical assets continues to be difficult. Multiple file systems cannot share data and must be managed separately. Managing data is a burden, as each component has a unique interface. Even installation and basic storage configuration can still be a nightmare.

“There is no common way to view and manage the environment, so storage management is a real pain,” said Tiedeman. “Virtualize? I can’t even visualize it.”

He makes the analogy of telephone systems in the 1930s. The phone companies observed almost exponential growth in the number of calls, much like the growth storage is expected to see over the next decade. Forecasts indicated that by 1980, telecom would need 100 million switchboard operators. Similarly, in a few decades, storage will need 30 million storage administrators if manual processes remain.

“Autonomic storage means virtualization of block storage so it is easily accessible,” said Tiedeman. “We all need it. Even among fiercely competitive vendors, we all agree on open standards in storage.”

Richard Escott agrees. As HP’s director of storage management software, he feels it is essential to move away from one-at-a-time problem solving by storage administrators to an environment that runs on best practices.

“Applications have to speak a common language and that means standards such as IP and SMI-S,” Escott said. “Standards will drive automation into the environment, eliminate customization and reduce the number of elements to manage.”
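
SMI-S builds on the CIM/WBEM object model, which is what lets one management tool query arrays from different vendors through the same classes. The sketch below, assuming a reachable SMI-S provider, uses the open source pywbem library to list standard CIM_StorageVolume instances; the endpoint, credentials, and namespace are placeholders, and real providers typically expose volumes under a vendor-specific namespace.

```python
import pywbem

# Connect to a hypothetical SMI-S provider; URL, credentials, and namespace
# are placeholders, and certificate handling may need extra configuration.
conn = pywbem.WBEMConnection(
    "https://smis-provider.example.com:5989",
    creds=("admin", "password"),
    default_namespace="root/cimv2",
)

# CIM_StorageVolume is the standard CIM class for logical volumes, so the same
# query works against any compliant array regardless of vendor.
for vol in conn.EnumerateInstances("CIM_StorageVolume"):
    size_gb = (vol["BlockSize"] * vol["NumberOfBlocks"]) / 2**30
    print(f'{vol["ElementName"]}: {size_gb:.1f} GB')
```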

Rich Napolitano, vice president of Sun’s Storage Systems Group, sees the move toward utility computing as a natural evolution. First we had integrated proprietary systems, with the storage built into the system, followed by distributed open systems in which storage and servers became independent. These were integrated into racks of DAS, and then racked-and-stacked storage was hooked into SANs. Currently, blade servers are being integrated into racks to once again consolidate storage and servers. Standardization will accelerate the process further.

“We appear to be returning to the mainframe model once again,” said Napolitano. “It’s back to the future. Even EMC is embracing SMI-S.”

He lists the four main storage components:

a) Disks and arrays
b) Access via switching by Ethernet and FC
c) Data services
d) Applications (storage management and data center management, SRM, etc.)

Vendor lock-in was caused by data services being tied to particular disks and arrays. Standardization is all about separating these two components so that the individual tools become vendor agnostic. Standards such as SCSI, FC, FCIP, and NFS are what will eventually take us to that point.
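
One way to picture that separation: data services such as snapshots are written once against an abstract, vendor-neutral array interface, and each vendor's array plugs in behind it. The Python sketch below is purely illustrative; the class and method names are invented for the example, not taken from any product.

```python
from abc import ABC, abstractmethod

class StorageArray(ABC):
    """Vendor-neutral view of an array; data services see only this interface."""

    @abstractmethod
    def create_volume(self, name: str, size_gb: int) -> str: ...

    @abstractmethod
    def snapshot(self, volume_id: str) -> str: ...

class AcmeArray(StorageArray):
    """Hypothetical vendor backend; a real one would wrap that vendor's API or SMI-S provider."""

    def create_volume(self, name: str, size_gb: int) -> str:
        return f"acme-vol-{name}-{size_gb}gb"

    def snapshot(self, volume_id: str) -> str:
        return f"{volume_id}-snap"

def nightly_snapshots(array: StorageArray, volume_ids: list[str]) -> list[str]:
    """A data service written once against the interface, usable with any backend."""
    return [array.snapshot(vid) for vid in volume_ids]
```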

Already we are seeing the effects of this evolution. Where companies bought standalone RAID arrays a few years ago, the value proposition is shifting up the food chain: RAID today is usually one feature within a larger box or an overall system, and few vendors sell it on its own. The pattern resembles electric motors. People initially bought the parts and built their own motors, then manufacturers sold finished motors, and today motors are embedded in everything and only OEMs buy them.

“RAID has become one of those components you don’t need to know about,” said Napolitano. “Innovation has shifted elsewhere into areas such as volume management, striping, snapshot and virtualization.”
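
Striping, to take one of those data services, can be shown in miniature: a logical block address is mapped to a disk and to an offset on that disk. The sketch below assumes simple RAID-0 style round-robin striping with arbitrary example parameters.

```python
def locate_block(lba: int, stripe_blocks: int, num_disks: int) -> tuple[int, int]:
    """Map a logical block address to (disk index, block offset on that disk)
    for simple RAID-0 style striping."""
    stripe_number = lba // stripe_blocks       # which full stripe the block falls in
    offset_in_stripe = lba % stripe_blocks     # position inside that stripe
    disk = stripe_number % num_disks           # stripes rotate round-robin across disks
    block_on_disk = (stripe_number // num_disks) * stripe_blocks + offset_in_stripe
    return disk, block_on_disk

# Example: 128-block stripes spread across 4 disks
print(locate_block(lba=1000, stripe_blocks=128, num_disks=4))  # -> (3, 232)
```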

Despite the dogfight over which umbrella label eventually gains broad agreement, all these vendors are essentially playing the same tune. And it’s one the storage world can get behind: intelligent storage systems that manage complexity, know themselves, continuously self-tune, and adapt to unpredictable conditions.

“While we are seeing some degree of self-healing and self-optimization in some products, it still has a long way to go,” said EMA’s Karp.

Self-healing and self-optimization may take three or four years to live up to the hype, and virtualization could also remain a buzzword for that amount of time. At least by then, we should have one term for the field, instead of half a dozen.

Article courtesy of Enterprise IT Planet

Drew Robb
Drew Robb is a contributing writer for Datamation, Enterprise Storage Forum, eSecurity Planet, Channel Insider, and eWeek. He has been reporting on all areas of IT for more than 25 years. He has a degree from the University of Strathclyde in the UK and lives in the Tampa Bay area of Florida.
