Breaking Down the Storage Virtualization Barriers

Storage capacity is growing at more than fifty percent per year, but the ability to manage that storage lags behind. Although the hardware price per gigabyte has plummeted, those savings are easily offset by the added burden of managing storage systems and devices cobbled together from proprietary technologies.

“The many manual tasks and lack of centralized management across server, storage, and operating system platforms adds to the complexity,” says Mike Zisman, IBM Corporation’s vice president for corporate strategy. “This complexity results in poor IT resource utilization, and problem identification and resolution is often slow, painful, and costly.”

To fix these shortcomings, Zisman suggests four aspects of storage for improvement, as covered below. IBM has been researching these points, resulting in greater interoperability, a host of self-managing “autonomic” features, and a new file system that virtualizes distributed devices and scales up to hundreds of servers holding billions of files.

Four-Point Storage Management Upgrade

Zisman lays out the four points required for a major shift in enterprise storage management. First, the storage and IT environments need to be integrated so that it is easy to add storage to meet growing requirements. The second point involves operating on open standards, both to integrate with existing infrastructure components and to seamlessly incorporate new devices from a variety of vendors. The third element is storage virtualization.

“Virtualization lets you reduce complexity by treating your storage and IT resources as a single common pool of resources,” explains Zisman. “This insulates users from the complexity of storage resources and exploits the benefits of storage networks.”

Improved disk utilization is one of those benefits. Organizations currently use only about 44 to 55 percent of available disk space, according to IBM research. Virtualization lets an organization exploit that idle capacity rather than waste money buying more simply to be sure it doesn't run out.
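To make the stranded-capacity point concrete, here is a back-of-the-envelope sketch in Python. The array sizes and usage figures are invented for illustration; only the 44 to 55 percent utilization range comes from IBM's research.

```python
# Hypothetical example: four siloed arrays, each overprovisioned so that
# no single one runs out. The sizes and usage figures below are invented;
# only the 44-55 percent utilization range comes from IBM's research.
array_capacity_tb = [10, 10, 10, 10]   # four separate 10 TB arrays
array_used_tb = [5.5, 4.4, 4.9, 4.8]   # what each actually holds

total = sum(array_capacity_tb)
used = sum(array_used_tb)

print(f"Overall utilization: {used / total:.0%}")        # ~49%
print(f"Capacity stranded in silos: {total - used:.1f} TB")
# A virtualized pool presents all 40 TB as one resource, so the ~20 TB of
# headroom can be allocated wherever it is needed instead of sitting idle.
```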

Finally, Zisman says that the storage systems, like other computing systems, need to be “autonomic” or self-managing. This breaks down into four elements that occur automatically:

  • Configuration – Adding and/or changing features, servers, and software can take place without bringing the system down. Other parts of the system recognize these changes and adapt accordingly, with minimal human intervention.

  • Self-Healing – The system recognizes a failed component, takes it offline, and repairs or replaces it. For example, if a file becomes corrupted, the system can locate a copy on a mirror site and replace the damaged file. If a server goes down, the system automatically routes traffic to a backup server (a sketch of this failover logic follows the list).

  • Protection – The system monitors who is accessing resources. It blocks and reports any unauthorized access attempts.

  • Optimization – Autonomic systems constantly monitor system conditions and tune storage, databases, networks, and server configurations for peak performance.
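
None of these behaviors is tied to a specific product interface, but the self-healing element is easy to picture in code. The following Python sketch is purely illustrative: the server names, health check, and failover routine are all invented for this example and are not IBM APIs.

```python
# Illustrative autonomic self-healing loop: detect a failed component,
# take it offline, and route its traffic to a backup. Every name and
# function here is hypothetical.
servers = {"storage-a": "storage-a-mirror", "storage-b": "storage-b-mirror"}
offline = {"storage-b"}   # simulate one failed primary

def is_healthy(server: str) -> bool:
    # A real system would use a heartbeat or device query here.
    return server not in offline

def reroute_traffic(primary: str, backup: str) -> None:
    # A real system would update routing so clients hit the backup.
    print(f"{primary} failed: traffic rerouted to {backup}")

for primary, backup in servers.items():
    if not is_healthy(primary):
        reroute_traffic(primary, backup)   # no human intervention required
```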

A SAN-Ready File System

IBM has teams working on each of these points. In February, it incorporated the Storage Networking Industry Association’s Storage Management Initiative Specification (SMI-S) into its Enterprise Storage Server equipment to improve interoperability between its own and other vendors’ equipment. Its autonomic computing project, formerly called eLiza, is gradually adding self-management features to a number of IBM products.
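
SMI-S builds on the DMTF’s Common Information Model (CIM) and WBEM protocols, so a management application can query any compliant device in the same way. As a hedged sketch of what that looks like, the open-source pywbem library can enumerate volumes from an SMI-S provider; the host, credentials, and namespace below are assumptions for the example.

```python
import pywbem

# Hypothetical SMI-S query over CIM-XML/WBEM. The URL, credentials, and
# namespace are placeholders; a real provider documents its own values.
conn = pywbem.WBEMConnection(
    "https://smis-provider.example.com:5989",
    ("admin", "password"),
    default_namespace="root/cimv2",
)

# CIM_StorageVolume is a standard SMI-S class; BlockSize and NumberOfBlocks
# come from its CIM_StorageExtent parent class.
for vol in conn.EnumerateInstances("CIM_StorageVolume"):
    size_gb = vol["BlockSize"] * vol["NumberOfBlocks"] / 1e9
    print(vol["ElementName"], f"{size_gb:.1f} GB")
```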

The big news, however, is in the storage virtualization arena. In 1997, a team at IBM’s Almaden Research Center started developing a new file system to unite all the distributed devices in a heterogeneous network, one that can scale up to hundreds of servers holding billions of files containing petabytes of data accessed by thousands of users.

“Those of us who came from mainframes expected that a lot of machines should be able to get at and share the same storage resources, but in the open systems that had not happened,” says David Pease, Manager of Storage Software at IBM’s Almaden Research Center in San Jose, CA. “SANs give the hardware infrastructure to do that kind of sharing and resource centralization, but it was clear to us that it would take a major new file system infrastructure to fully realize all the benefits of SANs.”

This new file system doesn’t replace any existing file systems, but rather supplements them.

“In a Windows environment it appears as just another drive letter, Drive S, while in Unix it gets mounted at a mount point and is part of your file space,” Pease explains.
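
Because the file system surfaces through each platform’s ordinary mount mechanism, applications need no SAN-specific calls to use it. A minimal sketch, assuming a hypothetical S: drive letter on Windows and /mnt/sanfs mount point on Unix:

```python
import os
from pathlib import Path

# The SAN file system looks like any other volume to applications.
# "S:/" and "/mnt/sanfs" are hypothetical locations for this example.
root = Path("S:/") if os.name == "nt" else Path("/mnt/sanfs")

report = root / "reports" / "q3.txt"
report.parent.mkdir(parents=True, exist_ok=True)
report.write_text("Ordinary file I/O works unchanged on the shared volume.\n")
```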

Called the TotalStorage SAN File System, it will be available as an integrated hardware and software package starting in December 2003, extending IBM’s storage virtualization capabilities. Among other things, it improves data sharing and the ease and efficiency of access to data, and it reduces the application downtime caused by storage and data management tasks, all of which lightens the load on the IT staff.

“The new system improves productivity and reduces the pain for IT storage and server management staff,” says Zisman.

Feature courtesy of EITPlanet.

Drew Robb
Drew Robb is a contributing writer for Datamation, Enterprise Storage Forum, eSecurity Planet, Channel Insider, and eWeek. He has been reporting on all areas of IT for more than 25 years. He has a degree from the University of Strathclyde in the UK and lives in the Tampa Bay area of Florida.
