After years of vendor announcements and the promise inherent in standards such as SMI-S, you’d think that storage interoperability issues would be a thing of the past.
Yet a recent user survey found that the same old problems are still there: multiple SAN islands, multiple management domains and numerous tool sets, making it difficult to manage storage assets; a lack of integrated and interoperable solutions; an increasingly complex storage infrastructure; and problems executing even relatively straightforward actions.
Take the case of Tony Jurgens, a storage manager at a multinational energy company. He looks after thousands of Compaq, HP, and Sun Solaris servers worldwide in a storage environment that is a mix of SAN, DAS and NAS. A hodgepodge of equipment from the likes of NetApp, Hitachi, Emulex, Brocade and EMC meant plenty of complexity for the storage department. Even the most basic of functions could cause severe headaches.
“A simple action like changing out a host bus adapter (HBA) card meant that we had to go through a certification process with Microsoft,” Jurgens said. “This becomes very tedious and costly when a small part like this isn’t supported any more.”
While standardization initiatives are gathering steam, the reality on the ground is that information and applications remain stovepiped. One tool runs backup, another runs storage resource management (SRM), another takes care of disaster recovery (DR), and so on. In the middle sits the beleaguered storage admin, forced to hop from console to console to get anything done.
“If you are looking for interoperability nirvana, it is a little more than a myth, but it has not quite reached the real world either,” said Tony Lock, chief analyst at Bloor Research.
Early Storage Management Efforts
To understand why interoperability remains little more than a good idea, it is necessary to look at the way storage management tools have developed over the years. SRM products arrived on the scene in the mid-1990s. At that time, they focused primarily on file-level analysis and reporting. They could tell you who owned the largest files, which documents hadn’t been looked at for six months, or who was storing all those JPEGs. These were more or less reporting tools that lacked active management of physical storage assets, but they were good enough to be gobbled up by larger vendors — examples include HighGround Systems Storage Resource Manager (acquired by Sun), Trellisoft SRM (acquired by IBM), Astrum StorCast (acquired by EMC), and WQuinn StorageCentral (acquired by Veritas).
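To give a sense of what that file-level reporting involved, here is a toy sketch that walks a directory tree and flags the largest files along with anything untouched for roughly six months. It illustrates the kind of report these early SRM tools generated; it is not code from any of the products named, and the scan root is a placeholder.

```python
# Toy illustration of mid-1990s-style SRM file reporting: find the largest
# files and anything not accessed in roughly six months under a given root.
import os
import time

ROOT = "/data"                      # hypothetical share to scan
SIX_MONTHS = 183 * 24 * 3600        # ~six months, in seconds
now = time.time()

files = []
for dirpath, _dirnames, filenames in os.walk(ROOT):
    for fn in filenames:
        path = os.path.join(dirpath, fn)
        try:
            st = os.stat(path)
        except OSError:
            continue                # skip files that vanish mid-scan
        files.append((st.st_size, st.st_atime, path))

print("Ten largest files:")
for size, _atime, path in sorted(files, reverse=True)[:10]:
    print(f"  {size / 2**20:8.1f} MB  {path}")

stale = [p for _size, atime, p in files if now - atime > SIX_MONTHS]
print(f"\n{len(stale)} files not accessed in the last six months")
```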
Then came a variety of tools dealing with device/element management in SANs. Every fabric switch and director and every storage system included a management tool to configure, report on, provision, and monitor the device. Examples include EMC ECC, Brocade FabricTools, Hitachi Device Manager and McDATA EFC Manager.
“These management tools are developed by hardware companies that have a hardware agenda, and thus their tools only do a good job of managing that vendor’s storage device,” said Tom Rose, vice president of AppIQ. “Furthermore, these tools are myopic in that they don’t understand the relationship of the device they’re managing to the rest of the SAN, or that device’s impact on host servers and critical business applications.”
The end result is a mess of point tools for basic SRM and device management functions. None are integrated, so they require a variety of agents, databases and interfaces to operate. Even then, they don’t necessarily provide a complete picture of the storage infrastructure; that’s why you see administrators fiddling with Excel spreadsheets and Visio diagrams to manage and provision capacity, monitor performance and events, and map out connections between applications, host servers, HBAs, fabric switches and storage systems.
“Customers are looking for a panacea that would help them control their storage costs,” said Jurgens. “Vendors promised capabilities based on what the customer wanted, but issues with compatibility remain, particularly in rapidly changing environments.”
On the Road to Nirvana
Continuing user headaches cannot simply be written off as vendor malice or a brazen lack of concern for the customer. Efforts are being made, as the examples above show. More recently, vendor associations such as the Storage Networking Industry Association (SNIA) have successfully reined in conflicting vendor agendas under the umbrella of the Storage Management Initiative Specification (SMI-S) standard.
While it is a nice start, SMI-S is far from a complete solution to user woes. Essentially, it is a common management interface for storage hardware, aimed at integrating the management of products across a multi-vendor storage environment.
“SMI-S helps the user do about 60 percent of the basic daily storage functions such as add a LUN or create a zone, but there is still a lot of functionality to be added,” said Jim Geronaitis of Computer Associates, a SNIA member involved in the development of storage standards.
The other 40 percent of the time, the administrator is forced to view the consoles of the various hardware components. Over time, SMI-S will add more and more of these functions, but that is still only the tip of the iceberg. It does not address huge areas of the storage landscape such as asset management, information management, DR, backup, remote replication and failover.
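For the technically curious, SMI-S is layered on the DMTF’s Common Information Model (CIM) and the WBEM protocol, so a compliant array or switch can be queried with a generic WBEM client. The sketch below uses the open-source pywbem library to list the LUNs (CIM_StorageVolume instances) an SMI-S provider exposes; the provider address, credentials and namespace are hypothetical placeholders, and real arrays typically register vendor-specific namespaces.

```python
# Minimal sketch of querying an SMI-S provider over WBEM with pywbem.
# The host, credentials and namespace below are hypothetical placeholders.
import pywbem

conn = pywbem.WBEMConnection(
    "https://smis-provider.example.com:5989",  # assumed provider URL
    creds=("admin", "password"),               # assumed credentials
    default_namespace="root/cimv2",            # vendors often use their own
    no_verification=True,                      # sketch only; verify TLS in production
)

# SMI-S models storage objects as CIM classes; enumerating CIM_StorageVolume
# returns the logical volumes (LUNs) the provider knows about.
for vol in conn.EnumerateInstances("CIM_StorageVolume"):
    name = vol["DeviceID"]                     # key property of the class
    size_bytes = (vol["BlockSize"] or 0) * (vol["NumberOfBlocks"] or 0)
    print(f"{name}: {size_bytes / 2**30:.1f} GiB")
```

Roughly speaking, provisioning tasks such as creating a LUN or a zone go through the same model, by invoking methods on the configuration services the standard defines; that is the 60 percent Geronaitis describes.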
Nevertheless, Geronaitis regards anything done with SMI-S as a positive. He feels it greatly reduces the qualification time required for product releases, since the company no longer has to work with dozens of vendors to make sure that its software interoperates effectively.
CA, therefore, is fully behind standards such as SMI-S. It is maintaining a hardware-independent market stance and focusing on storage management tools that work across multiple platforms. BrightStor Process Automation Manager, for example, is designed to provide a business view of IT and storage assets.
Obstacles Remain
Such tools, however, can still hit minefields left over from the pre-SMI-S storage world. Despite involvement from all the key players, devices continue to have core functionality that hardware vendors are not exposing. So a portion of the device is available to standard interfaces, while another portion remains vendor-only.
Take an everyday event like the reallocation of cache. No standard can map to all the possible variables, since cache reallocation is device- and vendor-dependent. So SMI-S is a good start and one that should be supported, but it is not a complete solution.
AppIQ is one vendor taking aim at the interoperability problem. Its approach integrates SRM and SAN management on a unified platform. The AppIQ StorageAuthority Suite uses industry standards such as Web services, the Common Information Model (CIM) and SMI-S.
“Our standards-built platform enables us to simplify management of heterogeneous storage infrastructure and support new standards-compliant devices much faster than competitive products designed before SMI-S and CIM existed,” said Rose.
Once again, AppIQ appears to move things forward by making storage management a little easier, but it may not prove to be a silver bullet that instantly dissolves all your storage hassles. The question is, will we ever get there?
“I’m not sure we’ll ever arrive at true interoperability,” said Jurgens. “With the rate of change we’re seeing in the industry, it seems we are always playing catch up.”
He isn’t sitting around idly complaining, though. Jurgens and about 2,000 storage users around the nation have joined the Association of Storage Networking Professionals (ASNP) in the hope of presenting a unified user voice that will be heard by hardware vendors. They want a voice early in the product development process, well before the beta stage when it is typically too late to do much about serious issues.
“Users have every right to be skeptical about the various interoperability announcements,” said Daniel Delshad, chairman of the ASNP. “They should work with organizations like the ASNP to get a clearly defined message out to the industry.”
CA’s Geronaitis, though, is upbeat about achieving an ultimate resolution.
“If you are hoping for interoperability with every device ever created, then yes, it is a myth,” he said. “But if you mean interoperability within major players or a core set of products, then that is already becoming available.”
Lock agrees. “If you are a customer with a relatively straightforward set of demands and you are utilizing hardware and software from a limited number of vendors, interoperability may soon be ready for delivery to your data center.”