PHOENIX — While storage professionals spent much of this week with their heads in the cloud at Storage Networking World, they seemed more concerned about down-to-earth issues like maximizing productivity, minimizing cost, and learning about new data storage technologies to help them do more with less.
Storage management was top of mind for many IT professionals, judging from the packed sessions and hands-on labs. The variety of information and file types, the growing volume of data, and the pace of change are pushing users to find more effective ways to manage storage.
End users continue to consume massive amounts of storage capacity and are “eating storage for lunch,” Wendy Betts of IBM (NYSE: IBM) told a packed house of storage managers. Her advice: know your users in order to define storage tiers (three to five is the recommended number), understand depreciation schedules and maintenance to get a grip on costs, and define usage by what can actually be measured, not by assumptions about what is being used.
Storage resource management (SRM) tools that can report on storage infrastructures, charge back for storage usage, and manage storage and data based on the importance of the information were the focus of a presentation by IBM’s Russell Warren. Warren said users should start with reporting to forecast and track storage usage and to provide regular asset and capacity reports. Deploying operational capabilities is then a prerequisite for moving to a service-based approach, one in which administration is centralized and configurations are automated and optimized. Students in hands-on labs gave high marks to exercises on discovering, viewing, and creating reports, and on identifying and reclaiming unused storage.
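By way of illustration, the report-first discipline Warren described can be sketched in a few lines of Python. The mount point and staleness threshold below are hypothetical, and this is a toy stand-in for an SRM tool, not anything IBM demonstrated: it prints a capacity report for one mount point and flags files untouched for six months as reclamation candidates.

```python
import os
import time
import shutil
from pathlib import Path

STALE_DAYS = 180              # hypothetical reclamation threshold
ROOT = Path("/srv/shared")    # hypothetical storage mount to audit

def capacity_report(root: Path) -> None:
    """Print a simple asset/capacity report for one mount point."""
    usage = shutil.disk_usage(root)
    print(f"{root}: {usage.used / 2**30:.1f} GiB used "
          f"of {usage.total / 2**30:.1f} GiB "
          f"({usage.used / usage.total:.0%})")

def reclaim_candidates(root: Path, stale_days: int = STALE_DAYS):
    """Yield (path, size) for files not accessed in `stale_days` days."""
    cutoff = time.time() - stale_days * 86400
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = Path(dirpath) / name
            try:
                st = path.stat()
            except OSError:
                continue  # file vanished or unreadable; skip it
            if st.st_atime < cutoff:
                yield path, st.st_size

if __name__ == "__main__":
    capacity_report(ROOT)
    stale = list(reclaim_candidates(ROOT))
    total = sum(size for _, size in stale)
    print(f"{len(stale)} files ({total / 2**30:.1f} GiB) "
          f"untouched for {STALE_DAYS}+ days: reclaim candidates")
```

A real SRM product would add chargeback and policy-driven data movement on top of this kind of inventory, but even the simple report makes unused capacity visible.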
Virtual Confusion
Virtualization remains a big issue for storage managers. The technology may be suffering from the three Cs of commercialization, confusion and conflict, remarked a storage manager with a retail company, and the variety of sessions delivering mixed messages seemed to bear that out.
Users still seem overwhelmed by the sheer variety of approaches and solutions in the marketplace. They expressed confusion about server versus storage virtualization and wondered how to choose the best approach.
SNIA tutorial chair Rob Peglar of Xiotech tried to clear things up, explaining that storage virtualization is a tool for IT administrators to simplify the management of storage resources and reduce the complexity of their overall IT infrastructure.
Peglar outlined an eight-step virtualization checklist for achieving capacity, performance and high availability. Starting from a direct attached storage (DAS) environment, IT should first add storage area network (SAN) infrastructure, then virtualization, then establish a high availability environment, create and use a single storage pool, and finally implement load balancing and multi-pathing. Peglar cautioned that keeping goals in view is key to a successful virtualization adoption: aligning the storage infrastructure with business and IT objectives, meeting service level agreements, and implementing disaster recovery and strategic plans in tandem with virtualization.
FCoE, SSDs Draw Interest
In the new technology area, Fibre Channel over Ethernet (FCoE) may finally be coming of age, judging by the traffic at a multi-vendor demonstration of 8Gbps Fibre Channel and FCoE in the Fibre Channel Industry Association (FCIA) booth, and by the packed labs for FCoE and Converged Enhanced Ethernet (CEE), where users could get their hands on the technology.
Fibre Channel is powering a number of areas in the data center, said Tom Hammond-Doel, vice chair of the FCIA, including tiered storage and ILM, where it helps match data to the most appropriate storage type, and green storage, where it can deliver the performance needed for the best efficiency. All FCoE milestones have been completed and the FCoE standard has been submitted to the INCITS committee, Hammond-Doel said. The FCIA has also developed recommendations covering the minimum 10GbE physical, Ethernet and FC logical protocol criteria for enterprise data center I/O unification.
Attendees also took an interest in enterprise-class solid state storage (SSD), with DRAM and NAND flash applications on display. According to Peglar, applications require space, time and better I/O. Random-read workloads are typically the best fit for solid state storage, which can deliver very consistent I/O response times. Solid state storage can potentially eliminate waste in the server and storage infrastructure, in applications and in data centers, and it can also ease human bottlenecks by reducing wait times for screen refreshes and queries. Solid state storage may suit virtualized workloads well, but the best practice is to examine every layer of the storage pyramid to determine where hard disk drives and solid state drives each belong.
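Peglar’s point about consistent response times is easy to check with a crude micro-benchmark. The sketch below is a hypothetical illustration, not a lab exercise from the show: it times random 4 KiB reads against a large test file and prints median and 99th-percentile latencies. A serious test would bypass the page cache with direct I/O or use a tool such as fio.

```python
import os
import random
import statistics
import time

PATH = "/srv/shared/testfile.bin"  # hypothetical large test file
BLOCK = 4096                       # 4 KiB reads, a common random-I/O test size
SAMPLES = 1000

def random_read_latencies(path: str, samples: int = SAMPLES) -> list:
    """Time random single-block reads; return latencies in milliseconds."""
    fd = os.open(path, os.O_RDONLY)
    try:
        blocks = os.fstat(fd).st_size // BLOCK
        latencies = []
        for _ in range(samples):
            offset = random.randrange(blocks) * BLOCK
            start = time.perf_counter()
            os.pread(fd, BLOCK, offset)
            latencies.append((time.perf_counter() - start) * 1000)
        return latencies
    finally:
        os.close(fd)

if __name__ == "__main__":
    lat = sorted(random_read_latencies(PATH))
    print(f"median {statistics.median(lat):.3f} ms, "
          f"p99 {lat[int(len(lat) * 0.99)]:.3f} ms")
    # On an SSD the median and p99 tend to sit close together;
    # on a spinning disk, seek times spread them far apart.
```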
The new TCO Calculator developed by SNIA’s Solid State Storage Initiative may help define and justify SSD replacement of HDDs in some cases, perhaps for high-performance caching devices but not for medium- to low-I/O-performance applications. A presentation by Terry Yoshii and David Stutznegger of Intel (NASDAQ: INTC) described how TCO can be used for comparative analysis of candidate solutions from a cost/benefit perspective. A TCO model should encompass all costs and the hard and soft benefits that offset them, with parameters for HDD use, total power consumed, total application capacity requirements, and the cost of maintenance and repairs.
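The underlying arithmetic is straightforward. The following sketch uses entirely hypothetical prices and device counts, and is not the SNIA calculator or Intel’s model: it sums acquisition, power and maintenance over a five-year service life for an HDD configuration and an SSD configuration sized for the same high-IOPS workload.

```python
# Hypothetical TCO comparison: all prices and figures are illustrative,
# not taken from the SNIA SSSI calculator or the Intel presentation.

def tco(acquisition, watts, kwh_price, annual_maintenance, years):
    """Total cost of ownership: purchase + power + maintenance over `years`."""
    power_cost = watts / 1000 * 24 * 365 * years * kwh_price
    return acquisition + power_cost + annual_maintenance * years

YEARS = 5
KWH = 0.10  # hypothetical $/kWh

# Hypothetical devices sized to serve the same high-IOPS workload:
# many short-stroked HDDs vs. a handful of SSDs.
hdd = tco(acquisition=30 * 300, watts=30 * 12, kwh_price=KWH,
          annual_maintenance=500, years=YEARS)   # 30 HDDs @ $300, 12 W each
ssd = tco(acquisition=4 * 1500, watts=4 * 3, kwh_price=KWH,
          annual_maintenance=200, years=YEARS)   # 4 SSDs @ $1500, 3 W each

print(f"HDD 5-year TCO: ${hdd:,.0f}")
print(f"SSD 5-year TCO: ${ssd:,.0f}")
# In this hypothetical high-IOPS scenario the SSDs win despite higher $/GB;
# for capacity-bound, low-I/O workloads the HDD column usually wins instead.
```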
Cloud Security a Concern
Interest in cloud storage came down to earth at SNW, with a real focus on security. With worldwide IT spending on cloud services forecast by IDC to reach $42 billion by 2012, there will be a move to a service-oriented infrastructure, according to Val Bercovici of NetApp (NASDAQ: NTAP) and the SNIA Cloud Storage Initiative. Bercovici predicted a contest between application-centric and resource-centric data management: he expects resource-centric approaches to win the battle inside the enterprise through better efficiency, while application-centric approaches win the larger war through open cloud resources.
Cloud computing is viewed as IT delivered as a service, with standards such as the Cloud Data Management Interface (CDMI) serving as the communication vehicle applications use to access and manage data. Central to CDMI is a pool of resources that can be consumed in small increments on demand, which storage virtualization makes possible, yielding a more robust system at lower cost than traditional storage hardware solutions.
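CDMI is specified as a RESTful interface over HTTP, so a single request conveys the flavor. The endpoint and object path below are hypothetical, and the headers follow the draft specification’s conventions, so treat this as a sketch rather than working client code.

```python
import requests  # third-party HTTP client: pip install requests

BASE = "https://cloud.example.com/cdmi"   # hypothetical CDMI endpoint
HEADERS = {
    "X-CDMI-Specification-Version": "1.0",
    "Content-Type": "application/cdmi-object",
    "Accept": "application/cdmi-object",
}

# Store a data object: in CDMI, a PUT of JSON carrying the object's
# mimetype and value creates it at the named path.
resp = requests.put(
    f"{BASE}/reports/q3-capacity.txt",    # hypothetical container/object path
    headers=HEADERS,
    json={"mimetype": "text/plain", "value": "tier1: 72% used"},
)
resp.raise_for_status()

# Read it back: a GET on the same path returns the JSON representation,
# including metadata the cloud maintains alongside the value.
obj = requests.get(f"{BASE}/reports/q3-capacity.txt", headers=HEADERS).json()
print(obj["value"])
```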
Russ Fellows of the Evaluator Group outlined practical ways to secure data in cloud services. He cautioned that moving to a virtual data center means there is no physical perimeter and there are no physical controls, which demands a greater reliance on information security. For compliance purposes, users must assume that cloud resources are publicly accessible and that access control is typically enforced through encryption. Storage managers should ask questions about the cloud storage provider’s architecture, the controls used during multi-customer provisioning, how data is destroyed in a multi-tenant environment, whether data can be seized by a third party, and whether long-term archiving is supported. Fellows recommended that storage professionals create and review their architecture, understand the security checklist, follow best-practice guidelines, and employ the available encryption and key management products to move forward with deployments.
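One practical application of Fellows’ encryption advice is to encrypt data on the client before it ever reaches the provider, so the key never leaves the customer’s control. A minimal sketch using the Python cryptography library’s Fernet recipe, with hypothetical file names:

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Generate a key once and keep it in your own key-management system.
# If the key never reaches the provider, the provider (or a third party
# seizing its hardware) sees only ciphertext.
key = Fernet.generate_key()
cipher = Fernet(key)

# Encrypt locally before upload (hypothetical file names).
with open("customer-records.csv", "rb") as f:
    ciphertext = cipher.encrypt(f.read())
with open("customer-records.csv.enc", "wb") as f:
    f.write(ciphertext)  # this opaque blob is what goes to the cloud

# Decrypt after download, on trusted hardware only.
plaintext = cipher.decrypt(ciphertext)
```

This addresses the seizure and multi-tenancy questions above in one stroke, at the cost of taking on key management yourself.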