Virtualization Gets Real at SNW


The “V” word was on everyone’s lips at this week’s Storage Networking World conference in San Diego. Analysts, end users and vendors alike preached the virtualization mantra to the gathered multitudes. Yet the term is not viewed favorably in all quarters.

“According to some analyst studies, SOA and virtualization are the two most despised terms in the current IT vocabulary,” says Gary Berger, vice president of technology at Banc of America Securities. “All it is really about is setting up an abstraction layer to provide more distributed workloads.”

His company suffered poor utilization as a result of multiple data centers, individual silos of data and overprovisioning of capacity. The environment was hard to manage, which led him to adopt virtualization technology in the form of a SAN from 3PARdata, along with IBM blades. He also consolidated into two data centers.

“Virtualization and consolidation has given us a 95 percent reduction in storage administration,” says Berger. “We are able to offer each application and business unit their own virtual slice with high performance and availability.”

Other users sang a similar tune. Alex Lopez, storage director at the University of California, Davis Medical Center, has been virtualizing to aid HIPAA compliance efforts. UC Davis runs a mixed mainframe, AIX and Windows environment with Brocade switches and Hitachi USP, IBM ESS and IBM FAStT storage arrays.

Not so long ago, he had the mainframe direct attached to a single storage array, medical images being burned to DVD and a muddled storage foundation. His first step was to simplify: instead of multiple SANs, he now has one, and images are stored on disk. As a result, he says, retrieval of long-term archival data from the FAStT has shrunk from 45 minutes to seconds.

“If you don’t build the right foundation, it will all fall apart,” says Lopez. “Keep virtualization simple by building it upon the fabric.”

The Vendor View

Ashok Singhal, CTO of 3PAR, agrees with Lopez. He stresses that while virtualization can be done at many layers, it really needs to be implemented at the base layer to be broadly effective.

“Virtualization has almost become a bad word, as it is subjected to so much hype,” says Singhal. “It is used to mean a wide range of things. You have to figure out the right layer to address.”

In his view, virtualization is a simple concept: provide the user with a single logical view of storage while taking advantage of a complex physical hierarchy. The benefits are resource aggregation and sharing, cost and performance optimization, and improved data availability.
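
To make that abstraction concrete, here is a minimal Python sketch of the idea: several physical arrays are aggregated into one logical pool, and each application is handed a virtual slice carved from it. The class, array names and sizes are hypothetical illustrations, not any vendor’s actual interface.

```python
# Hypothetical sketch: physical arrays disappear behind one logical
# pool, and applications see only the slices carved out for them.

class StoragePool:
    def __init__(self):
        self.capacity_gb = 0
        self.allocations = {}  # application name -> GB carved out

    def add_array(self, name, capacity_gb):
        # The physical array is absorbed into one aggregate figure.
        self.capacity_gb += capacity_gb

    def carve_slice(self, app, size_gb):
        used = sum(self.allocations.values())
        if used + size_gb > self.capacity_gb:
            raise ValueError("pool exhausted")
        self.allocations[app] = size_gb

pool = StoragePool()
pool.add_array("array-a", 500)
pool.add_array("array-b", 750)
pool.carve_slice("trading-app", 300)   # the app never sees the arrays
print(pool.capacity_gb, pool.allocations)
```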

Singhal believes that storage virtualization should not be done in multiple ways by multiple systems. He prefers a streamlined approach at the block storage layer. But where to virtualize block storage: in the host OS, the SAN switch, an appliance or the storage array? He advocates the last option.

“Virtualization should be done in the storage subsystem, as there you can directly address disk drives, power, capacity, etc.,” says Singhal. “It is much harder to achieve at the upper levels.”

A 3PAR array, for example, lets the administrator change the RAID type or drive type while applications are running. Its Dynamic Optimization feature enables users to transition from one service level to another non-disruptively with one command, says Singhal.
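
The general pattern behind such features, copy-then-remap, can be sketched in a few lines of Python. This is a hypothetical illustration of the technique, not 3PAR’s implementation: data is copied to a region laid out at the new service level while the volume stays online, and then the volume’s mapping is switched in one step.

```python
# Hypothetical sketch of a non-disruptive service-level change using
# the copy-then-remap pattern.

def migrate_service_level(volume, old_region, new_region):
    # Phase 1: background copy; the volume stays online throughout.
    for block in range(len(old_region)):
        new_region[block] = old_region[block]
    # Phase 2: one step flips the mapping; applications never notice.
    volume["region"] = new_region

volume = {"region": [0, 1, 2, 3]}              # e.g. a RAID 1 layout
raid5_region = [None] * len(volume["region"])  # e.g. a RAID 5 layout
migrate_service_level(volume, volume["region"], raid5_region)
print(volume["region"])  # data now lives at the new service level
```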

In support of his argument, he cites the gains of companies that adopted virtualization at a higher level: while they have seen benefits, the time and cost savings typically fall in the 10 to 20 percent range. Done in the array at the block level, says Singhal, far greater results are being achieved. As well as a 95 percent drop in storage administration, 3PAR customer Banc of America Securities has cut its purchases of raw capacity by 50 percent.

Not everyone agrees with 3PAR’s take on this market, of course. Most notably, the file virtualization vendors prefer global namespace-based technologies. Depending on who you talk to, this is either termed file virtualization or file area network (FAN).

“A FAN is defined as a way to improve management of unstructured data by decoupling that data from specific servers or NAS filers, and offering services such as data migration, load balancing and replication,” says Rick Gillett, CTO of Acopia Networks, who also sits on the Storage Networking Industry Association’s (SNIA) FAN working group.
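
Gillett’s definition rests on a global namespace, which is easy to sketch: clients address stable logical paths, and a mapping layer resolves them to whichever filer currently holds the data. The paths and filer names below are invented for illustration.

```python
# Hypothetical sketch of a global namespace: move the data, update one
# mapping entry, and client-visible paths never change.

namespace = {
    "/corp/finance": "filer1:/vol/fin",
    "/corp/engineering": "filer2:/vol/eng",
}

def resolve(logical_path):
    """Translate a client-visible path to its physical location."""
    for prefix, physical in namespace.items():
        if logical_path.startswith(prefix):
            return logical_path.replace(prefix, physical, 1)
    raise FileNotFoundError(logical_path)

# Non-disruptive migration: data moves to a new filer behind the scenes.
namespace["/corp/finance"] = "filer3:/vol/fin_new"
print(resolve("/corp/finance/q3/report.xls"))
```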

Other vendors add more elements to the definition. Brocade Communications Systems has issued white papers listing half a dozen or more elements of a FAN; Attune Systems counts four.

“A FAN comprises discovery of the environment, policy management, non-disruptive migration and a global namespace,” says Daniel Liddle, vice president of marketing at Attune. “We are the only vendor that does all four elements for Windows-based systems.”

Partially Virtualized

While vendor conflicts and technological debates will continue for some time, the bottom line is that storage is late to the party. Bob Gill, an analyst with TheInfoPro, gave a presentation on virtualization’s rapid uptake in the enterprise at the server level.

TheInfoPro notes that more than 50 percent of enterprises have adopted server virtualization, and they intend to add even more VMs in the coming year.

“Virtualization for dynamic provisioning is next, as well as further development of the connection to virtual storage,” says Gill. “We still need to create such things as a specific path through an HBA to a specific virtual machine.”

Tony Asaro, an analyst at Enterprise Strategy Group, takes a similar stance.

“Storage is only partially virtualized,” says Asaro. “Without infrastructure virtualization, we will always be less than the sum of our parts.”

He envisions a single logical pool with hundreds of disks servicing I/O simultaneously in order to eliminate disk read/write bottlenecks. Beyond wide striping over many disks, he points to clustering, snapshots, de-duplication, internal tiering and logical partitioning as some of the many facets of storage virtualization.
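
Wide striping itself is simple to illustrate. In this hypothetical Python sketch, consecutive stripe units of a logical volume are spread round-robin across a large disk pool, so one big sequential I/O engages every spindle in parallel rather than queuing on a single drive.

```python
# Hypothetical sketch of wide striping across a large disk pool.

NUM_DISKS = 200      # "hundreds of disks", per Asaro's vision
STRIPE_UNIT = 64     # blocks written to one disk before moving on

def locate(logical_block):
    """Map a logical block to (disk index, block offset on that disk)."""
    stripe_unit_index = logical_block // STRIPE_UNIT
    disk = stripe_unit_index % NUM_DISKS          # round-robin placement
    row = stripe_unit_index // NUM_DISKS
    return disk, row * STRIPE_UNIT + logical_block % STRIPE_UNIT

# 200 consecutive stripe units land on 200 different disks:
disks = {locate(i * STRIPE_UNIT)[0] for i in range(NUM_DISKS)}
print(len(disks))  # 200 -> one large read is serviced by all spindles
```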

Beware Complexity

Storage virtualization, then, is far from a mature technology. It is hard to say whether one underlying technology will prevail or whether multiple layers will coexist. But one thing is clear: an indiscriminate approach to virtualization could run into serious trouble.

“The challenges related to virtualization are associated with the extra layers of software, and potentially with extra hardware, that threaten to increase both cost and complexity,” says Robert Passmore, an analyst with Gartner.

Understandably, users emphasize caution.

“You need to get into file virtualization gradually,” says Liddle.

“Start small,” says Lopez. “Storage virtualization is something you want to implement slowly but surely.”


Drew Robb
Drew Robb is a contributing writer for Datamation, Enterprise Storage Forum, eSecurity Planet, Channel Insider, and eWeek. He has been reporting on all areas of IT for more than 25 years. He has a degree from the University of Strathclyde in the UK and lives in the Tampa Bay area of Florida.
