File Virtualization Heats Up


Explosive storage growth is driving the need for simplified management of file storage. But how can storage users increase efficiency and control growth without affecting service levels and still respond to changing business needs?

One possible solution is file virtualization, a technology that has suddenly caught fire. In TheInfoPro's (TIP) Heat Index survey, file virtualization jumped from 15th to 6th place in June, and NAS virtualization adoption is projected to double this year.

Robert Stevenson, managing director of TheInfoPro’s storage practice, said 24 percent of respondents plan to acquire a NAS virtualization solution in the coming year.

They want the technology to deal with their burgeoning unstructured data. Spreadsheets, presentations, e-mail, text documents, video and audio files, and a litany of other file-based information constitute about 80 percent of enterprise data. That’s a headache for the administrator, and a headache for any management body that wants to place controls on retention periods and content type.

“File virtualization addresses a key end user problem — the proliferation of NAS and file servers,” says Stephen Foskett, director of strategy services at GlassHouse Technologies. “Some organizations have tens, or even hundreds, of file servers, and virtualization promises to organize those into a coherent whole and allow a greater level of flexibility for the storage managers.”

File virtualization increases data mobility by making file systems and files location-independent of the applications and users that access them. There are many different approaches. Microsoft's Distributed File System (DFS) is the most widely accepted method at the moment: DFS-based virtualization is a relatively simple add-in to Windows-only networks, so it is the easier sell. However, Foskett notes that it is somewhat limited in scope.
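To make the referral idea concrete, here is a minimal Python sketch of how a virtualization layer of this kind decouples the path a client sees from the share that actually holds the data. The table, server names and paths are invented for illustration; in real DFS the redirection is performed by Windows clients and namespace servers, not by application code.

```python
# Minimal sketch of namespace referral: clients address one logical tree,
# and a referral table maps each logical folder to whichever physical
# share currently holds its data. All names and paths are hypothetical.

REFERRALS = {
    "\\\\corp\\files\\engineering": "\\\\filer01\\eng",
    "\\\\corp\\files\\marketing": "\\\\filer02\\mkt",
}

def resolve(logical_path: str) -> str:
    """Translate a client-visible path to its current physical location."""
    for folder, share in REFERRALS.items():
        if logical_path.startswith(folder):
            return share + logical_path[len(folder):]
    raise FileNotFoundError(logical_path)

# Clients keep using \\corp\files\...; relocating a folder to another
# server is just a table update, invisible to users and applications.
print(resolve("\\\\corp\\files\\engineering\\specs\\design.doc"))
# -> \\filer01\eng\specs\design.doc
```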

The real promise, however, may lie with in-network virtualization appliances. Interestingly, the lingering perception that NAS is the poor man's SAN is not only helping propel NAS virtualization appliances into the spotlight, but is also beginning to ignite the SAN virtualization market.

“There is the perception that NAS and file services are less critical in terms of performance and stability than SAN, so buyers are more willing to take a gamble on a new product such as an in-band NAS appliance,” says Foskett. “These in-band appliances are very similar to their SAN counterparts, however, so perhaps the success of file virtualization will spark interest in the SAN side of the house.”

One Big Virtual Box

The oldest way to solve the problem was to consolidate smaller file servers into a single large file server; the next was to move to an even larger one that was easier to manage and maintain. At some level of scale, however, finding a bigger box becomes impossible or simply stops making sense. It is possible to perform plenty of acrobatics on the client side using auto-mounters, name servers and load balancers, but as system sizes mushroomed, those solutions proved unwieldy and impossible to maintain.

That’s where the current batch of file virtualization tools has become attractive. They allow customers to have more boxes, but make it look like it’s just one big box. That simplifies life for storage users.

As a sign of the market's potential, most of the startups have already been gobbled up: Rainfinity was bought by EMC, Spinnaker by NetApp and NuView by Brocade, while NeoPath is backed, though not owned, by Cisco. Among the remaining independents are Acopia and the newly launched Attune Systems.

Attune calls NAS virtualization “network file management” (NFM). Its Maestro File Manager creates an abstraction layer between where the files and directories are stored and where they are viewed by clients. This allows the storage infrastructure to change as needed without disturbing clients and the applications that use files. Maestro also centralizes monitoring, alerting and preventative maintenance actions so that instead of logging into hundreds of machines, administrators can manage their file servers from a single pane.
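The essential trick is that the logical name and the physical location are kept separate, so the physical side can change as long as the cut-over is atomic. The following Python sketch illustrates that pattern using throwaway local directories; it is a conceptual illustration under invented names, not a description of how Maestro actually works.

```python
import pathlib
import shutil
import tempfile
import threading

# Build a throwaway "old filer" with one file so the sketch actually runs.
root = pathlib.Path(tempfile.mkdtemp())
old = root / "filer01" / "projects"
old.mkdir(parents=True)
(old / "plan.txt").write_text("q3 roadmap")

_lock = threading.Lock()
namespace = {"/files/projects": old}  # logical path -> physical location

def migrate(logical: str, new_physical: pathlib.Path) -> None:
    """Copy data to the new back end, then atomically repoint the name."""
    shutil.copytree(namespace[logical], new_physical)  # old copy still serves
    with _lock:                                        # atomic cut-over
        namespace[logical] = new_physical
    # The old location can now be retired or its capacity reclaimed.

migrate("/files/projects", root / "filer02" / "projects")
print(namespace["/files/projects"] / "plan.txt")  # same name, new home
```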

“These features dramatically reduce the time it takes to manage large collections of file servers and the daily tasks of adding capacity, migrating content and performing maintenance,” says Daniel Liddle, vice president of marketing at Attune. “The ROI of existing file servers is also improved by consolidating unused capacity and rebalancing the load.”

EMC is another company that recognizes the strategic significance of this market. It has combined EMC Rainfinity with other products in its arsenal such as VMware and Invista to create a far-reaching virtualization approach.

EMC Rainfinity Global File Virtualization virtualizes NAS and file servers by operating at the CIFS and NFS file protocol level. It supports multi-vendor file storage environments, i.e., it works with EMC and NetApp NAS boxes as well as general-purpose file servers based on Unix/Linux and Windows. Prices start at $80,000 per appliance.
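What operating "at the file protocol level" means can be shown in miniature: an in-band device sits in the data path and forwards every file request to whichever back end owns the path, which is also what lets it reroute traffic during migrations regardless of whose hardware sits behind it. The sketch below is purely illustrative; the classes and server names are invented, and no real CIFS or NFS handling is attempted.

```python
class Backend:
    """Stand-in for a NAS filer or general-purpose file server."""
    def __init__(self, name: str):
        self.name = name
        self.files: dict[str, bytes] = {}

    def read(self, path: str) -> bytes:
        return self.files[path]

    def write(self, path: str, data: bytes) -> None:
        self.files[path] = data


class InBandProxy:
    """Sees every request in-band, so it can also reroute mid-migration."""
    def __init__(self):
        self.routes: dict[str, Backend] = {}  # path prefix -> backend

    def _owner(self, path: str) -> Backend:
        # Longest-prefix match picks the back end that owns this subtree.
        prefix = max((p for p in self.routes if path.startswith(p)), key=len)
        return self.routes[prefix]

    def read(self, path: str) -> bytes:
        return self._owner(path).read(path)

    def write(self, path: str, data: bytes) -> None:
        self._owner(path).write(path, data)


proxy = InBandProxy()
proxy.routes["/eng"] = Backend("vendor-a-filer")  # hypothetical back ends
proxy.routes["/mkt"] = Backend("vendor-b-filer")
proxy.write("/eng/spec.txt", b"v1")
print(proxy.read("/eng/spec.txt"))  # b'v1'
```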

Jack Norris, director of virtualization marketing at EMC, lists the benefits of the technology as: identifying poor utilization; minimizing system and network bottlenecks; providing non-disruptive read/write access during migration; enabling administrators to optimize the storage environment without impacting service levels; offering a single interface for monitoring and data movement; and supporting synchronous file replication across sites and heterogeneous environments over IP networks.

“Rainfinity Global File Virtualization provides non-disruptive access and frees up data so there is no impact on user access or network performance,” says Norris. “Other solutions are either incomplete by not providing the capability to manage active files, or proprietary file systems that do not support heterogeneous environments.”

NetApp counters by extolling the virtues of its Virtual File Manager (VFM) software and V-Series virtualization hardware. These can be used either to add NAS functionality to existing SAN storage or to consolidate and simplify third-party SAN environments.

“With VFM, storage administrators have virtualized all their open systems and NetApp file servers into a single view of their enterprise file system,” says Patrick Rogers, vice president of products and alliances at Network Appliance. “We give our customers the tools to scale their infrastructure without significantly increasing their management complexity.”

In addition, NetApp just introduced Data ONTAP GX, the latest version of its flagship OS. It provides a single global namespace that can scale to dozens of servers and many petabytes of data, serving tens of thousands of compute nodes, all while appearing as a single NAS storage system.

Somewhat surprisingly, Rogers isn’t nearly as bullish about the merits of NAS virtualization as Norris. Rogers concedes that file virtualization can hide much of the complexity of spreading data across multiple file servers. But he has reservations.

“File virtualization is not for everyone, and in some cases would introduce an unnecessary layer into the solution stack,” says Rogers. “If a company finds itself limited by the complexity of managing too many file servers, taking advantage of file virtualization is one way to go.”

Block Versus File

What other directions are available? GlassHouse's Foskett sees several possible options. Vendors such as BlueArc, OnStor and PolyServe, for example, help companies build out massive NAS systems to replace a diverse set of smaller servers. To keep things very simple, he says, some companies might just get away with using Microsoft's DFS without any external product, in which case he questions the value of paying for a third-party DFS solution.

“Windows Server 2003 R2 includes enhanced DFS management tools that call into question the value of other DFS-only approaches,” says Foskett.

And, of course, there is block virtualization in a SAN environment. But NetApp, for one, dismisses it as vastly inferior to file-based methods.

“Compared to the SAN/block virtualization solutions out there, file virtualization is just easier to deploy and easier to use,” says Rogers.

This view is supported by TIP.

“File virtualization is easier, faster and less invasive, with less interruption to storage availability,” says Stevenson. “NAS file virtualization adoption is exceeding block virtualization adoption.”

EMC, however, takes a more neutral stance. It is talking up both technologies.

“Block virtualization and file virtualization are both critical components of an overall information infrastructure,” says Norris. “However, the volume and growth of unstructured data and the number of users needing access to that data make file virtualization a top priority in many organizations.”

That said, the hype curve may be running a little ahead of reality. Foskett notes that although file virtualization is hot in terms of product offerings, it is only just heating up among buyers.

“This is one of the key developing market segments, right along with CDP, archiving and iSCSI, in terms of competing products and a great deal of innovation,” he says. “However, end users are just starting to get the message. I expect some consolidation as the products mature, with purchases coming on strong in 2007.”

For more storage features, visit Enterprise Storage Forum Special Reports

Drew Robb is a contributing writer for Datamation, Enterprise Storage Forum, eSecurity Planet, Channel Insider, and eWeek. He has been reporting on all areas of IT for more than 25 years. He has a degree from the University of Strathclyde in the UK and lives in the Tampa Bay area of Florida.
