Backup Strategies for a Virtual World


MOUNTAIN VIEW, Calif. — As enterprises move more heavily into virtualization, they will have to overhaul their data backup and disaster recovery strategies, because those strategies don’t translate well to the new virtualized world.

That’s the case Deepak Mohan, senior vice president of Symantec’s (NASDAQ: SYMC) data protection group, made in a press briefing here at the company’s offices, where he discussed its strategies for disaster recovery, high availability and data protection.

There are two major reasons why virtualization requires a new approach to data backup and disaster recovery, Mohan said. One is virtual sprawl, which is the unchecked proliferation of virtual machines (VMs). “Virtual machines are easy to deploy and propagate like rabbits, and that causes complexity of management from the data perspective,” Mohan explained.

The other reason is the difficulty of protecting and recovering applications in virtual environments. Distributing applications across VMs or across both VMs and physical servers further strains the backup and recovery systems. Finally, VMs can be easily moved from one physical server to another, using applications like VMware’s (NYSE: VMW) VMotion, which makes them more difficult to track and back up.

Mohan recommended that CIOs consider restructuring their data backup and disaster recovery strategies as soon as they begin to virtualize. In the traditional backup approach, where perhaps 20 virtual machines are running on one physical server, IT would have to back up each of those VMs individually, as well as take a snapshot of the entire environment, in order to recover one file or a number of files with a data protection product, Mohan said.

Symantec’s flagship enterprise product, NetBackup, offers a new approach: it lets users take just one snapshot of the environment (instead of many) and perform granular recovery of individual files from that single snapshot image.
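As a rough illustration of the contrast Mohan describes (a sketch only, not Symantec’s actual implementation; every function, path and VM name here is hypothetical), the snippet below compares scheduling one backup job per VM with taking a single host snapshot that keeps a per-VM file index for granular restores.

```python
# A sketch, not Symantec's implementation: contrasts one backup job per VM
# with a single host snapshot plus a file index for granular restores.
# All function, path, and VM names below are hypothetical.
from datetime import datetime


def per_vm_backup(vms):
    """Traditional approach: schedule one backup job per VM on the host."""
    return [{"vm": vm, "type": "full", "time": datetime.now()} for vm in vms]


def host_snapshot(vms):
    """Single image-level snapshot of the host, with an index of each VM's files."""
    return {
        "type": "host-snapshot",
        "time": datetime.now(),
        "file_index": {vm: [f"/vmfs/{vm}/{vm}.vmdk"] for vm in vms},
    }


def restore_file(snapshot, vm, path):
    """Granular recovery: pull a single file out of the one snapshot image."""
    if path in snapshot["file_index"].get(vm, []):
        return f"restored {path} from snapshot taken at {snapshot['time']}"
    raise FileNotFoundError(path)


vms = [f"vm{i:02d}" for i in range(20)]
print(len(per_vm_backup(vms)), "separate backup jobs, versus 1 host snapshot")
snap = host_snapshot(vms)
print(restore_file(snap, "vm03", "/vmfs/vm03/vm03.vmdk"))
```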

This sort of granular recovery capability is getting more important as virtualization moves from development and testing labs to production environments where transaction-intensive applications are being used.

“Before, people were virtualizing print and other servers and testing and development, where losing data wasn’t that important, or consolidating legacy applications into smaller, newer servers,” said 451 Group analyst Henry Baltazar. “Now, they’re moving into e-mail servers and transaction-oriented applications, where problems get magnified.”

Enterprises that handle data directly in their VMs and servers, and give data availability priority, “may well have to rethink their data backup and recovery infrastructure,” said Scott Crawford, research director at analyst firm Enterprise Management Associates. That’s because the data will be lost when those VMs or servers crash.

That problem doesn’t arise if the enterprise has its data stored in virtualized file systems or on network storage. The data can still be accessed even if the VM or server crashes because it’s stored separately.

Dealing with Sensitive Data

Virtualization raises another problem — the VM image itself may have sensitive data an enterprise needs to protect. Data architects will “probably be forced to re-think how they manage data in transitioning to virtualized environments and what that means for data storage, backup and recovery in those environments,” Crawford said.

Michael Bilancieri, director of products at Marathon Technologies, which provides high-availability software for physical and virtual servers, said it’s critical for enterprises to know what’s being provisioned so IT can ensure everything is backed up. “A lot of this is understanding where the virtual machines are, what virtual machines are out there, then having the tools to back them up,” he said.
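A toy sketch of the inventory check Bilancieri describes (all host and VM names are made up) might look like the following: enumerate what exists, note which VMs are already covered by a backup job, and flag the rest.

```python
# Hypothetical sketch of the inventory problem Bilancieri describes: know
# which VMs exist, which host each runs on, and which have no backup policy.
vm_inventory = {
    "esx-host-01": ["mail-01", "web-01", "test-04"],
    "esx-host-02": ["db-01", "web-02"],
}
backed_up = {"mail-01", "db-01", "web-01", "web-02"}  # VMs covered by a backup job

unprotected = [
    vm
    for host, vms in vm_inventory.items()
    for vm in vms
    if vm not in backed_up
]
print("VMs with no backup policy:", unprotected)  # ['test-04']
```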

Companies such as Surgient and Embotics provide tools to manage VM sprawl and track VMs in the IT infrastructure. Marathon itself will soon offer a product from InMage that will let users implement continuous data protection (CDP) at the hypervisor level.

CDP involves automatically saving a copy of every change made to data so IT can restore any previous version. If, for example, an application crashes or is infected by a virus at 4:00 p.m., it can be restored to the state it was in at 3:58 p.m. or earlier. The captured changes are written to a separate storage location, rather than to the server’s normal storage, so they remain safe even if the server itself fails.
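A minimal sketch of the CDP idea, assuming nothing about InMage’s or Marathon’s actual products: every change is journaled with a timestamp, and restoring to a point in time means replaying only the changes made up to that moment.

```python
# Minimal sketch of the CDP idea described above; hypothetical code, not a
# description of InMage's or Marathon's products.
from datetime import datetime, timedelta


class ChangeJournal:
    """Journals every change with a timestamp, away from primary storage."""

    def __init__(self):
        self.entries = []  # list of (timestamp, key, value) for every write

    def record(self, when, key, value):
        self.entries.append((when, key, value))

    def restore_as_of(self, when):
        """Rebuild state by replaying only the changes made up to `when`."""
        state = {}
        for ts, key, value in sorted(self.entries):
            if ts <= when:
                state[key] = value
        return state


journal = ChangeJournal()
t_clean = datetime(2008, 6, 1, 15, 58)           # 3:58 p.m., known-good state
journal.record(t_clean, "orders.db", "clean")
journal.record(t_clean + timedelta(minutes=2), "orders.db", "corrupted")

# Application is hit at 4:00 p.m.; roll back to the 3:58 p.m. state.
print(journal.restore_as_of(t_clean))  # {'orders.db': 'clean'}
```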

Already, some CIOs are thinking about restructuring their data backup and disaster recovery infrastructures, Mohan said. “Backup redesign is third on some CIOs’ lists, after tiered storage buildout and consolidation. This is the first time in the last 20 years that it’s been that important.”

Article courtesy of InternetNews.com
