The Backup Conundrum: More Data in Less Time


IT managers are always looking for new ways to back up more data in less time. Raw backup speed is one lever, but many have realized that performing backups without taking users offline reduces the impact of a backup just as effectively as making it faster. Even so, the problem persists: users still complain that backups consume too much time and too many resources. We asked a few industry experts what advice they can offer for achieving the goal of backing up more data in less time.

Point-in-Time Copies or Snapshots

One option is to perform backups from a mountable Snapshot, which means that the production data is never offline during a backup procedure. In other words, with mountable Snapshots, a backup window is no longer needed, according to Zophar Sante, vice president of marketing with SANRAD.

“Some storage solutions have the ability to take a Snapshot of a production volume,” says Sante. He explains that the Snapshot is a separate volume that appears to software as logically identical to the original production volume. “A Snapshot volume does not contain a copy of all the actual data blocks, but rather is a virtual representation of the production volume.”

When a Snapshot is taken, he says, it instantly generates pointers to and protects all the blocks on the production volume. And when the backup software mounts the Snapshot volume and starts reading blocks for the backup process, it is actually being pointed back to the production volume where it reads the blocks and backs them up.

“The key to this,” says Sante, “is that the production volume is also simultaneously still accepting read and write commands without having to be frozen for backup. It keeps running, serving both client read and write commands as well as read commands from the Snapshot volume for the backup process.”

You may be wondering what happens to blocks on the production volume that are overwritten while the backup is taking place. The answer, according to Sante, is that the storage system knows the production volume has an associated Snapshot volume. Before an old block is overwritten, he says, it is moved into the Snapshot volume.

The pointer that once referenced that block on the production volume now references the preserved copy in the Snapshot volume. “So at the time the Snapshot is taken, all the blocks are mapped and protected,” he says. Sante sums it up by noting that you can keep all production volumes online and open 24×7 and take backups through the Snapshot volume without any “backup window.”
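The copy-on-write behavior Sante describes can be sketched in a few lines of code. The following is a toy model, not SANRAD's implementation: the Snapshot begins as nothing but pointers back to the production volume, and old blocks are preserved in the Snapshot only when production writes would destroy them.

```python
class Volume:
    """Toy block device: a list of data blocks."""
    def __init__(self, blocks):
        self.blocks = list(blocks)

class Snapshot:
    """Copy-on-write snapshot: initially just pointers back to the
    production volume; preserved blocks fill in as production writes
    overwrite the originals."""
    def __init__(self, source):
        self.source = source
        self.preserved = {}          # block index -> snapshot-time data

    def read(self, i):
        # If the block was overwritten after the snapshot was taken,
        # read the preserved copy; otherwise follow the pointer back
        # to the live production volume.
        return self.preserved.get(i, self.source.blocks[i])

def write_with_cow(volume, snapshot, i, data):
    """Write to the production volume, preserving the old block in the
    snapshot first (copy-on-write)."""
    if i not in snapshot.preserved:
        snapshot.preserved[i] = volume.blocks[i]
    volume.blocks[i] = data

# The backup process reads through the snapshot while clients keep
# writing to the production volume.
prod = Volume(["a0", "b0", "c0"])
snap = Snapshot(prod)
write_with_cow(prod, snap, 1, "b1")                             # client write
assert [snap.read(i) for i in range(3)] == ["a0", "b0", "c0"]   # backup view
assert prod.blocks == ["a0", "b1", "c0"]                        # live view
```

The point of the sketch is Sante's key claim: the backup reads a frozen, snapshot-time view while the production volume keeps serving writes, so no backup window is required.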


Option Two: Consolidate Data onto a SAN

Another way to realize the goal of backing up more data in less time is to consolidate data onto a SAN. Peter Hunter, EqualLogic’s product marketing manager, maintains that moving data to a SAN not only reduces storage management overhead, but also allows for a plethora of backup solutions that result in reduced backup windows and increased service levels.

The key, says Hunter, is that the disk arrays, tape drives, application servers, and backup servers become connected over a storage network. Hunter says he personally recommends iSCSI SANs because of their low cost and familiar IP infrastructure, which makes the task of connecting all of the servers to a SAN much easier.
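As a rough illustration of how familiar that IP plumbing is, here is a hedged sketch of attaching a server to an iSCSI LUN using the standard Linux open-iscsi initiator, driven from Python. The portal address and target name are hypothetical, and a production setup would add CHAP authentication and multipathing.

```python
import subprocess

# Hypothetical iSCSI portal and target name; substitute real values.
PORTAL = "192.168.1.10"
TARGET = "iqn.2001-04.com.example:storage.backup-lun"

def run(cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Discover the targets the array's portal exposes.
run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])

# Log in to the target; the LUN then appears to the server as a local
# block device that backup software can address directly.
run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"])
```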

Disk-to-Disk or Disk-to-Disk-to-Tape?

A third way to achieve the goal, according to Sante, is disk-to-disk-to-tape backup. He says IT managers can back up to disk at 100 megabytes per second (MBps), versus 1 to 12 MBps for tape drives, and then archive that disk-based backup to tape.

“Disk storage systems are at least ten times faster than tape drives and don’t require streaming data to maintain that performance,” says Sante. “To decrease the backup window, users should back up files to disk systems.”

His rationale is based on the fact that servers burst data, making it difficult for tape drives to run smoothly due to their need for continuous data streams. By first backing up to disk you can eliminate the bottleneck of tape drives and back up files at disk speeds instead of tape speeds.
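A quick back-of-the-envelope calculation makes the point. Using the throughput figures quoted above (illustrative numbers; real rates depend on the hardware) and a hypothetical 500 GB backup set:

```python
# Rough backup-window math using the throughput figures quoted above.
# Rates are illustrative; real-world speeds vary by hardware.
DATA_GB = 500                      # hypothetical nightly backup set
DISK_MBPS, TAPE_MBPS = 100, 12     # megabytes per second

for name, rate in [("disk", DISK_MBPS), ("tape", TAPE_MBPS)]:
    hours = DATA_GB * 1024 / rate / 3600
    print(f"{DATA_GB} GB to {name} at {rate} MBps: {hours:.1f} hours")

# Output:
# 500 GB to disk at 100 MBps: 1.4 hours
# 500 GB to tape at 12 MBps: 11.9 hours
```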

“With the price of disk systems dropping and the introduction of IP-based SANs, it is now very affordable to use a shared disk storage system as the primary backup media for multiple servers and then to use tape as the archive media. Most backup software packages support disk-to-disk-to-tape options,” says Sante.

Hunter recommends taking it a step further by cutting tape out of the backup process entirely wherever possible. He considers the performance of tape drives and libraries a major bottleneck for backups and suggests implementing straight disk-to-disk backups. “Disk is cheaper and more reliable than tape,” says Hunter.

“By backing up to disk, backup times accelerate to disk speeds,” he says. However, he admits that most organizations are not ready to abandon tape so quickly; tape has certain durability and portability advantages over disk, to say nothing of cost. The good news, according to Hunter, is that disk backups are not necessarily a replacement for tape backups. “By staging backups to disk and then to tape, administrators streamline the backup process and create an additional backup resource,” he says.

Hunter explains that the actual transfer to tape is also faster because the application server is not involved in that stage, and the steady stream from disk avoids the “shoe-shining” problem of tape backups, in which a drive starved of data must repeatedly stop, rewind, and reposition. “For most failures, administrators can recover from this disk repository, improving recovery times,” he says.
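A minimal sketch of the staging idea follows. It is file-level and deliberately simplified (real backup packages handle cataloging, retention, and the tape transfer natively), and the paths are hypothetical: servers back up to a fast disk repository first, and the repository is archived to tape afterward.

```python
import shutil, tarfile
from pathlib import Path

DISK_REPO = Path("/backup/disk-repo")    # fast shared disk (hypothetical path)
TAPE_DEV = "/backup/tape-archive.tar"    # stand-in for a tape archive

def stage_to_disk(source_dir, server_name):
    """Step 1: copy a server's data to the disk repository at disk speed.
    Most restores come straight from this copy."""
    dest = DISK_REPO / server_name
    shutil.copytree(source_dir, dest, dirs_exist_ok=True)
    return dest

def archive_to_tape():
    """Step 2: stream the whole repository to tape afterward, without
    involving the application servers, so the drive receives a
    continuous stream and avoids shoe-shining."""
    with tarfile.open(TAPE_DEV, "w") as tape:
        tape.add(DISK_REPO, arcname="disk-repo")
```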


Implement Proxy-based Backups

Hunter also offers a fourth suggestion: proxy-based backups, which require a SAN with Snapshot-capable storage arrays. He explains the process like this: Snapshots are taken of the volumes used by application servers; a proxy server (a kind of backup server) mounts the Snapshots and moves the data off to tape. This data movement happens without involving the application server.

The application server is not burdened with the backup, and the backup is performed more smoothly. “Proxy-based backups have the added advantage that they can be performed at a file and incremental level, thus further speeding up the backup,” says Hunter.
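The division of labor can be modeled in a short sketch. The classes and volume contents below are toys, not any vendor's API: the array produces an instant point-in-time copy, and the proxy, not the application server, walks it to produce a file-level, incremental backup.

```python
class Array:
    """Toy storage array: each volume is a dict of path -> file data."""
    def __init__(self, volumes):
        self.volumes = volumes

    def create_snapshot(self, name):
        # Point-in-time copy; on a real array this is instant and
        # costs the application server nothing.
        return dict(self.volumes[name])

def proxy_backup(snapshot, last_backup, tape):
    """Runs on the proxy server, never touching the application server.
    File-level and incremental: only files that changed since the
    last backup are written to tape."""
    for path, data in snapshot.items():
        if last_backup.get(path) != data:
            tape.append((path, data))
    return snapshot                  # becomes the new "last backup" state

array = Array({"vol1": {"/db/a": "v2", "/db/b": "v1"}})
snap = array.create_snapshot("vol1")
tape = []
last = proxy_backup(snap, {"/db/a": "v1", "/db/b": "v1"}, tape)
print(tape)                          # [('/db/a', 'v2')] -- only the change
```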

Backing up data quickly and efficiently while minimizing the resources consumed continues to pose a conundrum for many IT organizations. As data grows in volume, it becomes increasingly difficult to perform backups in what many consider ‘a reasonable amount of time.’ To many, it’s a simple math problem: the more data there is, the longer it takes to back it all up. So, what’s an IT organization to do?

Part II of this two-part series will offer several additional suggestions for backing up data in less time, including: deploying data archiving systems, deploying gigabit Ethernet, consolidating backups to a data center with data replication, using a dedicated gigabit Ethernet backup LAN, creating network level data mirroring, and using shared higher speed block-based storage devices.

Leslie Wood is an Enterprise Storage Forum contributor.
