The Backup Conundrum: More Data in Less Time



Implement Proxy-based Backups

Hunter also offers a fourth suggestion: proxy-based backups, which require a SAN with snapshot-capable storage arrays. He explains the process like this: snapshots are taken of the volumes used by the application servers; a proxy server (a dedicated backup server) then mounts those snapshots and moves the data off to tape. This data movement happens without involving the application server at all.

Because the application server is freed from the backup workload, the backup runs more smoothly. “Proxy-based backups have the added advantage that they can be performed at a file and incremental level, thus further speeding up the backup,” says Hunter.
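The flow Hunter describes can be sketched in a few lines of Python. This is a minimal simulation, not a real SAN API: the function names (`take_snapshot`, `proxy_backup`) and the dict-of-blocks volume model are hypothetical illustrations of the snapshot-mount-copy sequence, including the incremental variant he mentions.

```python
# Sketch of a proxy-based backup flow. All names here are
# illustrative; a real array exposes snapshots via its own API.

def take_snapshot(volume):
    """Point-in-time, read-only copy of a volume's blocks."""
    # A real array does copy-on-write; a full copy stands in for it here.
    return dict(volume)

def proxy_backup(snapshot, tape, since=None):
    """Proxy server mounts the snapshot and streams it to tape.

    If `since` (a prior snapshot) is given, only blocks that changed
    are written -- the incremental backup Hunter mentions.
    """
    for block, data in snapshot.items():
        if since is not None and since.get(block) == data:
            continue  # unchanged since the last backup: skip it
        tape.append((block, data))

# The application server's live volume...
volume = {"blk0": "a", "blk1": "b"}

snap1 = take_snapshot(volume)
volume["blk1"] = "b2"   # app writes continue; the snapshot is unaffected

tape = []
proxy_backup(snap1, tape)                # full backup, done by the proxy
snap2 = take_snapshot(volume)
proxy_backup(snap2, tape, since=snap1)   # incremental: only blk1 changed
```

The key point the sketch illustrates is that every loop over blocks runs in `proxy_backup`, i.e. on the proxy; the application server only ever triggers the (near-instant) snapshot.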

Backing up data quickly and efficiently while minimizing the resources consumed continues to pose a conundrum for many IT organizations. As data grows in volume, it becomes increasingly difficult to perform backups in what many consider ‘a reasonable amount of time.’ To many it’s a simple math problem: the more data there is, the more time it takes to back it all up. So, what’s an IT organization to do?
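That “simple math problem” is easy to put in numbers. The figures below are assumptions chosen for illustration (10 TB of data, a sustained 200 MB/s to the backup target), not anything from the article:

```python
def backup_hours(data_tb, throughput_mb_per_s):
    """Naive backup-window estimate: total data divided by sustained throughput."""
    data_mb = data_tb * 1024 * 1024          # TB -> MB (binary units)
    return data_mb / throughput_mb_per_s / 3600

# 10 TB at a sustained 200 MB/s:
print(f"{backup_hours(10, 200):.1f} hours")  # -> 14.6 hours
```

At those rates a full backup no longer fits an overnight window, which is exactly why techniques that shrink the data moved (incrementals, archiving) or shift the work off the application server (proxies) matter.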

Part II of this two-part series will offer several additional suggestions for backing up data in less time, including: deploying data archiving systems, deploying gigabit Ethernet, consolidating backups to a data center with data replication, using a dedicated gigabit Ethernet backup LAN, creating network level data mirroring, and using shared higher speed block-based storage devices.

» See All Articles by Columnist Leslie Wood
