Disaster recovery used to be a costly affair. You built a duplicate site in some remote location with its own complete set of hardware, software and systems. You replicated your data to it so that in the event of a disaster, you could stay online. But that entailed complete system redundancy, with one set of equipment sitting idle 99.99% of the time. In these days of stringent budgeting, few can afford that ideal.
Here, then, are some tips for adding cloud services into your DR plan.
There are plenty of choices out there. As well as the companies listed above, we have the likes of Zerto, HotLink and Veeam, which have come out with sophisticated cloud DR services. A close look at all of these, however, will show many different “flavors” and pricing models. Some are great for bulk storage you are unlikely to need to access much. Others are better for replication, and of course, pricing models and rates vary wildly. What works out cheap and effective for one type of traffic may not be quite so good for something else.
Use the Cloud to Boost RPO/RTO
Recovery Time Objective (RTO) is the length of time you are willing to wait until your site has been recovered. Recovery Point Objective (RPO) is the amount of data you are willing to lose in the event of a disaster. Chris Schin, Vice President of Products at Zetta, pointed out that with traditional DR, the RPO only applies to the last backup set that was moved off site to a location outside the disaster zone. Any recent data backed up to tape or a flash drive won't help you if the media are still on site when the disaster hits.
“If you are using a physical storage medium to move your data off site, the disaster-scenario RPO is not determined by the parameters established by the backup software, but the frequency with which those tapes are sent to an offsite vault,” said Schin. “If that is done weekly, the RPO is one week, even if snapshots are taken hourly.”
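Schin's point boils down to simple arithmetic: the disaster-scenario RPO is bounded by the slowest step in the chain, not by the snapshot schedule alone. A minimal sketch (the function name and hour-based units are illustrative assumptions, not any vendor's API):

```python
# Sketch: worst-case data loss when recovery depends on offsite media.
# The effective RPO is the larger of the snapshot interval and the
# frequency with which media actually leave the building.

def disaster_rpo_hours(snapshot_interval_hours, offsite_shipment_interval_hours):
    """Return the worst-case data-loss window, in hours.

    Hourly snapshots don't help if tapes only go to the vault weekly:
    the shipment cadence dominates.
    """
    return max(snapshot_interval_hours, offsite_shipment_interval_hours)

# Hourly snapshots, weekly tape shipments -> one-week RPO, as Schin notes.
print(disaster_rpo_hours(1, 7 * 24))  # 168 hours
```

Cloud replication removes the shipment term from this equation, which is why it can tighten RPO without changing the backup software's schedule at all.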
Sash Sunkara, CEO of RackWare, encourages companies to test the waters by putting more than backups onto the cloud. She suggested throwing some entire workloads on there to see how they do.
“Yes, it is advantageous to use cloud infrastructure to back up your data,” said Sunkara. “Not only can you back up files and data, but you can move an entire workload (OS, applications and data) of a physical server directly into any cloud, such as Amazon Web Services, RackSpace, or SoftLayer.”
Consider All-Out Cloud DR
Some companies are making a play to become the cloud repository for all backup data. But Google goes further in pushing Google Apps as the way to run your business, storing all your data online and taking care of your DR into the bargain.
According to Rajen Sheth, Senior Product Manager, Google Apps, the RPO target of Google Apps is zero. It uses the cloud to replicate all your data behind the scenes. Others provide similar services while allowing you to stick with whatever business apps you prefer, as opposed to going all Google.
While the tech titans of this world promote their use of multiple data centers within their one all-encompassing cloud, David Zimmerman, CEO of LC Technology International, proposes using multiple cloud providers for DR to lower your risk.
“Don’t use a single cloud, use several to spread the risk and back them up to each other for redundancy,” said Zimmerman. “The cloud has enormous promise due to falling costs and increasing reliability, yet companies should still tread lightly.”
Another way to lower risk is to retain some kind of physical DR element. If an entire cloud gets targeted by hackers or you lose network access, a cloud-only strategy leaves all your eggs in one online basket – never a good idea.
“Firms should look at physical storage and other options to complement the cloud,” said Zimmerman. “Cloud storage is dependent on internet access, so critical systems should have some sort of on-premises storage solution as well.”
Target the Under-Protected
Sunkara made another good point. She noted that in many data centers, protected workloads tend to be categorized into two levels: Mission Critical, where workloads can't even afford to be down for a few minutes, and Low Priority, where workloads are archived, but may take hours or days to restore. The former is usually protected by very expensive and complicated high availability and clustering solutions which synchronize duplicated systems in real time. The latter tends to be protected by disk archive or tape backup; very inexpensive, but also slow to recover.
“All the workloads in-between these two categories, those that require a quicker time to recover, but where expensive replication systems are not required, have been under-protected,” said Sunkara. “A solution to these under-protected workloads is to use the cloud as a ‘virtual disaster recovery’ site, where changes to the workloads are synchronized periodically, and where the recovery systems can be brought online in a matter of minutes in the event of a planned or unplanned outage.”