Finding a Disaster Recovery Solution That Won't Break the Bank



It's been observed throughout human history that you generally have to get kicked in the teeth before you see value in the art of self-defense. It certainly seems to be true in the case of disaster recovery, where vendors see far more interest in their DR wares immediately following events such as 9/11, Hurricane Katrina, or more recently, floods and tornadoes.

"We see spikes of interest from time to time when an event occurs, and that does drive more communication with our customers," said Brian Regan of IBM's information protection services unit.

Recent alerts from the National Oceanic and Atmospheric Administration (NOAA) to expect a normal or near-normal hurricane season certainly didn't result in businesses suddenly rushing to dust off those neglected purchase orders for the state-of-the-art business continuity (BC) setup they began to scope out in the aftermath of Katrina. A recent study by Aberdeen Group found that 34 percent of companies have yet to implement any kind of DR solution. Of the remaining 66 percent, 25 percent don't perform regular disaster plan tests. And down in the SME sector, Jeffrey Hill, an analyst with Aberdeen Group, reported that nearly half of companies in the 100 to 1,000 employee category don't have a BC/DR plan in place.

Cost or Complacency?

The situation isn't solely a matter of complacency. Many of the companies involved want to be able to recover rapidly in the face of an event; they just can't afford it. Say you wanted to set up a large-scale backup/recovery architecture such as the one used by Salesforce.com. Salesforce has to have the type of backup and DR platform that doesn't lose a single transaction no matter what happens. That takes a total of nine copies of the data, each with a different recovery point. The company's complex array of tape libraries, disk arrays, servers and databases cost around $20 million. Continuous data protection (CDP) is achieved via regular shadow images of production data shared between mirrored SANs on separate coasts. Oracle databases are also continuously protected. Recovery tiers for other systems range from four hours all the way up to 48 hours for the least critical.

Of course, not everyone needs that level of protection. Accordingly, multiple degrees of DR have evolved, ranging from the "Remain live or die" category through more prioritized approaches on down to economy class.

Let's look at what may be considered basic — some kind of regular backups occurring along with a plan to recover systems and data within a reasonable time. It's important to differentiate these two.

"It's hard to imagine that a company wouldn't have a data backup system, but just backing up data alone doesn't constitute a disaster recovery strategy," said Hill.

Yet as the Aberdeen numbers show, many companies don't make it beyond backup to instituting some kind of DR plan. At the other extreme, there are those so fed up with the hassles of backup technology that they have implemented DR without any underlying backup setup at all.

The Department of Consumer Affairs of the City of New York, for instance, uses StorageX from Brocade specifically for DR. Its main office in Manhattan stores data on an EMC Celerra NS500 NAS box; its other major site in Queens has another NAS filer. Users in Queens write directly to the Manhattan EMC box, and everything in Manhattan is then replicated to Queens using Microsoft DFS. Brocade StorageX aggregates the file data into one logical file system, so if Manhattan goes down, users are directed to Queens automatically.

So confident was the IT department that it conducted the ultimate test: unplugging the EMC box in the middle of a workday. More surprising still, the department does not use a regular backup system at all. It relies on this DR setup to safeguard all its data.

"When we tested it, everything worked beautifully," said Matthew Miller, the department's LAN administrator. "The system failed over to the secondary location and users didn't notice a thing."
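The failover pattern the department relies on can be illustrated with a minimal sketch. This is a hypothetical model, not the department's actual configuration: the paths, share names, and `resolve` helper are all invented for illustration. The idea is that one logical namespace path maps to an ordered list of physical targets, and a client falls over to the next target when the primary is unreachable, much as DFS redirects users between replicated shares.

```python
# Hypothetical mapping of one logical namespace path to ordered
# physical targets, mimicking how a DFS-style namespace presents
# replicated shares behind a single path.
NAMESPACE = {
    r"\\dca\records": [
        r"\\manhattan-nas\records",   # primary (Manhattan filer)
        r"\\queens-nas\records",      # replica (Queens filer)
    ],
}

def resolve(logical_path, is_reachable):
    """Return the first reachable physical target for a logical path.

    `is_reachable` is injected as a callable so the failover logic
    can be demonstrated without real network shares.
    """
    for target in NAMESPACE[logical_path]:
        if is_reachable(target):
            return target
    raise OSError(f"no reachable target for {logical_path}")

# Simulate the department's test: the Manhattan box is unplugged,
# so resolution falls through to the Queens replica.
down = {r"\\manhattan-nas\records"}
path = resolve(r"\\dca\records", lambda target: target not in down)
```

Users keep addressing the same logical path throughout; only the physical target behind it changes, which is why the failover was invisible to them.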

Page 2: Bunker Mentality
