Defining the Future of DR Storage

More and more workloads are being shunted off to the cloud, and it appears that the days of maintaining an arsenal of in-house hardware are over. Gone, too, will be expensive offsite mirror Disaster Recovery (DR) facilities – at least for all but the largest, richest and highest-end businesses. So what does this mean for the storage manager?

The future of DR appears to be moving steadily away from the primary-site/recovery-site concept. It is gradually being replaced by the ability to migrate or burst workloads seamlessly from site to site. As the cloud gains ground, who owns the sites involved is becoming less of an issue. Some may be customer owned – a data center, a private cloud, a hosted data center or a colocation facility – while others may be completely in the hands of an outside party. The key is that data must be able to shift dynamically on demand between the various sites while maintaining always-on availability.

Sometimes companies set things up this way purely for DR purposes, but this kind of loosely coupled arrangement enables them to do other things as well.

“This is being accomplished for resiliency, peak demand and customer proximity reasons,” said Rachel Dines, Senior Product Marketing Manager, SteelStore, NetApp. “The cloud tends to be relatively inexpensive for DR. It is a good way to extend data protection while adding services such as deduplication, compression, differential snapshots and replication.”

She estimates that these data reduction techniques can reduce backup and DR footprints by up to 30x. This is an important point. Gone are the days when organizations could live with having dozens of backups of essentially the same data sitting on tapes both in-house and at an offsite tape storage depot. Similarly, with the sheer volume of unstructured data that exists, it makes no sense to store 500 copies of the same keynote presentation or video.
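
To make the arithmetic concrete, here is a minimal, hypothetical sketch of the content-hash deduplication that underpins such reduction ratios. The `DedupStore` class and its fixed 4 KB chunk size are illustrative assumptions, not any vendor's implementation:

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: identical chunks are kept only once."""

    def __init__(self):
        self.chunks = {}   # SHA-256 digest -> chunk bytes
        self.logical = 0   # bytes clients asked to store
        self.physical = 0  # bytes actually kept

    def put(self, data: bytes, chunk_size: int = 4096) -> list:
        """Store data chunk by chunk; return the 'recipe' of digests
        needed to reassemble the object later."""
        recipe = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in self.chunks:   # new content: store it once
                self.chunks[digest] = chunk
                self.physical += len(chunk)
            self.logical += len(chunk)      # counted on every copy
            recipe.append(digest)
        return recipe

    def get(self, recipe) -> bytes:
        return b"".join(self.chunks[d] for d in recipe)

store = DedupStore()
keynote = b"the same keynote presentation" * 1000  # a ~29 KB "file"
for _ in range(500):                               # 500 identical copies
    recipe = store.put(keynote)

print(f"reduction: {store.logical / store.physical:.0f}x")  # 500x here
assert store.get(recipe) == keynote                # data is recoverable
```

Real systems add variable-size chunking, collision handling and garbage collection, but the principle is the same: store each unique chunk once and keep only lightweight per-file recipes.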

What we are looking at, therefore, is a much leaner DR data set, despite there being far more data to protect. The game is to reduce the volume of data to as close as possible to one true copy, regularly update that copy with changes to maintain a continually consolidated enterprise data set, and then achieve resilience by keeping copies of it onsite, in the cloud and ideally in a separate cloud repository. Some cloud providers offer this as a value-added service: two copies of the data they back up and store for you, held in separate locations.
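
A rough sketch of how that update cycle might look follows below; the fixed-size blocks, SHA-256 digests and three target names are all illustrative assumptions rather than any provider's mechanism:

```python
import hashlib

def block_digests(data: bytes, size: int = 4096) -> dict:
    """Map block index -> SHA-256 digest over fixed-size blocks."""
    return {
        i // size: hashlib.sha256(data[i:i + size]).hexdigest()
        for i in range(0, len(data), size)
    }

def changed_blocks(old: dict, new: dict) -> set:
    """Block indexes whose content changed (or appeared) since last cycle."""
    return {idx for idx, digest in new.items() if old.get(idx) != digest}

# One consolidated data set, three independent copies: an onsite appliance,
# a primary cloud repository and a second, separate cloud repository.
targets = ["onsite-appliance", "cloud-primary", "cloud-secondary"]

data = bytearray(b"x" * 4096 * 64)        # a 64-block data set
previous = block_digests(bytes(data))

data[10_000:10_011] = b"new content"      # a small, localized update
current = block_digests(bytes(data))

delta = changed_blocks(previous, current)
for target in targets:
    # In practice this would be an API call to each repository; here we
    # just show how little has to move on each cycle.
    print(f"push {len(delta)} of {len(current)} blocks to {target}")
```

Only the one block touched by the update travels to each site, which is what makes keeping three synchronized copies affordable.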

“We are seeing a focus on resource consolidation and lowered Capex,” said Robert Amatruda, product marketing manager, data protection, Quest Software. “Many companies are leveraging the efficiencies of the cloud to erect a DR framework that was previously beyond their resources.”

He believes that it is all about efficiency and scale. Instead of trying to manage everything in house, storage managers will use pointers to content indexes so they can resurrect data. But he doesn't think enterprises will necessarily go all in with the cloud. Rather, most will leave themselves with multiple options inside and outside the data center – storing and backing up data to the cloud, for instance, while keeping a copy onsite on some combination of servers and appliances.
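
As a rough illustration of that pointer idea – the `ContentIndex` class, site names and object IDs below are hypothetical, not a description of any Quest product:

```python
from dataclasses import dataclass, field

@dataclass
class Pointer:
    site: str        # e.g. "onsite", "cloud"
    object_id: str   # opaque ID within that repository

@dataclass
class ContentIndex:
    """Maps each logical path to every location holding a copy."""
    entries: dict = field(default_factory=dict)

    def record(self, path: str, ptr: Pointer) -> None:
        self.entries.setdefault(path, []).append(ptr)

    def resurrect(self, path: str, reachable: set) -> Pointer:
        """Return a pointer to any reachable copy, preferring onsite."""
        candidates = [p for p in self.entries.get(path, [])
                      if p.site in reachable]
        if not candidates:
            raise LookupError(f"no reachable copy of {path}")
        candidates.sort(key=lambda p: p.site != "onsite")  # onsite first
        return candidates[0]

index = ContentIndex()
index.record("/finance/q3-report.xlsx", Pointer("onsite", "blk-93e1"))
index.record("/finance/q3-report.xlsx", Pointer("cloud", "obj-41aa"))

# Normal day: restore from the onsite copy.
print(index.resurrect("/finance/q3-report.xlsx", {"onsite", "cloud"}))
# Disaster takes out the data center: recovery falls through to the cloud.
print(index.resurrect("/finance/q3-report.xlsx", {"cloud"}))
```

The index itself is small enough to replicate everywhere, so recovery can begin from whichever copy of the catalog survives.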

While some much-trumpeted companies have gone all-cloud, their numbers are relatively few. The reason is simple: there is just too much investment in on-premises gear and in-house talent. It's not easy to walk away from all that, even if the cloud provided all the security, performance and resilience an enterprise could ever require. This is giving rise to a variety of products and services, said Amatruda, designed to act as a bridge between legacy and cloud architectures.

“Instead of ensuring that data is recoverable, more organizations are concerned with having an always-on architecture, whereby resiliency is built directly into the architecture itself,” said Amatruda. “You’re seeing more products deal with cloud connection capabilities so that users can manage data outside the walls of their physical data center.”

His view of the future of DR is that redundant data has to be all but eliminated if storage is to become more scalable, regardless of whether the data center is physical or wholly virtual and primarily in the cloud.

“The cloud is creating a level of scalability that didn’t exist in the data centers of the past,” said Amatruda.

Next Moves

Some users may have the impression that they should get ready for immense change or buy into the latest vendor messaging about their specific DR environment. But Greg Schulz, an analyst with Server StorageIO Group, advises against haste or a rip-and-replace approach.

In the face of a rapidly shifting technology landscape, he recommends caution. The cloud may be attractive, but the value proposition will be there for some and less so for others. It all depends on the existing architecture, personnel resources, security and regulatory requirements, and many other factors. The important thing, Schulz advised, is to take a long hard look at when, where, with what, and for how long various types of data are being protected, then contrast that with whether those approaches are actually meeting the needs of the business. Don't assume what those needs are. Find out from as high an echelon as possible, preferably C-level executives or business unit heads. Only by doing so is it possible to strategize and spend wisely. It's all about aligning the right tools, technologies and techniques to the most important problems the business faces.

“Take a step back from the focus on the tools and technologies so you are able to view how they can protect and preserve information in a way that is of greatest value to the business,” said Schulz.

Drew Robb is a contributing writer for Datamation, Enterprise Storage Forum, eSecurity Planet, Channel Insider, and eWeek. He has been reporting on all areas of IT for more than 25 years. He has a degree from the University of Strathclyde in the UK and lives in the Tampa Bay area of Florida.
