5 Top Data Recovery Trends 


Backup is vital. But recovery has become even more so. 

Companies no longer have the faith they once had in backup schedules that were supposed to reassure them that their data would be available to them should they ever need it.

See below for some of the top trends in the data recovery market:  

1. Raised profile of recovery 

Databases and enterprise resource planning (ERP) systems, such as SQL Server, SAP HANA, Oracle, and MaxDB, are at the heart of IT operations for many companies. 

For these essential systems, simply backing up data is not sufficient to meet the recovery time objectives (RTOs) and recovery point objectives (RPOs) they demand. 

“IT teams are now using clustering solutions that fail over across geographically separated nodes or cloud regions and availability zones to ensure low recovery time and recovery point objectives can be met,” said Ian Allton, solutions architect, SIOS Technology. 
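The failover decision Allton describes hinges on replication lag: a standby node is only useful if its copy of the data is recent enough to satisfy the RPO. The sketch below is a hypothetical illustration of that selection logic (the node names, timestamps, and 30-second target are all assumptions, not any vendor's implementation):

```python
from datetime import datetime, timedelta

# Assumed target: lose at most 30 seconds of data on failover.
RPO = timedelta(seconds=30)

def choose_standby(nodes, now):
    """nodes: dict mapping node name -> last successfully replicated timestamp.
    Returns the most up-to-date node whose lag satisfies the RPO, else None."""
    eligible = {name: now - ts for name, ts in nodes.items() if now - ts <= RPO}
    if not eligible:
        return None  # no standby meets the RPO; failover would violate it
    return min(eligible, key=eligible.get)  # smallest lag = least data loss

now = datetime(2023, 1, 1, 12, 0, 0)
nodes = {
    "zone-b": now - timedelta(seconds=12),    # nearby availability zone
    "region-2": now - timedelta(seconds=95),  # remote region, lags too far
}
print(choose_standby(nodes, now))  # -> zone-b
```

A real clustering product layers health checks, quorum, and automated promotion on top of this kind of lag comparison, but the RPO constraint is the core of why geographically separated nodes can still deliver tight recovery objectives.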

2. Repatriating cloud storage 

The rush to the cloud was assisted by many special offers, discounts, and free packages. 

And as the promise of low-cost storage has run aground on the shoals of complexity, some are now moving data from the cloud back in-house to be more in control of its recoverability. 

“As the first-year free programs end, IT professionals counting on the cloud to store all their backup and archive data are now looking at moving data back on-premises,” said George Crump, chief product strategist, StorONE. 

“Storing data in the cloud, especially static data, is expensive.” 
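Crump's point is easy to see with back-of-envelope arithmetic. The rates below are illustrative assumptions, not any vendor's actual pricing, but they show why static data that sits untouched for years racks up recurring cost, and why pulling it back out adds an egress bill on top:

```python
# Rough cost of parking static backup data in cloud object storage.
# All rates are assumed for illustration only.
capacity_tb = 500
cloud_per_tb_month = 21.0  # assumed $/TB-month for standard object storage
egress_per_tb = 90.0       # assumed $/TB to transfer data back out
months = 36

cloud_total = capacity_tb * cloud_per_tb_month * months
one_restore = capacity_tb * egress_per_tb

print(f"3-year cloud storage: ${cloud_total:,.0f}")   # -> $378,000
print(f"One full restore (egress): ${one_restore:,.0f}")  # -> $45,000
```

The recurring per-month charge is the key term: unlike a capital purchase of on-premises disk, it never stops accruing for data that never changes.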

Crump advocates for highly dense, cost-effective storage solutions that can also provide long-term data resiliency. His advice is that IT needs to look for solutions that can support 20 TB+ hard disk drives (HDDs) without suffering through week-long drive recovery efforts, while also providing storage that is resilient against ransomware. 
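The drive-recovery concern is a matter of simple throughput math: a conventional rebuild must rewrite the entire replacement drive, so it is bounded by single-drive write speed. A quick estimate (the sustained rate is an assumption; real rates vary with workload) shows why 20 TB drives strain traditional RAID:

```python
# Estimate how long a conventional one-to-one rebuild of a large HDD takes.
# The sustained rate is an assumption for a drive also serving live I/O.
drive_tb = 20
rebuild_mb_per_s = 100  # assumed sustained write rate during rebuild

seconds = drive_tb * 1_000_000 / rebuild_mb_per_s
print(f"~{seconds / 86400:.1f} days per drive")  # -> ~2.3 days
```

Multiply that by degraded performance and the risk of a second failure mid-rebuild, and the appeal of schemes that spread rebuild work across many drives becomes clear.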

3. Bigger big data

The amount of data we produce continues to increase. 

The growth of big data — global data volumes are now measured in hundreds of zettabytes — means it can no longer be backed up with the approaches that have been used for decades. 

The problem is so difficult that enterprises will often forego proper backup when large amounts of data are involved. When that happens, there is no recovery in the event of loss or corruption.

“When the scales reach billions of files or more and petabytes of data or more, backup no longer works,” said Jason Lohrey, CEO, Arcitecta. 

“Data resilience must, and will, become an integral part of the file system and data fabrics.” 

4. Eggs in many baskets 

There has been a tendency to put all backup data on tape, all of it in the cloud, or all of it on disk. 

But there is a growing need for diversity in backup and recovery media, locations, and providers. 

Indeed, the 3-2-1 system has long advocated this — make three copies of your data, store them on two different kinds of storage media, and keep one copy off site. Sensible advice. Yet, many have ignored it. 
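The 3-2-1 rule is mechanical enough to check programmatically. This minimal sketch represents each backup copy as a hypothetical (medium, site) pair — not any real tool's inventory format — and tests the three conditions directly:

```python
# Minimal 3-2-1 check: at least 3 copies, on at least 2 media types,
# with at least 1 copy off site. Copy records are hypothetical
# (medium, site) pairs for illustration.
def satisfies_3_2_1(copies):
    return (len(copies) >= 3
            and len({medium for medium, _ in copies}) >= 2
            and any(site == "offsite" for _, site in copies))

copies = [("disk", "onsite"), ("tape", "onsite"), ("cloud", "offsite")]
print(satisfies_3_2_1(copies))  # -> True

# Three copies on the same medium fail the "two media" condition:
print(satisfies_3_2_1([("disk", "onsite"), ("disk", "onsite"),
                       ("disk", "offsite")]))  # -> False
```

Expressed this way, the rule's point is obvious: each condition removes a different single point of failure — too few copies, a shared media-level fault, or a site-wide disaster.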

“Long-term backup will move from entirely tape (A and B copies) to either a combination of tape (A copy) and cloud (B copy) or entirely cloud (A copy only), using the cheapest possible storage,” said Lohrey with Arcitecta. 

“Those concerned with diversity will use a mix of storage technologies and vendors to avoid having all eggs in the one basket. That may mean having the A copy in one cloud and the B copy in another. If one fails, then recovery can occur from the other.”   

5. Metadata grows in importance 

Metadata used to have a relatively minor role in file management and storage. 

But as time goes on, developers are finding more ways to harness metadata to improve performance, enhance searchability and analytics, and provide more sophisticated features for storage, backup, and recovery. 

More recently, metadata has been seen as a way to improve resiliency and drive faster recovery. Modern systems contain far more metadata details than ever. These can be used to cross-reference data, analyze it, search it, and more. 

“There will be a focus on the use of metadata to drive the placement of data copies in order to meet data resilience and recovery objectives,” said Lohrey with Arcitecta.

“Metadata will be used to automatically minimize the overall costs for a given recovery objective.” 
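Lohrey's cost-minimization idea can be sketched as a simple constrained choice: among candidate storage tiers, pick the cheapest one whose restore time still meets the recovery objective. The tier names and figures below are assumptions for illustration, not Arcitecta's actual mechanism:

```python
# Metadata-driven placement sketch: choose the cheapest tier whose
# restore time satisfies the recovery objective. Tier names, costs,
# and restore times are assumed values for illustration.
tiers = {
    "archive-tape": {"cost": 2.0,  "restore_hours": 48},
    "cloud-cool":   {"cost": 10.0, "restore_hours": 8},
    "onprem-disk":  {"cost": 25.0, "restore_hours": 1},
}

def place(recovery_objective_hours):
    eligible = {name: t for name, t in tiers.items()
                if t["restore_hours"] <= recovery_objective_hours}
    if not eligible:
        return None  # no tier can meet this objective
    return min(eligible, key=lambda n: eligible[n]["cost"])

print(place(12))  # -> cloud-cool (cheapest tier meeting a 12-hour objective)
print(place(72))  # -> archive-tape (everything qualifies, so pick cheapest)
```

In a real system the per-file metadata (size, access history, criticality) would feed the objective itself, so different data lands on different tiers automatically — which is exactly the automation Lohrey predicts.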

Drew Robb
Drew Robb is a contributing writer for Datamation, Enterprise Storage Forum, eSecurity Planet, Channel Insider, and eWeek. He has been reporting on all areas of IT for more than 25 years. He has a degree from the University of Strathclyde in the UK and lives in the Tampa Bay area of Florida.
