The New Era of Secondary Storage HyperConvergence


This guest post is authored by Jim Whalen, Senior Analyst, Taneja Group

The rise of hyperconverged infrastructure platforms has driven tremendous change in the primary storage space, perhaps even greater than the move from direct attached to networked storage in decades past.  Now, instead of discrete, physically managed components, primary storage is being commoditized, virtualized and clustered, with the goal of providing a highly available virtual platform to run applications on, abstracted away from the individual hardware components themselves.  This has provided dramatic benefits to IT, allowing them to reduce costs, shift their focus towards running business applications and spend less time managing storage and software.

While HyperConvergence has had a noteworthy, positive impact on the primary storage space, its effect on secondary storage has been mixed. For decades, secondary storage usage was driven primarily by data protection and DR, both relying on tape as the secondary medium. That changed with the advent of data deduplication and SATA HDDs in the 2005 timeframe, a fundamental shift that brought the speed and random-access nature of HDDs into play and reduced the need for tape. Major improvements in RTO/RPO were achieved, and the reliability of backups and restores improved by an order of magnitude. But the use case was still primarily data protection and DR.

At the same time, server virtualization arrived. This provided an additional boost to data protection/DR by allowing efficient snapshotting of virtual machines and their storage images, along with the ability to migrate those snapshots to other virtualized servers for backup and to spin them up as clones for DR. In conjunction with low-cost SATA HDDs, it became feasible to keep copies of VMs running on other machines for DR purposes. No longer did users have to go through the tedious and error-prone process of finding, restoring and rebuilding a backup from tape to resume operations; they could simply fail over to a running VM, or quickly start up another clone of a failed VM.

Additionally, the advent of the cloud provided users with even more options for efficiently and flexibly protecting their data and recovering from failures. They could now copy critical information to the cloud on a pay-for-use basis for long-term archiving and storage, and they could also use the cloud as their DR solution by maintaining warm/hot VMs or purchasing one of the Disaster Recovery as a Service (DRaaS) offerings that many cloud vendors made available.

With some notable exceptions, HyperConvergence pushed this secondary storage evolution forward by incentivizing the move towards virtualization, further enabling the automatic creation and copying of snapshots, and making it even easier to spin up and migrate clones, but it didn't fundamentally change it. Other, growing uses of secondary storage were left almost untouched. DevOps was still a separate activity and often used stale copies of production data, because that's all one could expect. Often, testing was separate, development was separate and production was separate. Copies proliferated. Things became unmanageable. The whole environment was error-prone and inherently inefficient. Adding to the complexity, data warehousing was also an entirely separate entity, using its own set of data that was often days or months old. Then, as if that weren't enough, came the more recent advent of Hadoop and its own use of secondary data. It, too, emerged as a completely separate discipline, with data moved into specialized Hadoop clusters and analyzed by data scientists for business units. All of these worlds were as distinct as one could imagine. It would've been comical if it hadn't been so painful.

In recent years, a number of innovative companies have tried to bring some order to all this chaos: Actifio, with copy data management and, more recently, automated DR; Zerto, with replication and DR with incredibly low RTOs/RPOs; Data Domain, with deduplicated storage and backup appliances; Hewlett Packard, with their integrated HP 3PAR StoreServ/HP StoreOnce Backup combination; and cloud-based DR providers such as Axcient, EVault and Unitrends. But the large majority of their focus is still on resolving data protection issues. There's nothing wrong with that, but the big question is: can many of these other secondary use cases be brought under one umbrella, such that data is both protected and used most effectively, with efficiency along multiple axes (capacity, cost, bandwidth, and performance in terms of both latency and throughput)?

Bringing HyperConvergence to secondary storage will require technology analogous to what HyperConvergence required on the primary side: scale-out is mandatory, as is a web-scale distributed file system that can handle all of these disparate workloads while still maintaining consistency and performance. But in many ways, it's a bigger problem to solve. Hyperconverging primary storage, in very simplistic, first-order terms, means making the primary flash and HDD tiers look like one big, fast disk to any VM running on the system. That is certainly not a trivial problem, especially when considering all of the details behind it, but it is a focused one. Now consider secondary storage: estimates place as much as 80–90% of an organization's data there, and much of it is 'dark data' that is not being intensively used and is generally not well understood. So, in addition to abstracting and scaling the underlying physical storage, hyperconverging secondary storage also means dealing with a much larger volume of redundant, opaque data that needs to be used productively by a variety of non-tier-1 workloads, while still providing seamless, cost-effective protection for primary storage. That's a big job.

Secondary Storage HyperConvergence

Enter Secondary Storage HyperConvergence: a new category of storage that we define as resting on the same sort of foundational scale-out, distributed file system provided by hyperconverged primary storage, but one that also tightly integrates secondary use cases that may include some or all of the following: Data Protection, DevOps and Analytics. Since Data Protection for primary data is perhaps the most important use case for secondary storage, it simply must be present in any converged solution. Integrated DevOps support allows the rapid and efficient deployment of new test and development environments and, by having developers use virtual, zero-space copies instead of fully duplicated data, maximizes storage efficiency and minimizes copy proliferation. Including in-place Analytics 'lights up' all of that dark data and increases its business value. We believe this covers the full spectrum of what a true hyperconvergence of secondary storage should provide.
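To make the "zero-space copy" idea concrete, here is a minimal, hypothetical Python sketch of a copy-on-write clone. It is not any vendor's implementation; the Snapshot and Clone classes are invented for illustration. The point is simply that a dev/test clone stores only the blocks it overwrites and reads everything else from the shared base image, so a freshly created copy consumes essentially no additional capacity.

```python
# Hypothetical illustration of a "zero-space" copy-on-write clone.
# A clone of a base snapshot stores only the blocks that are
# overwritten; unmodified blocks are read from the shared base.

class Snapshot:
    """Immutable base image, addressed by block number."""
    def __init__(self, blocks):
        self.blocks = blocks          # block_no -> bytes

class Clone:
    """Writable view over a snapshot; writes go to a private overlay."""
    def __init__(self, base):
        self.base = base
        self.overlay = {}             # block_no -> bytes, only modified blocks

    def read(self, block_no):
        # Prefer the clone's own copy, fall back to the shared base.
        return self.overlay.get(block_no, self.base.blocks.get(block_no, b""))

    def write(self, block_no, data):
        # Copy-on-write: the base snapshot is never modified.
        self.overlay[block_no] = data

    def space_used(self):
        return sum(len(b) for b in self.overlay.values())

# A dev/test clone of a three-block "production" image:
prod = Snapshot({0: b"boot", 1: b"app", 2: b"data"})
dev = Clone(prod)
print(dev.space_used())            # 0 bytes -- zero-space at creation
dev.write(2, b"test data")         # only this block is duplicated
print(dev.read(0), dev.read(2), dev.space_used())
```

In practice the same principle is applied at the file-system or VM-image level, which is what lets a platform hand developers many writable copies without multiplying the data actually stored.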

This combination of tightly coupled capabilities delivers huge benefits: one system to manage, no separate silos of software to deal with, inherent efficiencies in copy management, less duplicate data proliferation, large storage capacity improvements and dramatically better data insight.

The key player in copy data management, the precursor category to Secondary Storage HyperConvergence, has been Actifio, but two companies have now entered this new category of secondary storage with scale-out, distributed file systems. Rubrik launched in 1H2015 with their Converged Data Management (RCDM) platform, which integrates Data Protection, DevOps support and indexed search into a scale-out fabric. By deduplicating and indexing all data upon ingestion, Rubrik is able to efficiently manage backup and development copies and provide global, instant search across all secondary data. They've also integrated the cloud into their secondary storage architecture. To date, though, Rubrik offers only rudimentary data usage analytics on their platform. So, while you are able to find specific data elements quickly via their instant search capability and monitor some basic storage usage metrics, the dark data residing on the system remains dark.
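As a rough illustration of the general technique of deduplicating and indexing data at ingest (a generic sketch, not Rubrik's actual design), the following Python fragment chunks incoming objects, stores each chunk once under its content hash, and keeps a simple keyword index so that a "global search" becomes a dictionary lookup. All names and structures here are invented for the example.

```python
# Generic sketch of dedup-plus-index at ingest time (illustrative only).
import hashlib
from collections import defaultdict

CHUNK_SIZE = 4096

chunk_store = {}                 # content hash -> chunk bytes (each stored once)
catalog = {}                     # object name  -> ordered list of chunk hashes
search_index = defaultdict(set)  # keyword      -> names of objects containing it

def ingest(name, data):
    hashes = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        # Deduplication: identical chunks are stored exactly once.
        chunk_store.setdefault(digest, chunk)
        hashes.append(digest)
    catalog[name] = hashes
    # Index the object name's tokens for instant global search.
    for token in name.lower().replace("/", " ").split():
        search_index[token].add(name)

def search(keyword):
    return search_index.get(keyword.lower(), set())

ingest("vm01/backup-monday.img", b"A" * 10000)
ingest("vm01/backup-tuesday.img", b"A" * 10000)   # fully deduplicated against Monday
print(len(chunk_store), "unique chunks stored")   # far less than two full copies
print(search("backup-monday.img"))
```

A real platform would do this against a distributed chunk store and a full-content index, but the capacity and search benefits flow from the same two data structures shown here.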

Most recently, Cohesity, founded by former Nutanix CTO and founder Mohit Aron, has entered the market. At first glance, Cohesity and Rubrik look similar. Both offer a scale-out platform targeted at secondary storage. As with Rubrik, Cohesity deduplicates and indexes all data upon ingestion, which allows it to efficiently store backups and provide a global search capability. Cohesity has also integrated Data Protection and DevOps support and has effectively tied the cloud into its architecture. However, the third area of consolidation within Secondary Storage HyperConvergence, and the most intriguing – in-place Analytics – is where Cohesity provides strong differentiation. As part of their platform, they've supplied built-in applications for monitoring storage utilization trends, reporting on user, VM and file data, filtering logs and scanning for virus fingerprints. Even if you're not in the middle of a 'Big Data' project and are only looking to buy a platform for data protection, these analytics can deliver immediate strategic value – helping you decide what data can be moved to a public cloud, for example, or determine whether you're adhering to IT security practices in your organization.
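As one hedged example of what such in-place analytics might look like, the sketch below runs a simple "cold data" report against an invented metadata catalog, flagging files that haven't been accessed in months as candidates for tiering to a public cloud. The catalog schema and the idle-days threshold are assumptions made purely for illustration, not Cohesity's built-in applications.

```python
# Hypothetical "in-place" analytics job over a secondary-storage
# metadata catalog: report cold data as candidates for cloud tiering.
from datetime import datetime, timedelta

# Invented catalog entries for the example.
catalog = [
    {"path": "/vm01/logs/2014.tar",  "size_gb": 120, "last_access": datetime(2015, 1, 10)},
    {"path": "/vm02/db/backup.img",  "size_gb": 300, "last_access": datetime(2015, 11, 2)},
    {"path": "/share/old-projects",  "size_gb": 750, "last_access": datetime(2014, 6, 21)},
]

def cold_data_report(entries, now, idle_days=180):
    """Return entries untouched for at least `idle_days`, largest first."""
    cutoff = now - timedelta(days=idle_days)
    cold = [e for e in entries if e["last_access"] < cutoff]
    return sorted(cold, key=lambda e: e["size_gb"], reverse=True)

now = datetime(2015, 12, 1)
for entry in cold_data_report(catalog, now):
    print(f'{entry["size_gb"]:>5} GB  {entry["path"]}  (candidate for public cloud tier)')
```

The value of running this kind of job in place is that the data never has to be copied into a separate analytics silo; the report is computed against metadata the platform already maintains.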

However, what really stands out here is that Cohesity doesn't just provide their Analytics as monolithic functionality built into the system; they deliver them on top of an open application integration platform, along with an SDK, as part of their offering. This forward-looking approach will allow third-party and user-created analytics applications to be written and run alongside the native Cohesity applications. It has the potential to dramatically increase the visibility and utility of the data residing on customers' secondary storage.

Even though Cohesity and Rubrik appear similar on the surface, there are bound to be architectural and implementation differences affecting performance and overall functionality that will need to be teased out via testing. For example, Rubrik appears to offer backup capability only through its own software for VMware environments, whereas Cohesity can also act as a scale-out backup target for customers' existing third-party backup packages. In the final analysis, it may take a head-to-head comparison of these two platforms to fully understand their differences.

While Cohesity and Rubrik are the two pure plays in this new space, another company has laid at least a partial claim to it. SimpliVity, among the leaders in the HCI (HyperConverged Infrastructure) space, is taking a different but related path. Their philosophy holds that true HyperConvergence needs to include not only compute, storage and networking, but also deduplication, WAN optimization and complete data protection. This blurs the distinction between primary and secondary storage to some extent. While their focus has been on HCI and primary storage, secondary storage use cases are clearly part of their vision.

Because SimpliVity is trying to provide everything, it may end up being more interesting to small and medium businesses looking for a one-stop solution for their IT needs. In the enterprise, where there may be a more distinct split between primary and secondary storage, and where more sophisticated users want more granular control over their IT environments, solutions from Cohesity and Rubrik may dominate. It will be interesting to see how all of this plays out.

By providing the same sort of fundamental improvements to secondary storage that HyperConvergence did for primary storage, products in this new category of Secondary Storage HyperConvergence are poised to redefine the space, perhaps finally taming it.
