It seems like everyone is running to the cloud. But just how much can you rely on the cloud for truly secure storage?
How much availability can you really expect, regardless of vendor promises? And if things go wrong, how confident are you that you can recover your data in a timely manner?
These questions are the reason for the appearance of cloud availability and disaster recovery (DR) assurance software. This buying guide focuses on the products from VMware (Site Recovery Manager), NetApp (OnCommand Insight, formerly SANscreen), Virtual Instruments (VirtualWisdom), Continuity Software (AvailabilityGuard/Cloud) and VirtualSharp (ReliableDR).
IT executives are feeling pressure to deploy a private, public and/or hybrid cloud for their organization. While the cloud's benefits are alluring, its risks loom large.
They worry about the potential loss of access to applications or data, and the inability to recover rapidly during a disaster. Concerns over downtime and data loss, then, can sometimes get in the way of cloud deployment.
Enter AvailabilityGuard/Cloud by Continuity Software. While its flagship RecoverGuard focuses on gathering "configuration drift" information from a DR site, AvailabilityGuard/Cloud gathers information from the production infrastructure.
It addresses cross-domain misconfiguration issues in cloud environments that can lead to failures or performance degradation. It also helps eliminate the endless spreadsheets otherwise used to manually track configuration changes.
“AvailabilityGuard/Cloud is a private cloud automated health check solution that detects cross-domain configuration errors across all layers of IT, from the application server to the database, virtual machine (VM), physical infrastructure, SAN, network-attached storage (NAS) and clustering,” said Doron Pinhas, CTO of Continuity Software. “It enables users to automatically discover and eliminate vulnerabilities before they impact the business.”
This is achieved by leveraging Continuity’s Risk Discovery Engine coupled with a continuously updated Risk Signature Knowledgebase of known and emerging private cloud configuration issues. Storage administrators can use it to provide reports and summaries. Pricing starts at $1,000 per physical server.
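To make the idea of a cross-domain configuration check concrete, here is a minimal sketch of the kind of rule such a tool automates: verifying that every node in a cluster is masked to the same set of LUNs, so failover does not fail for lack of a disk. The data structures, names and rule are hypothetical illustrations, not Continuity Software's actual engine or API.

```python
# Hypothetical cross-domain check: a cluster looks healthy day to day,
# but if one node cannot see a LUN its peers can, failover will break.
def find_config_risks(clusters, luns_by_host):
    """Flag cluster nodes that see fewer LUNs than the first node in the cluster."""
    risks = []
    for cluster, members in clusters.items():
        luns_seen = {node: luns_by_host.get(node, set()) for node in members}
        baseline = next(iter(luns_seen.values()))  # first node's view as reference
        for node, luns in luns_seen.items():
            missing = baseline - luns
            if missing:
                risks.append((cluster, node, sorted(missing)))
    return risks

clusters = {"sql-cluster": ["node-a", "node-b"]}
luns_by_host = {"node-a": {"lun1", "lun2"},
                "node-b": {"lun1"}}  # lun2 was never masked to node-b
print(find_config_risks(clusters, luns_by_host))  # [('sql-cluster', 'node-b', ['lun2'])]
```

A real engine applies thousands of such signatures across storage, network, cluster and virtualization layers; the value is in the breadth of the knowledgebase, not any single rule.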
Pinhas said that many of the leading disk array and replication vendors offer tools for configuration tracking, which tend to be limited to their own platform. VMware’s Site Recovery Manager (SRM), he claimed, does not verify and audit data center configurations and systems from a DR/High Availability (HA) perspective.
He further opined that NetApp’s SANscreen has limited storage platform, database and cluster support. While it monitors SAN performance and redundancy, it does not detect DR gaps from the database through hosts to storage and replication. Similarly, he said that Virtual Instruments is focused on storage network performance monitoring as opposed to private cloud coverage.
“As practically every app server, database, storage, OS, cluster and virtualization vendor is updating or publishing a hefty best practices guide on almost a quarterly basis, organizations are finding themselves unable to follow and confidently implement reliable private cloud infrastructure for business critical applications,” said Pinhas.
“AvailabilityGuard/Cloud allows IT to enhance coordination across domains and increase operational efficiency by preventing problems ahead of time rather than firefighting service disruptions.”
There are well-publicized cloud outages just about every month – usually with big-name players in the headlines. Clearly, DR is a mandatory part of business in the cloud, one that requires proper planning and execution.
Yet relatively few companies do DR testing more than once a year, often utilizing a scripted procedure that may have nothing to do with a real disaster.
VirtualSharp ReliableDR is designed to make DR testing a mundane, predictable task handled by software without human intervention. SLAs are tracked, enforced and audited automatically.
“Many organizations want to implement Disaster Recovery (DR) solutions that leverage their virtualization and cloud investments, but find it difficult or complex to deploy,” said Carlos Escapa, CEO of VirtualSharp Software. “Our ReliableDR Free Edition enables users to experiment or put in production replication-based DR at zero cost.”
This free version has a scheduler that can replicate automatically every 48 hours, and can generate a snapshot of up to 10 VMs in the secondary site. These snapshots can be DR tested manually without having to shut down the primary site (VMware vSphere Replication doesn't allow this, noted Escapa; it requires the primary VMs to be shut down before they can be tested).
In addition, the free version can be used for failover and failback to aid in migration success, and can also be used to generate a pre-production environment with fresh data for patch and update testing.
ReliableDR Free Edition is based on VirtualSharp’s Enterprise Edition, which was designed for the private cloud. It is agentless and supports all versions of vSphere. The paid edition lifts the restrictions noted above, supporting both software and array-based replication.
Testing is automatic, so administrators can set recovery policy and let ReliableDR enforce Recovery Time Objectives (RTO) and application Recovery Point Objectives (RPO) by running recovery exercises as often as needed by the line of business, in some cases once every few hours or every day. It is replication- and hardware-agnostic.
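The core of RPO enforcement described above can be reduced to a simple comparison: how old is the last verified recovery point versus what the policy allows? The sketch below illustrates that logic with invented workload data; it is not ReliableDR's implementation, which additionally orchestrates the actual recovery exercises.

```python
# Illustrative RPO compliance check: flag workloads whose last verified
# recovery point is older than the RPO their business policy allows.
from datetime import datetime, timedelta

def rpo_violations(workloads, now):
    """Return (workload, lag) pairs where replica age exceeds the RPO."""
    violations = []
    for name, (last_recovery_point, rpo) in workloads.items():
        lag = now - last_recovery_point
        if lag > rpo:
            violations.append((name, lag))
    return violations

now = datetime(2012, 6, 1, 12, 0)
workloads = {
    "erp-db":  (datetime(2012, 6, 1, 11, 50), timedelta(hours=1)),  # compliant
    "web-app": (datetime(2012, 6, 1, 6, 0),   timedelta(hours=4)),  # 6h lag > 4h RPO
}
print(rpo_violations(workloads, now))  # [('web-app', datetime.timedelta(...))]
```

Running such a check every few hours, as the article describes, turns DR compliance from an annual test into a continuously audited SLA.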
“VirtualSharp is the only company that measures and reports service resilience in private clouds for any workloads running virtualized,” said Escapa. “It does so through multi-data center recovery orchestration, which constantly certifies successful recovery outcomes or reports compliance threats based on business rules aligned with DR policy.”
According to Gartner, 87 percent of enterprises have RTOs of four hours or less for their mission-critical applications, up from 73 percent only one year prior.
VMware believes that traditional DR has become too expensive, complex and unreliable, especially for SMBs and tier 2 applications, which sometimes go unprotected.
“DR ends up being a huge investment with questionable returns, and most IT organizations are reluctant to apply these solutions beyond their mission-critical applications,” said Gil Haberman, Senior Product Marketing Manager, Infrastructure, VMware.
VMware vCenter Site Recovery Manager 5.1 aims to provide simple and reliable disaster protection for virtualized applications. It leverages vSphere Replication and supports various storage replication products used to replicate VMs to a secondary site. SRM sets up centralized recovery plans across all infrastructure layers, which can be tested non-disruptively.
At the time of a site failover or migration, vCenter SRM automates failover and failback processes. It is available in two editions – Standard and Enterprise – with pricing starting at $195 per VM.
“Some solutions focus on providing replication at the host or hypervisor layer, and traditional storage-based replication solutions are also positioned as disaster recovery solutions,” said Haberman. “However, vCenter Site Recovery Manager is not focused on replication and offers reliable automation and centralized management of disaster recovery processes.”
Note that vCenter SRM relies on an underlying replication product to move virtual machine files between sites. As such, it supports high-performance storage replication products from the likes of EMC, NetApp and HP as well as vSphere Replication.
VirtualWisdom combines hardware and software instrumentation to provide an agentless infrastructure performance management platform. It can capture more than 300 metrics in real-time without affecting the devices being monitored to provide visibility across physical, virtual and cloud computing environments.
“VirtualWisdom operates on the wire at the protocol level, and thus supports all protocol compliant devices, regardless of the supplier,” said John Gentry, Vice President of Marketing, Virtual Instruments.
“It reduces the time required for performance problem resolution, and eliminates server and SAN over-/under-provisioning by monitoring all transactions from the virtual machine on the host server through the switching network out to the storage arrays. It sees performance bottlenecks, transmission errors and server-to-SAN utilization.”
It is a modular solution with several components: the VirtualWisdom Server Enterprise Software License; the VirtualWisdom SAN Availability Probe Software (licensed by active FC switch port); the VirtualWisdom SAN Performance Probe Hardware Appliance (licensed in standard 8-port or 16-port configurations); and the VirtualWisdom Virtual Server Probe (licensed per ESX server). All can be acquired for less than $150,000.
This product suite was revamped in 2011 to expand monitoring capabilities across the virtualized data center, including private cloud servers, networks and storage. These enhancements included ProbeVM, the Virtual Server probe for VMware environments, and an 8Gb SAN Performance Probe.
Gentry said that other approaches rely on software-based polling and/or agents that provide aggregate device-level views in 5-minute to 60-minute intervals. They compile averages that miss traffic spikes and transmission anomalies. VirtualWisdom, in contrast, collects 300 metrics measured to the microsecond, at 8Gbps line rate, at the protocol level, and can report or act on those metrics in 1-second intervals.
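The contrast Gentry draws is easy to demonstrate numerically: averaging a polling window hides exactly the transient stalls that hurt individual transactions. The synthetic latency samples below are invented for illustration only.

```python
# Why polling averages miss spikes: one 900 ms stall buried in a 5-minute
# window of 1-second samples barely moves the average.
samples_ms = [2.0] * 299 + [900.0]  # 299 healthy samples, one severe stall

average = sum(samples_ms) / len(samples_ms)
peak = max(samples_ms)

print(f"5-min average: {average:.1f} ms")  # ~5 ms -- the window looks healthy
print(f"worst sample:  {peak:.1f} ms")     # the stall a per-transaction view catches
```

A monitoring view built only on the average would report this window as healthy; a per-transaction, wire-level view surfaces the 900 ms outlier and when it occurred.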
In addition, he said that software-based solutions can negatively affect the performance of the devices they are monitoring. They can tell IT managers that performance is slow, but they can’t tell them exactly when the slow-down started or determine the root cause.
“Other currently available software-only solutions cannot provide real-time visibility across physical, virtual and cloud computing environments,” said Gentry. “Without the combination of software and hardware instrumentation, virtualization and cloud computing put mission-critical applications at high risk of performance and availability impacts.”
OnCommand Insight (formerly known as Onaro SANscreen) is part of the NetApp OnCommand product portfolio of management software, which deals with controlling, automating and analyzing a multi-vendor storage infrastructure. This includes capacity planning, assuring availability, reporting on costs for chargeback and planning.
“Policy setting and compliance is a big part of enabling access, availability, and performance in the private cloud,” said Lisa Crewe, Sr. Product Marketing Manager, NetApp. “OnCommand Insight enables you to gain end-to-end visibility into heterogeneous storage infrastructure availability, performance, and utilization service levels in the context of applications and business units.”
In addition, through VMware vCenter, OnCommand can identify orphaned volumes that are often attached to unused VMs as a result of VM sprawl, which leads to lower utilization. These volumes can be reclaimed and added into the pool of available storage. This NetApp tool continuously discovers the physical and logical configuration of devices and helps establish a policy-based service model of how the interaction of these devices delivers a service to an application.
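The orphaned-volume reclamation idea boils down to a set difference between what the arrays have provisioned and what any VM actually uses. The sketch below uses invented inventory data; in OnCommand Insight this inventory comes from its continuous discovery, not from hand-built dictionaries.

```python
# Hypothetical sketch of orphaned-volume detection: provisioned storage
# volumes that no VM is attached to are candidates for reclamation.
def orphaned_volumes(array_volumes, vm_attachments):
    """Return volumes provisioned on storage but attached to no VM."""
    attached = {vol for vols in vm_attachments.values() for vol in vols}
    return sorted(array_volumes - attached)

array_volumes = {"vol01", "vol02", "vol03", "vol04"}
vm_attachments = {"vm-web": ["vol01"], "vm-db": ["vol02", "vol03"]}
print(orphaned_volumes(array_volumes, vm_attachments))  # ['vol04']
```

Reclaiming such volumes returns capacity to the free pool, which is how the tool raises overall utilization in sprawling virtual environments.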
“Competitors generally provide device management and file system utilization information, whereas OnCommand Insight captures and stores deep level details on storage and service analytics in a data warehouse,” said Crewe. “Other vendors generally do not serve the NAS space effectively, while OnCommand Insight works in both SAN and NAS environments.”