Hitachi Data Says It Can Prevent Storage Downtime


Written By
Paul Shread
May 26, 2009

Hitachi Data Systems (HDS) today announced new clustering technology that the company says can deliver 100 percent uptime for critical data storage assets, but the technology will likely be out of reach for all but the biggest storage users (see Is HDS Set to Take On EMC?).

The Hitachi High Availability Manager, announced following hints that the company was up to something big, builds on Hitachi’s Universal Storage Platform (USP) V to offer continuous availability and integrated management for internal storage and externally attached heterogeneous storage.

HDS says the new offering allows for local and remote clustering for migrating and failing over storage pools from one USP V storage platform to another system, boosting availability and limiting downtime.

HDS CTO Claus Mikkelsen called the architecture “active-active storage with full failover.”

“The problem has been solved,” Mikkelsen said. “From this point on, a customer should never have to see an outage for data mobility or data migration.”

Mikkelsen cited Thomson Reuters as an early adopter of the technology, and he said the company is “moving full speed ahead” after evaluating it. Christopher Crowhurst, the company’s vice president of strategic technology for the Professional Division, said in a statement that “This design helps remove the impact of potential failures, reduce management costs, and simplify business operations, and was a major reason behind our adoption of the Hitachi USP V platform as our preferred SAN virtualization solution going forward.”

Pricing for the offering is on a per-frame basis, and it requires two USP Vs or two USP VMs.

Perhaps because of the hype leading up to the announcement, some storage industry observers appeared unimpressed by the news.

Blogger and storage consultant Chris Evans, who had correctly guessed that the announcement involved “clustered storage arrays,” called the announcement a “complete disappointment.”

“What is on offer is the ability to cluster USPs — a feature called Hitachi High Availability Manager,” Evans wrote. “By cluster, this means connect two USP arrays together and have them work in an active-active configuration, with data replicated in either direction.”

But HDS says the technology will mean big savings and much easier data migration for Fortune 1000 customers who already have a replicated system, are using TrueCopy sync, and will be purchasing a future release of the USP as a replacement system.

If 25 percent of data under management is moved annually at an average cost of $7,000 per terabyte, HDS says a data center with one petabyte of storage under management currently spends $1.75 million a year on data migration operations. Since large enterprise capacities average 15PB, the cost could average $26 million a year, HDS says.
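HDS’s arithmetic can be sketched in a few lines. This is a minimal back-of-the-envelope model, not anything HDS published; the function name and the decimal terabytes-per-petabyte conversion are assumptions for illustration:

```python
# Back-of-the-envelope version of HDS's migration cost figures.
# Assumptions (from the article): 25% of managed data is moved each
# year at an average cost of $7,000 per terabyte.

TB_PER_PB = 1_000  # decimal terabytes per petabyte (assumed convention)

def annual_migration_cost(capacity_pb: float,
                          moved_fraction: float = 0.25,
                          cost_per_tb: float = 7_000) -> float:
    """Estimated yearly spend on data migration, in dollars."""
    moved_tb = capacity_pb * TB_PER_PB * moved_fraction
    return moved_tb * cost_per_tb

print(annual_migration_cost(1))   # 1 PB: the article's $1.75 million a year
print(annual_migration_cost(15))  # 15 PB: $26.25 million, rounded to $26 million
```

At 15PB the exact figure comes out to $26.25 million a year, which the article rounds to $26 million.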

Another blogger and consultant, Stephen Foskett, speculated that the announcement suggests that a new USP is on the way, something to rival EMC’s (NYSE: EMC) new Symmetrix V-Max. Still, he too said he was “underwhelmed” by the announcement.

HDS also announced enhanced support for IBM FlashCopy technology. When used with Hitachi Universal Replicator software, the new functionality boosts business continuity and disaster recovery in two-data-center point-to-point operations and three-data-center multi-target configurations, the company said.


eSecurity Editor Paul Shread has covered nearly every aspect of enterprise technology in his 20+ years in IT journalism, including an award-winning series on software-defined data centers. He wrote a column on small business technology for Time.com, and covered financial markets for 10 years, from the dot-com boom and bust to the 2007-2009 financial crisis. He holds a market analyst certification.
