Hitachi Data Says It Can Prevent Storage Downtime


Hitachi Data Systems (HDS) today announced new clustering technology that the company says can deliver 100 percent uptime for critical data storage assets, though the technology will likely appeal only to the biggest storage users (see Is HDS Set to Take On EMC?).

The Hitachi High Availability Manager, announced following hints that the company was up to something big, builds on Hitachi’s Universal Storage Platform (USP) V to offer continuous availability and integrated management for internal storage and externally attached heterogeneous storage.

HDS says the new offering allows for local and remote clustering for migrating and failing over storage pools from one USP V storage platform to another system, boosting availability and limiting downtime.

HDS CTO Claus Mikkelsen called the architecture “active-active storage with full failover.”

“The problem has been solved,” Mikkelsen said. “From this point on, a customer should never have to see an outage for data mobility or data migration.”

Mikkelsen cited Thomson Reuters as an early adopter of the technology, and he said the company is “moving full speed ahead” after evaluating it. Christopher Crowhurst, the company’s vice president of strategic technology for the Professional Division, said in a statement, “This design helps remove the impact of potential failures, reduce management costs, and simplify business operations, and was a major reason behind our adoption of the Hitachi USP V platform as our preferred SAN virtualization solution going forward.”

Pricing for the offering is on a per-frame basis, and it requires two USP Vs or two USP VMs.

Perhaps because of the hype leading up to the announcement, some storage industry observers appeared unimpressed by the news.

Blogger and storage consultant Chris Evans, who had correctly guessed that the announcement involved “clustered storage arrays,” called the announcement a “complete disappointment.”

“What is on offer is the ability to cluster USPs — a feature called Hitachi High Availability Manager,” Evans wrote. “By cluster, this means connect two USP arrays together and have them work in an active-active configuration, with data replicated in either direction.”

But HDS says the technology will mean big savings and much easier data migration for Fortune 1000 customers who already have a replicated system, are using TrueCopy sync, and will be purchasing a future release of the USP as a replacement system.

If 25 percent of data under management is moved annually at an average cost of $7,000 per terabyte, HDS says a data center with one petabyte of storage under management currently spends $1.75 million a year on data migration operations. Since large enterprise capacities average 15PB, the cost could average $26 million a year, HDS says.
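HDS’s figures are a straightforward back-of-the-envelope calculation. A minimal sketch in Python (the function name and the decimal petabyte-to-terabyte conversion are illustrative assumptions, not part of the HDS announcement):

```python
# HDS's assumptions, as stated in the article:
# 25% of managed capacity is migrated each year,
# at an average cost of $7,000 per terabyte.
MIGRATED_FRACTION = 0.25
COST_PER_TB = 7_000  # USD

PB = 1_000  # terabytes per petabyte (decimal, as storage vendors count)

def annual_migration_cost(capacity_tb: float) -> float:
    """Yearly spend on data migration for a given managed capacity in TB."""
    return capacity_tb * MIGRATED_FRACTION * COST_PER_TB

print(annual_migration_cost(1 * PB))   # 1 PB data center  -> 1,750,000
print(annual_migration_cost(15 * PB))  # 15 PB enterprise  -> 26,250,000
```

The 15PB case works out to $26.25 million, which HDS rounds to $26 million a year.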

Another blogger and consultant, Stephen Foskett, speculated that the announcement suggests that a new USP is on the way, something to rival EMC’s (NYSE: EMC) new Symmetrix V-Max. Still, he too said he was “underwhelmed” by the announcement.

HDS also announced enhanced support for IBM FlashCopy technology. When used with Hitachi Universal Replicator software, the new functionality boosts business continuity and disaster recovery in two data center point-to-point operations and three data center multi-target configurations, the company said.


Paul Shread

eSecurity Editor Paul Shread has covered nearly every aspect of enterprise technology in his 20+ years in IT journalism, including an award-winning series on software-defined data centers. He wrote a column on small business technology and covered financial markets for 10 years, from the dot-com boom and bust to the 2007-2009 financial crisis. He holds a market analyst certification.
