Hitachi Data Says It Can Prevent Storage Downtime - EnterpriseStorageForum.com

Hitachi Data Systems (HDS) today announced new clustering technology that the company says can deliver 100 percent uptime for critical data storage assets, but the technology will likely be out of reach for all but the biggest storage users (see Is HDS Set to Take On EMC?).

The Hitachi High Availability Manager, announced following hints that the company was up to something big, builds on Hitachi's Universal Storage Platform (USP) V to offer continuous availability and integrated management for internal storage and externally attached heterogeneous storage.

HDS says the new offering allows for local and remote clustering for migrating and failing over storage pools from one USP V storage platform to another system, boosting availability and limiting downtime.

HDS CTO Claus Mikkelsen called the architecture "active-active storage with full failover."

"The problem has been solved," Mikkelsen said. "From this point on, a customer should never have to see an outage for data mobility or data migration."

Mikkelsen cited Thomson Reuters as an early adopter of the technology, and he said the company is "moving full speed ahead" after evaluating it. Christopher Crowhurst, the company's vice president of strategic technology for the Professional Division, said in a statement that "This design helps remove the impact of potential failures, reduce management costs, and simplify business operations, and was a major reason behind our adoption of the Hitachi USP V platform as our preferred SAN virtualization solution going forward."

Pricing for the offering is on a per-frame basis, and it requires two USP Vs or two USP VMs.

Perhaps because of the hype leading up to the announcement, some storage industry observers appeared unimpressed by the news.

Blogger and storage consultant Chris Evans, who had correctly guessed that the announcement involved "clustered storage arrays," called the announcement a "complete disappointment."

"What is on offer is the ability to cluster USPs — a feature called Hitachi High Availability Manager," Evans wrote. "By cluster, this means connect two USP arrays together and have them work in an active-active configuration, with data replicated in either direction."

But HDS says the technology will mean big savings and much easier data migration for Fortune 1000 customers who already have a replicated system, are using TrueCopy sync, and will be purchasing a future release of the USP as a replacement system.

If 25 percent of data under management is moved annually at an average cost of $7,000 per terabyte, HDS says a data center with one petabyte of storage under management currently spends $1.75 million a year on data migration operations. Since large enterprise capacities average 15PB, the cost could average $26 million a year, HDS says.
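HDS's estimate is straightforward arithmetic, and can be sketched as a short calculation (the figures come from the article; the function and variable names here are illustrative, not from HDS):

```python
def annual_migration_cost(capacity_tb: float,
                          moved_fraction: float = 0.25,
                          cost_per_tb: float = 7_000.0) -> float:
    """Estimated yearly spend on data-migration operations,
    assuming a fixed fraction of capacity is moved each year."""
    return capacity_tb * moved_fraction * cost_per_tb

# A 1 PB (1,000 TB) data center: $1.75 million a year.
print(annual_migration_cost(1_000))   # 1750000.0

# At the 15 PB large-enterprise average HDS cites: about $26 million.
print(annual_migration_cost(15_000))  # 26250000.0
```

Note that 15 PB works out to $26.25 million at these rates; the article's "$26 million" figure is rounded down.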

Another blogger and consultant, Stephen Foskett, speculated that the announcement suggests that a new USP is on the way, something to rival EMC's (NYSE: EMC) new Symmetrix V-Max. Still, he too said he was "underwhelmed" by the announcement.

HDS also announced enhanced support for IBM FlashCopy technology. When used with Hitachi Universal Replicator software, the new functionality boosts business continuity and disaster recovery in two-data-center point-to-point operations and three-data-center multi-target configurations, the company said.
