InfiniBand Brings HPC Power to Enterprise Storage

Once limited to the realm of supercomputing, InfiniBand is beginning to catch on as an enterprise storage interconnect.

As an I/O interconnect, InfiniBand offers low latency, current throughput of 10Gbps, and a roadmap promising as much as 120Gbps.

While the technology failed to live up to its early hype several years ago, analysts and vendors alike believe that InfiniBand’s time may have finally come to move beyond the HPC market and into the enterprise.

Storage vendor Isilon Systems, which develops an InfiniBand-based clustered storage product called Isilon IQ, claims that 90 percent of its customers have chosen its InfiniBand-based products over its GigE-based options.

LSI Logic’s Engenio division has also jumped on the InfiniBand storage bandwagon, as has Microsoft, which includes InfiniBand support in its new Windows Compute Cluster Server 2003 beta. IBM has been deploying InfiniBand on its BladeCenter systems for the past year.

Not Just for HPC

In HPC environments, InfiniBand has been the interconnect of choice for several years, mainly because of its low latency, said Bret Weber, director of architecture at the Engenio Storage Group. HPC users found themselves replicating their infrastructure to put in a Fibre Channel SAN, Weber said, and wondered why they couldn’t simply use InfiniBand for storage.

“For the most part, we see that InfiniBand hasn’t really been driven as a storage interconnect, but it’s been pulled by the infrastructure,” Weber said.

The HPC market has also traditionally been where new technologies get their start.

“What happens in the HPC space is indicative of what is going to happen in the general purpose computing space in three to five years,” Weber said. “That’s what we saw with Fibre Channel.”

IBM is also bullish on InfiniBand as an interconnect outside of the HPC space. Big Blue solidified its commitment to InfiniBand with an agreement two years ago with Topspin Communications, which was bought by Cisco last year.

Tom Bradicich, CTO of IBM’s xSeries and BladeCenter server division, cites four reasons why InfiniBand has become an option for enterprise deployment. The first is that it is based on an open industry standard. The second is performance: 10Gbps over copper wire in deployment today, with 30Gbps switches. The third is Remote Direct Memory Access (RDMA), which lets one server read or write another’s memory without involving the remote CPU. The fourth is a dedicated protocol offload engine, which moves transport processing off the host processor.
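The bandwidth figures quoted throughout this article follow directly from InfiniBand's lane-based link design: each single-data-rate (SDR) lane signals at 2.5Gbps, links aggregate 1, 4, or 12 lanes, and 8b/10b line encoding leaves 80 percent of the raw rate available for data. A rough illustrative sketch (not from the article) of how those numbers combine:

```python
# Illustrative only: how InfiniBand link width and data-rate multiplier
# produce the raw figures quoted in the article.
SDR_LANE_GBPS = 2.5        # single-lane (1X) SDR signaling rate
ENCODING_EFFICIENCY = 0.8  # 8b/10b line encoding: 8 data bits per 10 line bits

def raw_rate(lanes, rate_multiplier=1):
    """Raw signaling rate in Gbps for a given link width and data rate."""
    return lanes * SDR_LANE_GBPS * rate_multiplier

def data_rate(lanes, rate_multiplier=1):
    """Usable payload rate in Gbps after 8b/10b encoding overhead."""
    return raw_rate(lanes, rate_multiplier) * ENCODING_EFFICIENCY

print(raw_rate(4))      # 10.0 -- the "10Gbps" 4X SDR links in deployment
print(raw_rate(12))     # 30.0 -- the "30Gbps" 12X switch ports
print(raw_rate(12, 4))  # 120.0 -- the projected "120Gbps" (12X at quad data rate)
print(data_rate(4))     # ~8 Gbps of actual payload on a 4X SDR link
```

Note that the headline numbers are raw signaling rates; after encoding overhead, a 10Gbps link carries roughly 8Gbps of data.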

Analysts also see promise for IB technology in enterprises.

“Its low latency, lack of overhead and high speeds are all assets in any segment,” said 451 Group analyst Simon Robinson.

A recent Taneja Group technology brief postulated that InfiniBand will find its way into enterprise clustered deployments as a common switched fabric.

“Certainly it has gotten a start by providing a backplane to clustered solutions like Isilon IQ, but we expect to see IB-based technology reach out from this beachhead and go both wider and higher in the infrastructure,” the report states.

Vendors to Watch

Robinson said storage players to watch include Engenio, DataDirect and Verari, all of which have announced InfiniBand storage systems. The technology has also found support in the solid state storage market with Texas Memory Systems, and is moving into other segments of the market.

Still, it’s too early to declare a winner, he said.

“No real winners in the storage side yet, since they are just testing the waters,” Robinson said. “The developers that stuck with the chip-level development should now reap the benefits of staying with the market.”

The Taneja Group calls Isilon a “clear leader in the IB-based clustered storage market with a screamingly fast and scalable IB-based version of its Isilon IQ clustered storage systems.”

Isilon offers its InfiniBand clustered storage solutions at the same price as its GigE solutions.

To Infinity and Beyond

InfiniBand still has a few barriers to overcome, though IBM’s Bradicich said they are the kind commonly seen in any technology adoption cycle. Among them is third-party qualification of InfiniBand-to-Fibre Channel gateways, which connect InfiniBand I/O to FC SAN storage.

“Native InfiniBand storage products are just now being fully committed in the market, and it will take some time for this market to grow,” Bradicich said. “And only recently has an open standard software stack become consistent and fully supported.”

The standard software stack for InfiniBand comes by way of the OpenIB organization, which Engenio’s Weber also sees as a key driver in helping overcome barriers to adoption.

“So with InfiniBand, we don’t have the same situation as in Fibre Channel, with each vendor having their own proprietary driver stack,” Weber said. “With InfiniBand and OpenIB, we have really a common driver stack, and by doing that you can get product out to market quicker.”

Isilon marketing vice president Brett Goodwin sees the biggest barrier as one of education and awareness.

“The economics are so compelling that you can get the benefits at no additional cost to GigE,” Goodwin said. “It really becomes a no-brainer for customers to try this new technology.”


Sean Michael Kerner
Sean Michael Kerner is an Internet consultant, strategist, and contributor to several leading IT business web sites.
