IBM, NetApp Take on Virtual I/O Bottlenecks


Virtualization has done wonders for server utilization and consolidation, but packing all those virtual machines onto a single server has created something of a mess on the storage side.

Storage I/O bottlenecks in virtual environments are a growing and persistent problem for data centers, as storage controllers and heads can become overwhelmed by I/O requests from virtual machines, creating processing delays.

The quest to find a solution to the I/O mess has become a holy grail for data storage vendors. IBM (NYSE: IBM), NetApp (NASDAQ: NTAP), BlueArc and Panasas are among the innovators developing unique approaches to the problem.

The core issue with virtualized servers and NAS or SAN storage systems is that a virtualized server runs many more applications than a physical server does, creating storage I/O constraints, said Jim Sangster, senior director for virtualization solutions and alliances at NetApp, a company that offers both NAS and SAN solutions.

Each application requires a separate I/O resource, said Sangster. “A typical virtualized server might require six to 10 ports, which is two to three times more than the I/O requirements of physical servers,” he said.

More ports typically mean more connections, and a whole bunch of cables. Adding more storage capacity is not the answer; what is needed is better connectivity.

Most virtualization solutions support NAS, Fibre Channel and iSCSI storage, but not all virtual machines can connect directly to the storage system. When a virtual machine can’t connect directly to the storage, the results are usually higher response times and, in the worst cases, failure for storage operations.

There are four main storage-related bottlenecks in virtual environments: oversubscription within virtualized servers; oversubscription within the disk drives and target storage systems; oversubscription in the SAN or NAS fabric; and oversubscription at the target storage ports. While oversubscription is a normal practice for IT, a miscalculation in oversubscription can cause serious I/O traffic jams.
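
To see how quickly those oversubscription calculations can go sideways, here is a minimal sketch in Python of a fan-in check at a single storage port. The workload figures and the 4:1 rule-of-thumb threshold are our own illustrative assumptions, not vendor guidance:

    # Rough fan-in oversubscription check for one storage port (illustrative).
    # All figures are hypothetical; substitute measured values from your SAN.

    def oversubscription_ratio(vm_peak_mbps, port_capacity_mbps):
        """Ratio of aggregate peak VM demand to the port's usable line rate."""
        return sum(vm_peak_mbps) / port_capacity_mbps

    # 40 VMs, each able to burst to 120 MBps, funneled through one 8Gbps
    # Fibre Channel port (roughly 800 MBps usable).
    vms = [120] * 40
    ratio = oversubscription_ratio(vms, 800)

    print(f"Oversubscription ratio: {ratio:.1f}:1")   # 6.0:1
    if ratio > 4:  # rule-of-thumb ceiling; tune to your workload mix
        print("Warning: fan-in this high risks serious I/O queuing under load")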

Big Blue’s SAN Solution

IBM’s approach to solving I/O challenges in virtualized environments has been to invent a new type of SAN storage system, said Kem Clawson, XIV federal storage specialist at IBM.

“What we’ve done with XIV is to create a system optimized for dynamic, virtualized environments built with a holistic approach to storage,” Clawson said. “A big part of what we’ve done is to build our solution on top of commodity hardware so it will work anywhere.”

The IBM XIV storage system “drives performance from the bottom up, starting with disk modules that are aggregated together as distributed, intelligent, I/O orchestrating, high-performance building blocks designed to leverage every I/O from every disk for maximum performance,” according to a November 2008 technology brief from analyst firm Taneja Group.

According to the brief, “From a single rack of XIV, using 15 12-disk Data Modules, or 180 SATA drives, IBM can deliver 100,000 IOPS from cache, 20,000 IOPS from disk, and 2.4GBps and 1.4GBps of sustained sequential read and write bandwidth respectively. The secret behind delivering this level of performance is XIV’s optimization of every I/O in combination with intelligent caching algorithms that rapidly parse and refine cached data.”
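
As a back-of-envelope sanity check (our arithmetic, not the brief’s), the per-drive numbers behind those figures work out as follows:

    # Sanity-checking the quoted XIV figures (our arithmetic, illustrative).
    modules, disks_per_module = 15, 12
    total_disks = modules * disks_per_module        # 180 SATA drives
    disk_iops = 20_000                              # quoted IOPS from disk

    print(disk_iops / total_disks)                  # ~111 IOPS per drive
    # Roughly 111 IOPS per drive is in line with what a 7,200 RPM SATA disk
    # can sustain, so the from-disk figure is plausible; the 100,000 IOPS
    # figure depends on cache hits served by XIV's caching algorithms.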

BlueArc, Panasas Turn to pNFS

In contrast to IBM’s holistic approach, BlueArc, NetApp and Panasas attack the I/O challenge differently. BlueArc and Panasas primarily leverage the evolving architecture of the Parallel Network File System (pNFS), which allows clients to access storage devices directly and in parallel. NetApp’s secret sauce is something called a performance acceleration module (PAM).

The pNFS architecture eliminates the scalability and performance issues associated with older NFS servers in deployment today, said Gerard Sample, senior manager of product marketing at BlueArc, whose Titan products support both NAS and SAN.

Sample said pNFS is important because it brings together the benefits of parallel I/O with the ubiquitous NFS standard.

“The main benefits of parallel I/O are that it delivers very high application performance and enables massive scalability,” said Sample, noting that BlueArc’s Titan 3200 hums along at 1600 megabytes per second and delivers 380,000 I/O operations per second (IOPS).
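
The idea behind parallel I/O can be sketched in a few lines of Python. This is an illustration of the concept only: a real pNFS client negotiates file layouts over NFSv4.1, and the server names, stripe size and read_stripe stand-in below are hypothetical:

    # pNFS-style parallel read: stripes of one file are fetched from several
    # data servers concurrently instead of through a single NFS head.
    from concurrent.futures import ThreadPoolExecutor

    DATA_SERVERS = ["ds1", "ds2", "ds3", "ds4"]   # hypothetical data servers
    STRIPE_SIZE = 1 << 20                          # 1 MiB stripes

    def read_stripe(server, offset, length):
        # Stand-in for a real READ sent to one data server for the byte
        # range that the file's layout maps to it.
        return b"\x00" * length

    def parallel_read(size):
        stripes = [(DATA_SERVERS[i % len(DATA_SERVERS)], i * STRIPE_SIZE,
                    STRIPE_SIZE) for i in range(size // STRIPE_SIZE)]
        with ThreadPoolExecutor(max_workers=len(DATA_SERVERS)) as pool:
            chunks = pool.map(lambda s: read_stripe(*s), stripes)
        return b"".join(chunks)

    data = parallel_read(8 * STRIPE_SIZE)  # eight stripes over four servers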

Panasas’ implementation of pNFS focuses on using Intel’s (NASDAQ: INTC) latest high-performance solid-state drives (SSDs) in its ActiveStor product range, said Brent Welch, director of software architecture at Panasas.

Unlike single-dimensional storage solutions, which offer either high-bandwidth performance or optimized IOPS, ActiveStor uses multiple storage technologies in a synchronized architecture to produce both, said Welch.

ActiveStor achieves new performance levels by combining three tiers of storage — cache, SSD and SATA — on each blade, said Welch. This architecture provides distributed and balanced data path I/O, eliminating performance bottlenecks, while achieving performance in the hundreds of GB per second, he said.
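
A toy version of that tiering decision shows the basic division of labor. The threshold below is our illustrative assumption, not a Panasas parameter:

    # Tiered I/O routing sketch: small random requests go to flash (IOPS-
    # optimized), large sequential ones to SATA (bandwidth-optimized), with
    # a RAM cache in front. Threshold is an illustrative assumption.

    SEQUENTIAL_THRESHOLD = 256 * 1024   # bytes; treat larger reads as streaming

    def choose_tier(request_bytes, cached):
        if cached:
            return "ram-cache"          # fastest path: already in memory
        if request_bytes >= SEQUENTIAL_THRESHOLD:
            return "sata"               # big sequential I/O streams well off disk
        return "ssd"                    # small random I/O wants flash IOPS

    print(choose_tier(4 * 1024, cached=False))      # ssd
    print(choose_tier(1024 * 1024, cached=False))   # sata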

NetApp Sprays PAM on the Storage I/O Problem

NetApp’s PAM innovation is raising some eyebrows with its ability to boost performance and reduce I/O pressure on NetApp’s FAS storage systems.

The vendor’s PAM modules are 16GB cards that plug into PCIe slots and function as a read cache, eliminating bottlenecks between I/O-intensive applications and storage.

PAM “optimizes the performance of a NetApp storage system by improving throughput and latency while reducing the number of disk spindles,” said Sangster.

“The key I/O benefit of deploying one or more PAMs is that a hit to the PAM’s cache will considerably reduce the time it takes to fetch the data as compared to the same process occurring from disk,” he said.
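
A toy LRU read cache makes the point concrete. This is a sketch of the general caching technique, not NetApp’s implementation, and the latency figures are assumptions:

    # Toy LRU read cache: a hit costs a fraction of a disk read.
    # Latencies are illustrative assumptions, not NetApp figures.
    from collections import OrderedDict

    DISK_MS, CACHE_MS = 8.0, 0.1

    class ReadCache:
        def __init__(self, capacity_blocks):
            self.capacity = capacity_blocks
            self.blocks = OrderedDict()            # block id -> data

        def read(self, block_id):
            if block_id in self.blocks:
                self.blocks.move_to_end(block_id)  # refresh LRU position
                return CACHE_MS                    # cache hit
            if len(self.blocks) >= self.capacity:
                self.blocks.popitem(last=False)    # evict least recently used
            self.blocks[block_id] = "data"         # fill from disk
            return DISK_MS                         # cache miss

    cache = ReadCache(capacity_blocks=1000)
    print(cache.read(42))   # 8.0 ms: first read comes from disk
    print(cache.read(42))   # 0.1 ms: repeat read is ~80x faster from cache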

He added that in many high-performance computing environments, the bottleneck is not capacity but I/O throughput: how fast data can be written to or read from disk.

“If a data center is I/O constrained, IT will choose this card every time, not only for throughput but performance,” said Sangster.

HP (NYSE: HPQ) is also working on the problem. The company says its LeftHand P4000 SAN solutions automatically balance data volumes across all storage resources and identify performance issues by server, virtual machine and SAN volume.
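
Automatic balancing of that kind usually comes down to some form of least-loaded placement. Here is a greedy sketch of the general idea; HP’s actual algorithm is not public, and the volume names and IOPS figures below are made up:

    # Greedy least-loaded placement: assign each volume, heaviest first, to
    # the node with the least accumulated load. Illustrative only.
    import heapq

    def balance(volume_iops, node_count):
        nodes = [(0, n) for n in range(node_count)]    # (load, node id)
        heapq.heapify(nodes)
        placement = {}
        for vol, iops in sorted(volume_iops.items(), key=lambda kv: -kv[1]):
            load, node = heapq.heappop(nodes)
            placement[vol] = node
            heapq.heappush(nodes, (load + iops, node))
        return placement

    print(balance({"vm-sql": 900, "vm-mail": 500, "vm-web": 300, "vm-dev": 100}, 2))
    # {'vm-sql': 0, 'vm-mail': 1, 'vm-web': 1, 'vm-dev': 1} -> 900 vs 900 IOPS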

The engineering innovations at IBM, NetApp, BlueArc, Panasas and HP are going a long way toward ensuring that storage systems no longer have to be the weak link in virtual environments.
