Virtualization and the Need for Storage Acceleration Software

Server refreshes give organizations virtualization hosts with more processor cores and more power than ever before. That means it's possible to pack more virtual machines onto each host, but storage systems often struggle to cope with the resulting increase in I/O operations. The good news is that the software that controls on-server storage caches is becoming more effective in virtualized environments.

The Impact of Virtualization on Storage

The attraction of more powerful processors in virtualization hosts is clear: if you double the processing power, you should be able to run twice as many virtual machines as you did previously. That has important financial implications when it comes to per-processor virtualization software licensing costs.

Unfortunately, server virtualization is not as simple as that. A physical server running twice as many virtual machines produces twice as many I/O operations, and those operations become random rather than sequential thanks to the "I/O blender" effect. In fact, the number of I/O operations probably more than doubles as virtual machines are moved around.
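
As a rough illustration of the blender effect (the VM names and block addresses below are invented for the example), the following sketch interleaves three per-VM sequential streams the way a busy hypervisor does; the merged queue that the shared array actually sees jumps between distant block addresses:

```python
# Hypothetical sketch: three VMs each read sequentially within their own
# virtual disk, but the hypervisor interleaves their requests onto one
# physical queue bound for the shared storage array.
vm_streams = {
    "vm1": [("vm1", lba) for lba in range(0, 8)],        # sequential for vm1
    "vm2": [("vm2", lba) for lba in range(1000, 1008)],  # sequential for vm2
    "vm3": [("vm3", lba) for lba in range(5000, 5008)],  # sequential for vm3
}

# Round-robin interleaving approximates what the array sees.
blended = [req for batch in zip(*vm_streams.values()) for req in batch]

print(blended[:6])
# [('vm1', 0), ('vm2', 1000), ('vm3', 5000), ('vm1', 1), ('vm2', 1001), ('vm3', 5001)]
# Each stream is sequential on its own, but the merged queue hops between
# widely separated block addresses, which spinning disks handle poorly.
```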


The result? Storage systems that struggle to cope and applications that run slowly in their virtual machines.

Possible Solutions

When it comes to addressing the "write" side of the problem, one solution is to use a storage hypervisor such as VMware's Virsto. The storage hypervisor takes over the handling of I/O traffic from the standard virtualization hypervisor, sending writes to a high-performance staging area that acknowledges them immediately. The staging area then optimizes the writes, essentially turning each virtual machine's random write stream into a sequential one, before sending them on to a storage pool.
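
To make the staging idea concrete, here is a minimal sketch of the general acknowledge-then-coalesce technique; the class and method names are invented for illustration, and this is not Virsto's actual implementation:

```python
from collections import defaultdict

class WriteStager:
    """Sketch of a write-staging layer: acknowledge writes as soon as they
    land in the staging log, then flush them to the backing pool as sorted,
    near-sequential runs per volume."""

    def __init__(self, backend):
        self.backend = backend   # any object exposing write(volume, lba, data)
        self.log = []            # append-only staging area (fast media in practice)

    def write(self, volume, lba, data):
        self.log.append((volume, lba, data))
        return "ACK"             # acknowledged before reaching the storage pool

    def flush(self):
        # Group staged writes by backing volume and sort by block address,
        # turning each virtual machine's random writes into a sequential stream.
        by_volume = defaultdict(list)
        for volume, lba, data in self.log:
            by_volume[volume].append((lba, data))
        for volume, writes in by_volume.items():
            for lba, data in sorted(writes):
                self.backend.write(volume, lba, data)
        self.log.clear()
```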

A storage hypervisor such as VMware's Virsto can be very effective at speeding up virtualized server infrastructure, and the increased write speeds can also breathe new life into older SANs that are added to the storage pool.

The problem is that for most companies, I/O activity is very far from balanced. "It very much depends on the applications concerned, but most are biased towards reads," says Mark Peters, a senior analyst at Enterprise Strategy Group. "That means most organizations end up with a 60/40 split or even an 80/20 split in favor of reads."

There are several things you can do to speed up reads, including buying a large number of new disks for your SANs, short stroking and increasing the amount of RAM available. But it's probably more effective to carry out some form of storage tiering by introducing an on-server read cache using SSD storage. "Read caching is a very effective tool," confirms Peters.
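
As a rough sketch of how such a read cache behaves (the class and method names here are illustrative rather than any vendor's API), the SSD acts as a least-recently-used cache sitting between the host and the SAN:

```python
from collections import OrderedDict

class SSDReadCache:
    """Minimal LRU read cache standing in for an on-server SSD tier."""

    def __init__(self, san, capacity_blocks):
        self.san = san                      # any object exposing read_block(lba)
        self.capacity = capacity_blocks
        self.cache = OrderedDict()          # stands in for blocks held on local SSD

    def read_block(self, lba):
        if lba in self.cache:
            self.cache.move_to_end(lba)     # cache hit: served at SSD latency
            return self.cache[lba]
        data = self.san.read_block(lba)     # cache miss: go out to the SAN
        self.cache[lba] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used block
        return data
```

Hot blocks are served from the local SSD after the first miss, so repeated reads never touch the SAN at all, which is why read caching pays off so well for read-heavy workloads.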

Storage Acceleration Software

But there are still problems with adding an on-board SSD cache of the sort sold by companies like Fusion-io, IBM, EMC, OCZ and SanDisk, says Peters. One is cost: the SSD hardware, often hundreds of gigabytes of cache, is not cheap. That may not be an issue in smaller organizations with a limited number of physical hosts, but it matters more in complex virtualized environments where workloads are moved from host to host using technology such as VMware's vMotion. Unless every single physical host is equipped with an SSD cache, the presence of storage caches on some servers but not others restricts which workloads can be moved to which machine.
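
To make that placement constraint concrete, the hypothetical sketch below filters migration targets for a cache-dependent workload; vMotion does not expose such a function, and the attribute names are invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Host:
    name: str
    has_ssd_cache: bool   # illustrative flag: does this host carry a local SSD cache?

def eligible_targets(needs_cache, hosts):
    """Hosts a workload can safely be moved to."""
    return hosts if not needs_cache else [h for h in hosts if h.has_ssd_cache]

hosts = [Host("esx01", True), Host("esx02", False), Host("esx03", True)]
print([h.name for h in eligible_targets(True, hosts)])   # ['esx01', 'esx03']
```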


Tags: cache, virtualization, acceleration, Storage, software-defined storage

