Storage Virtualization Helps Alleviate Virtual Server Bottlenecks


Server virtualization has been good business for virtualization software and storage networking vendors, but for data storage users it can be a struggle: proliferating virtual machines (VMs) demand additional storage capacity and workarounds to offset the utilization and performance degradation that can accompany a virtual server environment (see Server Virtualization Drives Storage Networking Sales).

One solution to the problem, according to some in the storage industry, is storage virtualization.

According to Steven Murphy, CEO of unified storage vendor Reldata, some storage solutions can alleviate these problems through their ability to virtualize unused assets. Murphy suggests evaluating how well a storage vendor lets the customer leverage existing storage assets, especially multi-vendor assets, when fully virtualizing servers and aligning storage with them.

“It’s important for companies to recognize that with the implementation of new virtualization technology, they must have a strategy for aligning their storage capacity,” said Murphy. “Many companies are finding that one of the most effective strategies is implementing multi-vendor storage consolidation as part of their virtualization architecture.”

Compounding the mismatch between server virtualization and storage resources is a tough economy that has pushed IT shops to maximize virtual infrastructures by increasing virtual machine density, which in turn puts additional pressure on already strained storage infrastructure.

“IT managers are being forced to do more with their existing infrastructures with fewer resources available, and this is forcing them to look for new ways to reallocate free space on their storage to save on hardware,” said Koka Sexton, manager of business development at Paragon Software. “IT managers are learning to analyze their virtual infrastructures more closely and to migrate data to lower-performing drives to free resources.”

Problems typically occur when virtual machine performance is hindered by a server-to-storage I/O bottleneck because many traditional storage architectures cannot effectively manage the random I/O patterns created by virtual machines.

“Virtual machine servers generate I/O patterns that are different from single servers,” said Murphy. “And while it is true that the patterns are different, storage array products have long dealt with this issue because they had to service multiple physical servers, which effectively creates similar patterns.”
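
To see why, consider the so-called "I/O blender" effect. The Python sketch below is purely illustrative, with invented VM counts and block addresses: each VM reads its own virtual disk sequentially, but once the hypervisor interleaves those streams onto shared storage, the array sees an access pattern with almost no sequentiality left.

    # Hypothetical illustration of the "I/O blender" effect: each VM issues
    # a sequential stream within its own virtual disk, but the hypervisor
    # funnels every stream through one shared path, so the array sees
    # interleaved, near-random block addresses.

    def vm_stream(start_lba, io_count):
        """Sequential reads within one VM's virtual disk extent."""
        return [start_lba + i for i in range(io_count)]

    # Seven VMs (an assumed count), each with its own disk region.
    streams = [vm_stream(vm * 1_000_000, 100) for vm in range(7)]

    # The array receives a round-robin interleaving of all VM streams.
    blended = [lba for batch in zip(*streams) for lba in batch]

    def sequential_fraction(lbas):
        """Fraction of requests whose address follows the previous one."""
        hits = sum(1 for a, b in zip(lbas, lbas[1:]) if b == a + 1)
        return hits / (len(lbas) - 1)

    print(f"per-VM stream:  {sequential_fraction(streams[0]):.0%} sequential")
    print(f"blended stream: {sequential_fraction(blended):.0%} sequential")

Run against these assumed streams, each VM's own pattern is 100 percent sequential while the blended stream is effectively 0 percent, which is exactly the pattern traditional arrays struggle with.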

Michael Williams, director of marketing for Nexenta, cautions that not all applications should be virtualized. “Let’s say a customer has a physical server with a nearly full hard drive filled with OS files,” Williams said. “With traditional storage, virtualizing the server with seven VMs means that the customer needs seven copies of the OS. As VMs move among servers, it becomes very difficult to tune performance. So some applications, such as transactional applications that are the lifeblood of the organization [such as Sabre for a travel agency], very well should not be virtualized.”
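
Williams' arithmetic is easy to check. The sketch below contrasts seven full OS copies with a cloning or deduplicating storage layer that keeps one shared base image plus per-VM changes; the 20 GB image and 2 GB delta sizes are assumptions chosen for illustration, not figures from the article.

    # Back-of-the-envelope check of Williams' example: seven VMs on
    # traditional storage each carry a full OS image, while a cloning or
    # deduplicating layer stores one shared base image plus per-VM deltas.

    OS_IMAGE_GB = 20     # assumed size of one guest OS image
    PER_VM_DELTA_GB = 2  # assumed unique writable data per clone
    VM_COUNT = 7

    full_copies = VM_COUNT * OS_IMAGE_GB
    shared_base = OS_IMAGE_GB + VM_COUNT * PER_VM_DELTA_GB

    print(f"Full copies: {full_copies} GB")   # 140 GB
    print(f"Shared base: {shared_base} GB")   # 34 GB
    print(f"Savings:     {1 - shared_base / full_copies:.0%}")  # 76%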

Choosing the Right Storage Networking Environment

Some of the key factors in choosing a storage system for a virtual environment, according to Sexton, include a complete understanding of the virtual platform being implemented and confidence that the system can scale to meet future needs.

“In general, the more VMs you have on a host, the more NICs you’ll want,” said Sexton. “However, the network workload of these VMs is the biggest influence. For example, if VMs have light workloads, you’ll need fewer NICs; if VMs have heavier workloads, you’ll need more NICs. As a rule, you’ll probably experience other resource bottlenecks before the network becomes an issue on virtual hosts.”
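
Sexton's rule of thumb can be turned into a rough sizing exercise. The sketch below is a toy heuristic, not vendor guidance: the per-workload bandwidth figures, the 1GbE uplink, and the 70 percent utilization ceiling are all assumptions chosen for illustration.

    # A toy NIC-count heuristic in the spirit of Sexton's rule of thumb:
    # sum the expected network load of each VM and divide by a NIC's
    # usable bandwidth. All figures below are assumptions.
    import math

    WORKLOAD_MBPS = {"light": 50, "medium": 200, "heavy": 600}  # assumed
    NIC_GBPS = 1.0          # assumed 1 GbE uplinks
    USABLE_FRACTION = 0.7   # keep headroom below line rate

    def nics_needed(vm_workloads):
        demand_mbps = sum(WORKLOAD_MBPS[w] for w in vm_workloads)
        usable_mbps = NIC_GBPS * 1000 * USABLE_FRACTION
        return max(1, math.ceil(demand_mbps / usable_mbps))

    host = ["light"] * 10 + ["medium"] * 4 + ["heavy"] * 2
    print(f"{len(host)} VMs -> {nics_needed(host)} NIC(s)")  # 2500 Mbps -> 4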

Murphy said there are two critical factors to consider in choosing storage for virtualized environments. The first is whether the new architecture can be implemented and aligned with virtual servers without disrupting the application architecture. The second is whether it improves storage utilization by consolidating both block and file storage capacity while also improving performance.

Storage virtualization also allows for hardware savings and tiered storage for less critical data, noted Williams.

“Storage virtualization provides huge cost savings to the customer because it can use any combination of hardware, including generic white boxes, thereby not only minimizing initial costs but ongoing costs as well because the asset life is increased,” said Williams.

“Storage virtualization allows for highly available data to be recovered quickly, provides built-in metric tools that can help analyze storage use and create benchmarks, and has the ability to move VMs to a lower level disk so high performance drives are used for current projects,” said Sexton.
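
As a rough illustration of the tiering Sexton describes, the sketch below uses access metrics to demote VM disks from fast to slower storage once they go cold; the 14-day threshold, tier names, and field names are all hypothetical.

    # A minimal sketch of metric-driven tiering: demote VM disks that have
    # not seen heavy I/O recently so fast drives stay free for current
    # projects. Thresholds and fields are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class VmDisk:
        name: str
        days_since_heavy_io: int
        tier: str = "ssd"  # assume everything starts on the fast tier

    COLD_AFTER_DAYS = 14   # assumed policy threshold

    def retier(disks):
        for d in disks:
            if d.tier == "ssd" and d.days_since_heavy_io > COLD_AFTER_DAYS:
                d.tier = "sata"  # demote cold data to cheaper spindles
        return disks

    disks = [VmDisk("crm-db", 1), VmDisk("old-share", 40), VmDisk("build", 20)]
    for d in retier(disks):
        print(d.name, "->", d.tier)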

Emerging Storage Virtualization Technologies

Storage vendors also highlighted some new and emerging technologies that could further alleviate virtual bottlenecks.

Vish Mulchand, director of software product marketing at 3PAR (NYSE: PAR), cited array controller-based block virtualization because it enables technologies such as thin provisioning and dynamic optimization, including non-disruptive volume-level moves of data between different disk drives, non-disruptive RAID level changes, and non-disruptive re-striping of data across additional drives.

“Recent breakthroughs in block-level virtualization have provided additional thin capabilities: Fat-to-thin conversion, space reclamation at the block level for both user data and copy/snapshot data, and space reclamation via intelligent host file system integration,” said Mulchand. “In addition, block-level virtualization also lays the groundwork for further advances in non-disruptive sub-volume data migrations.”
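
A simplified model helps make the thin behavior concrete. The sketch below is not 3PAR's implementation; the block size and interface are assumptions. A thin volume advertises a large virtual size, allocates physical blocks only on first real write, and returns blocks to the free pool when the host zeroes them (space reclamation).

    # A simplified model of thin provisioning with block-level space
    # reclamation. Block size and API shape are illustrative assumptions.

    BLOCK_SIZE = 16 * 1024  # assumed 16 KiB allocation granularity

    class ThinVolume:
        def __init__(self, virtual_blocks):
            self.virtual_blocks = virtual_blocks  # advertised capacity
            self.allocated = {}                   # block index -> data

        def write(self, block, data):
            if data == b"\x00" * BLOCK_SIZE:
                self.allocated.pop(block, None)  # reclaim zeroed block
            else:
                self.allocated[block] = data     # allocate on first write

        def physical_gb(self):
            return len(self.allocated) * BLOCK_SIZE / 2**30

    vol = ThinVolume(virtual_blocks=2**26)  # ~1 TiB advertised
    vol.write(0, b"x" * BLOCK_SIZE)
    vol.write(1, b"y" * BLOCK_SIZE)
    vol.write(1, b"\x00" * BLOCK_SIZE)      # host zeroes a block; space returns
    print(f"allocated: {len(vol.allocated)} block(s), {vol.physical_gb():.6f} GB")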

Williams sees solid state drives (SSDs) catching on because they support virtualization approaches such as virtual desktops. He also likes Sun's (now Oracle's) ZFS file system, on which the Nexenta product is based, and he sees a bright future for 10Gb Ethernet, which, when engineered correctly, puts storage virtualization running over iSCSI in the same performance range as Fibre Channel.


Leslie Wood is an Enterprise Storage Forum contributor.
