Five Tips for Hyper-V Storage

While VMware once thoroughly dominated the server virtualization field, other vendors are eating into its market share. Leading the way is Microsoft’s Hyper-V, which, according to IDC, had a 27.6% share last year, up from 20.3% in 2008.

“In addition to its undeniable technical benefits, it doesn’t hurt that it comes free with Windows,” says Kelly Murphy, co-founder and Chief Strategy Officer of Gridstore. “However, it’s not without its drawbacks.”

One key consideration is making sure that your storage architecture is designed to work well with the Hyper-V platform. Here are five tips to follow when deploying such storage.

Eliminate the I/O Blender Effect

When many VMs funnel their I/O through one shared array, their individually sequential streams blend into highly random I/O (the so-called I/O blender effect). Traditional storage handles this poorly: it severely impacts the performance of VMs and creates complex provisioning issues that result in expensive storage resources being over-provisioned to try to address the performance problems.

“Users can eliminate this issue manually by: separating the I/O into a channel per-VM basis, identifying the I/O signature of the VM and optimizing the I/O pattern, and then allocating appropriate compute, network and storage resources to ensure QoS on a per VM basis,” says Murphy. “Or, better yet, IT professionals should find a solution that will do all of this for them in an automatic and continual fashion.”

Then the performance must be monitored to ensure that the application I/O has actually been accelerated and prioritized as expected. This monitoring should be done at the server level.

“Unlike traditional storage that can only implement QoS inside the array (where it’s too late to prioritize),” says Murphy, “users need to seek a solution that automatically prioritizes I/O before it leaves the server in order to eliminate the performance impact of ‘noisy neighbors’ on the most important apps.”
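
For readers who want to see what per-VM prioritization looks like in practice, the sketch below uses Python to call the standard Hyper-V PowerShell storage QoS cmdlets (available in Windows Server 2012 R2 and later) and then reads the settings back on the host. The VM name sql-vm and the IOPS figures are hypothetical placeholders; this is a minimal illustration, not any vendor’s automated solution.

    # Sketch: per-VM storage QoS on a Hyper-V host (Windows Server 2012 R2+).
    # Assumes the Hyper-V PowerShell module is present. The VM name and the
    # IOPS values below are illustrative placeholders, not recommendations.
    import subprocess

    def run_ps(command: str) -> str:
        """Run a PowerShell command on the host and return its output."""
        result = subprocess.run(
            ["powershell", "-NoProfile", "-Command", command],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    VM_NAME = "sql-vm"  # hypothetical VM

    # Guarantee an IOPS floor and set a ceiling for one virtual disk, so a
    # noisy neighbor cannot starve the important workload.
    run_ps(
        f"Set-VMHardDiskDrive -VMName {VM_NAME} "
        "-ControllerType SCSI -ControllerNumber 0 -ControllerLocation 0 "
        "-MinimumIOPS 300 -MaximumIOPS 2000"
    )

    # Verify at the server level that the limits are actually in place.
    print(run_ps(
        f"Get-VMHardDiskDrive -VMName {VM_NAME} | "
        "Select-Object VMName, Path, MinimumIOPS, MaximumIOPS | Format-Table"
    ))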

Select the Right File System

Calvin Nieh, NetApp Product Marketing Manager, says that designing storage for Hyper-V isn’t much different from designing for other types of environments.

“You design the storage (spindle count, raid levels, etc.) for the applications inside the virtual machines,” he says. “The difference is how you deploy the storage, and the process is what really matters.”

He says that companies deploying Hyper-V can go with independent LUNs, a Cluster Shared Volume (CSV), or a Common Internet File System (CIFS) share.

LUNs are good for high-speed access from a single server to a container and keep a VM and its load separate from all other VMs. CSVs combine multiple VMs in a shared space and are good for high-speed access from a Windows Failover Cluster; the disadvantage is that they require Windows Failover Clustering and restrict Live Migration to hosts inside the cluster. Going with a CIFS share allows Live Migration outside the cluster, yet the multipathing is not as strong as when using Microsoft’s Multipath I/O (MPIO).

Once the file system type is determined, the connection type can be selected: Direct Attached Storage or SAN for a LUN; a block protocol for CSV; or Network Attached Storage for CIFS.
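
As a rough illustration of how that choice surfaces during provisioning, the sketch below (Python driving the standard Hyper-V New-VHD and New-VM cmdlets) creates the same kind of virtual disk and VM twice: once on a hypothetical CSV mount point and once on a hypothetical CIFS/SMB share. All names, sizes, and paths are placeholders.

    # Sketch: where the file system choice shows up when provisioning a VM.
    # The paths, VM names, and sizes below are hypothetical; the cmdlets are
    # from the standard Hyper-V PowerShell module.
    import subprocess

    def run_ps(command: str) -> None:
        subprocess.run(["powershell", "-NoProfile", "-Command", command], check=True)

    # Option A: VHDX on a Cluster Shared Volume (requires Windows Failover Clustering).
    csv_disk = r"C:\ClusterStorage\Volume1\web-vm\web-vm.vhdx"  # hypothetical CSV path
    run_ps(f"New-VHD -Path '{csv_disk}' -SizeBytes 60GB -Dynamic")
    run_ps(f"New-VM -Name web-vm -MemoryStartupBytes 4GB -VHDPath '{csv_disk}'")

    # Option B: VHDX on a CIFS/SMB file share (allows Live Migration outside a cluster).
    share_disk = r"\\fileserver\hyperv-share\app-vm\app-vm.vhdx"  # hypothetical UNC path
    run_ps(f"New-VHD -Path '{share_disk}' -SizeBytes 60GB -Dynamic")
    run_ps(f"New-VM -Name app-vm -MemoryStartupBytes 4GB -VHDPath '{share_disk}'")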

Use DRAM Cache to Solve I/O Issues

“When Windows Server administrators consolidate applications onto a Hyper-V infrastructure, the first thing they notice is an increased mix of random I/O,” says Bill Schilling, marketing director, Imation’s Nexsan solutions. “This typically puts traditional disk-based storage arrays at a performance disadvantage; hard disk drives struggle to keep up with highly randomized operations, impacting both read and write performance.”

Switching to a pure solid state storage system will speed up the I/O; but this approach is costly and solid state drives have a shorter life span than disks. Since only a portion of the data is in active use at a given time, using DRAM and SSDs to cache active data is less expensive than a pure SSD system, while providing similar performance.

“A DRAM write cache can aggregate writes and deliver them to disk in a single IOP, greatly increasing system performance and efficiency,” says Schilling. “A little write cache can go a long way if you have, for example, an e-commerce database with rapidly changing data.”
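
To make the write-aggregation idea concrete, here is a toy, in-memory sketch of a cache that absorbs overwrites in a DRAM-style buffer and pushes them to the backing store as one batched operation instead of many small writes. It is a conceptual illustration only, not a model of any particular array’s cache.

    # Toy illustration of a write-back cache that coalesces small writes and
    # flushes them to the backing store in one batch. Conceptual sketch only.
    class WriteCoalescingCache:
        def __init__(self, backing_store: dict, flush_threshold: int = 64):
            self.backing_store = backing_store   # stands in for the disk tier
            self.dirty = {}                      # DRAM-resident dirty blocks
            self.flush_threshold = flush_threshold

        def write(self, block_id: int, data: bytes) -> None:
            # Overwrites of the same block are absorbed in memory; only the
            # latest version of a hot block ever reaches disk.
            self.dirty[block_id] = data
            if len(self.dirty) >= self.flush_threshold:
                self.flush()

        def flush(self) -> None:
            # One batched update replaces many individual small writes.
            self.backing_store.update(self.dirty)
            self.dirty.clear()

    disk = {}
    cache = WriteCoalescingCache(disk, flush_threshold=4)
    for i in range(8):
        cache.write(i % 2, f"update-{i}".encode())  # rapidly changing hot blocks
    cache.flush()
    print(disk)  # only the latest version of each hot block was written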

Avoid Single Points of Failure

Virtualization can increase the high availability of mission-critical workloads, but only if the storage architecture contains the necessary resiliency.

“When servers are consolidated, the shared storage array becomes the focal point that must serve many Hyper-V hosts simultaneously and therefore, storage performance is squeezed,” says Parissa Mohamadi, HP Storage Marketing. “To address these new, increased demands on the underlying infrastructure, storage must be designed with resiliency features where single points of failure are minimized within the array to properly handle the numerous hosts depending on it.”

Mohamadi recommends deploying as simple a storage design as possible to reduce the number of components that could fail. Companies should also use a robust multipathing mechanism such as Microsoft’s Multipath I/O (MPIO) for high availability. In addition, he suggests that copies of data be kept on separate storage devices so they are not all placed on a single hardware component that could fail.

“The underlying storage infrastructure must deliver high levels of service during unplanned hardware and software failures as well as routine hardware maintenance—without performing disruptive failovers,” he says. “This way no midnight outage windows are needed when changing servers or performing firmware upgrades.”
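
As a minimal sketch of the multipathing piece of that advice, the snippet below uses Python to invoke the built-in Windows MPIO cmdlets on a Hyper-V host: install the feature, claim iSCSI-attached devices, and set a round-robin path policy so the loss of a single path or fabric is absorbed. The bus type and policy are illustrative choices, and a reboot is typically required after installing the feature.

    # Sketch: enabling Microsoft Multipath I/O (MPIO) on a Hyper-V host so each
    # LUN is reachable over more than one path. Run on the host itself; the
    # bus type and load-balance policy below are illustrative.
    import subprocess

    def run_ps(command: str) -> None:
        subprocess.run(["powershell", "-NoProfile", "-Command", command], check=True)

    # Install the MPIO feature (Windows Server); a reboot is usually required.
    run_ps("Install-WindowsFeature -Name Multipath-IO")

    # Claim iSCSI-attached devices for MPIO (use -BusType SAS for SAS arrays).
    run_ps("Enable-MSDSMAutomaticClaim -BusType iSCSI")

    # Round-robin I/O across all available paths.
    run_ps("Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy RR")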

Scale Out, Not Up

With storage consuming as much as 40% of virtualization budgets, Kevin Brown, CEO of Coraid, says that companies can save a lot of money by dropping Fibre Channel or Fibre Channel over Ethernet, and still boost performance.

“Virtualization platforms such as Hyper-V have transformed compute infrastructure to a scale-out architecture,” says Brown. “Using traditional controller-based scale-up storage with this scale-out compute fabric creates a mismatch that leads to bottlenecks and scaling complexity.”

Enterprise storage arrays use “scale up” designs, with proprietary storage controllers driving daisy-chained shelves of drives. As deployments grow, the processors and disk connectivity become performance bottlenecks due to the I/O blender effect, forcing forklift upgrades to handle growing capacity.

In contrast, scale-out designs use massively parallel architectures built from off-the-shelf hardware and intelligent software to deliver maximum scalability and elasticity. No forklift upgrades are required as data volumes grow; capacity is added just in time and performance scales linearly.

“The goal should be to leverage massive amounts of commodity hardware in both compute and storage to create elastic pools of processing power and resilient storage,” says Brown.

Drew Robb
Drew Robb is a contributing writer for Datamation, Enterprise Storage Forum, eSecurity Planet, Channel Insider, and eWeek. He has been reporting on all areas of IT for more than 25 years. He has a degree from the University of Strathclyde in the UK and lives in the Tampa Bay area of Florida.
