VVOLs and VMware


The definition of VVOLs is simple but the effect is ground-breaking. Here is the simple definition part: Virtual Volumes (VVOL) is an out-of-band communication protocol between array-based storage services and vSphere 6.

And here is the ground-breaking part: VVOLs enables a VM to communicate its data management requirements directly to the storage array. The idea is to automate and optimize storage resources at the VM level instead of placing data services at the LUN (block storage) or the file share (NAS) level.

With VVOLs, VMware replaces these aggregated datastores with virtual volumes whose data services match individual VM requirements. VVOLs enable more granular control over VMs and increase their visibility on the storage array. Note, however, that the array still operates within its own limitations: if an administrator applies a policy to a VM with a specific snapshot schedule and the array cannot comply, then the VM doesn't get that schedule.

The Challenge of Tailoring Storage to VMs

Traditionally, the storage and VMware teams jointly establish an application's storage and protection requirements. They then create a storage pool on the array with matching data protection features and present that storage to the VMs via ESXi hosts.

Here is the basic process: the VM administrator meets with the storage administrator to lay out the application's storage and data protection needs. Sometimes the conversation is straightforward; sometimes there is conflict over what data services are available and what the VM needs. One way or another they work it out, and the storage administrator creates the storage pool on the array with the agreed-upon characteristics. These include sufficient capacity as well as RAID, service levels/QoS, replication, snapshots, and so on. The storage administrator then carves the storage pool into Fibre Channel or iSCSI LUNs or NFS shares and presents them to the ESXi host. The VMware administrator then directs vCenter to deploy the new VM to the array.

The disadvantage of this process is that the LUN or mount point settings do not change for individual VMs: they exist at the array level and apply to all the VMs on that LUN or file share. If the settings are not optimized for a given VM, the administrator either has to create a new LUN or file share with more appropriate settings or live with it.

All VMs need their storage to be available, to have sufficient capacity, to be recoverable within acceptable time periods, to access proper block sizes, and to offer sufficient performance. However, within these general requirements applications differ by block size, service levels, data protection settings, and RPO/RTO recoverability. Yet traditional VMware storage implementations depend on a single set of policies administered from the datastore.

For example, there will likely be a number of VMs on a LUN-based datastore. In this environment it is difficult to pinpoint storage issues, such as a VM that is consuming large amounts of resources. And it is quite impossible to assign different storage services to the VMs within the datastore; it’s all or nothing as far as previous versions of vSphere are concerned.

VVOLs

vSphere 6 changes all that with VVOLs and granular storage management for VMs. Administrators can now set storage policies according to application needs and apply them directly to each participating VM. Specific workloads receive specific policy settings to optimize performance. Depending on the array's capabilities (VVOLs cannot add functionality the array doesn't have), the storage system can now provide levels of availability, data protection/recoverability, optimal block sizes, and performance directly to the VM.
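To make the policy idea concrete, here is a minimal, purely illustrative Python sketch. It is not a VMware API; the policy name, rule keys, and capability values are invented for the example. It models a per-VM storage policy as a set of capability rules and checks whether an array's advertised capabilities can satisfy it, echoing the point above that VVOLs cannot grant a service the array does not offer.

```python
# Purely illustrative model of per-VM storage policies -- not a VMware API.
# A policy is a set of capability requirements; a VVOL-capable array
# advertises what it can actually deliver, and a VM only gets what matches.
from dataclasses import dataclass


@dataclass
class StoragePolicy:
    name: str
    rules: dict   # e.g. {"raid": "RAID-10", "snapshot_interval_min": 15}


@dataclass
class ArrayCapabilities:
    name: str
    supported: dict   # capability -> values the array can actually provide

    def can_satisfy(self, policy: StoragePolicy) -> bool:
        # Every rule in the policy must fall within the array's
        # advertised capabilities; otherwise the VM doesn't get it.
        return all(
            cap in self.supported and value in self.supported[cap]
            for cap, value in policy.rules.items()
        )


gold = StoragePolicy("gold-db", {"raid": "RAID-10", "snapshot_interval_min": 15})
array = ArrayCapabilities(
    "array-01",
    {"raid": ["RAID-5", "RAID-10"], "snapshot_interval_min": [15, 60, 240]},
)
print(array.can_satisfy(gold))   # True: this VM can be placed with this policy
```

If the array advertised only hourly snapshots, the same check would fail, which is exactly the "array can't comply" case described earlier.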

There are three major VVOL components: the vendor/storage provider, the protocol endpoint and the storage container.

1. The Vendor (or Storage) Provider (VP) manages data operations. The VP is a plug-in that storage vendors engineer for their specific VVOL-enabled arrays. This plug-in uses VMware's next-generation version of VASA: “vSphere Storage APIs – Storage Awareness.” VASA lets storage arrays integrate management functions with vCenter. (vSphere's other major API is VAAI: “vSphere Storage APIs – Array Integration.” VAAI primarily offloads cloning and migration operations to the array. It co-exists with VASA and VVOLs.) VVOLs use the newer VASA APIs to surface the array's storage management functions to vCenter and the VMs, and also to push VM-specific information to the array. This is the capability that lets storage and the VMs interact dynamically.

2. The Protocol Endpoint (PE) manages communications and access control. The PE is the access point between the host and the array, managing paths and policies for both block- and file-based data. PEs are necessary because ESXi hosts do not have direct visibility into the VVOLs: ESXi still issues data requests, and the protocol endpoint directs the IO between the VMs and their virtual volumes. This enables the tight integration between storage functions and individual VMs. This VASA-enabled bidirectional communication is new for vSphere; first-generation APIs were unidirectional between vSphere and the array. An array may have more than one PE.

3. The Storage Container (SC) manages data capacity. The SC is essentially a datastore, but instead of being LUN- or file-based it assigns chunks of available physical storage to VMs. Rather than the administrator creating a traditional storage pool to present to ESXi hosts, the storage container becomes visible to the hosts once the VP and PE are in place, and administrators can create VVOLs within it at will. Administrators can set up multiple policies, and each policy may have one or more rule sets. Upon VM creation, the administrator assigns a storage policy and picks from a list of SCs matching the policy (a discovery sketch follows below). At present the number of SCs an array supports is limited, usually 20 containers or fewer, and vSphere limits the number of SCs per host to 256.
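For readers who want to see storage containers from the vSphere side, here is a hedged sketch using the open-source pyVmomi Python SDK. It simply lists the datastores vCenter exposes and flags those backed by virtual-volume storage containers. The vCenter address and credentials are placeholders, and the assumption that such datastores report a summary type of "VVOL" should be verified in your own environment.

```python
# Sketch only: list datastores and flag VVOL-backed storage containers.
# Hostname, credentials, and the "VVOL" type string are assumptions.
import ssl

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

si = SmartConnect(
    host="vcenter.example.com",          # placeholder vCenter address
    user="administrator@vsphere.local",  # placeholder credentials
    pwd="secret",
    sslContext=ssl._create_unverified_context(),  # lab use only
)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.Datastore], True
    )
    for ds in view.view:
        s = ds.summary
        # Traditional datastores report "VMFS" or "NFS"; VVOL storage
        # containers surfaced to the host typically report "VVOL".
        kind = "VVOL storage container" if s.type == "VVOL" else s.type
        print(f"{s.name}: {kind}, {s.capacity // 2**30} GiB")
finally:
    Disconnect(si)
```

The same inventory walk works against VMFS- and NFS-only environments; VVOL-backed entries only appear once the vendor provider and protocol endpoint described above are in place.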

Vendors

It's not an easy task to engineer a storage array to support VVOLs. About 30 storage vendors have signed up as VVOL partners; only four had completed certification by the vSphere 6 general availability announcement. These four were first out of the gate: HP (3PAR StoreServ), IBM (XIV), NEC (iStorage M series), and SANBlaze (VirtuaLUN).

Many other array vendors are VVOL development partners. The list includes HDS (heavy adopter; NAS now, much more to come), EMC (VNX), NetApp (design partner; Data ONTAP 8.3), Dell (EqualLogic PS series; Compellent SC series), Nexenta (NexentaStor software), Nimble (Nimble OS), NexGen (N5), SanDisk (all-flash arrays), Tintri (VMstore), Tegile (hybrid arrays), and Violin (all-flash arrays).

They won't all be created equal. The differentiator between the arrays will be the data services they support for VVOLs. It's up to the array vendor to offer services to the VVOLs such as dedupe, cloning, snapshots, replication, compression, and more.

Even assuming that most of them will offer these basic services, how efficient will the services be? What limitations will they have? How much of the array will be reserved for SCs? These issues, plus Quality of Service, will determine how storage arrays differentiate themselves in the VVOL marketplace.
