Data Storage QoS: Still Emerging, But Inevitable


Quality of service (QoS) is a powerful storage system feature. Adoption so far has been limited, but it is becoming clear that storage QoS is a capability whose time has come. And in the not too distant future it will be part of just about every storage system.

That’s certainly the opinion of Henry Baltazar, a senior analyst at Forrester Research. He says that while storage QoS is still early in its adoption cycle – although major vendors like IBM and HP are already adding it to their product offerings – its proliferation is inevitable. “I am pretty sure that it will become standard in most products – especially when used in the cloud or in highly virtualized infrastructures,” he says.

In fact, storage QoS is useful not just in these environments but in any server room or data center – virtualized or not – where workloads share storage resources. That’s because it can help provide predictable and controllable storage performance levels, and it can solve the so-called “noisy neighbor” problem.

The noisy neighbor problem arises when an application or virtual machine consumes more than its fair share of storage resources, to the extent that it degrades the performance of other applications or virtual machines using the same storage. It can happen in an enterprise environment, but it can also affect individual cloud customers if they happen to share storage resources with other customers whose workloads are noisy neighbors.

QoS gives storage staff the tools they need to mitigate the noisy neighbor problem either by limiting the resources these applications or VMs can hog or by guaranteeing a minimum level of storage resources that other applications can access when they need them.

“The main point of storage QoS is that it provides you with superior management of the storage resources you have,” says Baltazar. “So at the high end, it provides you with the ability to guarantee that if you have an important database and it needs a certain number of IOPS, then your storage system will consistently provide them.”

In that respect storage QoS is similar to the type of network QoS many organizations use to ensure the quality of their VoIP calls. But when it comes to storage QoS in cloud environments Baltazar believes that it has other uses as well.

“An interesting nuance is that the power of QoS is not just to guarantee the high end, but also to limit the low end,” he says. “A bronze level customer on a shared array without QoS is not bound by any restrictions. But you don’t want bronze customers getting gold performance – that applies in cloud and in the enterprise too.

“If there are no boundaries then these customers can eat up cache and outbound bandwidth – and there you have the noisy neighbor problem. You have to have a way to ensure that the people who need (and perhaps pay for) high storage performance get it.”
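The cap-and-floor idea Baltazar describes is often built on rate limiting. As a rough illustration (a hypothetical sketch, not any vendor’s actual implementation), a per-tenant IOPS cap can be modeled as a token bucket: each admitted I/O spends a token, and tokens refill at the tenant’s maximum allowed rate, so a bronze customer physically cannot burst into gold-level performance.

```python
import time

class IopsLimiter:
    """Token-bucket cap on a tenant's IOPS.

    Illustrative only: real arrays enforce this in the I/O path
    and typically queue rather than reject throttled requests.
    """

    def __init__(self, max_iops):
        self.max_iops = max_iops
        self.tokens = float(max_iops)  # start with a full bucket
        self.last = time.monotonic()

    def try_io(self):
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at max_iops.
        self.tokens = min(self.max_iops,
                          self.tokens + (now - self.last) * self.max_iops)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True   # I/O admitted
        return False      # I/O throttled
```

A minimum guarantee works the other way around: the scheduler admits a protected tenant’s I/O ahead of others until its floor is met, then lets remaining capacity be shared.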

The way that storage vendors implement storage QoS is likely to vary widely – from software bolted on to existing hybrid arrays, to high-end, purpose-built flash-only systems aimed at public cloud providers and very large Internet-based organizations (think eBay).

At the SME end of the market, it’s likely that QoS will appear in hybrid storage offerings, with the QoS software helping to automate the placement of data on faster SSD storage or on slower but cheaper and more abundant spinning disk media. That’s the approach being taken by Utah-based Fusion-io with its ioControl Hybrid Storage, which includes QoS technology the company acquired when it purchased NexGen Storage a little over a year ago.

“The problem companies face is understanding which data needs to be on SSD and which needs to be on disk,” explains Chris McCall, senior director of ioControl marketing at Fusion-io. “Our software manages how much data needs to be on a disk or SSD and manages that dynamically.”
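The dynamic placement McCall describes can be sketched in miniature (this is a generic hot/cold tiering illustration under assumed inputs, not Fusion-io’s actual algorithm): rank blocks by how often they are accessed and keep the hottest ones on SSD, demoting the rest to spinning disk.

```python
def place_blocks(access_counts, ssd_capacity):
    """Assign each block to 'ssd' or 'hdd' by access frequency.

    access_counts: dict mapping block id -> recent access count.
    ssd_capacity: number of blocks the SSD tier can hold.
    Hypothetical sketch of automated tiering, not a vendor algorithm.
    """
    # Hottest blocks first; ties broken by dict iteration order.
    ranked = sorted(access_counts, key=access_counts.get, reverse=True)
    hot = set(ranked[:ssd_capacity])
    return {blk: ("ssd" if blk in hot else "hdd") for blk in access_counts}
```

A real system would re-run a decision like this continuously as access patterns shift, which is what “manages that dynamically” implies.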

Using Fusion-io’s system, storage staff label every storage volume they create as mission critical, business critical, or not critical – with the proviso that no more than half the total system capacity can be labelled mission critical.

“We have a filter stack, and every I/O block that is sent or requested goes through some intelligence to understand it. Then if it is a mission critical request for data it gets priority at any storage bottlenecks,” says McCall.

Staff can also give applications a speed requirement rating from 1 to 5, which affects the priority of storage traffic, so they actually have two ways of controlling application performance: an application’s criticality and its speed rating. The software then juggles the data onto solid state or spinning media, and prioritizes it through the I/O bottlenecks to try to ensure that the application performance is what it needs to be.
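The two-factor scheme described above – a criticality tier plus a 1-to-5 speed rating – maps naturally onto a priority queue. The sketch below is a loose, hypothetical model of that idea (tier names and ordering are assumptions, not ioControl’s internals): requests sort first by tier, then by speed rating, then first-in-first-out.

```python
import heapq
import itertools

# Lower tier number = higher priority (assumed mapping).
TIERS = {"mission": 0, "business": 1, "non-critical": 2}

_counter = itertools.count()  # preserves FIFO order among equal priorities

def enqueue(queue, tier, speed, request):
    """Queue an I/O request with a criticality tier and 1-5 speed rating."""
    # Negate speed so higher ratings sort ahead within the same tier.
    heapq.heappush(queue, (TIERS[tier], -speed, next(_counter), request))

def dequeue(queue):
    """Pop the highest-priority request at the current bottleneck."""
    return heapq.heappop(queue)[3]
```

At a congested bottleneck, mission-critical traffic always drains first, and within a tier the faster-rated applications drain first – the behavior McCall describes.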

At the other end of the market, Colorado-based SolidFire offers a range of three all-flash scale-out storage systems with QoS for use in public and large private cloud infrastructures. These allow administrators to set capacity and performance separately, and “dial up” or “dial down” performance for each application as necessary. Cloud providers can even expose these controls to customers so that they can alter performance themselves.

“This lets service providers deliver exactly what they promise to customers,” says Jay Prassl, SolidFire’s marketing VP. “Cloud providers are all about application density, and as you step that up, if you can’t control application performance any other way then you have to over-allocate storage, which is very inefficient.”

SolidFire’s system stripes every application across every SSD, and when performance is dialed up for a particular app, more IOPS are called from each drive. “That means I can deliver 100 IOPS for one app, and get 1,000 IOPS for another,” he explains. He adds that while hybrid systems work for small and mid-market customers, at the high end he believes the unpredictable performance of data migrating back and forth between spinning and solid state disks means the only practical option is to go all solid state.
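Because every application is striped across every drive, dialing performance up or down amounts to changing an app’s share of the array’s total IOPS budget. A minimal sketch of that idea (a proportional-share illustration, not SolidFire’s actual allocator) divides the budget by each volume’s dialed-in setting:

```python
def allocate_iops(total_iops, settings):
    """Split an array's IOPS budget across volumes.

    settings: dict mapping volume name -> dialed-in performance weight.
    Returns each volume's share, proportional to its weight.
    Illustrative sketch only; weights are hypothetical.
    """
    weight_sum = sum(settings.values())
    return {vol: total_iops * w / weight_sum for vol, w in settings.items()}
```

With a 1,100 IOPS budget and weights of 100 and 1,000, one app gets 100 IOPS and the other 1,000 – Prassl’s example. Exposing the weights to customers is what lets them “alter performance themselves.”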

Right now storage systems that offer QoS are the exception rather than the rule, and Forrester’s Baltazar guesses it will be some years before they become commonplace. “Vendors are only now getting to the point of delivering QoS, and customers and consultants will have to be educated. Then there will have to be testing, policies drawn up and methodologies devised, so there will be a QoS learning process,” he says.

That means that despite the availability of products such as Fusion-io’s and SolidFire’s, the storage QoS revolution has only just begun. “It will almost certainly become standard eventually like thin provisioning, but don’t expect it to happen overnight,” he concludes.


Paul Rubens
Paul Rubens is a technology journalist based in England and is an eSecurity Planet and Datamation contributor.
