Data Storage QoS: Still Emerging, But Inevitable - Page 2


Using Fusion-io's system, storage staff label every storage volume they create as mission critical, business critical, or not critical, with the proviso that no more than half the total system capacity can be labeled mission critical.

"We have a filter stack, and every I/O block that is sent or requested goes through some intelligence to understand it. Then if it is a mission critical request for data it gets priority at any storage bottlenecks," says McCall.

Staff can also give applications a speed requirement rating from 1 to 5, which affects the priority of storage traffic, so they have two levers for controlling application performance: an application's criticality and its speed rating. The software then places the data on solid-state or spinning media accordingly, and prioritizes it through the I/O bottlenecks so that each application gets the performance it needs.
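The two-factor scheme described above can be sketched as a simple priority queue, where criticality dominates and the speed rating breaks ties. Everything here is illustrative, from the class name to the assumption that a rating of 1 means fastest; it is not Fusion-io's actual implementation.

```python
import heapq

# Lower values mean higher priority; names are illustrative assumptions.
CRITICALITY = {"mission": 0, "business": 1, "non": 2}

class IoScheduler:
    """Sketch of a bottleneck queue ordered by (criticality, speed rating)."""

    def __init__(self):
        self._queue = []
        self._seq = 0  # FIFO tie-breaker for requests with equal priority

    def submit(self, volume, criticality, speed_rating, request):
        # speed_rating: 1 (fastest) to 5 (slowest), assumed; criticality dominates
        key = (CRITICALITY[criticality], speed_rating, self._seq)
        heapq.heappush(self._queue, (key, volume, request))
        self._seq += 1

    def next_request(self):
        # Dequeue the highest-priority pending request
        _, volume, request = heapq.heappop(self._queue)
        return volume, request

sched = IoScheduler()
sched.submit("vol-archive", "non", 5, "read block 17")
sched.submit("vol-erp", "mission", 1, "write block 3")
sched.submit("vol-mail", "business", 2, "read block 9")

first = sched.next_request()  # the mission-critical request is served first
```

The point of the two-level key is that a non-critical request can never jump ahead of a mission-critical one, no matter how its speed rating is set.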

At the other end of the market, Colorado-based SolidFire offers a range of three all-flash scale-out storage systems with QoS for use in public and large private cloud infrastructures. These allow administrators to set capacity and performance separately, and "dial up" or "dial down" performance for each application as necessary. Cloud providers can even expose these controls to customers so that they can alter performance themselves.
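That model, with capacity fixed at creation and performance adjustable at any time, can be sketched roughly as follows. The class and field names (`QosVolume`, `min_iops`, `max_iops`, `dial`) are hypothetical, not SolidFire's actual API.

```python
# Illustrative sketch: capacity and performance set independently per volume.
# All names here are assumptions for illustration, not a vendor API.

class QosVolume:
    def __init__(self, name, capacity_gb, min_iops, max_iops):
        self.name = name
        self.capacity_gb = capacity_gb  # capacity: fixed at creation
        self.min_iops = min_iops        # guaranteed performance floor
        self.max_iops = max_iops        # hard performance ceiling

    def dial(self, min_iops=None, max_iops=None):
        # Adjust performance without resizing; a provider could expose
        # this call to tenants so they tune performance themselves.
        if min_iops is not None:
            self.min_iops = min_iops
        if max_iops is not None:
            self.max_iops = max_iops

vol = QosVolume("tenant-db", capacity_gb=500, min_iops=500, max_iops=2000)
vol.dial(max_iops=5000)  # dial performance up; capacity is unchanged
```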

"This lets service providers deliver exactly what they promise to customers," says Jay Prassl, SolidFire's marketing VP. "Cloud providers are all about application density, and as you step that up, if you can't control application performance any other way then you have to over-allocate storage, which  is very inefficient."

SolidFire's system stripes every application across every SSD, and when performance is dialed up for a particular app, more IOPS are drawn from each drive. "That means I can deliver 100 IOPS for one app, and get 1,000 IOPS for another," he explains. He adds that while hybrid systems work for small and mid-market customers, at the high end the unpredictable performance of migrating data back and forth between spinning and solid-state disks means the only practical option is to go all solid state.
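A generic mechanism for enforcing per-application IOPS caps like those is a token bucket; the sketch below is an illustration under that assumption, not SolidFire's implementation.

```python
# Minimal token-bucket sketch of per-application IOPS limiting; generic
# illustration only, not any vendor's actual mechanism.

class IopsLimiter:
    def __init__(self, iops_limit, now=0.0):
        self.rate = float(iops_limit)      # tokens refilled per second
        self.capacity = float(iops_limit)  # burst size: one second's worth
        self.tokens = self.capacity        # start with a full bucket
        self.last = now

    def try_io(self, now):
        # Refill tokens for the elapsed time, capped at bucket capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0  # one token per I/O
            return True
        return False  # over the cap: this I/O is throttled

slow = IopsLimiter(100)
fast = IopsLimiter(1000)
# Offer each app 20,000 I/O attempts over two simulated seconds
slow_done = sum(slow.try_io(i / 10000) for i in range(20000))
fast_done = sum(fast.try_io(i / 10000) for i in range(20000))
# slow_done settles near 300 (100-token burst plus 2 s of refill at 100/s),
# fast_done near 3,000: a 10x performance difference on identical offered load
```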

Right now, storage systems that offer QoS are the exception rather than the rule, and Forrester's Baltazar expects it will be some years before they become commonplace. "Vendors are only now getting to the point of delivering QoS, and customers and consultants will have to be educated. Then there will have to be testing, policies drawn up, and methodologies devised, so there will be a QoS learning process," he says.

That means that despite the availability of products such as Fusion-io's and SolidFire's, the storage QoS revolution has only just begun. "It will almost certainly become standard eventually like thin provisioning, but don't expect it to happen overnight," he concludes.

