Software-Defined Storage: Where It’s Headed
While storage marketing and technology labels come and go, software-defined storage (SDS) remains the preferred term for the trend toward data storage becoming independent of the underlying hardware. The term itself is somewhat loose but generally encompasses policy-based provisioning and data management in a hardware-agnostic fashion.
Features such as pooling, automation and tiering may also be part of the mix. Some go as far as to consider anything cloud-related to be in the software-defined camp. But regardless of the various opinions on how to define it, there are some definite views on where software-defined storage is heading.
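To make "policy-based provisioning" a little more concrete, here is a minimal sketch of a placement policy that puts a volume on the cheapest storage pool meeting its latency requirement. The tier names, latencies, and prices are invented for illustration and do not reflect any vendor's actual API or pricing.

```python
from dataclasses import dataclass

# Hypothetical tiers for illustration only -- not any vendor's real catalog.
@dataclass
class Tier:
    name: str
    latency_ms: float
    cost_per_gb: float
    free_gb: int

def provision(tiers, size_gb, max_latency_ms):
    """Pick the cheapest tier that satisfies the policy's latency and capacity limits."""
    candidates = [t for t in tiers
                  if t.latency_ms <= max_latency_ms and t.free_gb >= size_gb]
    if not candidates:
        raise RuntimeError("no tier satisfies the placement policy")
    best = min(candidates, key=lambda t: t.cost_per_gb)
    best.free_gb -= size_gb
    return best.name

tiers = [Tier("nvme-flash", 0.2, 0.50, 800),
         Tier("sata-hdd", 8.0, 0.03, 20000)]

print(provision(tiers, size_gb=500, max_latency_ms=1.0))   # only flash is fast enough
print(provision(tiers, size_gb=500, max_latency_ms=20.0))  # cheapest qualifying tier wins
```

The point of the sketch is that the administrator expresses intent (size, latency) rather than naming hardware; the policy engine decides where the volume lands.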
“Easily programmable, elastic hybrid and multi-cloud infrastructures will continue their emergence in 2017,” said Ihab Tarazi, CTO at Equinix. “We can expect to see some companies deploy distributed storage in some locations, leveraging their network hubbing and hybrid cloud interconnection.”
Let’s look at some of the top trends and predictions for software-defined storage.
SDS Catches Up
2016 was the year SDS caught up with traditional storage on several fronts, said Lee Caswell, vice president of Products, Storage and Availability, VMware.
First, he added, key enterprise-class storage features like inline deduplication, compression and quality of service were introduced and became mainstream across the leading software-defined storage solutions. Second, due to the rapid decline in flash storage costs, especially when comparing server-side flash with the premium sometimes charged for array-based flash, the performance and reliability of software-defined storage made it well suited to any virtualized workload.
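For readers unfamiliar with how inline deduplication works, the core idea can be shown in a toy sketch: each block is fingerprinted before it is written, and blocks already seen are stored only once. This assumes fixed-size blocks and SHA-256 fingerprints; production arrays use far more elaborate metadata and hash handling.

```python
import hashlib

# Toy inline-deduplicating block store: fingerprint each block on write,
# keep one physical copy per unique fingerprint, and count references.
class DedupStore:
    BLOCK_SIZE = 4096

    def __init__(self):
        self.blocks = {}      # sha256 digest -> block bytes (one physical copy)
        self.refcount = {}    # sha256 digest -> number of logical references

    def write(self, data: bytes):
        """Return the list of block fingerprints that describe `data`."""
        fingerprints = []
        for i in range(0, len(data), self.BLOCK_SIZE):
            block = data[i:i + self.BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            if digest not in self.blocks:          # new unique block: store it
                self.blocks[digest] = block
            self.refcount[digest] = self.refcount.get(digest, 0) + 1
            fingerprints.append(digest)
        return fingerprints

store = DedupStore()
store.write(b"A" * 8192)   # two identical 4 KiB blocks -> one physical copy
store.write(b"A" * 4096)   # duplicate of an existing block -> no new copy
print(len(store.blocks))   # 1 unique block actually stored
```

Doing this "inline," i.e. on the write path rather than as a later scrub, is exactly the feature Caswell describes becoming mainstream in SDS products.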
“These changes in 2016 led to the rapid growth in software-defined storage and a drastic shift in how IT was architected, especially among innovator and early-adopter customers,” said Caswell.
As we move into 2017, he sees software-defined storage democratizing IT resource management so IT generalists can manage infrastructure as an integrated solution platform, rather than as resource silos.
“Combining storage management with compute and network will enable more efficient use of resources and accelerate troubleshooting, freeing resources to focus on more strategic business initiatives,” said Caswell.
By mid-2017, all of the major server vendors are expected to begin shipping hardware based on Intel’s latest CPU microarchitecture, known as Skylake. These upcoming server processors represent a significant performance upgrade from the previous generation of chips and some expect them to trigger the largest server refresh cycle of the last five years.
“Software-defined storage solutions will be able to take immediate advantage of them while traditional storage systems face much longer hardware refresh cycles and target new systems based on the new microarchitecture in 2018 or beyond,” said Caswell.
Some predict that SDS will herald a new multi-cloud era in 2017. Leveraging the power of software-defined infrastructure that is not tied to a specific hardware platform and configuration, users will be able to embrace a defined cloud strategy that evolves from what they are doing today.
As a result, IT has to be prepared to support new application models designed to bring the simplicity and agility of cloud to on-premises infrastructure. At the same time, new software-defined infrastructure enables a flexible multi-cloud architecture that extends a common and consistent operating environment from on-prem to off-prem, including public clouds, said Caswell.
Primary Meets Secondary
Kate Davis, Manager, HPE Storage Marketing, sees several trends evolving this year related to software-defined storage. The first concerns primary and secondary storage being brought closer together.
“This consolidation and integration reduces the complexity of having a separate backup server as well as speeds up the backup process and window,” said Davis. “Flash has made applications and primary storage work faster, and secondary storage needs to work at the same speed.”
She makes a couple of additional predictions:
· Increasing hybrid IT capabilities with data mobility between SDS and dedicated storage systems and/or public cloud instances.
· Ongoing management integration into hypervisor tools, compute platforms, hyper-converged systems, and next-generation composable infrastructures.
For decades, many have talked about breaking down the silos that exist between data stores. Yet it never seems to happen. Or maybe that’s unfair. Perhaps it has been happening but the volume of data has grown so much that there appear to be more silos than ever. So could this be the year?
“I see 2017 as being the year when software defined storage starts to break down traditional silos,” said Paul LaPorte, Director of Products, Metalogix.
For example, content storage and archiving are still frequently viewed as separate functions. The reality is that artificial barriers between the two will fall as more connectivity is established. More connectors, APIs, and integration are being created as organizations push forward and look for ways to get more organizational value from content, said LaPorte.
Another prediction for the year is the hybridization of software-defined storage uses. Many organizations have so far used SDS on-premises, managing content within the physical bounds of the organization. As more and more workloads shift to the cloud, organizations are starting to view content as borderless. Some content will reside inside the company; other content will float through the cloud, nomadically seeking the cheapest storage for low-value persistent content.
“Traditional boundaries between on-premises and the cloud will fall, similar to what has happened in the application or SaaS space,” said LaPorte. “More than just shifting content storage to the cloud, content will start to become disassociated with a ‘here or there’ mindset. Content remains fluid, always seeking a balance of best use, lowest cost, and proper compliance.”
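LaPorte's notion of content "always seeking a balance of best use, lowest cost, and proper compliance" can be sketched as a simple placement policy. The location names, per-gigabyte costs, and residency rules below are invented for illustration; a real system would weigh many more factors.

```python
# Hypothetical "borderless" placement: pick the cheapest location whose
# region set satisfies the item's data-residency (compliance) rule.
LOCATIONS = {
    "on-prem":    {"cost_per_gb": 0.10, "regions": {"eu", "us"}},
    "cloud-cold": {"cost_per_gb": 0.01, "regions": {"us"}},
}

def place(item):
    """Return the cheapest location that meets the item's residency requirement."""
    allowed = {name: loc for name, loc in LOCATIONS.items()
               if item["required_region"] in loc["regions"]}
    return min(allowed, key=lambda n: allowed[n]["cost_per_gb"])

print(place({"name": "archive.tar", "required_region": "us"}))  # cheapest wins
print(place({"name": "gdpr-data",  "required_region": "eu"}))   # compliance wins
```

The key property is that content is not pinned to "here or there": the same policy can re-evaluate placement whenever costs or compliance rules change.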
Software-Defined Data Center
What is the ultimate end game for all things software-defined? Beyond the storage aspect, it is the actualization of the software-defined data center (SDDC). Hyperconverged infrastructure (HCI) is one of the key elements of achieving that goal. But to attain it, HCI must deliver predictable performance, beyond storage, to all elements of data center management.
“As more companies adopt HCI for the Capex and Opex benefits, they are also looking to deploy across a wider range of application workloads, including mission critical and multiple, diverse application workloads,” said George Wagner, Senior Product Marketing Manager, Pivot3. “This is creating an increased demand for solutions that offer predictable application performance to guarantee the business results, along with the expected simple deployment and economic advantages of HCI.”
Finally, this could be the year when SDS comes of age, moving from high potential to realizing at least some of its many possibilities.
“I see 2017 as the year of maturity for SDS,” said Davis. “It’s at a point where adoption is fully accepted, and there is a moderately sized variety of product choices in the market, all with enterprise-class storage features and functionality.”