Software Defined Storage: Getting Down to Specifics


The PR roar surrounding software defined storage (SDS) and the software defined data center (SDDC) has gotten louder of late. This month’s Storage Networking World show featured many presentations on both subjects. Yet many users are still struggling to come to terms with exactly what the terms mean.

Anil Vasudeva, an analyst with IMEX Research, explained that the runaway success of server virtualization opened the door to software-driven provisioning of pooled resources tailored to workload requirements. The software defined data center is the next step in this evolution, taking virtualization beyond the server to encompass networking and storage as well. For example, data protection, encryption, compression, snapshots, deduplication and auto-tiering can become integrated services that are dynamically linked to the applications they serve.

EMC Chief Technologist John Cooper called software defined storage the foundation of push-button IT. He made the case for automation as follows:

Compared to the days when flying a plane was a wholly manual affair, today’s planes are 2,000 times safer. Yet the pilot of a 747 operates a machine with over a million parts; that pilot is therefore wholly dependent on automation to handle such complexity.

Similarly, on the automotive front, human error accounts for 94% of accidents. We are already seeing automation in features such as GPS, self-parking and braking when an obstacle is detected. The Google driverless car has clocked half a million miles with two accidents: one when it was rear-ended and the other while it was being driven manually. Cooper believes that with the number of vehicles per thousand people set to soar from 40 today to over 300 in 20 years, automotive automation is coming.

“These examples solve the challenges of scale and complexity by realizing we must give up direct human control and rely on programmable software,” said Cooper. “IT automation is here today and its off-the-shelf tools have matured.”

Examples include VMware vCloud Automation Center, newScale, Eucalyptus, ServiceMesh and ServiceNow. Cooper noted that running your own cloud tends to tie up IT in internal management of existing resources. One simpler approach is to use VCE Vblock as a fast way to deploy to the cloud and SDDC while freeing up IT for more strategic tasks. Yet, as the list above shows, there are many options.

“Virtualized software is replacing specialized hardware in the data center,” said Cooper. “This makes IT into more of a broker of services such as virtual desktops, SaaS applications, public cloud apps, and private cloud apps.”

Getting Down to Specifics

Stuart Berman, CEO of Jeda Networks, got down to brass tacks on how all this should be implemented. Extracting the control plane from the physical storage network would simplify switch requirements and enable IT to deploy lower-cost switches. Because the controllers would be virtualized, they could scale better simply by being given the CPU, memory and networking resources they need.

In a SAN setting, the SDS controller would be able to communicate with servers and storage arrays in order to provide global services such as naming and zoning.

“You have to decouple the storage network control plane from the underlying hardware,” said Berman. “The benefits include reduced time to provision network resources, as well as lower Capex and Opex.”
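To make Berman’s decoupling argument concrete, here is a minimal sketch (in Python, purely illustrative) of a virtualized control plane that owns global services such as naming and zoning and pushes only the resulting zone state down to commodity switches. The class and method names are hypothetical and not drawn from any Jeda Networks product.

# Hypothetical sketch of a decoupled storage-network control plane.
# The controller owns global services (naming, zoning); the physical
# switches only receive the resulting zone state.

from dataclasses import dataclass, field


@dataclass
class Switch:
    """A commodity switch that simply applies state pushed by the controller."""
    name: str
    zone_table: dict = field(default_factory=dict)

    def apply_zone(self, zone_name, members):
        self.zone_table[zone_name] = set(members)


class SDSController:
    """Virtualized control plane: global name server plus zoning service."""

    def __init__(self, switches):
        self.switches = switches
        self.name_server = {}   # WWPN -> device description
        self.zones = {}         # zone name -> set of member WWPNs

    def register(self, wwpn, description):
        # Global naming service: a device logs in once and is visible fabric-wide.
        self.name_server[wwpn] = description

    def create_zone(self, zone_name, members):
        # Global zoning service: validate members against the name server,
        # then push the same zone to every switch in the fabric.
        unknown = [m for m in members if m not in self.name_server]
        if unknown:
            raise ValueError(f"unregistered members: {unknown}")
        self.zones[zone_name] = set(members)
        for switch in self.switches:
            switch.apply_zone(zone_name, members)


# Usage: one virtual controller, two low-cost switches, one server/array zone.
fabric = [Switch("leaf-1"), Switch("leaf-2")]
ctrl = SDSController(fabric)
ctrl.register("10:00:00:00:c9:aa:bb:01", "server HBA port")
ctrl.register("50:06:01:60:3b:20:11:22", "array front-end port")
ctrl.create_zone("vm_cluster_zone",
                 ["10:00:00:00:c9:aa:bb:01", "50:06:01:60:3b:20:11:22"])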

He advocates that in an SDDC world, the storage network should be managed by existing LAN management tools. That means further convergence of LAN and SAN management and a simplification of networking overall.

Vasudeva takes this further, saying that traditional block and file storage tools, which rely on hardware-based resiliency and have little tolerance for data loss, are giving way to HDFS (the Hadoop Distributed File System) and object storage. That means resiliency will migrate into the software, which will provide greater tolerance for data loss.
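A minimal sketch of what moving resiliency into software can look like, under the assumption of simple three-way replication across commodity nodes (the replication factor, placement scheme and class names are illustrative, not the HDFS or any particular object store API):

# Minimal sketch of resiliency in software: each object is written to
# several commodity nodes so losing a node does not lose data.
# The three-way replication factor and placement scheme are assumptions.

REPLICATION_FACTOR = 3


class ObjectStore:
    def __init__(self, nodes):
        self.nodes = nodes                      # node name -> dict of objects

    def put(self, key, data):
        # Choose REPLICATION_FACTOR nodes via simple hash placement
        # and write the same object to each of them.
        names = sorted(self.nodes)
        start = hash(key) % len(names)
        targets = [names[(start + i) % len(names)]
                   for i in range(REPLICATION_FACTOR)]
        for name in targets:
            self.nodes[name][key] = data
        return targets

    def get(self, key):
        # Any surviving replica can answer the read.
        for store in self.nodes.values():
            if key in store:
                return store[key]
        raise KeyError(key)


cluster = ObjectStore({f"node{i}": {} for i in range(5)})
cluster.put("vm-disk-0001", b"...block data...")
del cluster.nodes["node2"]                      # simulate a failed node
assert cluster.get("vm-disk-0001") == b"...block data..."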

In order to get there, he said, storage has to address the performance issues it faces under server virtualization. For example, some virtual machines (VMs) produce random, write-intensive I/O, which tends to get blended with sequential, read-heavy I/O from other VMs. This can cut storage performance by up to 50%. The solution, he said, is to create a storage abstraction layer similar to the hypervisor used in server virtualization.
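This "I/O blender" effect can be illustrated with a toy simulation: interleaving one VM's sequential reads with another VM's random writes forces a near-constant stream of seeks, while servicing each VM's queue separately, as a storage abstraction layer could, keeps the sequential stream sequential. The seek model and numbers below are purely illustrative assumptions.

# Toy model of the "I/O blender": random writes from one VM interleaved
# with sequential reads from another force frequent seeks; queuing each
# VM separately (as a storage abstraction layer could) avoids most of them.

import random

random.seed(42)

seq_reads = [("vm-db", lba) for lba in range(1000)]                        # sequential
rand_writes = [("vm-web", random.randrange(10**6)) for _ in range(1000)]   # random


def seeks(stream):
    """Count requests that land far from the previous one (a 'seek')."""
    count, last = 0, None
    for _, lba in stream:
        if last is not None and abs(lba - last) > 1:
            count += 1
        last = lba
    return count


# Blended: the hypervisor interleaves both VMs onto a single LUN queue.
blended = [req for pair in zip(seq_reads, rand_writes) for req in pair]

# Separated: the abstraction layer services each VM's queue on its own.
print("seeks when blended:  ", seeks(blended))
print("seeks when separated:", seeks(seq_reads) + seeks(rand_writes))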

“This allows you to provision storage as fast as VMs can be created, improves storage performance, makes thin provisioning and snapshots easier and reduces capacity requirements,” said Vasudeva. “The storage hypervisor must integrate seamlessly with existing hypervisors.”

This storage hypervisor would be deployed in each host and would help to remove random write I/O from the blend. The end goal is to virtualize the entire infrastructure so that everything can be delivered as a service, with control of the data center entirely automated by software. In such an environment, heterogeneous storage resources are abstracted into pools for allocation and consumption.

Such a virtual SAN (vSAN) architecture could aggregate local flash and hard drives into various tiers, eliminate any single point of failure and integrate with VMware tools such as vCenter. To make this work, VM storage policies would be built in advance of VM deployment and tied closely to application requirements.

“Software defined storage is key to IT as a Service,” said Vasudeva. “It provides a services-based infrastructure for automation, unified control and greater efficiency. Provisioning would be accomplished via policies and workload-aware service levels to match application requirements.”
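As a rough sketch of the policy-driven provisioning Vasudeva describes, the snippet below defines storage policies ahead of VM deployment and places each VM on the first pool whose capabilities satisfy its policy. The policy fields, pool attributes and thresholds are assumptions made for illustration.

# Hypothetical sketch of policy-based, workload-aware provisioning:
# storage policies exist before VMs are deployed, and each VM is placed
# on the first pool whose capabilities satisfy its policy.

from dataclasses import dataclass


@dataclass
class StoragePool:
    name: str
    tier: str            # e.g. "flash" or "hdd"
    free_gb: int
    max_iops: int
    protection: str      # e.g. "raid" or "replicated"


@dataclass
class StoragePolicy:
    name: str
    min_iops: int
    capacity_gb: int
    protection: str


POOLS = [
    StoragePool("flash-pool-1", "flash", free_gb=2000, max_iops=100_000,
                protection="replicated"),
    StoragePool("hdd-pool-1", "hdd", free_gb=50_000, max_iops=5_000,
                protection="raid"),
]

# Policies built in advance and tied to application requirements.
POLICIES = {
    "oltp-db": StoragePolicy("oltp-db", min_iops=20_000, capacity_gb=500,
                             protection="replicated"),
    "archive": StoragePolicy("archive", min_iops=500, capacity_gb=4_000,
                             protection="raid"),
}


def provision(vm_name, policy_name):
    policy = POLICIES[policy_name]
    for pool in POOLS:
        if (pool.max_iops >= policy.min_iops
                and pool.free_gb >= policy.capacity_gb
                and pool.protection == policy.protection):
            pool.free_gb -= policy.capacity_gb
            return f"{vm_name} -> {pool.name} ({policy.name} policy)"
    raise RuntimeError(f"no pool satisfies policy {policy.name}")


print(provision("vm-sql01", "oltp-db"))
print(provision("vm-backup01", "archive"))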

Drew Robb
Drew Robb is a contributing writer for Datamation, Enterprise Storage Forum, eSecurity Planet, Channel Insider, and eWeek. He has been reporting on all areas of IT for more than 25 years. He has a degree from the University of Strathclyde in the UK and lives in the Tampa Bay area of Florida.
