Download the authoritative guide: Enterprise Data Storage 2018: Optimizing Your Storage Infrastructure
Just as data centers are seeing efficiency boosts from virtualizing their servers, the same can be achieved by virtualizing storage. Hot on the heels of cloud computing, the latest step-change in storage virtualization appears to be software-defined storage (SDS).
“SDS is an architecture for storage that allows cloud operators to deploy scalable, flexible infrastructure with automated storage operations,” said Kevin Brown, CEO of Coraid. “In the old days, we ran small data on big boxes. Today, we’re running big data on lots of small boxes, and that’s a fundamentally different computer science problem.”
These systems treat storage as a single pool that can be allocated as needed, without administrators having to manually define LUNs and RAID groups. When an application is deployed, the SDS system maps the application to a storage profile and automatically configures storage from the hardware pool. Such systems distribute key metadata to eliminate single points of failure and virtualize controller logic to prevent performance bottlenecks. They also migrate data between tiers automatically to improve performance, and because they are software based, they are easier to scale as needs grow.
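The profile-to-pool mapping described above can be sketched in a few lines of Python. This is a conceptual illustration only, not any vendor's actual allocator; the class names, tiers and sizes are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    tier: str          # e.g. "ssd" or "hdd" (illustrative tiers)
    free_gb: int

@dataclass
class StorageProfile:
    tier: str
    capacity_gb: int

def provision(pool, profile):
    """Pick the first device in the pool that satisfies the profile,
    deduct the requested capacity, and return the chosen device's name."""
    for dev in pool:
        if dev.tier == profile.tier and dev.free_gb >= profile.capacity_gb:
            dev.free_gb -= profile.capacity_gb
            return dev.name
    raise RuntimeError("no device in the pool satisfies the profile")

pool = [Device("node1-hdd", "hdd", 2000), Device("node2-ssd", "ssd", 500)]
db_profile = StorageProfile(tier="ssd", capacity_gb=200)
print(provision(pool, db_profile))   # the SSD node is selected
```

A real SDS controller would of course also weigh availability and resiliency requirements and spread data across nodes, but the core idea is the same: the caller states a profile, and the software finds capacity in the pool.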
In addition to the major storage players, such as EMC, IBM and NetApp, here are four SDS vendors to consider when looking to implement SDS.
Coraid Inc.’s Coraid EtherCloud is a software-defined storage platform that delivers a flexible storage infrastructure while maintaining control of every aspect of storage deployment, provisioning and management. It exposes a set of Representational State Transfer (REST) APIs that administrators can use to automate workflows, build self-service provisioning portals and provide a range of cloud-computing services. Since storage provisioning and management are automated, application owners without storage expertise can obtain the resources they need directly via self-service provisioning portals.
“Using EtherCloud, end users can request storage according to their applications’ performance, availability and resiliency needs without having to understand how storage is configured,” says Brown.
EtherCloud also offers multi-tenant access control with LDAP/AD-based authentication that allows storage administrators to allocate resources to specific groups and delegate management of those resources to tenant administrators. The EtherCloud base product is available as a 1U appliance with software included. It is priced at $5,000, and provides complete storage management capabilities.
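Coraid's actual REST endpoints are not documented here, but as a hedged illustration of what a self-service portal built on such an interface might do, the following builds a provisioning request as JSON. The endpoint path, host name and field names are hypothetical:

```python
import json

# Hypothetical payload for a self-service provisioning request. The
# field names and values are illustrative, not Coraid's actual API.
request = {
    "tenant": "engineering",
    "profile": {"performance": "high", "availability": "mirrored"},
    "capacity_gb": 500,
}
body = json.dumps(request)

# A portal would POST this to the REST interface, for example:
# requests.post("https://ethercloud.example/api/v1/volumes", data=body,
#               headers={"Content-Type": "application/json"})
print(body)
```

The point of the design is that the requester states performance, availability and resiliency needs, and the platform decides how to configure the underlying storage.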
Nexenta Systems’ NexentaStor software leverages the commoditization of the hardware market to enable enterprises to scale up to meet expanding storage needs. Built upon ZFS technology, it is said to provide unified storage management — including inline deduplication, integrated search and inline virus scanning — at less than half the cost of traditional systems.
“Nexenta is in competition with traditional storage vendors EMC and NetApp, which lock buyers into building out storage using their branded hardware,” says Evan Powell, CEO of Nexenta. “By making use of commodity hardware, Nexenta is able to offer its customers increased flexibility, openness and performance at a previously unattainable cost.”
NexentaStor also enables one-click virtual machine and other storage provisioning and has plug-ins for managing SANs and high-availability clusters.
“NexentaStor helps customers with big data storage challenges by empowering them to grow and manage their storage capacity cost effectively, while ensuring all of their information is secure,” states Powell.
Nutanix Inc.’s Nutanix Complete Cluster software architecture pools storage resources across appliance nodes and manages them as a single namespace. Customers can add Nutanix nodes one at a time and precisely match storage capacity and I/O performance to VM deployments. Every VM seamlessly accesses physical storage resources across all storage nodes, with no need to manually define storage volumes, LUNs or RAID groups when provisioning a new VM. The Nutanix architecture includes data tiering, snapshots, cloning, DR, compression and error detection.
“Enterprises no longer need to make expensive, step-function increases in storage-array investments just to keep pace with rapid virtualization,” said Dheeraj Pandey, CEO and co-founder of Nutanix. “The flexibility of Nutanix’s software-defined architecture ensures that customers continue to get value from functional hardware.”
The Nutanix Complete Cluster eliminates the network connecting the compute tier to the storage tier, which results in better performance and more predictable scale-out. All storage features and data management capabilities are delivered in software. Nutanix also provides a converged solution that marries server (compute) resources with enterprise-class storage in a single tier; as such, Nutanix does not directly compete with storage-only vendors.
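Nutanix's placement logic is proprietary, but the general idea of a single namespace spanning many nodes can be sketched as hashing each block's identifier to a node. This is a minimal, assumption-laden illustration (node names, block IDs and the hashing scheme are all invented):

```python
import hashlib

NODES = ["node-a", "node-b", "node-c"]  # a cluster grown one node at a time

def node_for(block_id: str, nodes) -> str:
    """Map a block in the global namespace to a node by hashing its ID.
    Illustrative placement only; real clusters track placement in metadata
    and use schemes such as consistent hashing."""
    digest = int(hashlib.md5(block_id.encode()).hexdigest(), 16)
    return nodes[digest % len(nodes)]

# Every VM addresses blocks in one namespace; placement is automatic,
# with no volumes, LUNs or RAID groups defined per VM.
print(node_for("vm42/disk0/block-0007", NODES))
```

Note that the simple modulo scheme above would remap most blocks whenever a node is added; production systems avoid that churn with consistent hashing or explicit placement metadata.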
SimpliVity Corporation’s OmniCube is a 2U VM-optimized building block that combines compute, storage and management.
“We see SDS as a sub-category of the broader ‘software-defined data center’ (SDDC), and this is where we see the market heading,” explains Tom Grave, SimpliVity’s VP of Marketing. “OmniCube has a lot of storage functionality but is in fact a complete infrastructure offering that includes server and networking resources, and is therefore truly delivering the software-defined data center.”
OmniCube’s Data Virtualization Engine provides inline, accelerated, global deduplication and compression, making it easy to move data between systems and across data centers. Multiple OmniCubes join together in a global federation, creating an elastic pool of resources within the data center and at remote sites, allowing for massive scalability. Virtual machines and all associated data can be managed globally from a single point.
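Inline deduplication and compression of the kind OmniCube performs can be sketched, in outline only, as content-addressed block storage: each block is keyed by a hash of its contents, so duplicates are stored once. This is not SimpliVity's implementation, just a minimal Python illustration:

```python
import hashlib
import zlib

def dedup_store(blocks, store=None):
    """Store each unique block once, keyed by its SHA-256 digest and
    compressed on the way in. Returns (store, recipe), where the recipe
    lists the digests needed to rebuild the original stream in order."""
    store = {} if store is None else store
    recipe = []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:          # inline dedup: known blocks are skipped
            store[digest] = zlib.compress(block)
        recipe.append(digest)
    return store, recipe

data = [b"A" * 4096, b"B" * 4096, b"A" * 4096]   # third block duplicates the first
store, recipe = dedup_store(data)
print(len(store), len(recipe))   # 2 unique blocks stored, 3 entries in the recipe
```

Because the stored objects are just digests and compressed blocks, moving data between systems reduces to shipping whichever digests the receiver does not already hold, which is what makes global deduplication attractive for inter-data-center mobility.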
“Traditional data structures prevent proper data mobility because they rely on large data ‘containers’ which are difficult to mobilize,” says Grave. “In developing OmniCube, we realized the data structure needs to change in order to create a truly flexible pool of shared computing resources.”