Storage automation has been around for a couple of decades. It originated as a way to save storage administrators time when it came to moving data around, provisioning or decommissioning storage, splitting up disks into volumes and LUNs (logical unit numbers), and a myriad of other manual tasks that used to consume the day.
These features were gradually built into storage systems, with vendors such as EMC and NetApp among the leaders in advancing the field of storage automation. These days, the number and sophistication of automation features have multiplied significantly.
Here are five of the top trends in storage automation:
1. Automation of Larger Data Stores
Automation was already valuable when storage admins had only gigabytes (GB) of storage to look after. But as environments advanced to terabyte (TB) scale and beyond, the need for automation expanded exponentially.
“Customers are looking for efficient scale-out NAS solutions designed to store PBs of capacity,” said Brian Henderson, director of product marketing for unstructured data storage at Dell Technologies.
“Various studies have shown that a single admin can manage PBs of storage in these environments because of the rich and powerful storage management features, like replication, performance management, data management, and snapshots.
“These modern NAS solutions need to deliver an array of interfaces like CLI, webUI, scripts, and API to automate many of the tasks and data, using tools that provide enterprise-grade management, reporting, monitoring, and troubleshooting capabilities.”
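The scripting and API interfaces Henderson mentions usually boil down to building structured provisioning requests rather than clicking through a console. The sketch below is hypothetical: the field names and payload shape are illustrative and mirror no particular vendor's schema, so consult your array's actual API reference for the real one.

```python
import json

def provision_request(name: str, size_tb: int, snapshot_schedule: str = "daily") -> dict:
    """Build a volume-provisioning payload for a hypothetical NAS REST API.

    Field names here are invented for illustration; a real array's API
    documentation defines the actual schema.
    """
    if size_tb <= 0:
        raise ValueError("size_tb must be positive")
    return {
        "volume": {"name": name, "size_bytes": size_tb * 10**12},
        "protection": {"snapshot_schedule": snapshot_schedule, "replication": True},
    }

# An admin script might generate dozens of these in a loop instead of
# provisioning each volume by hand in a web UI.
payload = provision_request("projects-archive", 500)
print(json.dumps(payload, indent=2))
```

The same payload could then be submitted through whichever interface the platform exposes: CLI, script, or REST call.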
2. Object Storage Automation
Object storage usage has exploded in recent years. As more data has been dumped into object repositories, the demand for automation capabilities has accelerated. Top object storage workloads include archiving and content applications, both on-premises and in the public cloud.
Users want their object storage to provide the features they have come to expect in network-attached storage (NAS) and storage area network (SAN) environments. That includes integration with accelerated compute technologies, quality of service (QoS), integration with artificial intelligence (AI) software stacks, support for higher-performance storage tiers leveraging flash media, ease of deployment in the cloud, and automated data life-cycle management, according to Eric Burgener, an analyst at IDC.
DataCore Swarm object storage software, for example, is designed from the ground up to securely manage billions of files and petabytes of information. Swarm provides a foundation for hyperscale data storage, access, and analysis, while guaranteeing data integrity and eliminating hardware dependencies. This is achieved through liberal use of automation.
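At the object level, automated data life-cycle management comes down to a policy decision per object. The minimal sketch below shows the shape of such a decision; the tier names and age thresholds are illustrative, not any product's defaults.

```python
from datetime import datetime, timedelta, timezone

# Illustrative thresholds -- real life-cycle policies are admin-configurable.
TIERS = [(30, "hot"), (180, "warm")]   # (max age in days, tier name)

def tier_for(last_accessed: datetime, now: datetime) -> str:
    """Pick a storage tier from an object's age since last access."""
    age_days = (now - last_accessed).days
    for max_age, tier in TIERS:
        if age_days <= max_age:
            return tier
    return "archive"   # anything older falls through to the coldest tier
```

A life-cycle engine would run a rule like this across billions of objects on a schedule, migrating each one to flash, capacity disk, or a cloud archive tier accordingly.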
3. Ease of Use in Storage
The cloud has brought a consumer mindset to storage. People now expect cloud resources to work as simply as a personal app on their tablet. This mindset has, in turn, worked its way into the entire storage field.
People want their storage, services, and extra capacity now. And so, storage and cloud administrators need ease-of-use features at hand, which can only come from greater levels of automation.
Easier cluster management, for example, makes life simpler for a storage administrator. That’s why investment is rising in automation and orchestration tools, such as Ansible and Kubernetes, that ease data life-cycle management.
“CIOs are looking to reduce the number of independent storage silos by consolidating workloads onto fewer high-density platforms,” said Burgener with IDC.
“This drives new requirements for supporting multiple access methods, storage tiering, flexible QoS controls, and automation that functions across bare metal, virtualized, and containerized environments.”
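A common example of the life-cycle housekeeping admins wire into tools like Ansible is snapshot retention. The policy below is purely illustrative (keep the newest few snapshots no matter what, expire the rest past a cutoff); real retention rules vary by platform and compliance requirements.

```python
from datetime import datetime, timedelta

def snapshots_to_delete(snapshots: dict, now: datetime,
                        keep_days: int = 14, keep_min: int = 3) -> list:
    """Return snapshot names past retention, always sparing the newest few.

    `snapshots` maps snapshot name -> creation time. The keep_days and
    keep_min defaults are illustrative, not any vendor's policy.
    """
    by_age = sorted(snapshots, key=lambda n: snapshots[n], reverse=True)
    protected = set(by_age[:keep_min])          # never delete the newest few
    cutoff = now - timedelta(days=keep_days)
    return [n for n in by_age
            if n not in protected and snapshots[n] < cutoff]
```

An orchestration tool would run this logic on a schedule and feed the resulting list to the array's snapshot-delete API, removing a daily manual chore.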
4. Hybrid Storage
Storage arrays and NAS filers used to contain only hard disk drives (HDDs). But the advent of flash has produced equipment holding vast quantities of storage that combines flash and HDDs in a hybrid arrangement. In more and more cases, these systems are going all-flash.
Storage personnel, therefore, have to deal with far more configurations, changes to those configurations, and demands to add more flash. They must possess ways to provision rapidly in these hybrid and increasingly flash-dominated environments.
The vendor community has responded with software that is up to the task. NetApp, for example, has consistently updated its ONTAP 9 data management software to provide greater levels of functionality and automation. These features bring greater simplicity and flexibility to cloud and data center storage. The software also helps IT deploy a range of storage architectures, including hardware storage systems, software-defined storage (SDS), and the cloud.
NetApp FAS storage arrays, for example, take advantage of this software to build storage infrastructures that balance and automate the provisioning of capacity and performance. The arrays are optimized for easy deployment and operations, while retaining the flexibility to handle future growth and cloud integration. The FAS family has unified capabilities for SAN, NAS, and object workloads.
5. Augmented Functions
It could be said today that storage alone is no longer enough for a vendor. Just as NetApp has evolved from NAS filers into being able to automatically provision any type of storage and orchestrate it across the cloud and on-premises, other vendors have realized they need to build in more functions, many well beyond traditional concepts of storage. This includes areas such as security, ransomware protection, data protection, and archiving.
FalconStor, for example, has moved well beyond its original backup roots to include in-depth data protection, archiving, and secure data containers that can take advantage of the various capabilities offered by the major object storage offerings, both on-premises and in the cloud.
FalconStor StorSafe has added automation that enhances the metadata management capabilities of its object storage, so the most applicable data can be located and accessed quickly. It harnesses the immutable storage of WORM-compliant offerings to provide a perpetual, always-available archive. By automatically breaking data into fragments and dispersing them throughout the cluster, StorSafe raises availability, while a data center breach resulting in a stolen machine yields no data loss, since no complete dataset can be mounted from any single node.
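The fragment-and-disperse idea can be illustrated with a toy single-parity scheme. This is not FalconStor's algorithm (production systems use full erasure coding with multiple parity fragments); it only shows why losing one fragment, or stealing one machine, need not mean losing or exposing the data.

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_with_parity(data: bytes, k: int):
    """Split data into k equal fragments plus one XOR parity fragment."""
    frag_len = -(-len(data) // k)               # ceiling division
    padded = data.ljust(frag_len * k, b"\0")    # pad to an even split
    frags = [padded[i * frag_len:(i + 1) * frag_len] for i in range(k)]
    parity = reduce(xor_bytes, frags)
    return frags, parity

def recover(frags, parity, missing: int) -> bytes:
    """Rebuild one lost fragment: XOR of parity with all survivors."""
    survivors = [f for i, f in enumerate(frags) if i != missing]
    return reduce(xor_bytes, survivors, parity)
```

Each fragment would live on a different node, so any single node holds only a meaningless slice, yet the cluster can rebuild a lost fragment from the rest.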