Storage provisioning has been going on since the earliest days of recorded history. As soon as records began to appear on tablets and scrolls, a place had to be allocated for them to reside. You can imagine the chief administrator of Egyptian pharaoh Ramesses II screaming for more capacity to store all the edicts and architectural plans of the ruler. The great library of Alexandria, too, no doubt had constant storage problems.
The only difference, these days, is that the capacity is digitized. Storage professionals store data on-premises in storage area networks (SANs), network-attached storage (NAS) filers, appliances, and servers, or they send it to tape repositories or to the cloud. Either way, there is usually a steady need to provision more capacity and to add efficiency and speed to the provisioning process.
Here are five top trends in storage provisioning:
See more: Why Storage Tiering Still Matters
1. Near-Synchronous Rates
Kirill Shoikhet, CTO of Excelero, notes that provisioning of secondary copies of data increasingly has to happen at near-synchronous rates, so that organizations can respond to outages at the machine, data center, and, in some cases, regional level and still meet a recovery point objective (RPO) of zero at a low recovery time objective (RTO).
“This is an issue with core business applications since compute and networking resources are essentially stateless, and storage is left as the stateful bottleneck for applications running in the cloud, preventing fast, resilient access to persistent data,” Shoikhet said.
“Technology to get around this doesn’t exist in the public cloud today, and it’s driving a host of innovative technology improvements to make the public cloud the platform for such applications.”
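The RPO math behind near-synchronous provisioning can be illustrated with a toy model. The sketch below (an assumption for illustration, not any vendor's implementation) contrasts synchronous replication, where a write is acknowledged only after the secondary copy has it, with asynchronous replication, where in-flight writes can be lost if the primary fails:

```python
class ReplicatedStore:
    """Toy primary/secondary replication model (illustrative only).

    In "sync" mode a write reaches the secondary before it is
    acknowledged, so a primary failure loses nothing (RPO = 0).
    In "async" mode writes are acknowledged first and shipped later,
    so any still-pending writes are the RPO exposure.
    """

    def __init__(self, mode="sync"):
        self.mode = mode
        self.primary = []
        self.secondary = []
        self.pending = []  # async writes not yet replicated

    def write(self, record):
        self.primary.append(record)
        if self.mode == "sync":
            self.secondary.append(record)  # replicate before acking
        else:
            self.pending.append(record)    # replicate "later"

    def flush(self):
        """Drain the async replication queue to the secondary."""
        self.secondary.extend(self.pending)
        self.pending.clear()

    def records_lost_on_primary_failure(self):
        # Data that exists only on the primary is what an outage loses.
        return len(self.primary) - len(self.secondary)
```

A synchronous store reports zero lost records after any number of writes, while an asynchronous store loses whatever was pending at the moment of failure; the trade-off is that every synchronous write pays the replication round-trip in latency.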
2. Eliminating Over-Provisioning
With the massive data growth that is overwhelming organizations across the globe, enterprises have learned a bad habit over the years: They tend to over-provision their storage at initial deployment to avoid the embarrassment of running out of capacity at a later date, or of having to apply for more budget later in the year due to poor capacity planning.
Getting it right can be challenging, since it is often difficult to gauge how much storage and what type of storage is needed to run the workloads of today and to be able to accurately predict the workloads of tomorrow.
One solution is efficient scale-out NAS. It enables a storage administrator to deploy the right storage with the right performance and capacity specifications but only when needed. Done correctly, adding storage is simple and only takes minutes to bring online. Not only is the storage available quickly, but the existing data is auto-balanced for capacity and performance into the newly added storage.
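The auto-balancing behavior described above can be sketched in a few lines. This is a hedged illustration of the general idea, not any vendor's actual algorithm: when empty nodes join the pool, data migrates from the fullest nodes until every node sits near the pool average.

```python
def rebalance(used_tb, new_nodes=1):
    """Toy auto-balance for a scale-out NAS pool (illustrative only).

    used_tb   -- list of used capacity (TB) per existing node
    new_nodes -- number of empty nodes being added

    Real systems move whole files or chunks and honor protection and
    placement constraints; here we just shift surplus capacity from
    overfull nodes onto the emptiest node until all are near target.
    """
    pool = list(used_tb) + [0.0] * new_nodes
    target = sum(pool) / len(pool)
    for i, used in enumerate(pool):
        if used > target:
            surplus = used - target
            pool[i] = target
            # Push the surplus onto whichever node is emptiest.
            j = pool.index(min(pool))
            pool[j] += surplus
    return pool
```

Adding one empty node to two nodes holding 9 TB each, for example, yields three nodes at 6 TB each, with total used capacity unchanged; the new capacity is productive immediately rather than sitting idle behind a manual migration.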
“Flexible storage addition and removal is key to future-proofing storage infrastructure with a flexible set of tools to add or remove storage, including the use of CLI, API, scripts, or web interface, so that it can integrate with any storage provisioning frameworks,” said Brian Henderson, director of product marketing for unstructured data storage at Dell Technologies.
3. SSD Tiers
Storage tiering is nothing new. The basic concept is to have higher tiers for high performance and lower tiers for capacity, archiving, or low-priority data.
In the past, it was done using faster hard disk drives (HDDs) backed by more memory. But these days, solid-state drives (SSDs) have added greater granularity of tiering. Some SSDs are for the highest-performance, write-optimized traffic. Others are best for lower-cost, higher-capacity, and less frequently accessed traffic to support different tiers and types of application data needs, said Greg Schulz, an analyst with StorageIO Group.
SSDs can be combined with HDDs to provide a great many tiers if desired. SSDs handle the top-level data, and HDDs address the rest — which is often the bulk of data.
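A tier-placement policy along these lines can be expressed as a simple rule set. The sketch below is an assumption for illustration; the thresholds and tier names are invented, not industry standards:

```python
def pick_tier(iops_needed, reads_per_day):
    """Illustrative placement policy for a mixed SSD/HDD pool.

    Hot, latency-sensitive data lands on write-optimized SSD; warm,
    read-mostly data on cheaper capacity SSD; everything else (often
    the bulk of the data) on HDD. Thresholds are hypothetical.
    """
    if iops_needed > 10_000:
        return "performance-ssd"   # write-optimized flash, highest cost
    if reads_per_day > 10:
        return "capacity-ssd"      # cheaper flash for read-mostly data
    return "hdd"                   # bulk/cold tier
```

In practice such rules run continuously against access statistics, so data drifts between tiers as its workload profile changes rather than being placed once at provisioning time.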
See more: The Storage Tiering Market
4. Changing Locations for Provisioned Storage
The traditional architecture for the above tiering setup is to have all tiers on-premises in the data center.
However, a growing trend is to have tiers in different locations. SSD and high-speed disk might be retained on-premises, while lower-priority data, often the bulk of it, might be stored in the cloud. Stored there, it typically delivers sufficient performance for most applications.
For example, some HDDs or tape may no longer be in an on-premises environment. Yet, they remain a part of some data storage and provisioning solution. Even the big cloud providers use tape in their lower or archive tiers.
5. NVMe Tiers
Another recent wrinkle is the incorporation of Non-Volatile Memory Express (NVMe) into the storage provisioning catalog.
A small tier can be established using NVMe that offers only the very highest performance, as such tiers are expensive. Once data ages or is deemed less important, it can be relegated to lower tiers.
The benefit of this arrangement is that the company can decide how much it wants to spend on high-performance traffic and cut costs by choosing which data is moved to lower tiers, and when.
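The age-plus-budget demotion described above can be sketched as follows. This is a hypothetical policy for illustration; the seven-day freshness window, the tier names, and the file-tuple format are all assumptions:

```python
from datetime import date, timedelta

def place_by_age(files, nvme_budget_gb, today):
    """Illustrative NVMe aging policy (not a real product's logic).

    files -- list of (name, size_gb, last_access_date) tuples
    Keeps the most recently accessed data on a small NVMe tier until
    its capacity budget is spent; everything else is demoted.
    """
    nvme, lower = [], []
    used = 0.0
    # Newest-first, so the expensive budget goes to the hottest data.
    for name, size_gb, last_access in sorted(
            files, key=lambda f: f[2], reverse=True):
        fresh = (today - last_access) <= timedelta(days=7)
        if fresh and used + size_gb <= nvme_budget_gb:
            nvme.append(name)
            used += size_gb
        else:
            lower.append(name)
    return nvme, lower
```

The `nvme_budget_gb` parameter is where the spending decision lives: raising it keeps more data on the premium tier, while shrinking it demotes more aggressively, which is exactly the cost lever the trend describes.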