These new advances could boost performance, improve efficiency and support modern applications and architectures.
Many technologies are billed as hot, exciting and revolutionary. But which ones are really deserving of that moniker? Which ones are destined to change — or are changing — the storage universe?
Enterprise Storage Forum asked the experts.
Several of our sources cited object storage as an enabler of major change.
“Object storage will be the dominant storage system as we move more applications to the cloud due to its rich metadata capabilities, which enable it to scale beyond the limitations of file hierarchies and block devices,” said Hu Yoshida, chief technology officer, Hitachi Data Systems. “As it is essentially stateless, object storage is ideal for mobile and cloud access and enables data lakes where applications and analytics can work on one consistent source of data, rather than disparate silos of inconsistent data.”
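The flat namespace and metadata-driven access Yoshida describes can be sketched in a few lines. The class and method names below are invented for illustration and do not reflect any vendor's API:

```python
# Minimal sketch of an object store's flat namespace and rich,
# queryable metadata (illustrative only -- the names and structure
# here are assumptions, not any vendor's API).

class ObjectStore:
    def __init__(self):
        self._objects = {}  # flat key space: object ID -> (data, metadata)

    def put(self, key, data, **metadata):
        # Unlike a file hierarchy, there is no directory tree to
        # traverse or rebalance; every object is addressed directly.
        self._objects[key] = (data, dict(metadata))

    def get(self, key):
        return self._objects[key][0]

    def query(self, **criteria):
        # Rich metadata makes the store queryable like a catalog,
        # which is what lets analytics work on one consistent
        # source of data rather than disparate silos.
        return [k for k, (_, md) in self._objects.items()
                if all(md.get(f) == v for f, v in criteria.items())]

store = ObjectStore()
store.put("logs/2017/app-01", b"log bytes", app="billing", tier="archive")
store.put("logs/2017/app-02", b"log bytes", app="billing", tier="hot")
print(store.query(app="billing", tier="hot"))  # ['logs/2017/app-02']
```

Because every object is addressed by key rather than by path, scaling the store is a matter of partitioning a flat key space instead of rebalancing a directory tree.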
Traditionally, memory technology, such as RAM, and storage technology, where data resides, have been two distinct things. But in recent years, the lines of separation have blurred. Huge databases and datasets can now reside entirely in memory, enabling much speedier access. Meanwhile, on the storage side, the development of solid-state drives (SSDs) has boosted the performance of storage, giving it memory-like characteristics.
What is emerging now is a new category known as persistent memory, which sits between the rigid classifications of memory and storage. It bridges the gap between the two, offering users nonvolatile, low-latency memory positioned close to the processor. Persistent memory typically resides on the DRAM bus to facilitate fast data access. As distinct from pure memory, it combines the persistence of storage with the latency of memory.
This is opening the door to new designs, applications and data management models. Micron’s 8GB DDR4 NVDIMM is an early example of this technology beginning to enter the market. There is certainly more to come.
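The load/store programming model persistent memory enables can be approximated today with an ordinary memory-mapped file. The sketch below is a rough analogy only: a real persistent-memory stack would map an NVDIMM directly (for example via DAX) and use cache-flush instructions rather than `msync`, and the file name here is made up:

```python
# Rough analogy for the persistent-memory programming model using a
# memory-mapped file. A real pmem stack maps an NVDIMM via DAX and
# flushes CPU caches instead of calling msync; this is a sketch only.

import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "pmem.bin")

# Create and size the backing "persistence" region.
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as pm:
        # Writes land via plain loads/stores into the mapping,
        # not via write() system calls -- memory-like access.
        pm[0:5] = b"hello"
        pm.flush()  # analogous to flushing CPU caches to persistence

# The data survives after the mapping is torn down -- storage-like
# durability with memory-like access.
with open(path, "rb") as f:
    print(f.read(5))  # b'hello'
```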
“Persistent memory is a very disruptive form of hardware that is on the immediate horizon, having the potential to introduce new application models rather than merely accelerate existing workloads,” said Ranga Rangachari, vice president of storage, Red Hat. “Today, the biggest barrier is still hardware availability and the fact that most of the industry is still trying to win the flash wars.”
Peter Godman, CTO and founder of Qumulo, is another who is excited by the appearance of new classes of memory. Over the next ten years, he believes storage will be redefined by the emergence of practical forms of storage-class memory.
“Consider what happened over the last ten years with NAND flash, and then consider that NAND flash is really just a faster hard drive — way too slow to compete with RAM,” said Godman. “Storage that's as fast as RAM is emerging: storage-class memories are going to make possible storage and database systems with radically more performance, and will push the storage systems to co-reside with servers.”
Rangachari also called attention to Non-Volatile Memory Express (NVMe) as a technology that is bringing major change to the traditional storage hardware space. It provides a data transport protocol that is helping to phase out older protocols, which tend to bottleneck processors by funneling traffic through shallow queues. The result is dramatically deeper command queuing, higher data transfer rates and the elimination of many storage choke points.
“Storage can be held back by slow I/O performance, which causes expensive compute resources and memory to be consumed,” said Greg Schulz, an analyst with Server StorageIO Group. “NVMe reduces wait time while increasing the amount of effective work, enabling higher-profitability compute. The storage I/O capabilities of flash can be fed across PCIe faster to enable multi-core processors to complete more useful work in less time.”
The NVMe standard brings with it an optimized register interface and command set that uses a minimum number of CPU clocks per I/O, delivering higher performance at lower power. It is also scalable and comes with built-in data protection and security features.
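The queuing difference is the crux. The toy model below contrasts the legacy single-queue interface with NVMe's per-core queue pairs; the class and method names are invented, but the illustrative limits come from the respective specifications (AHCI/SATA: one queue of 32 commands; NVMe: up to 64K queues of 64K commands each):

```python
# Toy comparison of the command-queue model NVMe replaces versus the
# one it introduces. Spec limits: AHCI/SATA has 1 queue of 32 commands;
# NVMe allows up to 64K queues of 64K commands each, typically one
# submission/completion pair per core. Names here are made up.

from collections import deque

class QueuedInterface:
    def __init__(self, num_queues, depth):
        self.depth = depth
        self.queues = [deque() for _ in range(num_queues)]

    def submit(self, core, cmd):
        # With per-core queue pairs, each core submits to its own queue,
        # avoiding the contention of funneling all I/O through one queue.
        q = self.queues[core % len(self.queues)]
        if len(q) >= self.depth:
            return False  # queue full: the choke point in legacy stacks
        q.append(cmd)
        return True

    def capacity(self):
        # Total commands that can be in flight at once.
        return len(self.queues) * self.depth

ahci = QueuedInterface(num_queues=1, depth=32)       # legacy model
nvme = QueuedInterface(num_queues=8, depth=1024)     # modest NVMe config
print(nvme.capacity() // ahci.capacity())  # 256x more commands in flight
```

Even this deliberately modest NVMe configuration keeps 256 times as many commands in flight, which is what lets flash feed multi-core processors without stalling them.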
Rangachari sees the applicability of NVMe mainly in the storage backend. The NVMe over Fabrics (NVMe-oF) protocol (for host and intra-node connectivity), combined with 3D NAND flash in dense, high-capacity footprints and persistent memory, will enable ultra-low-latency, higher-bandwidth storage platforms, he said.
“Adoption will be slow as application and data services adapt to effectively leverage the new capabilities,” said Rangachari.
Containers, too, are set to bring significant change to storage. Solutions with container-native or container-ready storage and container orchestration (Kubernetes, Diego in Cloud Foundry, etc.) help accelerate hybrid cloud deployment, enabling data and application mobility between on-premises and public clouds by minimizing interdependencies. They can also speed the deployment of platform-as-a-service (PaaS) and the development of new distributed applications using the Kubernetes framework.
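In Kubernetes, the bridge between a containerized workload and backend storage is the PersistentVolumeClaim: the application declares what it needs, and an underlying storage class provisions it. The names, size and storage class below are placeholders, not recommendations:

```yaml
# Minimal PersistentVolumeClaim: a containerized workload requests
# storage declaratively; the cluster's storage class provisions it.
# Name, size and storageClassName are placeholder values.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: fast-ssd
```

Because the claim names a class of storage rather than a specific device, the same manifest can move between on-premises and public-cloud clusters, which is the mobility the paragraph above describes.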
“The integration of stateful storage into the container ecosystem is not only enabling legacy workloads to be containerized but also allowing more sophisticated cloud-native applications,” said Rangachari. “Orchestration and compute workloads are now having a more profound impact on storage than hardware, with vendor strength being aligned with the ability to integrate deeply into these technologies and adapt to rapidly changing software projects.”
Storage tiering is nothing new. It’s been with us since the nineties and has evolved considerably over time. About ten years ago, EMC tried to give it more appeal by broadening its scope via what it termed Information Lifecycle Management (ILM). While that term didn’t really catch on, the fact is that storage tiering continues to evolve steadily and expand its reach. Flash, for example, quickly became Tier 0 in many storage tiering architectures. But further change is coming to the tiering space.
“Advanced tiering is beginning to encompass the combination of local and remote cloud platforms along with micro-tiering such as Enmotus and Microsoft Storage Spaces Direct (S2D) as a way to boost productivity, effectiveness and storage efficiency,” said Schulz. “The focus over the past decades has been towards storage efficiency in terms of space savings, but for the next decade, this expands to effectiveness and productivity, i.e. getting more work done, lower latency and faster performance.”
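A tiering policy of the kind Schulz describes reduces, at its simplest, to mapping how hot an object is to where it lives. The tier names, thresholds and catalog below are invented for illustration:

```python
# Hedged sketch of a simple storage-tiering policy: hot data is
# promoted to flash, cold data is demoted toward a remote cloud tier.
# Tier names, thresholds and the sample catalog are invented.

TIERS = ["tier0_flash", "tier1_disk", "tier2_cloud"]

def place(access_count, hot_threshold=100, cold_threshold=5):
    """Map an object's recent access count to a tier."""
    if access_count >= hot_threshold:
        return "tier0_flash"   # hottest data on flash (Tier 0)
    if access_count >= cold_threshold:
        return "tier1_disk"
    return "tier2_cloud"       # cold data to a remote cloud tier

def retier(catalog):
    """Recompute placement for every object in the catalog."""
    return {obj: place(count) for obj, count in catalog.items()}

catalog = {"invoice_db": 500, "monthly_report": 20, "old_backup": 1}
print(retier(catalog))
# {'invoice_db': 'tier0_flash', 'monthly_report': 'tier1_disk',
#  'old_backup': 'tier2_cloud'}
```

Real micro-tiering products make this decision continuously and at much finer granularity (blocks or extents rather than whole objects), but the shape of the policy is the same: spend the fast, expensive tier only on the data doing the most work.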