5 Top Memory Management Trends


Memory has become far more important over the past decade. As computer architectures and technology evolved, it became possible to pack more and more memory into laptops, PCs, servers, and the high-performance computing platforms that power the world’s supercomputers. 

The field of memory represents one of the most vibrant areas in the IT landscape.

Here are five of the top trends in memory and memory management:

1. Flash to Supplement Memory 

Flash storage has revolutionized storage over the last decade or so. It has sped up systems enormously and opened the door to the lightning-fast pace of technology that we have all come to expect. In essence, flash storage is non-volatile memory.

Hard disk drives (HDDs) contain all kinds of moving parts: arms, spinning disks, and heads are all subject to wear and malfunction. Consequently, it was not uncommon for a hard drive to simply stop working one day. Most failures occurred early in a drive’s life or toward the end of it. Nevertheless, any long-term user of HDDs has probably experienced a drive failure.

Flash storage eliminates the moving parts and so is far more reliable. Its electronically programmable memory cells store and retrieve data much faster than disk storage.

Flash comes in a variety of form factors, such as M.2, U.2, and PCIe add-in cards. Another that is now gaining in popularity is the Enterprise and Data Center SSD Form Factor (EDSFF). It uses an x4 interface and can fit vertically in a 1U enclosure, providing greater flexibility and storage density, the ability to scale modularly, and better cooling efficiency.

“EDSFF allows SSDs to break free from the constraints of legacy HDD form factors to deliver new form factors optimized for the future that will offer a further increase in SSD rack density,” said Arthur Lent, SVP of data protection at Dell Technologies.

2. NVMe to Augment Memory 

Non-Volatile Memory Express (NVMe) is a communications interface and driver that takes advantage of the increased bandwidth of PCIe.

It gives a further boost to the already impressive performance of flash. NVMe lets solid-state drives (SSDs) communicate with the CPU over the high-speed PCIe bus and can transfer about 25 times more data than traditional SATA. NVMe drives also exceed one million IOPS by using parallel, low-latency data paths to the underlying media.
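For a rough sense of how those parallel data paths are exercised, here is a minimal Python sketch, an illustration rather than a real benchmark tool such as fio, that issues random 4 KiB reads from several threads against a device or large test file and reports the resulting read rate. The device path, thread count, and read counts are assumptions, and because the reads go through the page cache rather than O_DIRECT, the numbers will flatter the hardware.

```python
# Hypothetical sketch: issue random 4 KiB reads in parallel to illustrate
# how NVMe's deep, parallel queues are exercised. The path below is an
# assumption; reading a raw block device usually requires root, and a large
# regular file works just as well for demonstration purposes.
import os
import random
import time
from concurrent.futures import ThreadPoolExecutor

DEVICE = "/dev/nvme0n1"   # assumed path; swap in a large test file if needed
BLOCK = 4096              # 4 KiB per read
READS_PER_WORKER = 1000
WORKERS = 8

def worker(fd: int, dev_size: int) -> int:
    done = 0
    for _ in range(READS_PER_WORKER):
        # Pick a random block-aligned offset and read 4 KiB from it.
        offset = random.randrange(0, dev_size // BLOCK) * BLOCK
        os.pread(fd, BLOCK, offset)
        done += 1
    return done

def main() -> None:
    fd = os.open(DEVICE, os.O_RDONLY)
    dev_size = os.lseek(fd, 0, os.SEEK_END)   # total size in bytes
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=WORKERS) as pool:
        total = sum(pool.map(lambda _: worker(fd, dev_size), range(WORKERS)))
    elapsed = time.perf_counter() - start
    os.close(fd)
    print(f"{total} reads in {elapsed:.2f}s ≈ {total / elapsed:,.0f} IOPS")

if __name__ == "__main__":
    main()
```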

“NVMe flash offers the highest performance and lowest latency, and SAS flash offers higher-capacity lower cost,” said Ian Clatworthy, senior product marketing manager, Hitachi Vantara.

“Customers can consolidate more applications on a single architecture, thereby simplifying management, automation, and analytics, without worrying about application performance.” 

That has given rise to NVMe over Transmission Control Protocol (NVMe/TCP), allowing storage to be shared among several data centers over standard IP networks without the need for physical modifications to those servers or storage.
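As a rough illustration of what that looks like from a Linux host, here is a hedged sketch that shells out to the nvme-cli tool to discover and attach an NVMe/TCP target; the target address, port, and NQN are placeholders, and a real deployment would also need the nvme-tcp kernel module and appropriate privileges.

```python
# Hypothetical sketch of attaching an NVMe/TCP target via nvme-cli.
# The address, port, and NQN are placeholders, not a real target.
import subprocess

TARGET_ADDR = "192.0.2.10"   # placeholder IP of the storage target
TARGET_PORT = "4420"         # default NVMe/TCP service port
TARGET_NQN = "nqn.2014-08.org.example:subsystem1"  # placeholder NQN

def connect_nvme_tcp() -> None:
    # Discover subsystems exported by the target over plain TCP/IP.
    subprocess.run(
        ["nvme", "discover", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT],
        check=True,
    )
    # Connect; the remote namespace then appears as a local /dev/nvmeXnY device.
    subprocess.run(
        ["nvme", "connect", "-t", "tcp", "-a", TARGET_ADDR, "-s", TARGET_PORT,
         "-n", TARGET_NQN],
        check=True,
    )

if __name__ == "__main__":
    connect_nvme_tcp()
```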

See more: What’s New in NVMe 2.0

3. More Efficient Memory Management

Enterprises demand flexible configuration to run their workloads. Adequate performance depends on an appropriate combination of CPU, memory, network, and storage resources.

Within the storage infrastructure, NVMe, SSDs, and hard disks provide a flexible combination to support multiple workloads. Hard disks combined with cache layers of memory help optimize the storage workflow. Modern network-attached storage (NAS) systems now provide the flexibility to configure hardware as needed, including taking advantage of L1, L2, and L3 caches for frequently accessed data.

“We are seeing customers use sophisticated software techniques to keep the data set in the cache for fast access,” said Brian Henderson, director of product marketing for unstructured data storage at Dell Technologies.

“Workloads like NVIDIA GPUDirect can take advantage of protocols like RDMA over NFS on a system where offloading delivers faster throughput.”
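The caching principle Henderson describes can be shown with a small, hypothetical Python sketch: an in-memory LRU cache sitting in front of a slower backing file, so that frequently accessed blocks are served from memory on repeat reads. The class name, path, and sizes are illustrative only, not any vendor’s API.

```python
# Simplified illustration of the cache-layer idea: keep recently used blocks
# in fast memory (an LRU dict) so repeat reads avoid the slower backing store.
from collections import OrderedDict

class BlockCache:
    def __init__(self, backing_path, block_size=4096, capacity=256):
        self.path = backing_path
        self.block_size = block_size
        self.capacity = capacity      # max number of blocks held in memory
        self.cache = OrderedDict()    # block_no -> bytes, ordered by recency

    def read_block(self, block_no):
        if block_no in self.cache:    # hit: serve from memory
            self.cache.move_to_end(block_no)
            return self.cache[block_no]
        with open(self.path, "rb") as f:   # miss: go to the slower backing store
            f.seek(block_no * self.block_size)
            data = f.read(self.block_size)
        self.cache[block_no] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict the least recently used block
        return data

# Usage: repeated reads of the same hot blocks are served from memory.
cache = BlockCache("/tmp/example.dat")      # placeholder path
```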

See more: Three Key Memory Technologies Driving Data Management

4. Memory-as-a-Service

We’ve had software-as-a-service, DR-as-a-service, backup-as-a-service, and many others. So why not memory-as-a-service (MaaS)? MaaS has been gathering steam. 

“The trend towards memory-as-a-service (MaaS) is expected to accelerate in 2022 as products compatible with Compute Express Link (CXL) start shipping,” said Yong Tian, VP of products at MemVerge.

CXL is an industry-supported cache-coherent interconnect for processors, memory expansion, and accelerators. It is overseen by the CXL Consortium, which was formed to develop technical specifications that facilitate breakthrough performance for emerging usage models while supporting an open ecosystem for data center accelerators and other high-speed enhancements.

Tian noted that since dynamic random access memory (DRAM) was invented in 1969, memory has been an expensive, scarce, and volatile piece of hardware that is not composable. In 2020, memory virtualization software subscriptions began to emerge to transform DRAM and Intel Optane Persistent Memory into pools of composable memory. 

“The result is the performance, capacity, availability, and mobility of the software-defined memory can now be provisioned as a service and paid for on a price-per-gigabyte basis,” Tian said.

5. Petabyte-Class Memory

Later this year, CXL-compatible processors, memory, chips, and servers will arrive on the scene.

This will allow petabytes of physical memory to be interconnected in data center racks and shared by heterogeneous CPUs, GPUs, and FPGA-based DPUs. 

“At this point, memory will scale to petabytes and demand sophisticated memory-as-a-service capabilities to handle traffic between the processors and to protect the massive memory blast zone,” said Tian with MemVerge.

See more: Best Software-Defined Storage (SDS) Companies

Drew Robb
Drew Robb is a contributing writer for Datamation, Enterprise Storage Forum, eSecurity Planet, Channel Insider, and eWeek. He has been reporting on all areas of IT for more than 25 years. He has a degree from the University of Strathclyde in the UK and lives in the Tampa Bay area of Florida.
