CXL Memory Interconnect’s Impact on Enterprise Storage Architectures

Sometimes, a small innovation can lead to a revolution. The immediate motivations for the development of HTML, for example, were both short-term and pragmatic — scientists at CERN just wanted an easy way to communicate and had no clue that their personal, idiosyncratic system would one day demand the creation, and dictate the architecture, of huge server facilities.

We may be on the cusp of a similar revolution when it comes to enterprise storage infrastructures. In just the last few years, several technologies and approaches have emerged that promise to fundamentally change how we store and process data.

One of these, and an often overlooked one, is CXL. In this article, we will take a look at what CXL is and how it will change enterprise storage architectures.

What is Compute Express Link (CXL)?

CXL stands for Compute Express Link, and at the most fundamental level it's a memory interconnect technology. It aims to provide high-performance, cache-coherent connections between the various processors now found in the average data center – CPUs, GPUs, TPUs, and other accelerators – and the memory they use. Rather than replacing PCIe outright, CXL builds on the PCIe 5.0 physical layer, adding memory and cache-coherency protocols on top of it for high-performance contexts – which will soon be most contexts.

The new technology has several advantages besides raw performance, though. First, CXL is an open industry standard, unlike 3D XPoint, the proprietary memory technology developed jointly by Intel and Micron, which until now has been the closest thing we've had to truly high-performance memory expansion. Both companies are now moving away from that proprietary technology to focus on CXL.

The most important outcome of this shift will be that – at least eventually, and at least in principle – CXL will replace proprietary memory interconnects entirely. This matters because, at the moment, there are many different types of enterprise storage, and each processor uses its own proprietary connection to access them. This makes enterprise storage infrastructure very complicated and makes predicting its performance all but impossible.

With CXL, every processor will be able to access every type of memory using the same method. This will allow different processors to draw on shared memory pools in a dynamic, adaptive way. That will matter not just for data centers but also for emerging workloads such as neural networks, which generally run on GPUs that occasionally need large amounts of memory.
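To make the pooling idea concrete, here is a minimal Python sketch of the kind of dynamic, adaptive allocation that CXL memory pooling enables. The `MemoryPool` class and the device names are purely illustrative – this models the behavior, not any real CXL API:

```python
# Conceptual sketch: a shared memory pool that multiple "devices"
# (CPUs, GPUs, accelerators) can borrow from and return to on demand,
# as CXL memory pooling allows. Names and sizes are illustrative.

class MemoryPool:
    def __init__(self, capacity_gb):
        self.capacity_gb = capacity_gb
        self.allocations = {}  # device name -> GB currently held

    def free_gb(self):
        return self.capacity_gb - sum(self.allocations.values())

    def allocate(self, device, gb):
        """Grant `gb` gigabytes to `device` if the pool has room."""
        if self.free_gb() < gb:
            return False
        self.allocations[device] = self.allocations.get(device, 0) + gb
        return True

    def release(self, device, gb):
        """Return `gb` gigabytes from `device` back to the pool."""
        held = self.allocations.get(device, 0)
        self.allocations[device] = max(0, held - gb)

pool = MemoryPool(capacity_gb=512)
pool.allocate("gpu0", 256)   # a training job temporarily borrows memory
pool.allocate("cpu0", 128)
pool.release("gpu0", 256)    # job finishes; capacity returns to the pool
```

The point of the sketch is the lifecycle: memory is granted to whichever processor needs it at the moment and reclaimed afterwards, rather than being permanently wired to one device.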

For those of a particular generation, you can think of CXL as being to memory what NVMe was to PCIe flash SSDs. Before NVMe, every flash drive shipped with its own driver, and running multiple SSDs from different vendors was a nightmare. After NVMe, everyone was working from the same standard.

The Benefits of CXL for Storage

It's immediately apparent that CXL will benefit computing, because it will allow a broader range of servers to take advantage of a similarly wide range of storage devices. The impact of the technology on storage itself is less obvious, but it is likely to be significant as well.

The clearest example of this is caching. CXL will allow storage systems to use a much broader and deeper pool of memory for temporary caching. This is important because, at a time when many ISPs are periodically capping speeds, temporary storage of data is becoming a key factor in the performance of data centers.

At the moment, 3 TB is the largest DRAM cache in a commercially available system, but organizations with a “dynamic” public cloud connection speed (i.e., an erratic internet connection speed) will regularly see this cache reach its maximum capacity.

Although it's possible to extend this to around 4.5 TB with exotic solutions such as Optane PMem, cost currently imposes a hard limit on cache capacity for all but the wealthiest organizations. CXL circumvents this problem by allowing storage software to spread caching across multiple memory and storage media.
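As a rough illustration of what spreading a cache across media could look like in software, here is a Python sketch of a two-tier cache in which a small "DRAM" tier spills evicted entries into a larger, slower "CXL-attached" tier instead of discarding them. The class, tier names, and sizes are all hypothetical:

```python
from collections import OrderedDict

# Conceptual sketch: a two-tier LRU cache. Entries evicted from the
# small, fast DRAM tier spill into a larger CXL-attached tier rather
# than being dropped, so the effective cache is much deeper.

class TieredCache:
    def __init__(self, dram_slots, cxl_slots):
        self.dram = OrderedDict()   # small, fast tier (LRU order)
        self.cxl = OrderedDict()    # large, slower tier (LRU order)
        self.dram_slots = dram_slots
        self.cxl_slots = cxl_slots

    def put(self, key, value):
        self.dram[key] = value
        self.dram.move_to_end(key)
        if len(self.dram) > self.dram_slots:
            old_key, old_val = self.dram.popitem(last=False)
            self._spill(old_key, old_val)

    def _spill(self, key, value):
        self.cxl[key] = value
        self.cxl.move_to_end(key)
        if len(self.cxl) > self.cxl_slots:
            self.cxl.popitem(last=False)  # evicted for real this time

    def get(self, key):
        if key in self.dram:
            self.dram.move_to_end(key)
            return self.dram[key]
        if key in self.cxl:
            # Promote a re-used entry back into the fast DRAM tier.
            value = self.cxl.pop(key)
            self.put(key, value)
            return value
        return None  # miss: caller fetches from backing storage
```

The design choice the sketch highlights is that the second tier acts as a victim cache: data too cold for DRAM still gets a second life in cheaper, more plentiful memory before a real eviction occurs.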

New Storage Architectures

Realizing those benefits will depend, of course, on the storage software in place – specifically, on whether it can exploit the capabilities CXL offers. As such, CXL will drive changes not only in software-defined storage but also in the hardware architecture we use for storage.

These changes will already be apparent to some administrators. For example, the top composable infrastructure platforms already allow admins to use dynamic caching and to spread caching across different memory types.

Similarly, the process of mapping out a hybrid multi-cloud strategy now implicitly recognizes that we are heading for a future in which storage systems are more connected – and more dynamic – than ever before.

That said, it’s likely to be some years before we see wholesale change. So while we might have to wait for the full impact of CXL, rest assured that changes are on their way and that they will be huge.

Nahla Davies
Nahla Davies is a software developer and tech writer. Before devoting her work full time to technical writing, she managed—among other intriguing things—to serve as a lead programmer at an Inc. 5,000 experiential branding organization whose clients include Samsung, Time Warner, Netflix, and Sony.
