What’s New in NVMe 2.0

To software developers just entering the industry, it might come as a surprise that PCIe SSDs were not always the flexible, powerful, easy-to-use devices they are now. Ten years ago, in fact, they were not only hard to work with but so ruinously expensive that even thinking about using them set off alarm bells for managers.

There were a few key pieces of technology that changed that picture. One was progress in the drives themselves, which now cost far less than the $20,000 they commanded ten years ago. But more fundamental was the game-changing specification that allowed us to work with them. It's no exaggeration to say that the little spec, coming in at under 100 pages, changed storage forever. Its name was NVMe.

All of which is to say that the release of a new NVMe specification, predictably called NVMe 2.0, is a big deal. The new specification builds on the key features of the original NVMe but reimagines its core structure for contemporary development environments.

Here’s a look at the process that led to NVMe 2.0, and how it will change storage.

NVMe Challenges

It’s worth recognizing just how far NVMe has brought us, and the immense challenges that any update of the specification must face up to. As I’ve mentioned, the first specification was absurdly short, given how much it ended up achieving. According to authors Amber Huffman and Peter Onufryk, this was simply because they designed the specification for their own use, and had no idea it would end up underpinning drivers for every major OS and eventually give rise to the concept of the data fabric.

This was a happy accident. The (relative) simplicity of the original specification meant that developers from fields as diverse as mobile app development and data center architecture immediately saw how it could be adapted to suit their needs.

And, because bespoke implementations of NVMe were then developed for each and every software development niche, the standard not only saved developers time, it also saved their companies a lot of money, because it cut the cost of developing interfaces for PCIe SSDs many times over. At a time when it costs around $85,000 a year to hire an experienced back-end developer, that was certainly welcome.

Unfortunately, however, it is precisely this process of adaptation that is now causing difficulties with NVMe. Though the original specification was simple to use, it was never designed to be a complete tool set for working with PCIe SSDs. As a result, even simple processes like pulling data from compressed local storage media are implemented in completely different ways at different companies. This can make NVMe difficult to work with for developers who move between organizations, such as freelancers, and it increases the risk of costly errors.
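
To make that concrete, here is a minimal sketch of what the standard path looks like on Linux, which exposes NVMe admin commands through a generic passthrough ioctl. It issues the Identify Controller command (opcode 0x06) and reads the model number out of the returned structure. The /dev/nvme0 device path is an assumption about the system, and error handling is trimmed for brevity; bespoke in-house wrappers typically re-implement exactly this kind of plumbing, each in its own way.

```c
/* Minimal sketch: issue NVMe Identify Controller through the standard
 * Linux passthrough interface. Assumes /dev/nvme0 exists. */
#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/nvme_ioctl.h>

int main(void)
{
    uint8_t data[4096] = {0};           /* Identify returns a 4 KiB structure */
    struct nvme_passthru_cmd cmd = {0};

    int fd = open("/dev/nvme0", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    cmd.opcode   = 0x06;                /* Identify admin command */
    cmd.addr     = (uintptr_t)data;
    cmd.data_len = sizeof(data);
    cmd.cdw10    = 1;                   /* CNS = 1: Identify Controller */

    if (ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd) < 0) {
        perror("NVME_IOCTL_ADMIN_CMD");
        close(fd);
        return 1;
    }

    /* Bytes 24-63 of the Identify Controller structure hold the
     * ASCII model number. */
    char model[41] = {0};
    memcpy(model, data + 24, 40);
    printf("model: %s\n", model);

    close(fd);
    return 0;
}
```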

A New NVMe Vision

The process that has led to NVMe 2.0 takes this into account. It has been led by one of the original authors of the standard, Huffman, who has been keen to involve as many stakeholders as possible in the development of the next-generation specification. Indeed, NVM Express, the consortium that manages NVMe, now has more than 130 members, an impressive number for any software project.

This consortium has been keen to preserve the key features that made NVMe such a success in the first place, above all its speed. However, it has also recognized the need for a major reworking of the way the specification handles data. As such, at the core of the new specification is a complete refactoring, something that has been overdue for a while but was held up by the sheer number of teams using the original standard.

This refactoring does much to make NVMe more flexible and less hardware dependent. The original specification was split into a base document that assumed PCIe and a separate fabrics stack layered on top of it. The new standard adds a level of abstraction above that base, with separate transport bindings underneath for PCIe, RDMA, TCP, and potentially other forms of hardware that are yet to emerge.
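
The specification is a document rather than code, but the shape of the layering is easy to sketch. Below is an illustrative, hypothetical C sketch (none of these names come from the spec) of a transport-neutral command layer sitting on interchangeable bindings, which is the pattern the 2.0 restructuring formalizes:

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* A transport-neutral contract: submit a 64-byte submission entry,
 * receive a 16-byte completion entry. */
struct nvme_transport_ops {
    const char *name;
    int (*submit)(const uint8_t sqe[64], uint8_t cqe[16]);
};

/* Stub "PCIe" binding: a real one would ring a doorbell register on
 * memory-mapped queues. */
static int pcie_submit(const uint8_t sqe[64], uint8_t cqe[16]) {
    printf("pcie: submitting opcode 0x%02x\n", sqe[0]);
    memset(cqe, 0, 16);            /* status 0 = success */
    return 0;
}

/* Stub "TCP" binding: a real one would frame the entry into an
 * NVMe/TCP PDU and send it over a socket. */
static int tcp_submit(const uint8_t sqe[64], uint8_t cqe[16]) {
    printf("tcp: submitting opcode 0x%02x\n", sqe[0]);
    memset(cqe, 0, 16);
    return 0;
}

static const struct nvme_transport_ops pcie_ops = { "pcie", pcie_submit };
static const struct nvme_transport_ops tcp_ops  = { "tcp",  tcp_submit  };

/* Everything above the transport layer is written once, against the
 * ops table, regardless of what carries the command. */
static int read_lba(const struct nvme_transport_ops *t, uint64_t lba) {
    uint8_t sqe[64] = {0}, cqe[16];
    sqe[0] = 0x02;                 /* NVM command set Read opcode */
    memcpy(&sqe[40], &lba, 8);     /* starting LBA lives in CDW10-11 */
    return t->submit(sqe, cqe);
}

int main(void) {
    read_lba(&pcie_ops, 0);        /* same call, local PCIe drive */
    read_lba(&tcp_ops, 0);         /* same call, drive across the fabric */
    return 0;
}
```

The point of the design is that read_lba is written once: whether the 64-byte submission entry travels over memory-mapped PCIe queues or inside an NVMe/TCP packet is the binding's problem, not the caller's.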

Built for the Future

This refactoring is the biggest single change in NVMe 2.0, because it affects the way the standard is used in many different environments. It permits, for instance, the drive management interface of the new standard to be completely hardware independent, and allows for currently exotic use cases such as putting hard disk drives (HDDs) behind NVMe.

More generally, the additional abstraction layer moves NVMe away from the core drive technology that it was developed for, solid state drives, and should allow it to become a much more flexible, much more widely used standard. It will allow developers, for instance, to combine flash storage with other types of data storage, and work with both simultaneously through the same interface. 
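As a rough illustration of what "one interface over mixed media" looks like in practice, the sketch below reuses the Linux passthrough ioctl from earlier to walk every active namespace on a controller and print its size, regardless of what medium backs each namespace. The CNS values come from the base specification; /dev/nvme0 is again an assumption, and error handling is minimal.

```c
#include <fcntl.h>
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/nvme_ioctl.h>

/* Issue an Identify admin command for the given namespace and CNS. */
static int identify(int fd, uint32_t nsid, uint32_t cns, void *buf)
{
    struct nvme_passthru_cmd cmd = {0};
    cmd.opcode   = 0x06;            /* Identify */
    cmd.nsid     = nsid;
    cmd.addr     = (uintptr_t)buf;
    cmd.data_len = 4096;
    cmd.cdw10    = cns;
    return ioctl(fd, NVME_IOCTL_ADMIN_CMD, &cmd);
}

int main(void)
{
    int fd = open("/dev/nvme0", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    /* CNS 0x02: list of up to 1024 active namespace IDs. */
    uint32_t nsids[1024] = {0};
    if (identify(fd, 0, 0x02, nsids) < 0) { perror("identify"); return 1; }

    for (int i = 0; i < 1024 && nsids[i]; i++) {
        uint8_t ns[4096];
        /* CNS 0x00: Identify Namespace; bytes 0-7 hold the size in
         * logical blocks. */
        if (identify(fd, nsids[i], 0x00, ns) == 0) {
            uint64_t nsze;
            memcpy(&nsze, ns, 8);
            printf("nsid %u: %llu blocks\n", nsids[i],
                   (unsigned long long)nsze);
        }
    }
    close(fd);
    return 0;
}
```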

This ability is likely to become increasingly important in the coming years. As we’ve pointed out many times before, the data storage industry is now moving toward software-defined storage and data fabrics, and NVMe 2.0 has been designed to work seamlessly with these new technologies. As Huffman put it in an interview with NextPlatform recently, “With NAND and people building their own drives, usage continues to fragment. We’ve given the ability to mix and match different flash and other technologies.”

It should go without saying that NVMe 2.0 is not going to be the final iteration of the standard. But equally, it is not certain that it will match the success of the earlier standard. Though the new specification has been developed with new approaches to data storage in mind, it will ultimately be judged by the same standards as its predecessors: whether it can resolve bottlenecks, improve performance, and lower the development cost of storage solutions.

Nahla Davies
Nahla Davies is a software developer and writer. Before devoting her work full time to technical writing, she managed—among other intriguing things—to serve as a lead programmer at an Inc. 5,000 experiential branding organization whose clients include Samsung, Time Warner, Netflix, and Sony.
