
NVMe Enterprise SSDs Over Fabrics Can Conquer Latency

A revolution is underway in the way enterprise SSDs are used in the data center. It was given a boost by an announcement made with little fanfare earlier this month, and its repercussions for enterprise SSD users could be titanic.

The announcement in question detailed the new NVM Express over Fabrics specification, which NVM Express, Inc. released in June. This is the organization that developed the NVM Express (NVMe) specification for accessing solid state storage, such as enterprise SSDs, attached to a server over the (slower) PCI Express (PCIe) bus rather than the processor's fast memory bus.

Slower is a comparative term: the PCIe bus is much, much faster than SATA or even SAS connections. NVMe exploits the parallelism of flash storage, using multiple deep command queues to lower I/O overhead, improve performance and reduce latency.

The NVM Express over Fabrics specification deals with accessing NVMe storage devices such as enterprise SSDs over Ethernet, Fibre Channel, InfiniBand and other network fabrics.

"The NVM Express over Fabrics specification extends the benefits of NVM Express beyond rack-scale architectures to datacenter-wide Fabric architectures supporting thousands of solid state devices, where using a fabric as an attach point to the host is more appropriate than using PCI Express," the organization said in its announcement.

Enterprise SSD Storage Closer to the Server

So why is this announcement part of something revolutionary? The answer boils down to latency, and the problem of getting data from the place it's being stored to where it's needed – a server's processor – when it's needed.

The latency problem stems from the fact that ever since IT's year zero, processor performance has increased at a faster rate than storage performance. It's been compounded by the fact that data storage requirements have exploded, meaning that relying solely on direct-attached storage (DAS) within a server is no longer practical. That means data is being stored further and further away from server processors. The introduction of all-flash arrays populated by high-performance enterprise SSDs in the last few years has helped to reduce the latency problem, but in no way can it be said to have solved it.

To understand why not, it helps to look at how different data storage media affect latency. Latency is typically measured in nano-, micro- or milliseconds, with a processor's L1 cache able to deliver data in half a nanosecond. That's one half of one thousand-millionth of a second, an amount of time so small as to be almost impossible to comprehend.

Latency Measured in Days for Enterprise SSDs

Speaking at the Flash Forward storage conference in London in June, Chris Mellor, a storage expert at The Register, proposed a simpler way to understand latency in different storage media. Rebasing L1 cache memory to have a latency of 1 second, he said DRAM access latency (which is of the order of 100 nanoseconds) becomes 3 minutes 20 seconds. Reading a fast server-attached NVMe enterprise SSD such as a Micron 9100 NVMe PCIe SSD involves a latency of 2 days, 18 hours and 40 minutes, while an IBM FlashSystem V9000 all-flash array read involves latency of 4 days, 15 hours, 7 minutes and 20 seconds.

But if you think those latency periods are long, consider this: an EMC Isilon NAS access would involve a latency of more than 115 days, and a typical DAS disk access would take more than 6 years. And when it comes to SAN storage, the latency times are far, far worse: a typical SAN array access involves a latency wait of a staggering 19 years, 5 days, 10 hours and 40 minutes.
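To make the arithmetic behind these comparisons concrete, here is a minimal Python sketch of the rebasing, assuming rough, representative raw latencies (0.5 ns for an L1 cache hit, around 100 ns for DRAM, around 120 microseconds for an NVMe SSD read, and so on) rather than figures taken directly from Mellor's talk. Every latency is simply multiplied by two billion, the factor that turns a 0.5-nanosecond L1 cache hit into one second.

    # A minimal sketch of the rebasing arithmetic: every latency is scaled so
    # that a 0.5 ns L1 cache hit becomes 1 second, then printed in
    # human-friendly units. The raw latencies are rough, assumed values
    # chosen only to illustrate the scaling.
    from datetime import timedelta

    SCALE = 2_000_000_000  # 1 / 0.5e-9: rebases a 0.5 ns L1 cache hit to 1 second

    assumed_latencies_seconds = {
        "L1 cache":             0.5e-9,   # 0.5 ns
        "DRAM":                 100e-9,   # ~100 ns
        "NVMe PCIe SSD read":   120e-6,   # ~120 microseconds
        "All-flash array read": 200e-6,   # ~200 microseconds
        "NAS access":           5e-3,     # ~5 milliseconds
    }

    for name, seconds in assumed_latencies_seconds.items():
        rebased = timedelta(seconds=seconds * SCALE)
        print(f"{name:>22}: {rebased}")

With those assumed inputs the script prints the same order of figures quoted above: DRAM at about 3 minutes 20 seconds, an NVMe SSD read at roughly 2 days 18 hours, and a NAS access at well over 100 days on the rebased scale.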

So here's the problem: today's servers have super-powerful processors with multiple cores, but there's no way to move data to those processors fast enough to keep them busy. You can upgrade your storage systems to all-flash arrays packed with high-speed enterprise SSDs if you like, but if the rest of the infrastructure can't cope then what, asks Mellor, is the point? "It's like taking a 20-minute flight across the country, and then spending four hours driving to the office from the airport."

The obvious solution is to put all your data in the processor's L1 cache (or better still, the processor's registers), but sadly that's not an option. For one thing, the heat generated by anything but a tiny cache would be enough to melt the processor. Another partial solution is to use more fast, expensive DRAM, but DRAM is volatile and limited in capacity. So if you want to keep your data as close as possible to the processor, that leaves solutions like NVDIMMs or PCIe enterprise SSDs within the server.

These solutions are also expensive and space-constrained. More importantly, they don't address the need for shared storage.
