It's no surprise that NVMe speed is impressive: A blue-ribbon consortium of storage and server vendors developed NVMe as a high-performance interface specification that accelerates NAND SSDs using the PCIe bus.
NVMe's logical device interface takes advantage of SSDs' low latency and internal parallelism to improve IOPS and throughput while reducing latency. This is not the first time vendors have used PCIe to speed up SSDs, but it is the first standardized approach.
NVMe stands for non-volatile memory express. The protocol provides high bandwidth and low latency with flash-specific improvements. It supports current NAND flash and will scale to support future high-performance devices built on persistent memory technologies.
How fast is NVMe? Fast enough to get the attention of the data storage industry.
- Up to 64K queues: NVMe is purpose-built for speed, with an architecture that uses PCIe to map operations through shared memory, simplifies internal software, and optimizes I/O with up to 64K queues.
- Speeds top other formats: NVMe's features make it significantly faster than legacy SAS and SATA SSD protocols, not to mention SAS/SATA HDDs.
- Top rates: The fastest NVMe drives, usually available only through OEMs or to large enterprise customers, read at 3 GB/s and write at 1 GB/s. The same drives deliver 300,000+ random read IOPS and 40,000-50,000 random write IOPS.
- SSD transfer rates: The best NVMe drives for the mid-sized data center do not reach these stratospheric speeds, but they are far more affordable. For example, Samsung’s 983 DCT NVMe drive has a 1.92 TB capacity and achieves sequential writes of 1,900 MB/s, random reads of 540K IOPS, and random writes of 50K IOPS.
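A quick sanity check of figures like these: IOPS ratings are usually quoted at a 4 KiB I/O size, so they convert directly into effective throughput. The sketch below applies that assumption to the 983 DCT numbers above; the 4 KiB block size is an assumption, not a vendor-stated figure.

```python
# Back-of-envelope: convert quoted random IOPS into throughput,
# assuming the common 4 KiB I/O size used for IOPS ratings.

def iops_to_mbps(iops: int, io_size_bytes: int = 4096) -> float:
    """Effective throughput in MB/s for a given IOPS rate and I/O size."""
    return iops * io_size_bytes / 1_000_000

# Samsung 983 DCT figures quoted above
random_read_mbps = iops_to_mbps(540_000)   # 540K random-read IOPS
random_write_mbps = iops_to_mbps(50_000)   # 50K random-write IOPS

print(f"Random read:  {random_read_mbps:.0f} MB/s")   # ~2,212 MB/s
print(f"Random write: {random_write_mbps:.0f} MB/s")  # ~205 MB/s
```

The random-read figure works out to more than 2 GB/s of small-block throughput, which is why NVMe's advantage shows up most clearly under random, highly concurrent workloads.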
NVMe's parallel design lets it multitask far faster than older storage interface formats.
NVMe Speed Comparison: NVMe vs SATA
In the NVMe vs SATA speed comparison, SATA does have some advantages over NVMe. It’s widely deployed, and its SSD speeds are fast enough for many applications. There is no need to immediately rip and replace SATA, although customers might want to upgrade to SAS or NVMe during technology refreshes.
Vendors are still developing for SATA. Samsung, for example, continues to release consumer-level SATA SSDs that push against the interface's effective ceiling of roughly 550 MB/s (the SATA III link itself caps out at 6 Gbps). Although customers won’t be adding these devices to the data center, they will keep accelerating consumer devices.
Where SATA falls short is at the upper performance levels of highly transactional applications. For these, NVMe is the better choice. If your storage platform hosts a high-demand application that requires NVMe's high SSD transfer rates, it is well worth the cost.
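To make the SATA-versus-NVMe gap concrete, the sketch below compares the time to read a 100 GB working set at an assumed ~550 MB/s effective SATA rate versus an assumed ~3,000 MB/s NVMe sequential-read rate. Both rates are illustrative assumptions, not measurements of any specific drive.

```python
# Illustrative transfer-time comparison under assumed rates:
# SATA III ~550 MB/s effective vs. data-center NVMe ~3,000 MB/s.

def read_seconds(size_gb: float, rate_mbps: float) -> float:
    """Seconds to read size_gb gigabytes at rate_mbps megabytes/second."""
    return size_gb * 1000 / rate_mbps

sata_s = read_seconds(100, 550)    # ~182 s
nvme_s = read_seconds(100, 3000)   # ~33 s
print(f"SATA: {sata_s:.0f} s, NVMe: {nvme_s:.0f} s, "
      f"speedup: {sata_s / nvme_s:.1f}x")
```

For a batch job or a database warm-up, that is the difference between three minutes and half a minute, which is why highly transactional workloads justify the premium.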
The Architecture Behind NVMe Speed
NVMe SSDs deliver throughput of up to 32 Gbps (gigabits per second) over a four-lane PCIe 3.0 link. Half a million IOPS is common, and high-end arrays range up to 10 million IOPS. Despite these high speeds, latency generally stays below 20 microseconds, and some drives cut that number in half. By legacy standards, these numbers are deeply impressive.
NVMe Form Factors
NVMe drives ship as standard-sized PCIe expansion cards or in a 2.5” form factor with a four-lane PCIe interface running through a U.2 connector. The most popular choice for easy deployment, U.2 connects SSDs to a host and works with PCIe, SAS or SATA: the U.2 (SFF-8639) connector carries four PCIe lanes, two SAS lanes and one SATA lane for broad interface support in the 2.5” form factor.
The M.2 specification, a mini board available with PCIe, SATA or USB interfaces, is also growing in popularity for consumer-level NVMe usage. M.2 boards come in several sizes, including the smallest available PCIe footprint.
Storage capacity on NVMe drives starts at a consumer-sized 450 GB and rises to 11 TB and up for the data center.
Features That Boost NVMe Speed
· SSD parallelism. The architecture exploits SSD parallelism to reduce I/O overhead. HDDs, and tape for that matter, are sensitive to access patterns: sequential access is fast, while random access slows things down. SSDs service requests in parallel, so random versus sequential access has little effect on performance.
· Updated bus. Hybrid flash arrays were often bottlenecked because their SSD tiers could run faster than the HDD-era storage interfaces could support. IT compensated to a point with multicore processors and plenty of RAM, but it is more efficient and less expensive to deploy NVMe, which is engineered to take advantage of SSD speeds.
· Performance enhancements. Additional enhancements include support for a queue depth of 64K commands per queue and the ability to process up to 64K of these queues at the same time. Along with latency reduction, this accelerates performance for busy servers processing simultaneous requests.
· Direct memory access. NVMe employs direct memory access (DMA) over the PCIe bus. This lets the interface map I/O commands and responses to shared host memory, which frees CPU resources. NVMe also streamlines its command set, requiring fewer than half the CPU instructions of SAS or SATA. (10 admin commands are required and 5 are optional; 3 I/O commands are required and 8 are optional.)
· Additional advanced features. NVMe supports features such as security container commands, power management and command enhancements. A host memory buffer helps to support client and mobile NVMe.
· Controller Memory Buffer. The buffer enables the host to place commands directly in controller memory instead of relying on command fetches across PCIe. NVMe passes memory blocks instead of SCSI commands, which results in lower latency, and it arbitrates priority commands by observing service-level agreement parameters.
· Reservations. NVMe supports multi-host reservations in Windows Clusters that coordinate host access by managing shared namespaces.
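The queueing headroom described above is easy to quantify. The sketch below compares NVMe's maximum outstanding commands against AHCI/SATA's single 32-command queue; the 64K limits come from the spec's 16-bit queue and queue-depth fields.

```python
# Queueing headroom: NVMe allows up to 64K I/O queues, each up to
# 64K commands deep, versus AHCI/SATA's single queue of 32 commands.

NVME_MAX_QUEUES = 65_535       # 64K I/O queues
NVME_MAX_QUEUE_DEPTH = 65_536  # 64K commands per queue
AHCI_QUEUES = 1
AHCI_QUEUE_DEPTH = 32

nvme_outstanding = NVME_MAX_QUEUES * NVME_MAX_QUEUE_DEPTH
ahci_outstanding = AHCI_QUEUES * AHCI_QUEUE_DEPTH

print(f"NVMe max outstanding commands: {nvme_outstanding:,}")  # ~4.3 billion
print(f"AHCI max outstanding commands: {ahci_outstanding}")    # 32
```

No real drive sustains billions of outstanding commands, of course; the point is that the protocol ceiling sits so far above any workload that the interface itself never becomes the queueing bottleneck.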
An example of NVMe speeds: the Samsung drive boasts impressive read/write times.
NVMe-oF (NVMe over Fabrics)
However, for all its performance benefits, NVMe's speed depends on direct attachment to an individual host. Shared storage requires a storage controller, which manages capacity provisioning, some data protection, physical addressing and protocol translation. But add a traditional controller and NVMe slows down, defeating the justification for its high cost.
NVMe over Fabrics (NVMe-oF) solves the problem. The spec enables NVMe message-based commands to transfer data over Ethernet, Fibre Channel (FC) or InfiniBand fabrics without passing through slow storage controllers. Storage admins can take NVMe SSDs out of the server and connect them over the fabric. Remote SSD storage then operates at near memory-to-memory transfer speeds with extremely low latency.
The spec defines RDMA transports for InfiniBand, RDMA over Converged Ethernet (RoCE) and the Internet Wide Area RDMA Protocol (iWARP), and a separate binding (FC-NVMe) for transport across Fibre Channel.
NVMe Bottlenecks and Limitations
Nothing is perfect, including NVMe speed.
- Heat throttling. M.2 PCIe SSDs can generate enough heat under sustained use to require throttling. However, vertical M.2 form factors ventilate better and mitigate the problem, and in practice these drives perform so quickly that transfers usually complete before heat becomes an issue.
- NAND SSD speeds. NVMe removed the bottleneck between SSD speeds and older storage interfaces, so the remaining speed limits sit primarily in the NAND flash itself. The technology is continually improving, and the market expects more performance gains in SSD speeds. NVMe is ready to support them.
- SSD-to-SSD slow data transfer. Data movement loses speed when transferring from SATA or SAS SSDs to NVMe SSDs. The system cannot write data to the NVMe SSD faster than the slower interface can supply it.
- Lacks advanced storage functionality. All-flash arrays have been improving their advanced storage functionality, including encryption, dedupe and compression, replication and snapshots. Newer NVMe drives do not yet offer this level of functionality, although NVMe drive manufacturers usually add features that protect end-to-end data integrity on the drives.
- Expensive. Although SSD prices are dropping, NVMe is expensive and unnecessary for every storage environment. Its primary use is to support intensive performance for highly transactional databases and business-critical applications. If IT uses NVMe for applications with lower performance needs, it will waste budget and I/O bandwidth. IT can save money by pairing NVMe with SATA or SAS flash in a tiered storage architecture and migrating aging data to HDDs, tape or the cloud.
- One more note about NVMe cost. For most data centers, mid-level NVMe drives will be more than adequate. Few data centers process so much data that they need the extreme performance and throughput of top-of-the-line NVMe. Buy conservatively so you do not overspend on underused drives, and make sure the NVMe drive’s capacity is enough for your storage needs.
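The SSD-to-SSD transfer limitation above follows from a simple rule: a copy runs no faster than the slower of the source's read rate and the destination's write rate. The sketch below models that with illustrative rates (an assumed ~550 MB/s SATA read and ~1,900 MB/s NVMe write).

```python
# Minimal model of the SSD-to-SSD bottleneck: effective copy
# throughput is limited by the slower side of the transfer.
# Rates are illustrative assumptions, not measured figures.

def copy_rate_mbps(source_read_mbps: float, dest_write_mbps: float) -> float:
    """Effective copy throughput in MB/s, capped by the slower device."""
    return min(source_read_mbps, dest_write_mbps)

# Copying from a SATA SSD (~550 MB/s read) to an NVMe SSD
# (~1,900 MB/s write) runs at SATA speed, not NVMe speed.
print(copy_rate_mbps(550, 1900))   # 550.0
```

This is worth remembering during migrations: seeding a new NVMe tier from SATA or SAS flash proceeds at the old tier's pace, not the new one's.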
Top Benefits of NVMe Speed: Use Cases
Life sciences, financial services and energy companies all depend on HPC with high performance and low latency. Life sciences and energy firms leverage NVMe speed for fast, complex calculations, since it virtually eliminates processor wait times when reading from storage. Financial services use NVMe as secondary memory to accelerate extremely high transaction volumes. In life sciences, industry testing shows a 6x performance improvement over SATA. The industry pays for it, at roughly a 50% price increase, but the total ROI is very favorable.
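The ROI arithmetic behind that claim is straightforward. Using the figures quoted above (a 6x performance gain at about a 50% price premium), performance per dollar still improves several-fold:

```python
# Rough price/performance arithmetic for the figures cited above:
# 6x the performance of SATA at roughly a 50% price premium.

perf_gain = 6.0         # 6x performance vs. SATA (industry testing cited)
price_multiplier = 1.5  # ~50% price increase

perf_per_dollar_gain = perf_gain / price_multiplier
print(f"Performance per dollar improves {perf_per_dollar_gain:.0f}x")  # 4x
```

A 4x improvement in performance per dollar is the kind of ratio that makes the premium easy to defend for workloads that are actually storage-bound.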
OLTP relational databases and big data also benefit from high-performance reads. DBAs can use SSD cache to pin metadata, data and indexes without slowing the database down, speeding up queries considerably. Big data’s intensive workloads no longer encounter storage bottlenecks, which lets business analysts make real-time decisions on immediately available data. And since NVMe is not limited to a specific type of workload, it accelerates performance for other applications as well.
And in a lesser-known use case, NVMe lets admins optimize virtualized environments by increasing the number of VMs the virtualized network can support. VMware and Hyper-V admins are often forced to optimize VM performance by partitioning the network by workload, latency or IOPS, which adds expense and management complexity. NVMe is purpose-built to manage clusters and optimize performance across workloads, enabling admins to boost network speed and performance and dispense with complex partitions. This is not a cheap proposition, but by applying NVMe to critical VMs, and keeping a close watch on falling prices, admins can easily justify the added expense.