SSD and Flash in Enterprise Storage Environments



Solid state drives (SSDs) have risen to dominance over server hard drives, outselling them in the enterprise market. Enterprise SSDs are now found everywhere from laptops to enterprise storage arrays. While they used to be deployed mainly to support high-end applications and small amounts of data, their reach has expanded to the point where server hard drives are becoming rare in the newest machines.


Understanding the Rise of SSD

So why has the enterprise SSD grabbed so much market share from the hard disk drive (HDD) in so short a time? The SSD vs HDD debate is best understood by looking at factors such as basic design, performance, capacity and cost.

Let’s begin with design. HDDs consist of platters that spin at speeds of anywhere from 7,200 rpm to 15,000 rpm. Their moving parts wear out and fail. HDD makers provide Mean Time Between Failure (MTBF) figures for their devices, which make it look like they last for many years or even decades. But the reality is that some HDDs fail at inopportune times during their service lives. Enterprise SSDs, on the other hand, have no moving mechanical components. Although they do have a finite lifespan, they don’t suffer mechanical failure.

Further, they are far more resistant to physical shock. Dropping a device that contains an HDD is likely to be many times more serious than dropping an SSD. A smart phone is a good example. They get dropped all the time, yet it is screen breakage that is the persistent problem – rarely the flash storage inside.

SSDs have many advantages over HDDs:

·  No moving parts

·  No mechanical reasons for failure

·  No need for fans to dissipate heat

·  Enterprise SSDs run silently

·  Applications are accessed far more rapidly

But there are a few downsides. Enterprise SSDs are not a good venue for archival data. Left without power for long periods, they can leak data. Some types of flash, too, don’t deal well with data that is written repeatedly: their cells wear out after many thousands of program/erase cycles. However, alternative types of enterprise SSD have been developed to mitigate this problem.
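That wear-out behavior is often expressed as an endurance estimate. A minimal sketch of the usual back-of-the-envelope math follows; every figure here (capacity, P/E cycle rating, daily writes, write amplification) is an illustrative assumption, not a spec for any particular drive:

```python
# Rough SSD endurance estimate. All figures are illustrative
# assumptions, not vendor specifications.

def endurance_years(capacity_gb, pe_cycles, gb_written_per_day,
                    write_amplification=2.0):
    """Years until the rated program/erase cycles are exhausted.

    Write amplification accounts for the extra internal writes the
    controller performs beyond what the host sends.
    """
    total_writable_gb = capacity_gb * pe_cycles / write_amplification
    return total_writable_gb / gb_written_per_day / 365

# A hypothetical 1 TB TLC drive rated for 3,000 P/E cycles,
# absorbing 500 GB of host writes per day:
years = endurance_years(1000, 3000, 500)
print(f"{years:.1f} years")  # roughly 8 years under these assumptions
```

This is why higher-endurance cell types (and over-provisioning) matter for write-heavy workloads: raising the P/E rating or lowering write amplification directly stretches the lifespan.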


SSD Form Factors

Some vendors advocate SSD everywhere for everything. The technology is mainstream, the performance benefits are well-chronicled, and cost concerns are largely in the past. As a result, major storage vendors have made enterprise SSD an integral part of their storage platforms. Instead of applying it only to VDI or the most performance critical applications, it is now being deployed in small, medium, and large data centers, and for block, file, and object data, as well as every class of workload.

· Many types: Along with its expanded role, flash now comes in many types and many form factors. Some vendors champion a particular SSD form factor and interface, claiming it’s the best fit for the enterprise. However, the reality is that there are different options for different needs.

· SATA and SAS: SATA SSDs are slower and are often used for the highest-capacity drives; they are generally designed for server-side deployments. SAS SSDs harness the SCSI interface and deliver better performance than SATA SSDs. SAS SSDs also often come with dual ports, which means that each drive can be mapped to two separate controllers for the failover and multipath I/O that are often required in enterprise storage.

Read further on the differences between SAS and SATA.
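The raw speed difference between these interfaces can be sketched with back-of-the-envelope arithmetic. The link rates below are the published maxima for each interface generation (SATA III, SAS-3, and PCIe 3.0 x4 as used by many NVMe drives), and the encoding overheads are the standard 8b/10b and 128b/130b figures; real drives deliver less than these ceilings:

```python
# Theoretical usable throughput per interface. Link rates are published
# interface maxima; actual drive performance is lower.

links = {
    "SATA III": (6.0, 8 / 10),             # 6 Gb/s, 8b/10b encoding
    "SAS-3": (12.0, 8 / 10),               # 12 Gb/s, 8b/10b encoding
    "PCIe 3.0 x4 (NVMe)": (32.0, 128 / 130),  # 4 lanes x 8 Gb/s, 128b/130b
}

def usable_mb_per_s(gbps, encoding_efficiency):
    """Convert a raw link rate in Gb/s to usable MB/s after encoding."""
    return gbps * 1000 / 8 * encoding_efficiency

for name, (gbps, eff) in links.items():
    print(f"{name}: ~{usable_mb_per_s(gbps, eff):.0f} MB/s")
```

The arithmetic shows why SATA tops out around 600 MB/s while a four-lane PCIe 3.0 NVMe drive has roughly 4 GB/s of link bandwidth to work with.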

SSD Innovation

NAND is a type of non-volatile storage where power is not needed to retain data. NAND flash is popular in MP3 players, cameras, USB drives and SSDs. In the early stages of flash innovation, manufacturers increased density in two dimensions of the same-sized silicon wafer while steadily lowering cost per gigabit. This was achieved by reducing the size of the data cells.

·  Single-level cell (SLC) architectures with one bit per cell were quickly followed by multi-level cell (MLC) and triple-level cell (TLC) designs. But as cells continued to shrink, maintaining endurance became more challenging, performance gains started to slow, and the outlook for further gains seemed poor.

·  But then flash moved from two dimensions to three. 3D NAND, or Vertical NAND (V-NAND), flash chips stack layers of cells vertically, enabling massive leaps in density, lower power consumption, faster read/write times and greater endurance.

·  But 3D NAND is not the only area of SSD innovation. A new class of persistent memory technologies now includes storage class memory (SCM) such as 3D XPoint, Z-SSD and other brand names. It is sometimes also referred to as persistent memory (PMEM). It enables data to be written in smaller sizes, and facilitates faster and more efficient read/write processes. This memory technology promises to be ten times denser and up to 1,000 times faster than conventional flash.

·  Another important development is the appearance of non-volatile memory express (NVMe) technology. The NVMe specification enables an SSD to use the high-speed PCIe bus to reduce latency, boost IOPS and cut power consumption. When combined with dual-port systems, NVMe supports both scale-up and scale-out enterprise SSD architectures. This brings nonvolatile memory as close as possible to the processor, and an NVMe SSD is said to be up to 10x faster than a single enterprise SATA SSD.

This kind of fast, nonvolatile storage goes some way toward bridging the gap between RAM and SSD, with a performance-cost ratio lying somewhere in between. It can be accessed by the OS like any other permanent storage device or deployed in DIMM slots, accessed by the OS as memory.
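Stepping back to the cell architectures above: the density gains come from two simple multipliers, bits stored per cell and, with 3D NAND, the number of stacked layers. A toy calculation (the 64-layer count is an illustrative assumption, not a reference to any specific product):

```python
# Relative density from cell architecture and layer stacking.
# The layer count is an illustrative assumption.

BITS_PER_CELL = {"SLC": 1, "MLC": 2, "TLC": 3}

def relative_density(cell_type, layers=1):
    """Density relative to a planar SLC die with the same cell footprint."""
    return BITS_PER_CELL[cell_type] * layers

# A hypothetical 64-layer TLC part vs. a planar SLC baseline:
print(relative_density("TLC", layers=64))  # 192x the planar SLC baseline
```

This is why stacking vertically restarted the density curve: instead of fighting the endurance problems of ever-smaller cells, manufacturers multiply capacity by adding layers.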

Highest-Capacity SSDs vs. HDDs

It doesn’t seem so long ago that a 1 TB HDD was big news. HDDs were traditionally bigger than SSDs. But that has changed dramatically. 3D NAND SSD drives larger than 1 TB (in a 2.5-inch form factor) are now commonplace, with Samsung having recently unveiled a 30 TB SSD, and a roadmap to a 100 TB SSD by 2020. By that time, 3D NAND will have captured the lion’s share of the market for enterprise SSD. Enterprise SSD capacities are doubling each year, outpacing the capacity increases of HDD. Performance numbers such as 2,000 MB/s on sequential reads and up to 120K IOPS in random read operations are now commonplace with NAND and V-NAND technology.
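The compounding effect of that yearly doubling is easy to underestimate. A short projection sketch — the SSD doubling rate comes from the text above, while the HDD starting point and its growth rate are assumed figures for illustration:

```python
# Compound capacity growth projection. The SSD doubling rate is taken
# from the article; the HDD figures are illustrative assumptions.

def project(start_tb, annual_growth, years):
    """Capacity after compounding annual_growth for the given years."""
    return start_tb * (1 + annual_growth) ** years

ssd_2020 = project(30, 1.0, 2)   # 30 TB SSD in 2018, doubling annually
hdd_2020 = project(14, 0.2, 2)   # assumed 14 TB HDD in 2018, +20%/year

print(f"SSD: ~{ssd_2020:.0f} TB, HDD: ~{hdd_2020:.0f} TB")
```

Two doublings from 30 TB lands at 120 TB, in the same ballpark as the 100 TB-by-2020 roadmap the article mentions, while HDD capacity under these assumptions barely moves.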

High-capacity SSDs make sense for many, but not all, applications. They are good for cloud applications, for example, that support content-sharing traffic, such as video and media streaming, as well as active archiving applications where highly sensitive information isn’t being overwritten. But read-intensive workloads may need right-sized endurance to deliver the consistent data throughput that fast delivery of requested content demands.

This has given rise to enterprise SSDs that strike a balance between capacity and performance. The Western Digital Ultrastar SN100 family, for example, is targeted at cloud, hyperscale and enterprise hyperconverged systems. It is available in a 2.5-inch or U.2 form factor, and uses the PCIe interface and an NVMe driver to deliver low latency even under heavy loads. With a capacity of 3.2 TB, it performs very well under mixed read/write workloads (delivering up to 310K IOPS). That makes it a good fit for large scale-out databases like MySQL, Cassandra, MongoDB or Hadoop’s HDFS, as these databases favor devices inside the server rather than traditional SAN or NAS network-based storage.

SSD Performance Versus Cost in Enterprise Environments

Tape remains the cheapest form of storage, followed by HDD. The downward trajectory of HDD pricing, however, is not being matched with speed gains: HDDs are not getting any faster. Enterprise SSDs are different. Their capacity and performance are steadily improving. SATA SSDs are used for lower-cost applications, while PCIe, 3D NAND and NVMe come at a price premium.

Over the past decade, flash chip prices have dropped at about 30% per year on average.

·  1 GB of flash cost $8 in 2007. By 2012, it was down to $0.71 – that’s less than 9% of its 2007 price. Today it’s going for around 25 cents per GB.

·  HDD prices are dropping, too. For raw capacity, performance-intensive HDDs (15K, 10K RPM) are going for around 25 to 27 cents per GB. But capacity-sensitive HDDs (7200 RPM and slower) are at more like 2 or 3 cents per GB, according to IDC.
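The figures above can be sanity-checked with a little arithmetic. The prices are the ones quoted in the text; the compounding check simply applies the ~30%-per-year decline over a decade:

```python
# Sanity-check the per-GB flash price trajectory quoted above.

price_2007 = 8.00   # $/GB in 2007
price_2012 = 0.71   # $/GB in 2012

# 2012 price as a fraction of the 2007 price -- just under 9%:
print(f"2012 vs 2007: {price_2012 / price_2007:.1%}")

# A steady ~30%/year decline over a decade lands near today's ~$0.25/GB:
projected_today = price_2007 * (1 - 0.30) ** 10
print(f"Projected: ${projected_today:.2f}/GB")
```

Both checks line up: $0.71 is about 8.9% of $8, and ten years of 30% annual declines takes $8 down to roughly $0.23, consistent with the ~25 cents per GB figure.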

·  However, some have realized that the switch to SSDs isn’t so much about initial expense per GB. Yes, they will cost more, but users can save money in the long run due to much higher performance, much lower power consumption and the smaller amount of space required for enterprise SSDs compared to HDDs. By swapping out HDDs for the latest SSDs, it is a simple matter to boost application performance, reduce end-user complaints and extend the life of existing hardware.
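The "more than price per GB" argument can be made concrete with a toy total-cost-of-ownership comparison. Every number below is a hypothetical assumption for illustration (drive prices, wattages, electricity cost, IOPS figures), not vendor data; the point is the shape of the comparison, not the specific values:

```python
# Toy total-cost-of-ownership sketch. All inputs are hypothetical
# assumptions for illustration, not vendor data.

def tco(price_per_gb, capacity_gb, watts, years=5, kwh_cost=0.12):
    """Acquisition cost plus electricity over the service life."""
    energy_cost = watts * 24 * 365 * years / 1000 * kwh_cost
    return price_per_gb * capacity_gb + energy_cost

hdd_cost = tco(price_per_gb=0.03, capacity_gb=8000, watts=9)
ssd_cost = tco(price_per_gb=0.25, capacity_gb=8000, watts=5)

# Per GB the HDD still wins, but per unit of performance the SSD wins
# by orders of magnitude (assumed ~200 IOPS HDD vs ~100K IOPS SSD):
print(f"HDD 5-yr cost: ${hdd_cost:.0f}, per IOPS: ${hdd_cost / 200:.2f}")
print(f"SSD 5-yr cost: ${ssd_cost:.0f}, per IOPS: ${ssd_cost / 100_000:.4f}")
```

Even this crude model shows the pattern the text describes: the SSD loses on raw dollars per GB but wins decisively once performance per dollar enters the picture, before counting rack space and cooling.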

SSD Applications in Enterprise Environments

Regardless of further innovation, enterprise SSD is eating up much of the storage pie. Some believe that the only realistic area left for non-flash storage is archiving.

There is a similar trajectory between SSD adoption in the enterprise and server virtualization. Just as server virtualization reached a tipping point years ago where admins had to justify why they were NOT going to virtualize a certain workload, the deployment of HDDs for enterprise storage is becoming increasingly difficult to explain.

Enterprises are particularly looking to enterprise SSD platforms to implement analytics. With ever growing piles of unstructured data requiring real-time analysis, SSD offers a way to control costs while delivering insight at the speed required by management. Thus vendors are developing software to support flash analytics. This enables ‘what-if’ scenario modeling, where you can determine the impact of changes to your environment before implementation—that’s how you avoid surprises and unnecessary costs.  

Over the long term, however, it looks like the all-flash data center may become a reality.
