Top Trends in High Capacity Enterprise SSDs


Flash has gone from being ridiculously expensive to largely affordable over the past five years. And that staggering drop in price has been paralleled by an equally spectacular rise: the capacity of Solid State Drives (SSDs) has gone through the roof.

“Higher-capacity, lower-cost flash has resulted in 1 TB to 16 TB 2.5-inch SSDs,” said Greg Schulz, an analyst for StorageIO Group. “There are already some proof of concept technology demonstrations that should enable current densities to be able to double within the same footprint with reduced price over the next few years.”

As a result, SSDs continue to eat into the market share of Hard Disk Drives (HDDs). Here are some of the top trends, tips and considerations related to high-capacity SSDs, including where the market is heading, when these huge drives are useful, and when to avoid them.

3D NAND

Let’s review a few basics. NAND is a type of non-volatile storage that needs no power to retain data. NAND flash is popular in MP3 players, cameras, USB drives and SSDs. 3D NAND is one of the big drivers of high capacity: a form of flash in which memory cells are stacked vertically to achieve higher densities and reduce cost.
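To make the stacking arithmetic concrete, here is a minimal sketch of how layer count multiplies die capacity. All figures (the 4 Gbit-per-layer planar density and the layer counts) are illustrative assumptions, not vendor specifications.

```python
# A toy model of 3D NAND scaling: the same planar array of cells, repeated
# in vertical layers, multiplies die capacity. All figures are illustrative
# assumptions, not vendor specifications.
def die_capacity_gb(gbit_per_layer: float, layers: int, bits_per_cell: int) -> float:
    """Raw capacity of a hypothetical NAND die, in gigabytes."""
    return gbit_per_layer * layers * bits_per_cell / 8

for layers in (1, 32, 48, 64):
    # Assume 4 Gbit of cells per layer and 3 bits per cell (TLC).
    print(f"{layers:2d} layers: {die_capacity_gb(4.0, layers, 3):6.1f} GB per die")
```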

“The advancements in 3D NAND have resulted in scaling and cost reductions that are outpacing Moore’s Law,” said Danny Cobb, Corporate Fellow and Vice President of Technology Strategy, Dell EMC. “Current generation flash chips are being used to create 2.5-inch SSDs that hold over 16TB of valuable, instantly retrievable business information.”

As this technology has evolved, flash makers have been able to push the envelope on capacity, resulting in huge leaps in capacity over a short period.

“High capacity SSDs are emerging with the transition to 3D NAND, delivering high capacity without sacrificing reliability or endurance,” said Julie Herd, Director of Storage Product Management at NetApp. “Customers are investigating high-capacity SSDs in order to minimize their storage footprint in already crowded datacenters.”

Doubling Up

How rapidly will SSD capacities grow? That’s hard to say. But Ivan Iannaccone, Director of Product Management for HPE 3PAR, said that as SSD technology evolves with new ways of packaging NAND flash, such as 3D NAND and other vertical stacking technologies, SSD capacity expansion will continue at a predictable rate for some time to come.

“We are seeing the overall capacities of solid state disks doubling about every 12 months,” said Iannaccone.
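Taken at face value, that doubling cadence compounds quickly. The short sketch below projects it forward from an assumed 16 TB starting point; it is an illustration of the trend Iannaccone describes, not a product roadmap.

```python
# Projecting the “doubling about every 12 months” observation forward from
# an assumed 16 TB drive. Purely illustrative compounding, not a roadmap.
capacity_tb = 16.0  # assumed starting capacity
for year in range(1, 5):
    capacity_tb *= 2
    print(f"Year {year}: ~{capacity_tb:.0f} TB per 2.5-inch SSD")
```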

Outgrowing HDDs

As well as doubling in capacity each year, SSDs are also outpacing the capacity increases of HDDs. Samsung announced 16 TB drives in April 2016 and previewed 32 TB drives that August, with release planned for the following year. Compare this with HDDs, which are still around 10 TB today and expected to reach 20 TB by 2020, said Satinder Sharma, Senior Manager in Product Management at Tintri.

“And while there is a big cost difference between the two options, the gap is narrowed by space-saving technologies like deduplication, compression and cloning,” said Sharma.
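A quick back-of-envelope calculation shows how data reduction narrows the gap Sharma mentions. The prices and the 4x reduction ratio below are assumptions for illustration only.

```python
# How data reduction narrows the raw price gap between flash and disk.
# The prices and the 4x reduction ratio are assumptions for illustration.
ssd_per_gb = 0.40   # assumed raw flash price, $/GB
hdd_per_gb = 0.04   # assumed raw disk price, $/GB
reduction = 4.0     # assumed combined dedupe/compression/cloning ratio

effective_ssd = ssd_per_gb / reduction
print(f"Raw gap:       {ssd_per_gb / hdd_per_gb:.0f}x")
print(f"Effective gap: {effective_ssd / hdd_per_gb:.1f}x after {reduction:.0f}x reduction")
```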

Data Management

As capacity rises, the chances of something going wrong rise, too, which is why data management is so important. Flash OEMs and storage array makers are increasingly focusing on software features to make flash arrays more reliable.

“There are new data management features being introduced for SSDs to optimize endurance and data management, including a new industry standard Multi-Stream Write introduced to the market with NetApp ONTAP 9 and Samsung SSDs,” said Herd.
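Conceptually, Multi-Stream Write lets the host tag writes that share an expected lifetime with the same stream ID, so the SSD can place them in the same flash blocks and erase them together, reducing write amplification. The sketch below illustrates only that grouping idea; the stream IDs and workload categories are hypothetical, and real implementations use device-level directives rather than application dictionaries.

```python
# Conceptual sketch only: group writes by expected data lifetime so blocks
# can be erased together. Stream IDs and categories are hypothetical; real
# Multi-Stream Write uses device-level directives, not Python dictionaries.
from collections import defaultdict

STREAMS = {"journal": 1, "database": 2, "tempfiles": 3}  # assumed mapping

def tag_writes(writes):
    """Group (category, payload) writes by stream ID before submission."""
    by_stream = defaultdict(list)
    for category, payload in writes:
        by_stream[STREAMS.get(category, 0)].append(payload)  # 0 = untagged
    return by_stream

batches = tag_writes([("journal", b"tx1"), ("tempfiles", b"tmp"), ("journal", b"tx2")])
for stream_id, payloads in sorted(batches.items()):
    print(f"stream {stream_id}: {len(payloads)} write(s) placed together")
```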

Saving Space

When does it make sense to introduce high-capacity SSDs? For those wanting to pack as much high-performance storage into as small a space as possible, high-capacity SSDs are an obvious answer. A related reason to implement them is a simple lack of data center space.

“If an organization is space constrained and needs a very dense rack unit (RU), then high capacity SSDs are the right choice,” said Herd. “This would include those who need to consolidate legacy performance HDD-based systems in order to shrink their footprint from 2-3 racks of HDD down to 2-3 shelves of SSD.”
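A rough back-of-envelope version of Herd’s consolidation example, with assumed rack-unit figures (standard 42U racks, 2U SSD shelves), shows why density is so persuasive for space-constrained shops.

```python
# Back-of-envelope consolidation math with assumed figures: standard 42U
# racks of HDD shelves collapsing into a few 2U SSD shelves.
rack_ru = 42           # assumed rack height in rack units
shelf_ru = 2           # assumed height of one SSD shelf
before = 3 * rack_ru   # "2-3 racks of HDD" -- take the upper end
after = 3 * shelf_ru   # "2-3 shelves of SSD" -- take the upper end
print(f"{before} RU -> {after} RU, a {before // after}x footprint reduction")
```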

Smaller Datasets 

But that might not be the right approach for every user. Herd added that those focused on the best performance with smaller datasets would be better served by smaller-capacity SSDs.

“Maximum system performance can be achieved with only 1 or 2 shelves of SSDs (regardless of size) and many datasets (especially transactional datasets) are not large enough to fill a high capacity SSD,” said Herd. “The only concern would be that after the controller has reached its performance limit, then adding more SSDs will not increase the performance of the system any further. However, adding more SSDs will not negatively affect cost or performance of the system.” 

Think in Futures

It is vital, therefore, to understand the key criteria for a purchase. A need for the best density will drive the discussion toward high-capacity SSDs, while the best performance per system (with smaller datasets) would keep a customer on smaller-capacity SSDs.

“Also, understanding the rate of growth of your data will help with planning initial and future purchases,” said Herd.

Fewer SSDs or Larger SSDs?

The main gain of all-flash comes from latency reduction, and this can be achieved with a few high-capacity SSDs just as it can with many small-capacity SSDs. In the former case, however, you can lower the cost per IOP due to greater drive density.

“There is always a tradeoff when it comes to IOPs density,” said Iannaccone. “The only caution is around the overall amount of IOPs/throughput that a single SSD can produce. The fewer drives, the lesser the overall system performance until the controller bottleneck is hit.”
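Iannaccone’s caution can be captured in a one-line model: system IOPs scale with drive count only until the controller ceiling is reached. The per-drive and controller figures below are assumptions for illustration.

```python
# System IOPs scale with drive count only until the controller ceiling is
# hit. Both figures below are assumptions for illustration.
def system_iops(n_drives: int, iops_per_drive: int = 100_000,
                controller_limit: int = 1_000_000) -> int:
    return min(n_drives * iops_per_drive, controller_limit)

for n in (2, 4, 8, 10, 16, 24):
    print(f"{n:2d} drives -> {system_iops(n):>9,} IOPs")
# Performance plateaus once 10 drives saturate the assumed controller limit.
```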

SAS SSDs for Failover

There are various kinds of SSD, such as SAS, SATA and PCIe. SATA and PCIe drives are generally designed for server-side deployments. Therefore, IT managers who want to build their own high-capacity all-flash array for shared storage should opt for SAS SSDs.

“SAS SSDs are the only drives available today with dual ports,” said Walter Hinton, Director of Client and Enterprise Solutions Marketing, Western Digital. “Dual port means that each drive may be mapped to two separate controllers for fail-over and multi-path IO.” 

Look Beyond SSDs

It is quite possible to implement high-capacity, high-performance SSDs and yet fail to realize the expected gains. This is often the result of failing to understand your workloads and their access-time and throughput requirements.

“In some cases, the bottleneck will be the network or CPU, not storage,” said Hinton.

Controller Bottleneck

The controller, too, can be the bottleneck. Therefore, organizations need to be thoughtful about the relationship between their SSD capacity and their controllers. Back in the day, the compute power available in storage controllers to extract IO from HDDs was much higher than what a collection of drives could deliver. Getting more performance was all about using as many drives as possible. 

Today, however, the performance of SSDs has outpaced what a storage controller can deliver. Each flash drive is capable of more than 100,000 IOPs, meaning, for instance, that 24 of them can deliver more than 2.4 million, but the storage array is limited to what its controllers can handle.

“Organizations need to ensure that they are not putting too much capacity behind a storage controller,” said Sharma. “They need to pick systems that can at some point scale-out instead of scaling up in order to ensure a good IOPs/TB ratio. Using a small number of high capacity drives or even too many smaller capacity drives behind the same controller may result in poor IOPs/TB, which is not good for consolidation.”

Rule of Thumb

As a rule of thumb: if a storage controller can deliver 400,000 IOPs, you don’t want more than 400-600 TB of logical capacity behind it, so as to maintain a high IOPs/TB ratio. 600 TB of logical space translates to around 150 TB of physical space at a modest 4x data reduction. That is only about ten 15.36 TB drives. In this case, it may be better to go with twenty 7.6 TB SSDs to get the full performance benefit of flash. One can go lower on IOPs/TB and store more data behind the same controllers if the goal is to store archive data rather than manage performance-oriented workloads.

“Organizations should pick a size of SSD that jives with the performance capabilities of their storage controllers,” said Sharma.
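The rule-of-thumb arithmetic above can be checked directly; the only numbers used here are the ones from the example (a 400,000 IOPs controller, 600 TB logical, 4x data reduction):

```python
# The rule-of-thumb figures above, computed directly.
controller_iops = 400_000
logical_tb = 600.0
reduction = 4.0

physical_tb = logical_tb / reduction                 # ~150 TB of raw flash
print(f"IOPs/TB: {controller_iops / logical_tb:.0f}")
print(f"Drive count: {physical_tb / 15.36:.1f} x 15.36 TB "
      f"or {physical_tb / 7.6:.1f} x 7.6 TB")
```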

Heavy Writes

There are many other areas to pay attention to when determining if high-capacity SSDs are right for you. If you have heavy write activity, a high-capacity SSD may not have the endurance required. You may need a tiered architecture with write-intensive caching on the front end and a persistent back-end data store (capacity SSD or capacity HDD). A good example of this approach is VMware Virtual SAN, suggested Hinton.
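As a rough illustration of that tiered pattern, here is a minimal sketch of a write-absorbing cache that destages to a high-capacity backing tier in batches. The class and tier names are hypothetical, and real products such as VMware Virtual SAN implement this far more robustly.

```python
# A minimal sketch of tiering: absorb heavy writes in a small front-end
# cache tier, then destage to a high-capacity backing tier in batches.
# Names and the flush threshold are hypothetical, for illustration only.
class TieredStore:
    def __init__(self, flush_threshold: int = 4):
        self.cache = {}      # write-intensive front-end tier
        self.capacity = {}   # high-capacity back-end tier (SSD or HDD)
        self.flush_threshold = flush_threshold

    def write(self, key, value):
        self.cache[key] = value                  # fast ack from the cache tier
        if len(self.cache) >= self.flush_threshold:
            self.capacity.update(self.cache)     # batched destage spreads wear
            self.cache.clear()

    def read(self, key):
        return self.cache.get(key, self.capacity.get(key))

store = TieredStore()
for i in range(10):
    store.write(f"block{i}", i)
print(store.read("block3"), len(store.capacity), "blocks destaged")
```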

Don’t Put All Your Eggs in One Flash Drive

One huge SSD might be great in terms of capacity and performance. But it also represents a single point of failure.

“You want to have enough drives such that the failure of one doesn’t impact application service levels. More drives can enable more performance, given the right array design,” said Cobb.

 

Drew Robb
Drew Robb is a contributing writer for Datamation, Enterprise Storage Forum, eSecurity Planet, Channel Insider, and eWeek. He has been reporting on all areas of IT for more than 25 years. He has a degree from the University of Strathclyde in the UK and lives in the Tampa Bay area of Florida.
