Where Do M.2 High Capacity SSDs Fit in the Data Center?


There is, they say, more than one way to skin a cat. And if you are after high capacity SSD storage there’s more than one way to get it. Put simply, you can go for one big high capacity SSD or lots of little high capacity SSDs. Both approaches have their fair share of pros and cons.

If you want to take the “one big high capacity SSD” approach, there’s never been a better time to do so. That’s because the combination of triple-level cell (TLC) or even quad-level cell (QLC) technology and 3D NAND that can now be reliably made with 48 or even 64 layers means ridiculously high capacity SSDs can now be produced at a (fairly) reasonable cost in a standard form factor. Samsung’s 15.36TB PM1633a SAS 2.5″ high capacity SSD is available now for around $10,000. A 32TB Samsung SSD and even a 60TB Seagate SSD have also been announced, and are months rather than years away from release.

But the other approach – lots of little high capacity SSDs – is also an interesting one. That’s because the evolution of high capacity SSDs has largely been based on the premise that they have to look, and behave, like conventional hard disk drives. That means they have been built in the traditional 2.5″ and 3.5″ form factors, which were chosen to accommodate rapidly spinning platters, despite the fact that they contain multiple, much smaller NAND storage chips.

So the question is this: What is so special about these legacy form factors when it comes to making high capacity SSDs? A 15.36TB high capacity SSD is quite a technical achievement, but only because it has been made within the constraint of a legacy form factor. The real technical achievement is the ability to cram vast amounts of storage into each of those smaller NAND chips.

M.2 high capacity SSDs

Let’s forget about standard disk drive form factors for a moment. Here’s where things get interesting. Once we’re freed from the constraint of 2.5″ and 3.5″ boxes we can take a close look at high capacity SSDs with a newer form factor: what used to be known as the Next Generation Form Factor (NGFF) but which is now called M.2 (pronounced “M dot 2”), and sometimes affectionately known as the “gum stick” form factor.

Talking about the M.2 form factor is actually somewhat misleading, because the M.2 specification allows for gum sticks of various widths and lengths. It also allows different bus interfaces – namely PCIe 3.0, SATA 3.0 and even USB 3.0 – and different logical interfaces: AHCI for backward compatibility with legacy SATA devices and operating systems, and NVMe for PCIe connections. It was originally designed to allow for high capacity SSDs that would fit into the limited space on motherboards (particularly laptop motherboards), but now it is potentially more interesting for servers that require very high performance as well as high capacity SSDs.
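
If you’re wondering which logical interface a given drive actually presents, a quick check is possible from the operating system. Here’s a minimal sketch, Linux-only and assuming the kernel’s usual device naming (nvme* for NVMe namespaces, sd* for SATA/SAS disks behind AHCI or a SAS HBA):

    import os

    # Sort block devices by the stack they sit behind, going by sysfs names.
    for dev in sorted(os.listdir("/sys/block")):
        if dev.startswith("nvme"):
            print(dev, "-> NVMe logical interface (PCIe)")
        elif dev.startswith("sd"):
            print(dev, "-> AHCI/SCSI stack (SATA or SAS)")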

Given that SSDs built around the M.2 form factor constraints are much smaller than traditional form factor SSDs, it follows that an M.2 high capacity SSD will not have as great an overall capacity. That means that, for now, 15TB M.2 SSDs are off the table. In fact M.2 high capacity SSDs currently max out at 2TB with Seagate’s 2TB Nytro XM1440 NVMe enterprise SSD, which was announced two months ago and will be available in November.

While 2TB may not sound that impressive for a high capacity SSD, it offers double the storage density of previous M.2 offerings. It has been designed for cloud (and data center) environments where space and power are at a premium, and where high performance is needed for applications including online transaction processing, high-performance computing and even big data analytics.

For now the increase in storage density for this high capacity SSD has been achieved through denser NAND packaging (aka cramming more NAND into the limited real estate afforded by the M.2 form factor) rather than a move to higher capacity NAND chips. The NAND itself is Micron MLC NAND, which suggests that even higher capacity M.2 SSDs will be possible in the near future either by switching to TLC or QLC NAND, or by using 3D NAND with more layers, or both.
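
To see roughly how much headroom that leaves, consider a simple multiplicative estimate. The figures below are idealized assumptions, and real-world gains are lower once over-provisioning and error correction overheads are counted, but they show why a 2TB MLC stick is unlikely to be the ceiling:

    # Idealized scaling: more bits per cell and more 3D NAND layers,
    # starting from today's 2TB MLC-based M.2 stick.
    base_tb = 2.0
    bits_per_cell = {"MLC": 2, "TLC": 3, "QLC": 4}
    layer_gain = 64 / 48  # e.g. moving from 48-layer to 64-layer 3D NAND

    for cell, bits in bits_per_cell.items():
        tb = base_tb * (bits / bits_per_cell["MLC"]) * layer_gain
        print(f"{cell} at 64 layers: ~{tb:.1f} TB in the same footprint")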

Why bother with M.2 high capacity SSDs?

Now to the elephant in the room. Why should anyone bother with an M.2 high capacity SSD, which can only store 2TB, when you can get a 15TB high capacity SSD like the Samsung unit mentioned earlier?

That’s especially true when you take into account that M.2 devices are not hot-swappable: you need to bring a whole system down to replace a failed device. In fact the hot-swapping issue may actually be a non-issue, precisely because M.2 high capacity SSDs are only likely to hold a maximum of a few TB for the foreseeable future. That means that an individual device failure will have less of an impact than the failure of a 15TB, or even 30TB or 60TB, 2.5″ or 3.5″ SSD.
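
A quick back-of-the-envelope calculation illustrates the point. Assuming, purely for illustration, that a cluster can re-replicate a lost device’s data at an aggregate 500 MB/s, the rebuild window grows linearly with device capacity:

    # Rebuild time after a device failure; the 500 MB/s rate is an assumption.
    TB = 1e12          # bytes
    rebuild_rate = 500e6

    for capacity_tb in (2, 15.36, 60):
        hours = capacity_tb * TB / rebuild_rate / 3600
        print(f"{capacity_tb:>6} TB device lost: ~{hours:.1f} hours to re-replicate")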

What’s more, the hot-swap issue might be significant in an old-style monolithic storage system where there is rarely a convenient time for a complete shutdown. But that’s not the case with today’s scale-out storage solutions, where a single node can be shut down to replace one or more failed M.2 high capacity SSDs relatively easily.

But is it better to have one or many high capacity SSD devices? The answer is not clear. “The more devices you have, the more ways you have to replicate data, but if you have more devices you also have more points of failure,” points out Jim Handy, solid state storage expert and semiconductor analyst at Objective Analysis.
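
Handy’s trade-off can be sketched with equally rough numbers. Assuming an illustrative 0.5% annualized failure rate per device, many small drives fail more often in aggregate, but each failure takes out a far smaller share of the total capacity:

    # More devices: more expected failures, smaller blast radius per failure.
    afr = 0.005  # assumed annualized failure rate per device

    for drives, tb_each in ((1, 60), (30, 2)):
        expected = drives * afr
        share_lost = tb_each / (drives * tb_each)
        print(f"{drives:>2} x {tb_each}TB: ~{expected:.3f} expected failures/year, "
              f"each taking out {share_lost:.0%} of total capacity")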

M.2 high capacity SSDs in parallel

We’ve already mentioned how M.2 high capacity SSDs can offer high performance, but there’s an additional reason to run numerous 2TB high capacity SSDs rather than one much higher capacity drive: with multiple M.2 high capacity SSDs you can run I/Os against many devices in parallel.

“An interface has a speed limitation, with a huge amount of storage behind it, so the question is how long would it take to read the data through the interface,” says Handy. “At some point you are going to saturate the processor and it is going to be starved of data. If that happens, it does make sense to use dozens of SSDs in parallel.”
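
As a rough illustration of the parallel approach, the sketch below issues concurrent sequential reads against several devices at once, rather than draining a single drive through one interface. The device paths are hypothetical; substitute large files to try it without root access:

    import concurrent.futures
    import time

    DEVICES = ["/dev/nvme0n1", "/dev/nvme1n1", "/dev/nvme2n1"]  # assumed paths
    CHUNK = 1 << 20     # 1 MiB per read
    TARGET = 256 << 20  # read 256 MiB from each device

    def drain(path):
        done = 0
        with open(path, "rb", buffering=0) as dev:
            while done < TARGET:
                block = dev.read(CHUNK)
                if not block:  # device or file smaller than TARGET
                    break
                done += len(block)
        return path, done

    start = time.monotonic()
    with concurrent.futures.ThreadPoolExecutor(max_workers=len(DEVICES)) as pool:
        for path, n in pool.map(drain, DEVICES):
            print(f"{path}: read {n >> 20} MiB")
    print(f"elapsed: {time.monotonic() - start:.2f} s")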

But that relies on the ability to get enough M.2 high capacity SSDs into a single server chassis, and because they are so small it turns out that this is relatively easy. More than a year ago South Korean semiconductor manufacturer SK Hynix built a custom system that managed to fit 204 TB of M.2 NVMe high capacity SSD storage into a 2U chassis.

To install M.2 high capacity SSDs into existing servers that lack dedicated M.2 sockets, it’s necessary to use a PCIe adapter card that sports M.2 sockets on each side. Some cards can support up to 16 M.2 sockets, and depending on the length of the M.2 high capacity SSDs used, it may be possible to pack 60 or more of these cards into a single 2U chassis, with a total directly attached storage capacity of 400 TB or more.
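
The raw capacity arithmetic is straightforward, although the result depends heavily on which card count, socket count and per-stick capacity you assume. The configurations below are illustrative, not taken from any vendor’s spec sheet:

    # Raw directly attached capacity = cards x sockets per card x TB per stick.
    configs = [
        (13, 16, 2.0),  # a handful of cards fully loaded with 2TB sticks
        (60, 16, 0.5),  # many cards with shorter, lower-capacity sticks
    ]

    for cards, sockets, tb in configs:
        total = cards * sockets * tb
        print(f"{cards:>2} cards x {sockets} sockets x {tb}TB = {total:.0f} TB raw")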

As M.2 SSDs reach even higher capacities and server vendors get more used to the idea of building M.2 slots into their server offerings, we are likely to see much more storage available in a similar chassis size.

High capacity SSD costs

When it comes to cost, it’s not yet clear whether, once these technologies are mature, one conventional high capacity SSD is going to be cheaper or more expensive than multiple smaller M.2 high capacity SSDs. In theory a single 60TB unit should be cheaper because it needs just one controller, while 30 2TB M.2 sticks would need 30 controllers – one for each stick.

But in practice it’s not quite that simple. That’s because a 60TB high capacity SSD would need thousands of NAND chips to make up its 60TB of storage, which would make a single controller very slow due to capacitive loading, Handy points out. So you would have to chop the controller up into pieces and replicate it around the drive, with each piece controlling, say, 20 address lines. That would increase the cost significantly – especially as these controllers would be manufactured in far smaller numbers than standard M.2 controllers.
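
A toy cost model makes the shape of the comparison clear. Every price below is a made-up placeholder; the point is that extra low-volume sub-controllers can erode the big drive’s theoretical advantage, not that these particular numbers are right:

    # Assumed placeholder prices, not market data.
    nand_per_tb = 300     # $/TB of raw NAND
    m2_controller = 15    # commodity M.2 controller, made in volume
    sub_controller = 120  # low-volume sub-controller for a 60TB monolith

    small = 30 * (2 * nand_per_tb + m2_controller)  # 30 x 2TB M.2 sticks
    big = 60 * nand_per_tb + 8 * sub_controller     # 1 x 60TB SSD, 8 sub-controllers

    print(f"30 x 2TB M.2: ${small:,}")
    print(f"1 x 60TB SSD: ${big:,}")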

Heat issues with high capacity SSDs

A possible drawback of using large arrays of tiny M.2 high capacity SSDs is that heat becomes an issue when they are very densely packed. Traditional SSDs are padded with thermal material that draws heat away from the NAND chips and controller so it can dissipate through the exterior of their cases; M.2 high capacity SSDs, by contrast, are just bare boards with surface-mounted components, like sticks of DRAM. Pack them together too densely and you risk overheating, or controller throttling that handles the temperature problem only at the expense of performance.
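
Even crude arithmetic shows why. Assuming, again purely as a ballpark, around 8W of active power per enterprise NVMe M.2 stick and 48 sticks packed into a 2U chassis:

    # Ballpark heat load from the SSDs alone; both figures are assumptions.
    watts_per_stick = 8.0
    sticks = 48

    print(f"{sticks} sticks x {watts_per_stick:.0f} W = "
          f"{sticks * watts_per_stick:.0f} W of SSD heat to remove, "
          f"before counting CPUs, RAM and fans")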

Certainly heat is a critical issue, and Handy says that one of the original designers of the Violin flash storage platform has said that if the company could start again from scratch it would consider thermal issues first. (Violin storage modules are little more than NAND chips on a DIMM, much like an M.2 stick.) “The question is, what’s the most efficient way of getting rid of heat: tightly clustering NAND chips in a box (as is the case in a conventional SSD) or open to airflow like an M.2? I imagine that with M.2 the airflow would be better,” he says.

So far the jury is out on which approach is going to be the most successful: one big high capacity SSD or lots of little M.2 ones. That’s because it’s too early to know what the cost, performance, reliability and even availability of servers with compatible sockets will be. The most likely scenario is that both approaches will have their place, but for the moment we’ll have to wait and see how the high capacity SSD market evolves.

Paul Rubens is a technology journalist based in England and is an eSecurity Planet and Datamation contributor.
