Solid State Drives in Enterprise Applications


Flash-based solid-state drives (SSDs) are becoming a big issue for enterprise storage users; a number of customers I work with are planning for this new “tier 0” data storage tier for a variety of reasons. It could be as simple as IOPS per watt, IOPS per dollar, or, for some applications, bandwidth (GB/sec) per unit of storage.

SSDs have a number of disadvantages compared to traditional disk storage, the biggest by far being cost. There are those who claim that spinning hard drives will soon be a thing of the past because of flash SSDs, but I can’t see that happening anytime soon, and if it does, the devices that replace spinning hard drives will not be based on flash and won’t appear much before the end of this decade (see I/O Bottlenecks: Biggest Threat to Data Storage). Vendors have been claiming tape is dead for the last 20 years, but it continues to play a big role in data protection schemes. There will always be tiers of storage, it seems.

This is the first in a three-part series on planning for flash SSD deployment. The first article will cover applications that will benefit from flash, along with some of the file system and other deployment issues. The second article will cover hardware issues, and the third will cover SSD design issues and their use with SAS and RAID controllers.


Applications that Benefit from SSDs

We all know that parts of some applications, such as databases, benefit from high-IOPS architectures, but what should you consider when bringing flash SSDs into your architecture?

The real benefit of SSDs for applications is lower latency for small-block I/O requests. While the fastest 2.5-inch 15K RPM drive might be able to do 250 random IOPS, most enterprise SSDs today should be able to easily sustain 40,000 IOPS for reads and 30,000 IOPS for writes. Of course, you need the hardware to achieve this performance, and that will be discussed in part 2 of this series.
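To make that gap concrete, here is a back-of-the-envelope sketch in Python using the IOPS figures above. The drive and SSD prices are purely illustrative assumptions, not quotes from any vendor, so substitute your own numbers.

```python
# Back-of-the-envelope IOPS comparison using the figures cited above.
# Prices are illustrative assumptions, not vendor quotes.

DISK_IOPS = 250         # fastest 2.5-inch 15K RPM drive, random IOPS
SSD_READ_IOPS = 40_000  # typical enterprise flash SSD, sustained reads
DISK_PRICE = 300        # assumed $ per 15K RPM drive
SSD_PRICE = 3_000       # assumed $ per enterprise SSD

disks_needed = SSD_READ_IOPS / DISK_IOPS
print(f"15K drives needed to match one SSD on read IOPS: {disks_needed:.0f}")

print(f"Disk $/IOPS: {DISK_PRICE / DISK_IOPS:.2f}")
print(f"SSD  $/IOPS: {SSD_PRICE / SSD_READ_IOPS:.3f}")
```

Even with these rough assumptions, the cost per IOPS favors flash by more than an order of magnitude, while cost per GB still favors disk, which is exactly why it ends up as a separate tier.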

There are obvious parts of heavily used databases that can benefit from SSD technology, the most obvious being database indexes. The second most obvious candidates are database log files. Both are generally smaller than the table space and are often placed on 15K RPM disk drives today, and even then they are frequently performance limited. Monitoring tools such as iostat and sar are often used to evaluate the high latency seen on the LUNs backing these devices.

As flash storage is still extremely expensive compared to spinning disk, it is critical to understand the potential benefits from flash. If you have large command queues on the device and high latency per command (something over a quarter of a second), then flash storage might be the answer for your database.
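If you want to automate that check, the following sketch scans extended iostat output and flags devices whose queue depth and per-command latency exceed thresholds like those described above. The column names (await, avgqu-sz or aqu-sz) and the thresholds are assumptions based on common Linux sysstat layouts; adjust them for your OS and iostat version.

```python
# Flag devices showing deep queues and high per-command latency in
# extended iostat output. Column names are assumed to match a Linux
# sysstat layout; adjust for your OS/version.
import subprocess

QUEUE_THRESHOLD = 32         # outstanding commands considered "large"
LATENCY_THRESHOLD_MS = 250   # roughly the quarter-second figure above

out = subprocess.run(["iostat", "-x", "1", "2"],
                     capture_output=True, text=True).stdout

header = []
for line in out.splitlines():
    cols = line.split()
    if not cols:
        continue
    if cols[0] == "Device" or cols[0].startswith("Device:"):
        header = cols          # remember the device-stats header row
        continue
    if header and len(cols) == len(header):
        row = dict(zip(header, cols))
        await_ms = float(row.get("await", 0))
        queue = float(row.get("avgqu-sz", row.get("aqu-sz", 0)))
        if queue >= QUEUE_THRESHOLD and await_ms >= LATENCY_THRESHOLD_MS:
            print(f"{cols[0]}: queue={queue:.1f} await={await_ms:.1f} ms "
                  f"-- candidate for flash")
```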


File System Issues with Flash

Another area emerging as a potential use for flash is file system metadata. A number of file systems today separate data and metadata, which allows the metadata to be placed on SSDs, and these designs are becoming more common. As a side note, I have always advocated for this when talking with file system designers and when I designed a file system myself; it makes perfect sense because metadata generally has different access patterns than data. As with databases, file system logs are also candidates for SSDs, for those file systems that have them.

One of the big issues with most flash SSDs is that they can only read and write on 4096-byte boundaries. You would think this would not be a problem, as inodes are mostly 512 bytes and file system metadata allocations are, as far as I know, always powers of two. The problem arises from what is often called the file system superblock. The superblock contains the basic information about the file system: the volumes used and their locations, the allocations and tunable parameters, the allocation maps and a variety of other critical data. Some file systems do not pad the superblock to a 4096-byte allocation. This was obviously a problem for RAID controllers that had fixed cache alignments, and the problem is no different for flash devices. This is not to say that performance will be bad, but on most enterprise flash devices, misalignment can reduce read and write performance by as much as 50 percent, so it is far better to align on 4096-byte boundaries. Even misaligned, flash will be far faster than disk, but with an expensive device, why waste 50 percent of the performance?
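As a quick illustration of the alignment point, the sketch below checks whether a given byte offset (a partition start or superblock location, for example) falls on a 4096-byte boundary. The LBA values are hypothetical examples; substitute the offsets from your own layout.

```python
# Check whether an offset is aligned to the 4096-byte boundary most
# flash SSDs read and write on. The LBAs below are hypothetical examples.

FLASH_PAGE = 4096
SECTOR = 512

def is_aligned(offset_bytes: int, boundary: int = FLASH_PAGE) -> bool:
    """Return True if the offset starts on the given boundary."""
    return offset_bytes % boundary == 0

# Example: a partition starting at LBA 63 (common with older tools)
# versus one starting at LBA 2048.
for lba in (63, 2048):
    offset = lba * SECTOR
    print(f"LBA {lba:>4} -> offset {offset:>8} bytes: "
          f"{'aligned' if is_aligned(offset) else 'MISALIGNED'}")
```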

For those of us lucky enough to have a budget big enough to buy SSDs for file systems, you need to consider the file system tuning parameters. Some file systems support large allocations well over 64 KB, and using them might make sense for a disk-based file system, where you can afford to waste space. If you have, say, a 1 MB allocation, a bunch of very large files and some small files on disk, every file allocation is rounded up to the next MB: space is wasted on the small files, but the large files are allocated efficiently, so you do not have to go back to the allocation routines as often. On SSDs the metadata allocation overhead is trivial, so going back to the allocation routines often does not hurt performance anywhere near as much as it would on disk. Since SSD space is expensive, however, using large allocations when you have a mix of large and small files does not make much sense. Make the allocation as small as your smallest files so as not to waste expensive SSD space. For example, on my laptop, which has an SSD, I made the NTFS allocation 1024 bytes instead of the default 4096 bytes, as I know I have many small files.
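Here is a minimal sketch of that waste calculation, assuming a hypothetical mix of file sizes; feed in a real size distribution from your own file system to see how much slack each allocation unit would cost.

```python
# Estimate how much space is wasted by rounding every file up to the
# file system allocation unit. The file sizes below are hypothetical.
import math

def wasted_bytes(file_sizes, alloc_unit):
    """Total slack space when each file is rounded up to alloc_unit."""
    return sum(math.ceil(size / alloc_unit) * alloc_unit - size
               for size in file_sizes)

file_sizes = [800, 2_000, 3_500, 50_000_000, 1_200_000_000]  # assumed mix

for alloc in (1024, 4096, 65536, 1_048_576):
    waste = wasted_bytes(file_sizes, alloc)
    print(f"{alloc:>9}-byte allocation: {waste / 1024:.1f} KB wasted")
```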


Other SSD Software Issues

There are a lot of other software planning considerations when using flash SSD devices for your high-IOPS requirements. For example, if you are splitting file system or database metadata onto SSDs, your backup and restore programs will need to be able to access those SSDs. These programs are sometimes certified only against a specific set of hardware, so you will need to confirm that your SSD is supported.

By far the biggest issue is how much space you will need. If you are using SSDs for file system metadata or logs, or for database indexes or logs, the calculation of space can be extremely complex.

File systems have superblocks, inodes and directory blocks, and while these structures are generally small, the space they will consume is not easy to calculate. Nor do administrators usually know how many files to expect per directory, or how many directories there will be. How this space is allocated varies from file system to file system, and you really do not want to run out, as you will not be able to add any more files or complete the write in question. The same can be said for file system logs.

On the database side, the issues are much the same: you do not want to run out of space, but the amount of space needed for indexes and logs is easier to calculate. One other consideration for databases is that if you are running them on top of a file system, you need to calculate both the database and the file system space requirements. This all matters because the cost of SSDs is so much higher than the cost of disk that wasting space is what I term non-optimal. The sketch below shows one way to rough out a metadata estimate.
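As a rough example of such a sizing exercise, this sketch multiplies assumed per-structure sizes by expected file and directory counts and adds a growth factor. Every constant in it is an assumption that varies widely between file systems, so treat it as a template rather than a formula.

```python
# Rough sizing sketch for file system metadata on SSD. Every input here
# is an assumption (inode size, directory entry size, expected counts);
# plug in the numbers for your own file system, which vary by design.

INODE_BYTES = 512              # assumed on-disk inode size
DIR_ENTRY_BYTES = 64           # assumed average directory entry size
SUPERBLOCK_BYTES = 64 * 1024   # assumed padded superblock and copies
LOG_BYTES = 512 * 1024 * 1024  # assumed file system log/journal size

expected_files = 50_000_000
expected_dirs = 2_000_000
headroom = 1.5                 # growth factor so you never run out of space

metadata = (expected_files * INODE_BYTES
            + expected_files * DIR_ENTRY_BYTES
            + expected_dirs * INODE_BYTES
            + SUPERBLOCK_BYTES
            + LOG_BYTES)

print(f"Estimated metadata footprint: {metadata / 2**30:.1f} GiB")
print(f"SSD space to provision (x{headroom}): "
      f"{metadata * headroom / 2**30:.1f} GiB")
```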


Henry Newman, CTO of Instrumental Inc. and a regular Enterprise Storage Forum contributor, is an industry consultant with 28 years of experience in high-performance computing and storage.
See more articles by Henry Newman.
