There is a new storage technology on the market, but it’s really an old technology with a new twist: solid state drives (SSDs) built from flash memory.
About 10 years ago, SSD companies and SSD drives were a hot commodity, but during the 2000-2002 downturn, almost all of them disappeared. SSDs were never easy to use for a number of reasons, so why are flash-based SSDs suddenly hot, and will these devices go the way of the last generation?
I used SSDs way back in the mid-1980s when I was at Cray Research, so I have long experience with them, both good and bad. I could tell you some benchmarking tales about using SSDs in ways they would never be used operationally; the benchmark rules allowed it, but it skewed the performance results. I have seen a number of recent solid state performance claims from vendors, and I am curious about the test environments, since the claims seem almost too good to be true. At least when you are buying a car, you are told your mileage may vary.
Given the much greater cost of flash, will it become part of the storage hierarchy? Of course, SSD performance is also much greater than that of disk drives. If you take a standard 15K 2.5-inch SAS drive today, you can assume about 250 random IOPS per drive. Web searches show that SSD vendors claim anywhere from 10 times to as much as 72 times that number for write IOPS, and more than 200 times for read IOPS. That could save a large number of disk drives, power, RAID controllers, disk trays and connections.
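To put the consolidation argument in concrete terms, here is a rough sketch in Python. The 250 IOPS per SAS drive and the 10x/72x multipliers come from the figures above; the 50,000 IOPS workload is a hypothetical number chosen purely for illustration.

```python
# Back-of-the-envelope drive-count comparison for a random-write workload.
# 250 IOPS per 15K SAS drive and the 10x/72x multipliers are from the text;
# the 50,000 IOPS target workload is hypothetical, for illustration only.

SAS_IOPS = 250                # random IOPS per 15K 2.5-inch SAS drive
workload_iops = 50_000        # hypothetical random-write workload

sas_drives = -(-workload_iops // SAS_IOPS)          # ceiling division

for multiplier in (10, 72):   # low and high ends of the vendor write claims
    ssd_iops = SAS_IOPS * multiplier
    ssd_drives = -(-workload_iops // ssd_iops)
    print(f"{multiplier:>2}x claim: {sas_drives} SAS drives vs. "
          f"{ssd_drives} SSDs to reach {workload_iops} IOPS")
```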
But there are some reliability issues with solid state drives that you need to think about if you’re going to use them in your enterprise storage environment.
Write Endurance and Wear Leveling
There is a great deal of information about flash and wear leveling on the Web. A simple search will find papers from the major players (SanDisk, Toshiba and others). What is important to understand is that a flash cell can only be written about 100,000 times for most current flash parts. After 100,000 writes, the flash starts developing errors and can fail, similar to the hard error rate in disk drives. What wear leveling does is move blocks around based on usage to limit the wear on any one cell.
Let’s say you have a 32 GB flash SSD with a SATA interface. If you write to the same 1 MB location at, say, 100 MB/sec with a 100,000-write maximum, you would reach the limit in 1,000 seconds. Clearly this is not acceptable, and it is also highly unlikely. What wear leveling does is move the blocks around so you are not writing to the same place. Your 32 GB SSD might actually be a 40 GB SSD with 32 GB of user-available data. The extra 8 GB of space is dynamically managed by the SSD interface to allocate blocks in different places so you are not writing to the same cells over and over. So that is what wear leveling is, and the question is, does wear leveling solve all of your problems?
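Here is a minimal sketch of that arithmetic in Python. The 100,000-cycle limit and 100 MB/sec write rate come from the example above, and the 40 GB raw capacity mirrors the hypothetical device with 8 GB of spare area; an ideal wear-leveled result is included to show why spreading the writes matters.

```python
# Time to wear out flash when hammering one spot vs. spreading the writes.
# 100,000 program/erase cycles and 100 MB/sec are from the example above;
# the 40 GB raw / 32 GB usable split is the hypothetical device discussed.

MAX_CYCLES = 100_000          # write endurance per flash block
WRITE_RATE_MB_S = 100         # sustained write rate, MB/sec
HOT_SPOT_MB = 1               # size of the region being rewritten

# No wear leveling: every write lands on the same 1 MB region.
writes_per_sec = WRITE_RATE_MB_S / HOT_SPOT_MB
print(f"No wear leveling: {MAX_CYCLES / writes_per_sec:,.0f} seconds")  # 1,000

# Ideal wear leveling: writes are spread evenly over the whole raw capacity.
RAW_CAPACITY_MB = 40 * 1024
seconds_leveled = MAX_CYCLES * RAW_CAPACITY_MB / WRITE_RATE_MB_S
print(f"Ideal wear leveling: {seconds_leveled / 86_400:,.0f} days")     # ~474
```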
As an example, I decided to use Mtron, a South Korean company that builds a high-performance SSD. Tom’s Hardware gave it an excellent review, and the details are available on the Mtron Web site. I have extracted some of the pertinent information:
Mtron 32 GB SSD (pertinent specifications):

- Interface: SATA
- Capacity: 32 GB
- Maximum write performance: 80 MB/sec
- Write endurance: 140 years, based on 50 GB of sequential writes per day
It should be noted that Mtron provides as much information as many other vendors, or more, and that the numbers Mtron provides are in a similar range to those from other vendors. As noted above, the Mtron 32 GB SSD writes at 80 MB/sec maximum performance, and it can do this with fairly small block sizes, say less than 128 KB. The interesting number, though, is the write endurance of 140 years. Note that this is based on 50 GB of sequential writes per day; I believe they mean 50 GB of sequential writing to the same block addresses. 50 GB per day works out to an average of about 0.59 MB/sec sustained over the whole day.
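That average rate is easy to verify; a quick check, assuming 1 GB = 1,024 MB:

```python
# Converting Mtron's 50 GB/day write-endurance assumption to a sustained rate.
SECONDS_PER_DAY = 24 * 3600
daily_write_mb = 50 * 1024            # 50 GB/day expressed in MB

print(f"{daily_write_mb / SECONDS_PER_DAY:.2f} MB/sec sustained")  # ~0.59
```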
Personally, I do not think that is much data to write per second in write-intensive environments such as file system metadata or large databases that are re-indexed. If a 15K 2.5-inch SAS drive can do 250 IOPS with 512-byte random I/Os, that is 128,000 bytes of I/O per second, or just over 20 percent of the write budget for the SSD. That is far different from the 10 times or 72 times claims in terms of usage, although the latency of SSDs is, of course, far better. Basically, the write budget claimed by Mtron is, in my opinion, useless. Let’s consider the maximum performance with 128K I/Os and recalculate a more reasonable expectation for the write budget. According to the Tom’s Hardware article, the transfer rate is a minimum of 73.8 MB/sec, an average of 74.2 MB/sec and a maximum of 76.5 MB/sec, which is very fast compared to other flash devices or hard drives, particularly for the minimum and average figures on SATA. The small difference between maximum and average is incredibly good in my opinion.
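The single-drive comparison works out as follows; this is just the arithmetic from the paragraph above, with the 0.59 MB/sec write budget carried over.

```python
# A single 15K SAS drive doing small random writes vs. the SSD write budget.
SAS_IOPS = 250
IO_SIZE_BYTES = 512
SSD_BUDGET_MB_S = 0.59                # from the 50 GB/day figure above

sas_write_mb_s = SAS_IOPS * IO_SIZE_BYTES / (1024 * 1024)
print(f"SAS drive: {sas_write_mb_s:.3f} MB/sec, "
      f"{sas_write_mb_s / SSD_BUDGET_MB_S:.0%} of the SSD write budget")
```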
If you assume that the write budget for the device is 50 GB * 365 days * 140 years, or 2,555,000 GB, based on Mtron’s information, this could be reached using the minimum performance value in just 410.3 days (2,555,000 GB * 1,024 MB per GB, divided by 73.8 MB/sec * 3,600 seconds per hour * 24 hours, equals 410.3 days). Using the maximum value of 76.5 MB/sec yields just over a year, or 395.8 days. This is far less than 140 years, of course, but consider that very few applications will write constantly at these rates. I would think that expectations of 4-5 years of usage in write-intensive environments would be reasonable. Thinking about the lifetime of most RAID systems, many sites do not keep disk drives more than 5 years given the performance and density changes over that time. So if the Mtron specifications are correct, the device would be highly useful in some environments where high-transaction data could be placed on the flash device.
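Here is the same calculation spelled out, using the minimum and maximum transfer rates from the Tom’s Hardware review:

```python
# How long Mtron's stated write budget lasts at full streaming write speed.
GB_PER_DAY = 50
YEARS = 140
BUDGET_MB = GB_PER_DAY * 365 * YEARS * 1024      # 2,555,000 GB in MB
SECONDS_PER_DAY = 24 * 3600

for rate_mb_s in (73.8, 76.5):                   # min and max measured rates
    days = BUDGET_MB / rate_mb_s / SECONDS_PER_DAY
    print(f"At {rate_mb_s} MB/sec, the budget is consumed in {days:.1f} days")
```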
SMART Thinking
SMART (Self-Monitoring, Analysis, and Reporting Technology) is an agreed-upon standard supported by disk drive manufacturers. As of today, I am aware that some flash manufacturers support SMART monitoring, but since SMART is a standard that was developed for disk drives, some of the error conditions found in flash likely do not fit within the SMART framework. Add to this that when an SSD is integrated into a RAID device that does predictive failure analysis to ensure high reliability, the RAID vendor must integrate each flash vendor’s SMART implementation into its monitoring and management framework.
I believe this is significant, since there is no standard for SMART statistics for flash. The whole area of SMART and flash needs to be worked out over the next few years, but I suspect it will not happen until the big companies get into the SSD flash device market; the cost, time and especially the process required to develop a standard are unlikely to come together before then. The one possible exception is if the RAID vendors band together and force the current crop of SSD vendors to create a standard, given the requirement for predictive failure analysis. This is, in my opinion, a big concern that needs to be resolved to ensure the reliability and usability of SSD flash devices in enterprise environments.
I believe SSD flash devices are in our future as part of the storage hierarchy. They are too small and expensive to solve every problem, and from what I can tell, their growth path and cost path are not going to be much different from those of traditional rotating storage (disks). Yes, flash devices are getting denser, but so are disks, maybe not at the same rate currently, though I suspect from my reading that the growth path is leveling off. In any case, SSD flash is in our future, but early adopters must be careful to understand the issues and limitations. Wear leveling and monitoring are critical to the reliability of SSD flash devices, and I am not so sure that all of the bases are covered just yet.
Henry Newman, a regular Enterprise Storage Forum contributor, is an industry consultant with 27 years experience in high-performance computing and storage.