The Future of Storage: Devices and Tiering Software

There is a somewhat famous saying that goes something like, "If you ask five economists their opinion, you will get 14 answers." The general idea is that any pundit can easily give you several opinions about a subject, but economists just seem to do it better than anyone else.

This sentiment also holds in the land of IT, I believe because of the extremely fast pace of technology. Nowhere is it more true than in storage, where things are changing rapidly from the perspective of both the technology and the users and their applications.

When you get more than two storage pundits together, the conversation usually turns to the future of storage. That was true of Henry Newman and me when we recently recorded a podcast about our series on Linux file systems. One of the questions we were asked was what kind of file system or storage we would develop if we were kings for a day.


I started thinking more about that question while traveling and decided I'd write a bit about some ideas of where storage is headed, focusing on storage devices and tiering software, since I think the two subjects are connected.

Storage Devices

We're discovering that the performance of a much larger number of applications than previously thought is dominated by IOPS (I/O operations per second) performance. In my particular field, high performance computing (HPC), examination of I/O patterns in applications shows that small read and write function calls are much more common than previously thought. It's becoming fairly routine to see applications that write gigabytes or even terabytes of data issue a large number of 1K and 4K read and write function calls. Consequently, these workloads look increasingly like IOPS-dominated I/O patterns to the OS and the file system.
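
Seeing this in practice is mostly a matter of counting request sizes. Below is a minimal Python sketch that tallies request sizes from an I/O trace; the one-"op,size"-pair-per-line trace format is hypothetical, and real tools such as strace, blktrace or Darshan produce traces that need their own parsers.

    # Tally read/write request sizes from a (hypothetical) "op,size" trace,
    # with one line per call, e.g. "write,4096".
    from collections import Counter
    import sys

    def size_histogram(trace_path):
        histogram = Counter()
        with open(trace_path) as trace:
            for line in trace:
                op, size = line.strip().split(",")
                if op in ("read", "write"):
                    histogram[int(size)] += 1
        return histogram

    if __name__ == "__main__":
        hist = size_histogram(sys.argv[1])
        total = sum(hist.values())
        small = sum(n for size, n in hist.items() if size <= 4096)
        print(f"{small / total:.0%} of requests are 4 KB or smaller")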

There has been some work to help operating systems deal with IOPS-dominated workloads. One approach is to use large buffers on the storage servers, allowing the OS to combine small read and write functions into larger requests. This can reduce the IOPS load, perhaps making the workload more sequential. However, it can sometimes require very large buffers for read/write requests to be combined. It can also introduce large latencies, because the data for each I/O function is held in the buffer while the OS attempts to combine them into a single request. There is an extremely fine line between converting IOPS workloads into sequential workloads and driving latency unacceptably high, and the actual outcome depends greatly on the specific application and workload.
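
To make the buffering idea concrete, here is a minimal Python sketch of server-side write coalescing. Everything in it is illustrative rather than a real OS interface: the class name, the backing_write callable and the 1 MB flush threshold are all assumptions, and the threshold is exactly the buffer-size versus latency trade-off described above.

    # Illustrative write coalescing: adjacent small writes accumulate in a
    # buffer and are flushed to storage as one large request.
    class CoalescingBuffer:
        def __init__(self, backing_write, flush_bytes=1 << 20):
            self.backing_write = backing_write  # callable(offset, data); assumed interface
            self.flush_bytes = flush_bytes      # bigger buffer: fewer I/Os, more latency
            self.start, self.chunks, self.size = None, [], 0

        def write(self, offset, data):
            if self.start is not None and offset == self.start + self.size:
                # Adjacent to the current run: extend it instead of issuing an I/O.
                self.chunks.append(data)
                self.size += len(data)
            else:
                self.flush()
                self.start, self.chunks, self.size = offset, [data], len(data)
            if self.size >= self.flush_bytes:
                self.flush()

        def flush(self):
            if self.chunks:
                self.backing_write(self.start, b"".join(self.chunks))
                self.start, self.chunks, self.size = None, [], 0

With the 1 MB threshold, 256 adjacent 4K writes become a single request; the cost is that buffered data sits in memory until flush(), which is where the added latency comes from.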

We're also discovering that more workloads than previously thought use random IOPS rather than sequential IOPS. Sequential IOPS are desirable because many OSes can combine them into a single request using a relatively small buffer with only a small impact on latency. That latency impact is still measurable, though, so the reduction in required IOPS is smaller than one might expect. But if there are enough adjacent I/O functions, you can convert the requests into a single, much larger I/O request. For random IOPS, however, there is not much you can do except use extremely large buffers on the storage servers.
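
A tiny experiment shows why sequential patterns are so much friendlier. The following illustrative Python (not any real I/O scheduler) merges a batch of queued (offset, size) requests: the sequential batch collapses into a single request, while the scattered batch barely merges at all.

    # Merge adjacent (offset, size) requests; only sequential runs collapse.
    def merge_requests(requests):
        merged = []
        for offset, size in sorted(requests):
            if merged and offset == merged[-1][0] + merged[-1][1]:
                merged[-1] = (merged[-1][0], merged[-1][1] + size)  # extend the run
            else:
                merged.append((offset, size))
        return merged

    sequential = [(i * 4096, 4096) for i in range(256)]              # 256 adjacent 4K requests
    scattered = [((i * 1_000_003) % (1 << 30), 4096) for i in range(256)]
    print(len(merge_requests(sequential)))  # 1  -> a single 1 MB request
    print(len(merge_requests(scattered)))   # ~256 -> essentially nothing to coalesce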

Taking these trends and applying them to storage devices, we can see that we'll need devices with more IOPS capability while applications are running. A quick rule of thumb that I use for the IOPS capability of current storage devices is the following:

  • 7.2K SATA/SAS drive: 100-125 IOPS
  • 15K SATA/SAS drive: 200-300 IOPS
  • SATA/SAS SSD: 10,000-100,000 IOPS
  • PCIe SSD: 100,000-1,000,000 IOPS

For a first-level approximation, these numbers work for random or sequential IOPS, although some devices have fairly poor random IOPS performance. You can see there is a big difference in IOPS performance between the "normal" spinning disk devices and SSD devices. Between a 7.2K SATA drive and a PCIe SSD there is about three to four orders of magnitude difference in IOPS (a factor of 1,000 to 10,000).
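
Putting those rule-of-thumb figures to work makes the gap tangible. The back-of-the-envelope Python below uses midpoints of the figures above and an arbitrary example target of 50,000 IOPS; both are assumptions for illustration only.

    # Rough device counts needed to sustain an example 50,000-IOPS workload.
    iops_per_device = {
        "7.2K SATA/SAS": 110,
        "15K SATA/SAS": 250,
        "SATA/SAS SSD": 50_000,
        "PCIe SSD": 500_000,
    }

    target = 50_000
    for device, iops in iops_per_device.items():
        needed = -(-target // iops)  # ceiling division
        print(f"{device}: {needed} device(s)")
    # 7.2K SATA/SAS: 455, 15K SATA/SAS: 200, SATA/SAS SSD: 1, PCIe SSD: 1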

At the same time, there is an order of magnitude difference in price/capacity ($/GB) between 7.2K disks and SSDs (and about two to three orders of magnitude for a PCIe SSD).

Finally, there is about a factor of 2 to 20 difference in capacity between 7.2K drives and SSDs or PCIe SSDs. We now have 3 TB 7.2K SATA/SAS drives. There are some very large capacity SSDs and PCIe SSDs, but they are tremendously expensive. Hence, the "typical" SSD capacity is in the range of 200 GB up to 1 TB.

At one end of the spectrum are large-capacity 7.2K drives with a very appealing price/capacity but very low IOPS. At the other end are SSDs with amazing IOPS capability but fairly low capacity and a relatively high price/capacity.

Stuck in the middle is the poor 15K drive. Its price/capacity is a bit higher than that of 7.2K drives but lower than that of SSDs. However, its IOPS performance isn't that much better than a 7.2K drive's once you take SSDs into consideration.

To me, the 15K drive appears to be stuck in limbo. I think we'll see 15K drives disappear in the next several years, leaving just 7.2K drives and SSDs. The 15K drive combines the worst features of spinning drives (low IOPS) with the worst features of SSDs (lower capacity and higher price/capacity). It seems much more logical for enterprises to run applications on fast SSD-based storage and then store the final results on 7.2K drives (or maybe even tape).
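
To sketch what that division of labor might look like in software, here is an illustrative Python fragment. The mount points, the one-week threshold and the reliance on access time are all assumptions; real tiering software tracks access heat far more carefully.

    # Naive tiering sweep: demote files on the SSD tier that have not been
    # touched for a week down to the cheaper 7.2K-disk tier.
    import os, shutil, time

    SSD_TIER = "/mnt/ssd"          # hypothetical fast tier
    DISK_TIER = "/mnt/bulk"        # hypothetical capacity tier
    COLD_AFTER = 7 * 24 * 3600     # one week, an arbitrary threshold

    def demote_cold_files():
        now = time.time()
        for name in os.listdir(SSD_TIER):
            path = os.path.join(SSD_TIER, name)
            if os.path.isfile(path) and now - os.path.getatime(path) > COLD_AFTER:
                shutil.move(path, os.path.join(DISK_TIER, name))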

Tags: tiering, IOPS, storage predictions, storage devices

