Storage I/O and the Laws of Physics


This is an update of one of my first articles for Enterprise Storage Forum from way back in 2002. The article was well received, so I figured it was time to see what has changed – if anything – in our industry over the last 8 years. What I found is that things have only gotten worse when it comes to managing storage. Simple physics defines the limitations under which you have to work. The movement of data from applications to hardware devices is limited by physical constraints within the computer and its storage hardware.

First, let’s compare the fastest computers and the fastest disk storage devices from 1976, 2002, and today, to get a better understanding of the changes we’ve seen over the last 34 years.

| Year | CPU | Performance * | Disk Drive Type | Disk Drive Size | Disk Seek Plus Latency | Transfer Rate |
|------|-----|---------------|-----------------|-----------------|------------------------|---------------|
| 1976 | CDC 7600 | 25 MFLOPS ** | Cyber 819 | 80 MB | 24 ms | 3 MBps, half duplex |
| 2002 | NEC Earth Simulator | 40 TFLOPS *** | Seagate Cheetah 10K.6 (10K RPM) | 146 GB | 7.94 ms **** | 200 MBps, full duplex ***** |
| 2010 | Oak Ridge Lab Jaguar | 1.75 PFLOPS ****** | Many vendors: 3.5-inch 7.2K RPM SATA; 2.5-inch 15K RPM SAS; 2.5-inch 10K RPM SAS | 2 TB; 146 GB; 600 GB, respectively | ~13.6 ms, ~5.3 ms and ~7.5 ms write, respectively (Flash SSD average access time is 20 to 120 microseconds) | 800 MBps, full duplex ******* |

* Though this might not be the best measure of throughput, it is a good comparison.
** Million Floating Point Operations Per Second
*** Trillion Floating Point Operations Per Second
**** Average seek and latency for read and write
***** Using FC RAID with 2 Gb interfaces and RAID-5 8+1
****** According to www.top500.org, June 2010 (a new machine from China is expected to be faster and will be announced in mid-November)
******* Using RAID-5/6 8+1 or 8+2

Here is a comparison of the differences:

| Item | 1976/2002 difference | 1976/2010 difference |
|------|----------------------|----------------------|
| System computation performance | 1,538,461 times | 70,000,000 times |
| Single disk density | 1,825 times | Between 1,825 times (146 GB 2.5-inch 15K RPM SAS) and 25,000 times (2 TB SATA) |
| RAID LUN density | 14,600 times (8+1 RAID-5 with 146 GB drives) | Between 14,600 times and 200,000 times, depending on the drive type (2.5-inch or 3.5-inch, using 8+1) |
| Seek plus latency | ~3 times | ~2 times for 7200 RPM SATA and ~4.5 times for 2.5-inch 15K RPM SAS; some SAS SSDs have latencies as low as 3 microseconds (roughly 1,000x better than a disk drive) |
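As a quick sanity check, the ratios in these tables follow directly from the rounded figures above; here is a minimal Python sketch of the arithmetic (illustrative only, using the rounded table values):

```python
# Recomputing the headline ratios from the rounded table values (illustrative).
cpu_1976, cpu_2010 = 25e6, 1.75e15      # 25 MFLOPS vs. 1.75 PFLOPS
disk_1976, disk_2010 = 80e6, 2e12       # 80 MB vs. 2 TB

print(f"System computation: {cpu_2010 / cpu_1976:,.0f} times")            # 70,000,000
print(f"Single disk density: {disk_2010 / disk_1976:,.0f} times")         # 25,000
print(f"8+1 RAID-5 LUN density: {8 * disk_2010 / disk_1976:,.0f} times")  # 200,000
print(f"Seek plus latency: {24 / 13.6:.1f} times")                        # ~1.8, i.e. ~2
```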

Improvements in seek time and latency have been small compared with the increases in system CPU performance because disks are mechanical devices. Flash is the exception, but Flash cannot replace all disk storage: the cost is simply too high. Some say that Flash density is increasing fast enough to make it viable, but the question I always ask is whether Flash density is increasing as fast as storage growth. We all know the answer is no.
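The mechanical floor is easy to see: average rotational latency is half a revolution, fixed entirely by spindle speed, and seek time adds several milliseconds on top. A quick illustration:

```python
# Average rotational latency = half a revolution = 60 / RPM / 2 seconds.
# No amount of electronics removes this floor; it is set by the spindle speed.
for rpm in (7200, 10000, 15000):
    latency_ms = 60.0 / rpm / 2 * 1000
    print(f"{rpm:>6} RPM: ~{latency_ms:.1f} ms average rotational latency")
# 7200 RPM: ~4.2 ms; 10000 RPM: ~3.0 ms; 15000 RPM: ~2.0 ms
```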

I said back in 2002 that storage density had not kept pace with increases in system CPU performance, and was more than two orders of magnitude behind at the time, even when using RAID-5 8+1. This problem has gotten far worse (except for Flash storage) since 2002. The seek and rotational latency of hard disk drives (HDDs) have not changed much. As we move forward, Flash performance will be limited by the performance of the storage stack: the cost of a CPU interrupt, the pass through the SAS/SATA driver, and the transfer over the cable will become the limiting factors.

Back in 2002, the most common bus interface between the computer memory system and the storage hardware was PCI, which ran at a peak of 532 MBps, but PCI-X at about 1 GBps was becoming common. Wow. It is hard to believe that in 2010 the peak bus performance is 16-lane PCIe 2.0, which is rated at 500 MBps per lane, full duplex, or 8 GBps full duplex. Yes, PCIe 3.0 is coming, but from what I understand we will not see 16-lane PCIe 3.0 for a while (only 8-lane), so its performance will match 16-lane PCIe 2.0. That is 8x the 1 GBps PCI-X of 2002, or roughly 30x the old half-duplex PCI bus if you count both directions of the link. Either way, that rate of improvement is pretty bad, and as of today no storage vendor makes a 16-lane SAS/SATA/FC card; 16-lane slots are generally used for graphics cards, not storage. The fastest generally available storage card is only 8 lanes, which translates to a 4x improvement over PCI-X. We have moved from pretty bad to really bad.
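For the curious, a back-of-the-envelope check of those bus numbers, using the per-lane rating quoted above:

```python
# Peak bus bandwidth per direction, from the per-lane figures cited in the text.
pci = 532          # MB/s, 64-bit/66 MHz PCI (half duplex)
pci_x = 1000       # MB/s, PCI-X, the approximate figure used above
pcie2_lane = 500   # MB/s per lane, per direction, PCIe 2.0

for lanes in (8, 16):
    per_direction = pcie2_lane * lanes
    print(f"PCIe 2.0 x{lanes}: {per_direction} MB/s per direction "
          f"(~{per_direction / pci_x:.0f}x PCI-X, ~{per_direction / pci:.0f}x PCI)")
# x8: 4000 MB/s -> ~4x PCI-X; x16: 8000 MB/s -> ~8x PCI-X
```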

I also predicted that disk drives would not change much, given they are mechanical devices, and they have not. Flash drives are becoming commonplace in the market, but they cannot be used for all storage requirements because the cost per GB is so much higher than that of rotating storage. I have said in the past that consumer requirements drive much of the storage industry. A quick check from a leading online retailer today showed a 256GB 2.5-inch Flash drive at $699.00 and a 2TB 3.5-inch hard drive at $129.99 (both examples are of consumer grade storage, not enterprise). Yes, the cost of Flash is coming down, but it still has a very long way to go. I personally believe that Flash will never replace hard drives.

In 2002 I said, “What is important is that, for the foreseeable future, the trend will not change, unless you plan to buy solid state disks (SSDs) for all of your storage at a cost of well over 100 times the cost of rotating storage. Each day, on every system, you will face performance issues that require you to make large requests to achieve high device utilization.” The cost difference for the example above is now down to 42x, which is a big difference, but still not cost effective for most systems.
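Working the numbers from the retail prices quoted above makes the gap concrete:

```python
# Cost per GB for the two consumer drives quoted above.
flash_price, flash_gb = 699.00, 256    # 256 GB 2.5-inch consumer Flash drive
hdd_price, hdd_gb = 129.99, 2000       # 2 TB 3.5-inch consumer hard drive

flash_per_gb = flash_price / flash_gb  # ~$2.73 per GB
hdd_per_gb = hdd_price / hdd_gb        # ~$0.065 per GB
print(f"Flash ${flash_per_gb:.2f}/GB vs. HDD ${hdd_per_gb:.3f}/GB "
      f"-> ~{flash_per_gb / hdd_per_gb:.0f}x")   # ~42x
```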

I emphasized the need to make large I/O requests in order to use disk drives effectively. This is still critical to hard drive performance, and it remains an issue with operating systems, protocols, file systems and storage systems, as I/O requests often get broken up into small requests; a toy model of the effect is sketched below.
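The sketch assumes roughly 7.5 ms of seek plus latency and a 150 MB/s media transfer rate; both are illustrative round numbers rather than measurements of any particular drive, but they show why small requests destroy hard drive utilization.

```python
# Every request pays the full seek+latency, so effective throughput collapses
# when requests are small. Figures below are illustrative assumptions.
seek_latency_s = 0.0075     # ~7.5 ms per request (assumed)
media_rate = 150e6          # ~150 MB/s sequential media rate (assumed)

for size in (4 * 1024, 64 * 1024, 1024 * 1024, 16 * 1024 * 1024):
    t = seek_latency_s + size / media_rate
    print(f"{size // 1024:>6} KiB requests: ~{size / t / 1e6:6.1f} MB/s effective")
# 4 KiB -> ~0.5 MB/s; 16 MiB -> ~140 MB/s
```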

Very little has really changed since 2002 – and arguably since 1976 – in terms of storage. Solid state disks have been available for almost 30 years, and the cost difference, as much as 1,000x in the early years, has now dropped to 42x. That is a significant drop over the course of 30 years, but in the grand scheme of computing technologies, going from 1,000x to 42x is not something to do backflips over.


The pace of computing technology change is slowing. Yes, we have more cores and more FLOPs (FLoating point OPerations per Second), but are those usable FLOPs? Is memory bandwidth scaling with CPU performance? Of course, the answer to both questions is no. Memory bandwidth is seriously lagging as CPU performance and core counts increase – even with the latest chipsets from Intel and AMD. It is even worse in storage.
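A rough bytes-per-FLOP calculation shows the imbalance. The per-socket figures below are assumptions chosen only to illustrate the scale, not measurements of any specific Intel or AMD part:

```python
# Illustrative bytes-per-FLOP for a circa-2010 server socket (assumed figures).
peak_gflops = 4 * 4 * 3.0   # 4 cores x 4 FLOPs/cycle x 3.0 GHz = 48 GFLOPS (assumed)
mem_bw_gbs = 32.0           # ~32 GB/s memory bandwidth per socket (assumed)

print(f"~{mem_bw_gbs / peak_gflops:.2f} bytes of memory bandwidth per peak FLOP")
# ~0.67 B/FLOP -- and storage delivers orders of magnitude less than memory
```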

The only bright spot is Flash (forgive the pun), yet storage would still be the bottleneck even if Flash were infinitely fast. Why? The big area that I did not address in my 2002 article is software. The storage stack has not changed much in the last 20 years. Every time you do a read or write, the operating system, the file system, the SCSI driver and the network drivers all get involved. It doesn’t get any better with CIFS or NFS, due to network stack overhead.
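To see why the software path matters, compare a fixed per-I/O software cost against typical device times. The 20-microsecond stack figure below is purely an assumption for illustration:

```python
# Share of each I/O spent in software, for an assumed 20 us stack cost.
stack_overhead_us = 20.0    # OS + file system + driver path (assumed)
for name, device_us in (("15K RPM disk", 5300.0), ("Flash SSD", 50.0)):
    total_us = stack_overhead_us + device_us
    print(f"{name:>12}: software is {stack_overhead_us / total_us:.1%} of each I/O")
# Disk: well under 1%; Flash: roughly 29% -- the stack becomes the limit
```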

Without changes to the storage stack, I believe that storage will take a back seat to phase-change memory, HP’s Memristor, and other technologies that allow byte addressability (NAND Flash does not), once CPU vendors change their chips to support memory hierarchies that include these technologies.

There will always be a need for a storage stack and disk drives, but that does not mean – even a few years from now – that much of the I/O will be using this stack. The end game might be a scenario where you read the data once, through the storage stack, into some new, highly dense memory (slower than DRAM but far faster than Flash) and never read it again (until you reboot).
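As a toy illustration of that end game, the sketch below uses a hypothetical ReadOnceStore: the storage stack is exercised only on the first access to each block, and everything after that is served from the dense memory tier.

```python
# Toy sketch of "read once, then never again": the slow storage-stack path is
# taken only on first access; later reads come from a memory tier.
# ReadOnceStore and its methods are hypothetical names for illustration.
class ReadOnceStore:
    def __init__(self, backing_read):
        self._backing_read = backing_read   # slow path through the storage stack
        self._memory = {}                   # stand-in for dense byte-addressable memory

    def read(self, block):
        if block not in self._memory:       # only the first access pays the stack cost
            self._memory[block] = self._backing_read(block)
        return self._memory[block]

store = ReadOnceStore(lambda block: f"data-for-block-{block}")
store.read(7)   # goes through the "storage stack"
store.read(7)   # served from memory; the stack is never touched again
```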

I wonder if I will be writing about storage, or even consulting, if that comes to pass.

Henry Newman, CEO and CTO of Instrumental, Inc., and a regular Enterprise Storage Forum contributor, is an industry consultant with 29 years’ experience in high-performance computing and storage.


