A “hot spot” in storage architecture isn’t nearly as racy as it sounds. In fact, it’s quite the opposite: it’s a part of the disk system with significantly higher activity than the rest, usually characterized by long waits for I/O requests and for the data those requests return. Hot spots in a storage architecture are not desirable, of course, and storage architects and administrators work hard to reduce their number and effect.
Hot spots are trouble because a single application request typically fans out into physical I/O requests across many devices, so the application is limited by the slowest of those physical requests. If one part of the request has high latency, say a read of a database index, the application response depends on that highest-latency request in the SQL call. The result, of course, is a big drop in performance.
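As a rough illustration of that point, here is a minimal Python sketch, with made-up latency numbers, of a request that fans out to several devices and completes only when the slowest one answers; a single hot device sets the response time for the whole request.

```python
import random

def device_latency_ms(is_hot: bool) -> float:
    """Per-request service time: assumed seek + rotation, plus queueing on a hot device."""
    base = random.uniform(5.0, 10.0)                                    # idle-disk service time
    queue = random.uniform(50.0, 150.0) if is_hot else random.uniform(0.0, 5.0)
    return base + queue

def app_request_latency_ms(num_devices: int, hot_devices: int) -> float:
    """The application request touches every device and finishes when the last one does."""
    latencies = [device_latency_ms(i < hot_devices) for i in range(num_devices)]
    return max(latencies)

random.seed(1)
print("no hot spot :", round(app_request_latency_ms(8, 0), 1), "ms")
print("one hot disk:", round(app_request_latency_ms(8, 1), 1), "ms")
```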
So what can you do about these pesky hot spots? Well, almost everyone on the planet says the best way to address hot spots is to spread the data out so that you use more disk drives, but I believe that for many cases this is the absolute wrong thing to do. I am going to explore the pros and cons of this approach and suggest another way of looking at the problem. Remember that just because everyone believes something does not make it true. The world is not flat.
A Brief History of Hot Spots
Back in the early 1990s, RAID was in its infancy on open systems, though it had seen some use on IBM mainframes; EMC was the RAID leader, and the Veritas volume manager was coming into wide use. The confluence of these two products, in my opinion, led to what I call “the hot spot theory” of storage architecture, which at the time may have been the correct solution to the problem, but we now have other tools that might provide better solutions. Let’s dive into the way things were back then and why it made sense to do things that way.
The file systems of the time in open systems were standard UNIX file systems that had small allocations and that mixed file system metadata and data areas. On the mainframe side, IBM MVS dominated with its record-oriented file system. The point is that most UNIX file systems were structured so that data was not necessarily allocated sequentially, and MVS allocation was based on records, so databases were not allocated sequentially. Many database users on UNIX systems during this time used raw devices.
On the storage hardware side, Seagate had introduced SCSI drives, and they were taking the world by storm, replacing IPI-3 and other drive types. The table below shows the progression of SCSI drives.
[Table: progression of SCSI drive specifications from the early 1990s]
Compare that progress to today:
[Table: current drive specifications for comparison]
The key points from the comparison: disks have gotten a little faster since 2004, but not much, and capacities have grown far more than transfer rates, so it takes much longer to read the whole drive than it used to. Take your average 8 KB database I/O. In 1991, with the average seek and latency, it took about 0.004 seconds to read that data; today it takes about 0.0008 seconds, roughly a five-times improvement. What this means when you are doing small I/O requests is that seek and latency times will likely dominate performance, which is no surprise, since disk drives are mechanical devices. There are two points to make, and a rough model of the arithmetic follows them.
First, we had a world where disk transfers were reasonably fast relative to seek and latency, file systems broke data up into small chunks, and volume manager and RAID allocations were not designed to be very large. Second, hardware changes to disk drives since then have improved transfer rates far more than seek and latency times.
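To make that concrete, here is a back-of-the-envelope model of an 8 KB read, where service time is average seek plus rotational latency plus transfer time. The drive parameters are illustrative assumptions rather than figures from the tables above, but they show the same pattern: for small requests, seek and latency dominate, and they are the part that has improved the least.

```python
def rotational_latency_ms(rpm: float) -> float:
    """Average rotational latency is half a revolution."""
    return 0.5 * 60_000.0 / rpm

def small_read_ms(seek_ms: float, rpm: float, xfer_mb_s: float, io_kb: float = 8.0) -> float:
    """Service time for one small read: seek + rotational latency + transfer."""
    transfer_ms = io_kb / 1024.0 / xfer_mb_s * 1000.0
    return seek_ms + rotational_latency_ms(rpm) + transfer_ms

# Assumed drives: an early-1990s SCSI disk (~12 ms seek, 5,400 RPM, ~4 MB/s)
# and a current enterprise disk (~4 ms seek, 15,000 RPM, ~100 MB/s).
for label, seek, rpm, rate in [("circa 1991", 12.0, 5400, 4.0),
                               ("today     ", 4.0, 15000, 100.0)]:
    total = small_read_ms(seek, rpm, rate)
    xfer_only = small_read_ms(0.0, float("inf"), rate)   # transfer-only portion
    print(f"{label}: {total:5.2f} ms per 8 KB read "
          f"({xfer_only:.2f} ms of that is data transfer)")
```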
Cooling Hot Spots
Any good storage person knows that one size does not fit all, but the hot spot theory of storage architecture is treated as one of the 10 storage commandments. I am not saying that is wrong, but think of the problem this way. Let’s say I have a file system and volume manager allocating 64 MB or more of sequentially allocated data. Even with a RAID-5 8+1 and, say, a 512 KB stripe per drive, that 64 MB spans 16 full stripes, or 8 MB of sequentially allocated data on each drive. Think about that. The current generation of Seagate hard drives has 16 MB of cache per drive, but if everything is random, readahead by the drive does not help performance. Also, the average amount of data that could be transferred during a seek and latency is 558 KB for a read and 608 KB for a write; that is a great deal of transfer capacity lost on every random I/O request. The arithmetic is sketched below.
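Here is the stripe arithmetic from that example worked out, along with a sanity check of the seek-and-latency transfer loss; the sustained transfer rate and seek-plus-latency figures used for the check are assumptions, not numbers from the article.

```python
MB = 1024  # KB per MB

allocation_kb       = 64 * MB   # 64 MB of sequentially allocated data
data_drives         = 8         # RAID-5 8+1: eight data drives plus parity
stripe_per_drive_kb = 512       # stripe unit per drive

full_stripe_kb = data_drives * stripe_per_drive_kb   # 4 MB per full stripe
stripes        = allocation_kb // full_stripe_kb     # 16 full stripes
per_drive_kb   = stripes * stripe_per_drive_kb       # 8 MB sequential per drive

print(f"full stripe: {full_stripe_kb} KB, "
      f"full stripes in 64 MB: {stripes}, "
      f"sequential data per drive: {per_drive_kb // MB} MB")

# Transfer capacity lost during an average seek + rotational latency, assuming
# (for illustration) ~75 MB/s sustained transfer and ~7.4 ms of seek + latency.
xfer_mb_s = 75.0
seek_plus_latency_s = 0.0074
lost_kb = xfer_mb_s * seek_plus_latency_s * 1024
print(f"data that could have moved during one seek + latency: ~{lost_kb:.0f} KB")
```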
There is one more area that must be looked at before sequential allocation can be considered: whether the I/O requests themselves are sequential, an area that even I am not 100 percent sure of. The hot spot theory states that I/O requests are random. I am sure that is not correct 100 percent of the time, but I am not sure what percentage of the time it is correct. I know that in the HPC world, much of the I/O from the application is sequential, and I know that backups are often sequential. I also know that some database accesses are random, but some of the databases I have seen and traced actually read indices sequentially for a while, then skip by an increment, then read sequentially again. I have seen this behavior many times on many different types of databases, though not on all of them. If you put this on RAID-1 and your allocations can be sequential, then readahead will provide a significant performance improvement when requests are sequential, and best of all, you can do this with far less hardware, since disk efficiency improves.
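As a hedged illustration of that last claim, here is a toy model: without readahead, every 8 KB request pays a full seek plus rotational latency; with readahead on a sequential stream, that cost is paid once per large prefetch and the rest is streamed. All parameters are assumptions chosen for illustration.

```python
SEEK_LAT_MS  = 7.4     # assumed average seek + rotational latency
XFER_MB_S    = 75.0    # assumed sustained transfer rate
IO_KB        = 8       # application request size
READAHEAD_KB = 1024    # prefetch size when access is detected as sequential

def transfer_ms(kb: float) -> float:
    """Time to move `kb` kilobytes at the sustained transfer rate."""
    return kb / 1024.0 / XFER_MB_S * 1000.0

def time_random_ms(requests: int) -> float:
    """Every request treated as random: full seek + latency plus a small transfer each time."""
    return requests * (SEEK_LAT_MS + transfer_ms(IO_KB))

def time_sequential_ms(requests: int) -> float:
    """Sequential stream with readahead: one seek + latency per prefetch chunk."""
    total_kb = requests * IO_KB
    prefetches = -(-total_kb // READAHEAD_KB)   # ceiling division
    return prefetches * SEEK_LAT_MS + transfer_ms(total_kb)

n = 1000  # 1,000 requests of 8 KB, about 8 MB in total
print(f"treated as random : {time_random_ms(n):,.0f} ms")
print(f"with readahead    : {time_sequential_ms(n):,.0f} ms")
```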
We are going to have the hot spotters who proclaim that their way is the only way to architect storage for performance, but if you take the next step, analyze your data access patterns, and use a file system and volume manager to properly lay out your data, you can get better performance with less hardware and far better scaling. If the world moves to OSD, which I hope it does, object technology should have the intelligence to perform readahead and writebehind based on those data access patterns. Whatever happens, I think people need to rethink the problem and consider other potential solutions based on today’s technology.
Henry Newman, a regular Enterprise Storage Forum contributor, is an industry consultant with 27 years of experience in high-performance computing and storage.