The Art of Storage Benchmarking
Assuming that your purchase amount is significant enough to warrant a benchmark, you'll need to follow some basic steps.
Analyzing your requirements is key to developing a benchmark that meets your organization's needs. For storage, those requirements include:
- Backup and recovery
- Reliability and repair time
- Application usage
- File system being used
- HBA Failover
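One way to keep a requirements analysis honest is to capture it as structured data rather than scattered notes, so gaps are easy to spot and the spec can be reviewed and versioned. The sketch below is illustrative only; every field name and value is an assumption, not a standard.

```python
# Hypothetical sketch: a storage-benchmark requirements record.
# All field names and defaults are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class StorageRequirements:
    backup_recovery_window_hours: float          # max acceptable restore time
    mttr_hours: float                            # reliability / repair-time target
    applications: list = field(default_factory=list)  # workloads to model
    file_system: str = "xfs"                     # file system under test (assumed)
    hba_failover_required: bool = True           # must survive an HBA failure

    def gaps(self):
        """Return the requirement areas still left unspecified."""
        missing = []
        if not self.applications:
            missing.append("application usage")
        return missing

# Example: a spec with application workloads filled in has no gaps.
reqs = StorageRequirements(backup_recovery_window_hours=8,
                           mttr_hours=4,
                           applications=["OLTP database", "nightly batch"])
print(reqs.gaps())
```

A record like this also gives you a checklist to hand to the vendor: anything `gaps()` reports is a requirement you have not yet pinned down.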
Benchmarking storage is difficult because there are so many levels of indirection. You have to consider a number of questions, including:
- What will the size of the application I/O request be?
- What does the system do with that request?
- How does the file system and/or volume manager handle the request, and what does it do to it?
- What happens in the HBA with the queue of requests?
- If a switch is involved, how will its latency be understood and characterized?
- How will the RAID controller command queue manage the I/O requests?
- How will the RAID controller cache and caching algorithm work?
- What will the I/O performance of the controller, the disk drives, and the disk cache be?
- What happens when an HBA, switch, and/or RAID fails?
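Even the first two questions above, request size and what the system does with it, can be probed directly. The sketch below times writes at different request sizes, with and without a flush per request; the gap between the two exposes one layer of indirection (the page cache) sitting between the application and the device. This is a minimal illustration, not a real benchmark tool; file names and sizes are arbitrary.

```python
import os
import time
import tempfile

def time_io(path, request_size, total_bytes, sync=False):
    """Write total_bytes to path in request_size chunks; return seconds.

    With sync=False the OS page cache absorbs the writes; with sync=True
    each request is flushed to the device, exposing latency underneath.
    """
    buf = b"\0" * request_size
    start = time.perf_counter()
    with open(path, "wb", buffering=0) as f:
        for _ in range(total_bytes // request_size):
            f.write(buf)
            if sync:
                os.fsync(f.fileno())
    return time.perf_counter() - start

if __name__ == "__main__":
    total = 8 * 1048576  # 8 MB, small enough to run anywhere
    with tempfile.NamedTemporaryFile() as tmp:
        for size in (4096, 65536, 1048576):
            t = time_io(tmp.name, size, total)
            print(f"{size:>8} B requests: {total / t / 1e6:8.1f} MB/s (cached)")
```

Running this usually shows throughput varying with request size even for cached writes; adding `sync=True` makes the numbers drop sharply, because each request now waits on everything below the file system.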
Understanding all of this is not a straightforward process; doing it correctly means the vendor will incur a high labor cost to run the benchmark. You need to carefully review your requirements before you set up a benchmark, and use those requirements to develop a detailed but reasonable benchmark specification.
Development of benchmarking rules is just plain hard work. Remember, the goal of every benchmarker is to win every benchmark. That is often how the person earns a good portion of their salary. Therefore, a good benchmarker reads your rules and looks for advantages and loopholes that can often be seen only by experienced benchmarkers.
As a benchmarker, I often read the rules and followed them to the letter, not necessarily the spirit. In the past (I am a reformed benchmarker), I was known for developing SBTs (slimy benchmarking tricks) -- things like creating file systems whose allocation size matched the benchmark's file sizes to optimize allocation. Other tricks were equally shady, such as placing the file system in memory, or booting from an SSD to speed up UNIX pipes for an interactive benchmark. In each of these cases, the rules were followed to the letter, but the customer did not get what they wanted, although they did get what they asked for. My company was happy when we won the benchmarks, and I received a bonus for every win.
What should be clear from all of these examples is that documenting what you really want, and how you want it done, is critical. As a reformed benchmarker, I now spend some of my time writing benchmarking rules. For a recent benchmark for a government organization, the detailed set of rules for running the benchmarks totaled 100 pages. Obviously, this is a very time-consuming practice, and it is not cost effective unless you have a very large procurement or regular large procurements. The government organization in question runs a yearly benchmark process to purchase around $40 million in new hardware.
Much of the benchmarking that you might be involved with will include databases, which present numerous problems when designing benchmarks and their rules. How and where the tables, indexes, redo logs, and even the database application itself reside become critical issues. The absolute most important part of any benchmark is to test your reality: does the benchmark truly match your current environment, or the environment you expect to be operating in? Operational issues like backup and recovery must fit within the design of a benchmark as well. For example, if you are purchasing storage, you must have the vendor benchmark the database on the server you actually use; the same holds true in reverse for server benchmarks, which should run against your storage.
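Part of why database placement matters is that the components above generate very different I/O patterns: redo logs are sequential, synchronous appends, while index lookups are small random reads. A benchmark that ignores this mix tests the wrong thing. The sketch below contrasts the two patterns; it is a toy model under assumed record sizes, not a database benchmark.

```python
import os
import random
import time

def sequential_append(path, n, size=4096):
    """Redo-log-style workload: append n fixed-size records, fsync each."""
    buf = b"\0" * size
    with open(path, "ab", buffering=0) as f:
        start = time.perf_counter()
        for _ in range(n):
            f.write(buf)
            os.fsync(f.fileno())
        return time.perf_counter() - start

def random_reads(path, n, size=4096):
    """Index-lookup-style workload: read n records at random offsets."""
    file_size = os.path.getsize(path)
    with open(path, "rb", buffering=0) as f:
        start = time.perf_counter()
        for _ in range(n):
            f.seek(random.randrange(0, file_size - size + 1))
            f.read(size)
        return time.perf_counter() - start
```

Timing both against the same device, and then against separate devices, gives a crude picture of whether separating logs from data would help in your environment; a real benchmark would drive the mix from your actual workload trace.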