I've found myself getting worked up lately about some of the file system benchmark information and tests that have been published. Vendors use these benchmarks to tout how well their file system did compared to brand X, but file system benchmarks can't be looked at in the same way as, say, SPC benchmarks, since SPC is designed so that vendors can't really skew the results.
The big problem with file system benchmarks is that they are just the opposite of SPC benchmarks; there are no standard file system benchmarks that are run and agreed upon by vendors. I was a benchmarker at one time and consider myself a reformed benchmarker, so I know the tricks that vendors can use. We'll look at some of them and hopefully in the process make you a more informed consumer.
What Is a Real Benchmark?
This is the question we should all be asking ourselves: What is a real benchmark? Here is my definition: A real benchmark must be representative of your real workload, run in the same way that it would be run on the system being considered, and in the time period that workload will run on that system. Note some key points here:
- It is not the workload you are running today on the system you are running today, but the workload you will run on the new system.
- To be completely representative, that work must be run in the same way it would be run on your system.
Memory usage is an important part of file system benchmarking. If you are not running real applications like databases and instead are using benchmark tests like IOZONE, I/O requests might be consolidated into fewer, larger requests — a consolidation that might not have been possible if real applications were running. That will make I/O performance appear more favorable than it would under real-world conditions.
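The consolidation effect is easy to see in a toy model. The coalescing function, offsets and request sizes below are my own illustrative assumptions, not the behavior of any particular file system or benchmark:

```python
# Hypothetical sketch: how a buffer cache can merge many small, adjacent
# application writes into fewer, larger device requests.

def coalesce(requests):
    """Merge adjacent (offset, length) requests into larger device I/Os."""
    merged = []
    for off, length in sorted(requests):
        if merged and merged[-1][0] + merged[-1][1] == off:
            # This request starts exactly where the previous one ended:
            # extend the pending device I/O instead of issuing a new one.
            merged[-1] = (merged[-1][0], merged[-1][1] + length)
        else:
            merged.append((off, length))
    return merged

# A benchmark issuing 256 perfectly sequential 4 KB writes...
sequential = [(i * 4096, 4096) for i in range(256)]
print(len(coalesce(sequential)))   # → 1 (a single 1 MB device I/O)

# ...versus a workload whose writes leave gaps between them.
scattered = [(i * 8192, 4096) for i in range(256)]
print(len(coalesce(scattered)))    # → 256 (no consolidation possible)
```

A synthetic test that streams sequential writes gets the best case of this merging; a real application competing for the same cache rarely does.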
Fragmentation is another big issue. Every file system I have ever seen suffers from performance degradation because of fragmentation, often for both data and file system metadata, yet no benchmark I have seen takes fragmentation of the file system into account. Vendors make claims all the time about how their file system does not degrade with fragmentation, but I don't believe it, nor have I ever seen it in the real world. About the only situation where fragmentation might not be a problem is if you buy a system, sequentially create a bunch of files, and then never change any of them or add any storage. I've never seen these types of fixed-content environments, and I doubt anyone else has either.
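To see why fragmentation costs so much, here is a rough sketch that counts the seeks forced by non-contiguous extents when a file is read front to back. The extent lists are invented for illustration; real allocators behave in file-system-specific ways:

```python
# Illustrative only: each gap between non-contiguous extents costs a seek.

def seeks_needed(extents):
    """Count seeks for reading a file laid out as (start_block, length) extents."""
    seeks = 0
    prev_end = None
    for start, length in extents:
        if prev_end is not None and start != prev_end:
            seeks += 1          # disk head must reposition before reading on
        prev_end = start + length
    return seeks

contiguous = [(0, 1000)]                                    # freshly created file
fragmented = [(0, 100), (500, 100), (50_000, 100), (200, 100)]  # aged file system

print(seeks_needed(contiguous))   # → 0
print(seeks_needed(fragmented))   # → 3
```

A benchmark run on a freshly formatted file system only ever measures the contiguous case.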
These are just two examples of things to consider, but there are many more, such as the underlying storage and interconnects. From what I've seen, no one does real file system benchmarks, as they do not include file system fragmentation and often do not include real applications using memory space, bandwidth and CPU.
Here are some other common file system benchmarking tricks to be on the lookout for.
I'd like to go into a bit more detail on memory bandwidth and memory space. With the advent of PCIe 2.0, each bus can now support up to 5 GB/sec of memory bandwidth. The memory bandwidth of the Intel (NASDAQ: INTC) Xeon processor 5100 series with a 1333 MHz front side bus and FBDIMMs is listed at 21.3 GB/sec. Therefore, a single PCIe 2.0 bus could use about 23 percent of the total memory bandwidth of the machine, and two of them could use nearly 47 percent. If you are running real applications rather than a benchmark test, those applications are consuming memory bandwidth while doing their I/O, so running a file system benchmark that leaves that bandwidth unused does not match a real-world workload.
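The percentages above can be checked directly from the two figures quoted in the text:

```python
# Verify the bandwidth arithmetic using the figures cited above.
mem_bw_gbs = 21.3      # Xeon 5100-series memory bandwidth quoted in the text
pcie2_bus_gbs = 5.0    # per-bus PCIe 2.0 figure quoted in the text

one_bus = pcie2_bus_gbs / mem_bw_gbs * 100
two_buses = 2 * pcie2_bus_gbs / mem_bw_gbs * 100

print(f"{one_bus:.1f}%")    # → 23.5%
print(f"{two_buses:.1f}%")  # → 46.9%
```

Nearly half the machine's memory bandwidth can be claimed by just two I/O buses, which is exactly the contention a memory-idle benchmark never exercises.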
Most file systems will use varying amounts of memory based on the application load. Without using similar amounts of memory, it is not possible to determine what the file system is really doing with your I/O requests when buffering them in memory. I have seen a number of recently published benchmarks from a vendor showing the amazing performance of their file system. If you read the hardware specifications and the benchmark description carefully, you will see that there was enough memory for the database to completely fit in memory. Therefore, I/O was only done as a background process to synchronize the file and could have been done asynchronously by the buffer cache. A comparison was done with another file system that used direct I/O to immediately write the data to storage with a request size equal to the request size of the benchmark test. This is clearly not a fair comparison, and it is especially questionable for this vendor, as its file system does not support direct I/O.
Beyond obviously outrageous testing methods, like using Fibre Channel or SAS drives for your benchmark and SATA drives for the comparison system, there are a number of other things to watch for. Some file systems do direct I/O for writes or reads over a certain size. Suppose the vendor plays fair: the files are bigger than memory, and the I/O requests are the large ones you would see in a database, so the test really is running to disk. But suppose this vendor's file system is designed around small block allocations, reads and writes, and does not support direct I/O. You can make the other file system look bad by configuring the storage as RAID-1 rather than RAID-5 or RAID-6. If the allocations are small and larger requests get broken up, using RAID-1 will neutralize the advantage the other file system has for this type of test.
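The RAID interaction can be sketched with the textbook small-write costs — two mirror writes for RAID-1, and a read-modify-write of data plus parity for RAID-5. These are idealized device I/O counts, not measurements from any vendor's system:

```python
# Back-of-envelope sketch: device I/Os generated by small host writes
# under each RAID level, using textbook small-write costs.

def device_ios(host_writes, raid):
    """Idealized device I/O count for sub-stripe-sized host writes."""
    if raid == "raid1":
        return host_writes * 2   # write each block to both mirrors
    if raid == "raid5":
        return host_writes * 4   # read old data + old parity, write new data + new parity
    raise ValueError(f"unknown RAID level: {raid}")

# A 1 MB request broken into 256 separate 4 KB allocations:
small_writes = 256
print(device_ios(small_writes, "raid1"))  # → 512
print(device_ios(small_writes, "raid5"))  # → 1024
```

On RAID-5, the file system that fragments large requests into small writes pays four device I/Os per write; on RAID-1 that penalty shrinks, which is why the RAID level chosen for a comparison is worth scrutinizing.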
File System Tunables
Some file systems have default tunables that match specific I/O request sizes and thread counts. I have seen vendors say they use out-of-the-box tunables, knowing that the I/O test they are going to run will cause the comparison file system to perform poorly with that set of tunables. Of course, some file systems have hundreds of tunable options, and choosing the right set for a comparison can be difficult, but many vendors ship poor default tunables, which can skew benchmark results. I have also seen cases where a vendor tunes its own file system and either does not tune the other file system or chooses poor tuning parameters for it.
Most vendors do not run real applications as part of their I/O benchmarks. While some do, I/O benchmarks and applications can be deceptive in either case. I am not picking on specific applications that have file system-specific tunables embedded in them, but you must be aware that some databases, file system benchmarks and other applications have file system-specific tunables built in. IOZONE is a common benchmark that has file system- and system-specific performance changes for the VxFS file system and the HP-UX operating system, among other system- and file system-specific changes. This does not make IOZONE a bad test, but it does mean you need to be aware of the potential issues when using any I/O test, whether that is a database or some other benchmark.
The FUD Factor
Let's face it: the goal of any marketing department is to spread FUD (fear, uncertainty and doubt) about whether you are making the right choice. Some of the file system benchmarks I have seen recently are good examples of this, as many of them aren't going to be of much use in helping you figure out how a file system will fare in a real-world environment. The latest craze is to use SPC-1 and cite price-performance numbers. Just read the benchmarks carefully and be aware of the issues and your requirements.
Henry Newman, a regular Enterprise Storage Forum contributor, is an industry consultant with 28 years' experience in high-performance computing and storage.
See more articles by Henry Newman.