Benchmarking Storage Systems, Part 1


Benchmarking storage systems has become very complex over the years, given all of the hardware and software components used in both NAS and SAN systems. To conduct an effective storage benchmark, you need a solid understanding of all the parts involved, how to create the benchmark based on your applications, and how to put it all together. I'll begin this three-part series on benchmarking storage with an examination of each of the components that might be included in a benchmark.


The emergence of so many different types of computer and storage systems from a variety of vendors has led customers to develop benchmarks that characterize the performance of their applications on each of the systems they’re looking at purchasing. Over the years, though, the benchmark process has become a game of “cat and mouse, dog eat dog, high stakes poker” – pick your cliché – with computer vendors and customers each trying to get the upper hand.

Customers would typically write rules and create emulations of their workload or use their real workloads in an attempt to get an accurate representation of the vendor’s performance as well as to prevent the vendors from taking advantage of a tactic I call SBT (Slimy Benchmarking Tricks). On the other side of the equation, the vendor’s customary goal of attempting to win the benchmark at all costs would often work against the customer’s goal of meeting requirements for performance, reliability, and cost. On the customer side, it is almost always a balancing act among these three points.

What to Benchmark and Why

There are the obvious pieces of hardware that you need to include in the benchmark, such as NAS, RAIDs, tape drives, and server/hosts. There are also a large number of not-so-obvious hardware and software components, such as file systems, OS system tunables, tape libraries, HBAs and HBA tunables, Fibre Channel switches, NIC and TOE cards, RAID tunables and cache sizes, NAS tunables and cache sizes, and failover and fail back, just to name a few.

Here are a couple of questions to start with when planning a benchmark:

  • Are all of the not-so-obvious items important to the benchmark?
  • Do I need to create benchmarks in order to measure and understand each component?

I believe that some of the not-so-obvious hardware and software parts can be quite important in some cases, and at a minimum you need to understand the issues surrounding them. You might remember the example I used previously of the $2K HBA that significantly reduces the performance of a $1M RAID system.

Regarding the measurement issue, this is generally your responsibility. The storage provider will generally tell you what hardware and software they will provide and what is being used in the benchmark. If the benchmark does not reflect your real workload, then you could have the $2K HBA problem when the system is installed, but again, that would be your problem.

On the other hand, measuring the performance of even a simple HBA is a non-trivial effort for all but the most experienced storage performance analyst. Add to the process a consideration of the HBA tunables and failover and fail back, and the problem becomes insurmountable for most organizations. That is why it is a good idea to understand the issues, but not necessarily a good idea to attempt to benchmark each of the component parts.


Component Part Benchmarking Issues

Each component can have a significant impact on the results of your benchmark. Let’s take a quick look at each.

RAID and Tunables

Benchmarking RAID devices can be difficult given all of the different aspects of the RAID hardware and software:

  • Front-end performance – Performance from the host to the Fibre Channel connection to the cache
  • Cache – Performance and bandwidth of the cache and the caching algorithm
  • Back-end performance – Performance of the RAID from the cache to the disk

Add to these issues the performance of various RAID levels, the myriad of tunable parameters for cache, cache allocation, device allocation, etc., and you will soon realize that it’s extremely difficult to cover every area of RAID hardware and software.

Instead, you need to understand the I/O behavior of your benchmark and ensure that it exercises the RAID cache the way your real workload does, so that the benchmark emulates the behavior of the real system.
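To see why the cache matters so much, a toy model helps: a benchmark whose working set fits inside the controller cache will report numbers the real workload never sees. Below is a minimal sketch in Python; the cache size and both access patterns are made up for illustration, and the LRU model is a simplification of any real controller's caching algorithm.

```python
from collections import OrderedDict
import random

def hit_ratio(accesses, cache_blocks):
    """Replay a block-access trace against a simple LRU cache
    and return the fraction of accesses served from cache."""
    cache = OrderedDict()
    hits = 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.move_to_end(block)
        else:
            cache[block] = True
            if len(cache) > cache_blocks:
                cache.popitem(last=False)  # evict least recently used
    return hits / len(accesses)

CACHE = 1024  # hypothetical controller cache size, in blocks

# Benchmark-style workload: re-reading a small region that fits in cache.
small_loop = list(range(512)) * 8
# Real-workload-style: random reads over a data set 100x the cache size.
random.seed(0)
scattered = [random.randrange(CACHE * 100) for _ in range(4096)]

print(f"small loop hit ratio: {hit_ratio(small_loop, CACHE):.2f}")
print(f"scattered  hit ratio: {hit_ratio(scattered, CACHE):.2f}")
```

The small loop hits cache almost every time after the first pass, while the scattered trace almost never does, which is exactly the gap between an unrepresentative benchmark and the installed system.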

Tape and Libraries

Tapes and libraries have the fewest hardware and software areas that need to be considered in a benchmark. Tape compression is a big consideration for performance, along with the interface.

The tricky part is the development of data sets that closely mimic your data to ensure that the compression for the benchmark matches the compression at purchase. The other area to consider is the tape load and position time. For libraries, the issues are the robot pick time and how well the library will work with the software that you’re going to be using.
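One practical check when building those data sets is to measure their compressibility and compare it against samples of your real data. The sketch below uses Python's zlib as a rough stand-in for a drive's hardware compression (the ratios will differ from any actual tape drive, and the sample data is fabricated), but the relative ordering still tells you whether your synthetic data is realistic.

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Uncompressed size divided by compressed size (higher = more compressible)."""
    return len(data) / len(zlib.compress(data, 6))

size = 1 << 20  # 1 MB samples
samples = {
    "zeros (best case)":   bytes(size),
    "repeated records":    (b"account_id,balance,timestamp\n" * 40000)[:size],
    "random (worst case)": os.urandom(size),
}
for name, data in samples.items():
    print(f"{name:20s} ratio {compression_ratio(data):8.1f}:1")
```

A benchmark data set built from zeros or repeated records will sail through a compressing drive at far above the native rate; if your real data looks more like the random sample, the purchased system will be much slower than the benchmark suggested.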

File Systems

Benchmarking file systems is just plain hard and is likely to be the most complex part of the benchmark. It also has the potential to create quite the opportunity for a large number of SBTs (I should know, as I used to do them, but have since reformed). Here are a few areas to consider:

  • Allocation sizes
  • Topology of the file system metadata, data, and log, and RAID devices
  • Tunables for allocation, logs, metadata, etc.
  • Server memory and file system tunables

By far the biggest issue I have seen is the discrepancy in performance between a fragmented file system and one freshly created with mkfs. More than likely, your storage benchmark will run on a newly mkfs'ed file system, which tends to produce larger I/O requests. That can skew the results in the RAID vendor's favor, as some vendors handle large I/O requests far better than small ones, and it will surely change the effect of the RAID's large-block tunables.
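The request-size effect is easy to quantify with a back-of-the-envelope model: every request pays a fixed positioning cost before transferring at the media rate, so shrinking the request size (which is what fragmentation does) collapses effective throughput. All of the figures below are illustrative, not measurements of any specific device.

```python
def throughput_mb_s(request_kb, seek_ms=5.0, media_mb_s=200.0):
    """Effective throughput when every request pays a fixed positioning
    cost (seek + rotation) before transferring at the media rate.
    seek_ms and media_mb_s are illustrative numbers, not real specs."""
    transfer_ms = request_kb / 1024 / media_mb_s * 1000
    return (request_kb / 1024) / ((seek_ms + transfer_ms) / 1000)

for kb in (4, 64, 1024, 16384):
    print(f"{kb:6d} KB requests -> {throughput_mb_s(kb):7.1f} MB/s")
```

With these assumptions, 4 KB requests deliver well under 1 MB/s while 16 MB requests approach the full media rate, which is why a fresh file system that issues large I/Os can make a RAID look far faster than it will be after a year of fragmentation.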

System Tunables

It cannot be emphasized enough that you need to either specify the system tunables for the benchmark or ensure that each vendor reports the tunable changes.
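One straightforward way to enforce that reporting requirement is to snapshot the tunables before and after each vendor's run and diff the two. The sketch below is hypothetical: the dictionaries stand in for parsed output of something like `sysctl -a`, and the tunable names shown are just examples.

```python
def diff_tunables(baseline: dict, vendor: dict) -> dict:
    """Return {name: (old, new)} for every tunable the vendor
    changed or added relative to the agreed baseline."""
    changed = {}
    for name, new in vendor.items():
        old = baseline.get(name)
        if old != new:
            changed[name] = (old, new)
    return changed

# Hypothetical snapshots taken before and after the vendor's run.
baseline = {"vm.dirty_ratio": "20", "net.core.rmem_max": "212992"}
after    = {"vm.dirty_ratio": "40", "net.core.rmem_max": "212992",
            "vm.swappiness": "1"}

for name, (old, new) in diff_tunables(baseline, after).items():
    print(f"{name}: {old} -> {new}")
```

Requiring this diff as a benchmark deliverable makes it much harder for a vendor to quietly tune the system into a configuration you would never run in production.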


NAS and Tunables

Benchmarking NAS devices is even more complex than benchmarking RAIDs with file systems, as you have RAID and file systems, hardware, and software all in a single package. The only good news is that NAS performance does not and cannot get close to the performance of Fibre Channel-attached RAID. Additionally, as NAS cost per MB is generally lower than that of Fibre Channel RAID, NAS devices are benchmarked far less often. Fragmented file system performance and CPU performance can be big issues for some NAS systems.

As with RAID, you must understand the I/O behavior of your benchmark and exercise the NAS cache in a way that emulates the behavior of the real system.

Servers/Host Hardware

The server performance is often not considered, but definitely should be, especially for Fibre Channel RAID devices given their high performance. It might be sad, but I have observed the following to be true:

  1. Some servers have PCI and PCI-X buses that do not run at full rate
  2. Some servers have different performance for different PCI/PCI-X slots
  3. Some servers are not configured with enough memory bandwidth to run all of the PCI-X slots at full rate
  4. Many servers do not have the memory bandwidth to run the CPU(s) and PCI/PCI-X buses at the same time
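A crude way to sanity-check the last two points is a single-stream memory-copy test, which gives a floor on host memory bandwidth. The Python sketch below leans on the fact that copying a buffer with `bytes()` is done in C, so interpreter overhead is mostly amortized; a real evaluation would use a purpose-built tool such as STREAM and exercise all memory channels and PCI buses concurrently, which this sketch does not.

```python
import time

def copy_bandwidth_mb_s(size_mb=256, rounds=5):
    """Rough single-stream memory-copy bandwidth in MB/s.
    Takes the best of several rounds to reduce timing noise."""
    src = bytearray(size_mb * 1024 * 1024)
    best = 0.0
    for _ in range(rounds):
        t0 = time.perf_counter()
        dst = bytes(src)          # one full copy of the buffer, done in C
        elapsed = time.perf_counter() - t0
        best = max(best, size_mb / elapsed)
        del dst
    return best

print(f"~{copy_bandwidth_mb_s():.0f} MB/s single-stream copy")
```

If this number is not comfortably above the aggregate rate you expect from your HBAs, the server, not the storage, will be the bottleneck in the benchmark.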

Operating Systems

Let me tell you about a test we recently conducted for a client. We were testing a RAID device and found that our 16 MB I/Os were getting broken down into requests from 4 KB to 128 KB on Linux. This reduced the performance of the RAID device by about 30% for reads and 40% for writes.

It was suggested that we load Windows 2000 and then retry our experiments. Once we did so, lo and behold, the performance was where it should be. I am not trying to bash Linux with this example, but rather to point out that the operating system, even on exactly the same hardware, can make a huge difference.
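The cost of that request splitting is simple arithmetic: each piece the OS carves off a large I/O adds a fixed submission and completion overhead. The model below uses made-up overhead and media-rate figures, but the 16 MB I/O size and 128 KB split size match the scenario above.

```python
def split_cost(io_mb=16, max_request_kb=128, per_request_ms=0.5,
               media_mb_s=400.0):
    """Requests issued and total time for one large I/O when the OS
    splits it into max_request_kb pieces, each paying a fixed
    submission overhead. Overhead and media rate are illustrative."""
    requests = (io_mb * 1024 + max_request_kb - 1) // max_request_kb
    transfer_ms = io_mb / media_mb_s * 1000
    return requests, transfer_ms + requests * per_request_ms

for max_kb in (128, 16 * 1024):
    n, ms = split_cost(max_request_kb=max_kb)
    print(f"max request {max_kb:6d} KB -> {n:4d} requests, {ms:6.1f} ms")
```

A 16 MB I/O split into 128 KB pieces becomes 128 separate requests, and the accumulated per-request overhead can easily account for the 30-40% loss we measured.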

HBAs and HBA Tunables

In almost all cases, the vendor will understand and tune the HBA for maximum performance on the system, so this should not be a big issue. The only caveat is that different HBA and driver combinations can show large differences in both transfer rate and the system overhead required to read and write the data.


Fibre Channel Switches

The question is simple: do your applications require full duplex 2 Gb I/O performance from any port to any port? Most environments do not. For the few that do, the Fibre Channel switch can be a big issue, as not all switches are alike.

NIC and TOE Cards

These cards are used for IP communication and are part of a NAS benchmark. NICs (network interface cards) provide the connection between the computer and the network’s physical medium, while TOE (TCP/IP offload engine) cards offload the processing of the TCP/IP stack, taking some of the burden off of the main processor.

I have seen more than a 2x difference in performance, and nearly as large a difference in system overhead, between Gigabit Ethernet NICs from two different vendors on the same host, same OS, same everything. Therefore it is important to get the right TOE or NIC card for the type of work you are going to be doing.
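When comparing cards, it helps to first measure what the host's network stack can do on its own, since that is the ceiling any NIC test runs under. The sketch below pushes data through a local socket pair, which exercises the TCP/IP-style send/receive path without any physical network; comparing two NICs would mean running a similar transfer over each card between two hosts.

```python
import socket
import threading
import time

def loopback_throughput_mb_s(total_mb=64, chunk_kb=64):
    """Push total_mb through a local socket pair and report MB/s.
    This measures host stack overhead only, not any NIC."""
    a, b = socket.socketpair()
    payload = b"x" * (chunk_kb * 1024)
    total_bytes = total_mb * 1024 * 1024

    def drain():
        got = 0
        while got < total_bytes:
            got += len(b.recv(1 << 20))

    t = threading.Thread(target=drain)
    t.start()
    t0 = time.perf_counter()
    sent = 0
    while sent < total_bytes:
        a.sendall(payload)
        sent += len(payload)
    t.join()
    elapsed = time.perf_counter() - t0
    a.close()
    b.close()
    return total_mb / elapsed

print(f"~{loopback_throughput_mb_s():.0f} MB/s through the local stack")
```

If two NICs report very different numbers but both sit far below this loopback figure, the difference is in the cards and drivers, not the host.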

Failover and Fail Back

You might ask why failover and fail back would be part of a benchmark and not just a usability issue, and you might be correct. The reason I have added it to this list, though, is that in some cases fail back requires manual intervention and can significantly impact performance.

Failover performance might need to be tested for both HBAs and possibly the RAID controller depending on the design. You should know the performance of the system during a failover, as HBAs and NICs generally have higher failure rates than most of the other server system components.

What’s Next

We have covered the components of a storage benchmark and some of the issues surrounding them. Each of these hardware and software pieces can have a significant impact on the performance, reliability, and price of the system. Regardless of the type of NAS or SAN system, storage vendors will either provide each of these parts or you will be using some you already have. All of the parts must be considered and specified as part of the benchmark to ensure that you actually get what you asked for.

Future articles in this series will cover how to develop representations of your workload and then how to package it all together for a formal benchmark.


Henry Newman
