This is the final installment in our three-part series on benchmarking. Part 1 examined each of the components that might be included in a typical benchmark, while Part 2 looked at developing representations of your workload as well as the pros and cons of using your applications and real data in the benchmark as opposed to developing emulations of both.
In the previous two storage systems benchmarking articles we covered:
- Types of hardware and software to benchmark
- Types of benchmarks
- Application characterization
- Vendor issues
This leaves several steps that still need to be covered in order to complete the benchmarking process:
- Internal agreement on scoring
- Writing the specification for the benchmark and the vendor proposal
- Analysis of vendor responses
Each of these areas needs to be addressed, agreed upon by all involved, and scheduled as part of the whole procurement process. A formal procurement process means significant work on your side, but it is also significant work for the vendors, so it should be made as painless as possible for everyone.
In addition, from what I have seen, a formal, open, and fair procurement process tends to get the customer a better price than calling up a vendor and asking, "How much would 10 TB of Fibre Channel RAID cost me?" Perhaps the most important benefit is that by going through a formal benchmark process, everyone from the accounting department to the system administrators knows what the organizational and operational requirements are, because those requirements will have been explicitly defined as part of the procurement.
The process of scoring a benchmark is just plain hard. Different groups want different things, all of which must fit within the budget, and vendors do not necessarily make it easy to determine what has been bid at what price, what maintenance will cost, and over how many years that cost will run. Add in the services from the vendor, and understanding the many cost factors becomes hard work.
From what I have seen over the years, it is critical that the scoring methodology be created and agreed upon before the benchmark is released. This avoids a great deal of internal infighting and ensures that each department's favorite vendors, as well as not-so-favorite or unfamiliar vendors, all remain on a level playing field.
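One common way to make such a methodology concrete is a weighted scoring rubric fixed before any bids are opened. The criteria, weights, and vendor ratings below are hypothetical, purely to illustrate the mechanics:

```python
# Hypothetical rubric agreed upon before the benchmark is released.
# The point is that the criteria and weights are locked in first,
# so every bid is scored the same way.
WEIGHTS = {"price": 0.40, "performance": 0.35, "support": 0.25}

def score_bid(ratings):
    """Combine per-criterion ratings (0-10 scale) into one weighted score."""
    return sum(WEIGHTS[c] * ratings[c] for c in WEIGHTS)

# Example ratings for one (fictional) vendor's bid.
vendor_a = {"price": 7, "performance": 9, "support": 6}
print(round(score_bid(vendor_a), 2))  # 0.40*7 + 0.35*9 + 0.25*6 = 7.45
```

Because the weights are agreed upon by all departments in advance, disputes shift from "whose vendor wins" to "what actually matters," which is a far more productive argument to have.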
One thing I would strongly suggest in order to reduce the complexity of scoring is defining a maintenance price that all vendors will bid to. More often than not, vendors price maintenance so differently that scoring the actual price becomes an exercise in futility. If you tell the vendors that the maintenance cost shall be, say, 8% of the purchase price and have them bid the initial price based on that 8%, you will significantly reduce the complexity of scoring the bid. You may also want to add an inflationary factor to the maintenance cost, depending on the lifecycle of your procurement.
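To see how the fixed percentage plus an inflationary factor plays out, here is a small sketch of lifecycle cost under assumed numbers (the 8% rate comes from the text; the 3% inflation, 5-year term, and $100,000 initial price are illustrative assumptions):

```python
# Total lifecycle cost when every vendor bids maintenance at the same
# fixed percentage of the initial price. Inflation rate, term, and
# initial price are hypothetical values for illustration.
def lifecycle_cost(initial_price, maint_rate=0.08, inflation=0.03, years=5):
    # Each year's maintenance fee is 8% of the initial price,
    # compounded by the assumed inflation factor.
    maintenance = sum(
        initial_price * maint_rate * (1 + inflation) ** year
        for year in range(years)
    )
    return initial_price + maintenance

print(round(lifecycle_cost(100_000)))  # 142473
```

Because every vendor is held to the same maintenance formula, the only number left to compare is the initial bid price, which is exactly the simplification the fixed percentage buys you.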
Before you start, everyone involved in the process has to agree on what is important. This means the people spending the money have to agree on what is in the budget, the people who maintain the equipment have to state what is in their O&M (operations and maintenance) budget, and the group responsible for defining performance has to provide the specific performance requirements.