
Tuning Your RAID Controller for Maximum Storage Performance

Henry Newman

Tuning RAID controllers is not as difficult as some vendors would have you believe; there's no need for professional services to get the job done.

Most of the tunable parameters revolve around the cache and how it is used, along with the obvious tunables for the RAID LUNs. This article isn't about tuning specific RAID controllers; for that, you will need to spend some time reading the documentation. But by reading this you should be able to consider the parameters in the context of the I/O of the whole system. Each vendor has its own nomenclature for variable names and what they mean. As there is no standard set of definitions, I have chosen my own, which you should be able to map to a specific vendor's terms. The areas that need to be considered are LUN creation and RAID level, and cache tuning and configuration.

Figuring out what RAID levels to use has been pretty well covered (see RAID Storage Levels Explained), so we'll stick to the subject of RAID tunables here. Whether you are configuring a RAID controller card in your PC or a high-end, mission-critical enterprise RAID array, you should have a good understanding of what to consider after reading this article.

We'll start by considering what type of RAID controller you have. Today they can be broken down into three categories:

  1. Enterprise Active/Active: This type of controller allows you to write from any host to any LUN without performance degradation. These controllers typically have large mirrored caches (often over 32 GB), and they are designed for hot-swap everything and very high uptime. Communication to the controller today is over Fibre Channel, and soon FCoE.
  2. Midrange Active/Passive: This type of controller has two sides for each LUN: an active side, which is the primary path, and a passive side, which is used for failover. You typically divide the LUNs evenly between the two sides, so each controller is the primary path for half the LUNs and the failover path for the other half (see the sketch after this list). Cache can be mirrored in the controller, but these controllers are not as resilient as enterprise controllers. Communication to the controller today is over Fibre Channel, and soon FCoE.
  3. RAID Host Cards: These are cards that plug into a PCIe slot and connect to the drives via SAS or SATA. These cards do not have processors as powerful as midrange or enterprise controllers, nor do they support as many drives. Failover to another controller is not possible, and your system is only as resilient as your PCIe slot and controller card.
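As a rough sketch of the active/passive LUN split described in item 2 above, the short Python fragment below alternates primary ownership between the two controllers. It is illustrative only; the LUN names and the assign_lun_ownership function are hypothetical, not part of any vendor tool.

    # Illustrative only: evenly splitting LUN ownership across the two sides of a
    # hypothetical active/passive midrange controller, so each side is the primary
    # path for half the LUNs and the failover path for the rest.
    def assign_lun_ownership(lun_ids):
        """Alternate primary ownership between controller A and controller B."""
        assignments = {}
        for i, lun in enumerate(lun_ids):
            primary = "controller_A" if i % 2 == 0 else "controller_B"
            failover = "controller_B" if primary == "controller_A" else "controller_A"
            assignments[lun] = {"primary": primary, "failover": failover}
        return assignments

    # Hypothetical LUN names; real names come from your array's management interface.
    for lun, paths in assign_lun_ownership(["lun0", "lun1", "lun2", "lun3"]).items():
        print(lun, paths)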
Many RAID vendors think only about their devices and storage. They somehow assume that storage is allocated sequentially from the host and treated as a raw device. Although this view is changing somewhat, I still run into the bizarre vendor view that the whole world uses nothing but raw devices and databases and that files are written one at a time. Block-based file systems don't allocate data sequentially.

RAID Cache Tuning and Configuration

RAID cache tuning can be broken down into three areas:

  • Tuning cache, both read-ahead and write-behind
  • Tuning cache block sizes
  • Tuning cache for mirroring (important for midrange controllers)
Read-ahead and Write-behind: You might think that read-ahead and write-behind behavior would be the same, but they are actually quite different.

If read-ahead – reading data before it is requested by fetching sequential blocks from the disk – is to work, it assumes that the data will be read sequentially and that it is allocated on sequential block addresses. RAID controllers do not know the topology of the file system or the data; all they know is block addresses, so controller read-ahead requests are for sequential block addresses. If your file system allocation is smaller than your RAID stripe size, then files are likely to be fragmented within these RAID stripes whenever more than one file is being written at the same time.
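To put a number on that fragmentation risk, here is a minimal sketch in Python (the function name is mine, not a controller setting) of how many different files could end up sharing a single stripe in the worst case:

    # Rough illustration: if the file system allocates space in units smaller than
    # the RAID stripe, a single stripe can hold extents from many different files
    # when several files are written concurrently, which defeats sequential read-ahead.
    def max_files_per_stripe(stripe_size_kb, fs_allocation_kb):
        """Worst case: every allocation unit in the stripe belongs to a different file."""
        return stripe_size_kb // fs_allocation_kb

    # A 512 KB RAID 5 (8+1) stripe with a 64 KB file system allocation:
    print(max_files_per_stripe(512, 64))  # -> 8 files could share one stripe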

If, for example, the file system allocation is 64 KB, the RAID 5 8+1 stripe is 512 KB and multiple files are being written, what most RAID controllers do is read the data you requested – in this case 64 KB – and perhaps another 64 KB, and if you continue reading sequentially, often the whole stripe. On the other hand, if you read just a single 64 KB block and the rest of the stripe holds data from other files, then read-ahead only hurts. Match the RAID stripe to the file system allocation, add some knowledge of how many files are written at the same time, and you'll have a good picture of the impact read-ahead could have on your system and how aggressively to tune it.
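The trade-off can be sketched the same way. Assuming the whole 512 KB stripe is pulled into cache, the fraction that was actually useful depends on how much of it belongs to the file being read (the useful_kb value here is a made-up input, not something the controller reports):

    # Minimal sketch of the read-ahead trade-off: of a 512 KB stripe read into
    # cache, how much did the application actually want?
    def read_ahead_efficiency(stripe_kb, useful_kb):
        """Fraction of the read-ahead data the application actually requested."""
        return useful_kb / stripe_kb

    print(read_ahead_efficiency(512, 512))  # 1.0   -> sequential read of one file; read-ahead pays off
    print(read_ahead_efficiency(512, 64))   # 0.125 -> the other 87.5% of the stripe was wasted effort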

Write-behind – accepting writes into cache so they can be flushed to disk later – provides significant value if the data being written is aligned to the stripe size of the RAID, as it gives the writer acknowledgement of the write as soon as the data hits the cache, hiding the latency of writing to disk. The key here is that the data must be aligned to the RAID stripe, which, depending on the file system, can often be difficult. If it is not aligned, then the RAID controller must do a read-modify-write (read the stripe in, merge the new data, write the stripe out), which has high overhead and latency. Tuning for write-behind often involves deciding how much cache space to allocate for writes compared to read-ahead on some controllers, and it also involves the minimum cache block size that can be read or written.
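To make the alignment point concrete, the sketch below (illustrative only; needs_read_modify_write is not a real controller parameter) checks whether a write covers whole stripes and can therefore skip the read-modify-write path:

    # Illustrative only: a full-stripe write lets a RAID 5 controller compute parity
    # from the new data alone; a partial or misaligned write forces a read-modify-write
    # of the affected stripe(s).
    def needs_read_modify_write(offset_kb, length_kb, stripe_kb):
        """True if the write does not cover whole stripes starting on a stripe boundary."""
        return offset_kb % stripe_kb != 0 or length_kb % stripe_kb != 0

    STRIPE_KB = 512  # e.g., RAID 5 8+1 with a 64 KB segment per data drive

    print(needs_read_modify_write(0, 512, STRIPE_KB))   # False: one full-stripe write
    print(needs_read_modify_write(64, 64, STRIPE_KB))   # True:  partial, unaligned write
    print(needs_read_modify_write(0, 1024, STRIPE_KB))  # False: two full stripes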
