Getting Failover Right



Volume Manager and File System Options
Volume managers such as Veritas VxVM, and file systems such as ADIC StorNext and several Linux cluster file systems, understand and can maintain multiple potential paths to a LUN. These products can determine what the appropriate path to the LUN should be, but with Active/Passive controllers it is often up to the administrator to configure the correct path(s) so that LUNs are not failed over to the other controller unnecessarily.
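As a minimal sketch of that administrative concern (the `Path` structure and function names here are illustrative, not any vendor's API): on an Active/Passive array, I/O should stay on paths through the controller that currently owns the LUN, falling back to the other controller only when no owning-controller path survives, since that fallback forces a LUN trespass.

```python
from dataclasses import dataclass

@dataclass
class Path:
    hba: str          # host bus adapter this path runs through
    controller: str   # array controller ("A" or "B") behind this path
    alive: bool       # whether the link is currently up

def preferred_paths(paths, owning_controller):
    """Return live paths through the controller that owns the LUN.

    Only if none exist do we fall back to the other controller, which
    on an Active/Passive array forces the LUN to fail over (trespass).
    """
    primary = [p for p in paths if p.alive and p.controller == owning_controller]
    if primary:
        return primary
    return [p for p in paths if p.alive]   # last resort: trespass the LUN

paths = [
    Path("hba0", "A", True),
    Path("hba1", "B", True),
]
# The LUN is owned by controller A, so only the hba0 path should carry I/O.
print([p.hba for p in preferred_paths(paths, "A")])   # ['hba0']
```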

Failover at this layer was the first type of HBA and storage failover available for Unix systems. It allows the file system itself to understand the storage topology and balance load across it. On the other hand, you could be doing a great deal of work in the file system that might belong at lower layers, which have more information about the LUNs and the paths. Volume manager and file system multipathing also supports HBA load balancing.
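The load balancing mentioned above can be as simple as rotating I/O requests across the live paths, which is the kind of scheduling a volume manager or file system multipath layer performs. A hedged sketch (path names are made up for illustration):

```python
from itertools import cycle

def round_robin(live_paths):
    """Rotate I/O across all live paths; on an Active/Active array
    either controller can service the request."""
    return cycle(live_paths)

scheduler = round_robin(["hba0:ctlA", "hba1:ctlB"])
# Each successive I/O is issued on the next path in rotation.
print([next(scheduler) for _ in range(4)])
# ['hba0:ctlA', 'hba1:ctlB', 'hba0:ctlA', 'hba1:ctlB']
```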

Loadable Drivers
Loadable drivers from vendors such as EMC (PowerPath) and Sun (Traffic Manager) manage HBA and switch failover. You need to make sure that the hardware you plan to use with these drivers is supported.

For example, according to the EMC Web site, EMC PowerPath currently supports only EMC Symmetrix, EMC CLARiiON, Hitachi Data Systems (HDS) Lightning, HP XP (Hitachi OEM) and IBM Enterprise Storage Server (Shark). According to Sun's Web site, Sun Traffic Manager currently supports Sun storage and HDS Lightning.

Other vendors are developing products that will provide similar functionality. As with the volume manager and file system method for failover, loadable drivers also support HBA load balancing as well as failover.

HBA Driver Failover
HBA drivers on some systems provide the capability for the driver to maintain and understand the various paths to the LUNs. In some cases this failover works only for Active/Active RAIDs; in other cases, depending on the vendor and the system type, it works for both types of RAIDs (Active/Active and Active/Passive). Since HBA drivers often recognize link failures and link logins faster than the other methods, this failover mechanism generally allows the fastest resumption of I/O, since at the lowest level you have the greatest knowledge of the link state.

Each failover mechanism can be tuned to improve the performance of the system, but the most important issue is to determine which products work with which systems and OS releases and with which RAID hardware. If you have a heterogeneous environment, as is becoming more common, you need to develop a matrix. It is very likely that you will need to have different failover software for different machine types.
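Such a matrix can start out as nothing more than structured data mapping each OS/RAID combination to the failover product that supports it. The entries below are illustrative examples drawn from this article, not a complete or current compatibility list.

```python
# (OS, RAID array) -> failover product to deploy on that machine type.
# Illustrative entries only; verify against each vendor's support list.
support_matrix = {
    ("Solaris", "Sun StorEdge"):  "Sun Traffic Manager",
    ("Solaris", "EMC Symmetrix"): "EMC PowerPath",
    ("AIX",     "EMC CLARiiON"):  "EMC PowerPath",
}

def failover_product(os_name, array):
    """Look up the supported failover software for a given combination."""
    return support_matrix.get((os_name, array), "unsupported: needs research")

print(failover_product("Solaris", "EMC Symmetrix"))   # EMC PowerPath
```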

What you do not want to do is what I saw at one site. They wanted both failover and a backup for it, so they implemented two different failover methods (HBA-level and volume manager-level). The two methods confused each other and the system became overwhelmed. Whatever you do, implement only one method per machine if at all possible. On rare occasions you might have to implement different failover mechanisms for different RAIDs and HBAs depending on what is supported, but do so carefully.

It goes without saying that testing HBA failover is critical to ensure that the configuration is correct and that HBA and switch failover works as planned. Management often doesn't understand the complexity of this type of configuration and doesn't realize that testing must be done for each kernel patch, volume manager/file system update, loadable driver update, HBA driver or firmware update, switch firmware update and RAID controller update.
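That re-test obligation can be captured in the same kind of structured data as the support matrix; a hypothetical sketch of flagging which installed updates should trigger a new round of failover testing:

```python
# Components whose updates can silently break failover, per the list above.
components_requiring_retest = {
    "kernel patch",
    "volume manager/file system update",
    "loadable driver update",
    "HBA driver or firmware update",
    "switch firmware update",
    "RAID controller update",
}

def updates_needing_failover_test(installed_updates):
    """Return the installed updates that should trigger a failover re-test."""
    return [u for u in installed_updates if u in components_requiring_retest]

print(updates_needing_failover_test(["kernel patch", "shell update"]))
# ['kernel patch']
```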

All too often, I have seen that a simple patch or firmware change is installed, and a week later an HBA or switch port fails and failover no longer works. This happens far less often than it did a year ago, and hopefully will be less common still a year from now, but it happens. Testing is your only hope to ensure that everything works. It is expensive in terms of hardware, software and time, but worth it.

See all articles by Henry Newman
