Storage Networking Basics: Configuring SAN-Attached Servers

Connecting a host to your shiny new SAN is not the same as connecting a single disk, or even a direct-attached SCSI array. This article explains the reasoning behind current best practices and shows how to configure your storage for optimal reliability.

Direct-attached storage arrays, if you have used them, offer a good introduction to the world of storage: you configure LUNs on the array itself, and then you deal with them at the host level. As storage sizes have increased, so too have the demands on sysadmins to configure storage in a usable and reliable manner. It may have been acceptable in the past to assign ten 20GB LUNs to ten different partitions, but 200GB is not much storage any longer.

First, let’s define a few steps that should be taken before we start to think about file systems. Before creating file systems, the following must happen:

  • Configure the array, as described in our previous article, to assign LUNs to your host.
  • Attach fiber, one cable from each HBA, to two switches in distinct fabrics.
  • Zone both switches appropriately so that the initiator and target can see each other.
  • Verify you see all LUNs.
  • Configure multipathing: path failover.

The last step is the tricky part, depending on your operating system and disk array. We’ll get to that shortly.

Attaching the fiber is self-explanatory, assuming we understand the concept of keeping each “path” to the storage in a separate fabric. Zoning the switches takes considerably more knowledge and is very vendor-specific: Brocade, McData, and Cisco switches vary immensely, but the concepts are universal. Decide how to zone, and apply the configuration.
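
To make that concrete, here is roughly what single-initiator zoning looks like from a Brocade Fabric OS command line. The alias names and WWPNs below are made-up examples; you would repeat the equivalent steps on the switch in your second fabric.

    # Brocade Fabric OS sketch -- alias names and WWPNs are hypothetical
    alicreate "host1_hba0", "10:00:00:00:c9:aa:bb:01"     # host HBA port (initiator)
    alicreate "array1_ctlA", "50:06:01:60:00:11:22:33"    # array controller port (target)
    zonecreate "z_host1_array1A", "host1_hba0; array1_ctlA"
    cfgadd "prod_cfg", "z_host1_array1A"                  # add the zone to the existing config
    cfgenable "prod_cfg"                                  # activate the new zoning on the fabric
    cfgsave                                               # make it persistent across reboots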

At this point, you should be able to “see” the new LUNs on the server. In Windows, opening Disk Management should bring the new volumes to light (some report that a reboot may be required). Recent Linux versions should discover the new LUNs immediately, or after a rescan of the SCSI bus. In Solaris you’ll need to run ‘cfgadm’ and possibly ‘devfsadm’ to see your new LUNs.
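
For example, here is one way to trigger discovery and verify the LUNs on Linux and Solaris. The host adapter and controller numbers are only illustrations; use whatever your system actually reports.

    # Linux: rescan an FC HBA (it shows up as a SCSI host), then list what was found
    echo "- - -" > /sys/class/scsi_host/host1/scan
    cat /proc/scsi/scsi

    # Solaris: configure the fabric-attached devices, rebuild /dev links, and verify
    cfgadm -al                  # look for fc-fabric attachment points
    cfgadm -c configure c2      # c2 is whichever controller cfgadm -al reported
    devfsadm -c disk
    format                      # the new LUNs should appear in the disk list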

If you only have a single path to the storage, you’re almost there; it’s time to create file systems. The vast majority of SAN-attached hosts, however, will have two paths to the LUNs, so the host will see the same LUNs twice, once per target. Since the storage array has two interfaces, there really are two targets. The host needs to be made aware of the fact that these are really the same volumes.

Multipathing is a host-based driver, combined with array support, which allows redundant connections to your storage array. If you tried to create file systems on all the LUNs you saw, and then decided to try mounting each one individually, your disk array would (hopefully) scream and yell. There’s a concept of “primary controller” defined on your array, and if an initiator tries to access the LUN on the non-primary target without first “downing” the preferred path, the array will protect itself. That’s hugely simplified, but a good way to think about it.

If you configured your LUNs to be assigned one-per-controller, alternating, as we recommended last time, then your host will be able to use only half of the device paths it sees: it can create file systems and successfully use each LUN, but only via that LUN’s preferred controller. The only thing this buys you in the event of a controller or switch failure is that only half of your volumes will disappear. What we really want to do is abstract our device paths, and mount the abstracted device. Using multipath device nodes means that the underlying “actual” devices can disappear at random, and as long as the driver and storage array get along well, the operating system will never see a mounted disk device disappear.

Actually configuring multipathing is less than trivial. If you want to make life easier, use Veritas Volume Manager with DMP (Dynamic MultiPathing). It runs on all the major operating systems, and it works the same in each. You’ll also get the added bonus of using operating system-neutral file systems, in case the need arises to move volumes between platforms.
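
If you do go the DMP route, a quick sanity check after installation might look like the following; the controller and disk names are whatever VxVM assigns on your particular system.

    # Veritas Volume Manager / DMP sketch -- names shown will vary per system
    vxdisk list                     # each LUN should appear once, under its DMP device name
    vxdmpadm listctlr all           # both controllers (paths) should show as ENABLED
    vxdmpadm getsubpaths ctlr=c2    # list the individual paths behind one controller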

In a Nutshell

  • The steps are: configure the array, configure the switches, and set up multipathing.
  • Ideally, use the same multipathing solution on every host. Failing that, try to use a vendor-supplied driver, and only grudgingly fall back on OS-native solutions.
  • Playing with volume managers and creating file systems is highly site-specific, but again, it’s ideal to be consistent across platforms.

If you’re unable to use DMP, you still have two options. The first thing to try is getting a driver from the storage manufacturer. If the array you purchased was sold with support for your operating system, chances are good that you simply need to install the vendor’s driver, and you’re off and running. If not, then you get to try whatever native multipathing driver your OS includes.
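
On Linux, for instance, the OS-native option is the device-mapper multipath driver. A minimal configuration might look like the sketch below; the blacklist entry and naming choice are illustrative assumptions, and many arrays also want a vendor-recommended device stanza.

    # /etc/multipath.conf -- minimal sketch, not a tuned production configuration
    defaults {
        user_friendly_names yes        # present each LUN as /dev/mapper/mpathN
    }
    blacklist {
        devnode "^sda$"                # keep the internal boot disk away from multipathd
    }

    # then start the daemon, build the maps, and verify
    /etc/init.d/multipathd start
    multipath -v2
    multipath -ll                      # each LUN should show one map backed by two paths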

Solaris, for example, has excellent native multipathing support (MPxIO, also known as Sun StorEdge Traffic Manager). It works very well with storage that Sun has blessed, but may not work at all with some arrays. It’s a crapshoot; hopefully you did your homework before purchasing the array.
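
If your array is on the supported list, enabling the native Solaris stack is typically just a couple of commands on Solaris 10 and later; older releases toggle it by editing /kernel/drv/scsi_vhci.conf instead.

    # Solaris 10 sketch: enable MPxIO on the FC ports and verify after the reboot
    stmsboot -e                 # enables multipathing; prompts for a reboot
    mpathadm list lu            # on recent releases, each LUN should report multiple paths
    luxadm probe                # another quick way to list the multipathed devices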

Once multipathing is configured, you’ll have one set of devices that you’re free to play with. The actual devices are abstracted now, so you want to make sure that you’re using the multipath device nodes, not the physical paths.
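
In practice that means building file systems on the abstracted node. A Linux example, with hypothetical device names:

    # Correct: use the device-mapper node, which survives the loss of one path
    mkfs.ext3 /dev/mapper/mpath0
    mount /dev/mapper/mpath0 /mnt/san

    # Wrong: /dev/sdb is a single physical path and vanishes if that path fails
    # mkfs.ext3 /dev/sdb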

Now comes the fun part. You get to plan and implement your file system layout. Be extremely careful here, because even with a volume manager as flexible as Veritas or ZFS, you’ll still be working yourself into a corner if the wrong decisions are made. The decisions are highly use-specific, so the best advice that can be given is to think carefully. Most people will want to stripe a number of LUNs together to make larger file systems, but not so large that you can’t back them up in a sane amount of time. An oversized file system also means that repairing damage can take excruciatingly long.
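
As one illustration, ZFS makes striping several LUNs into a pool nearly a one-liner; the device names below are hypothetical multipathed Solaris devices.

    # ZFS sketch: stripe four LUNs into one pool, then carve file systems out of it
    zpool create tank c4t600A0B8000112201d0 c4t600A0B8000112202d0 \
                      c4t600A0B8000112203d0 c4t600A0B8000112204d0
    zfs create tank/projects       # file systems are cheap; size them so backups stay sane
    zpool status tank              # confirm all four LUNs are online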

Of course, don’t forget to save your switch and array configurations somewhere safe, and document your multipathing and file system decisions. The best part about multipathing is the testing stage. Go ahead, start copying a huge file and yank the fiber!
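
A simple failover test on a Linux host might look like the following (paths and file names are examples). You should see one path flagged as failed while the copy keeps running, and see it return once the cable is plugged back in.

    # Generate sustained I/O against the multipathed file system
    dd if=/dev/zero of=/mnt/san/failover_test bs=1M count=10240 &

    multipath -ll                  # both paths active before the pull
    # ...physically pull one fiber cable...
    multipath -ll                  # one path now shows as failed; the dd keeps going
    tail /var/log/messages         # multipathd logs the path failure and the later restore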

Article courtesy of EnterpriseNetworkingPlanet.com
