The Basics of SAN Implementation, Part II


Most of the attention on SANs has focused on the performance benefits of a dedicated gigabit network that relieves conventional LANs of data movement loads. But, from a more holistic perspective, SANs will provide other significant advantages: simpler storage implementation, better manageability, more reliable and flexible backup operations, and storage resources shared among multiple servers.

For peak demand periods, SAN-based implementations offer the ability to allocate additional resources to priority applications and servers. While server re-allocation is possible without SANs, such an approach is far less useful because storage resources cannot be shared. Dynamic server allocation, combined with the ability to add or change storage resources without pre-determination, is a powerful combination.

One of the most attractive features of SAN implementation technology is its impact on standard network operations. The heavy overhead that conventional storage architectures place on LANs and network file servers is eliminated by relocating storage resources to an independent network.

With the preceding in mind, this article continues the SAN implementation theme presented in Part I, by briefly discussing other SAN implementation topics with regards to backups, clusters, appliances and database applications. Let’s look at backups first.

SAN Backup Implementation

Backup operations, typically CPU-intensive processes, will be completely removed from the servers. Faster, more reliable backup operations are a key component of SAN implementation. Indeed, the first generation of significant SAN-based applications will be built around a new generation of backup technologies such as:

  • LAN-Free Backup.
  • Server-Free Backup.
  • Zero Backup Window.
  • Multiple Small/Medium Libraries Versus One Large Library.

We’ll look at each of these separately.

LAN-Free Backup

Enterprise storage resources reside on an independent gigabit-speed network in a SAN implementation. All data movement occurs over this high-speed dedicated network and not a standard Ethernet LAN. The effect of SAN-based, LAN-free backup is an immediate improvement in LAN performance.

LAN-free backup technology gives multiple servers access to a single tape library connected to the SAN. Rather than the conventional Ethernet LAN, all backup operations are now routed through the gigabit-speed Fibre Channel SAN.

A new generation of SAN-aware backup software supports this architecture. Because it is SAN-aware, the backup software coordinates among servers to allocate tape library resources, allowing libraries to be shared and eliminating data movement over the LAN. LAN-free backup is likely to be the first widely deployed storage management application to emerge on Storage Area Networks.
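To make that coordination concrete, here is a minimal sketch in Python. It is purely illustrative; the class and method names are hypothetical, not from any actual backup product. It shows how SAN-aware software might arbitrate a shared library's limited pool of tape drives among several servers while the data itself travels over the SAN.

```python
import threading

class SharedTapeLibrary:
    """Illustrative arbiter for a tape library shared by several servers over a SAN."""

    def __init__(self, num_drives):
        # A semaphore models the limited pool of physical tape drives.
        self._drives = threading.Semaphore(num_drives)

    def run_backup(self, server_name, backup_job):
        # Wait until a drive is free; only the reservation is coordinated here.
        # The actual backup data flows over the Fibre Channel SAN, not the LAN.
        with self._drives:
            print(f"{server_name}: drive reserved, streaming data over the SAN")
            backup_job()
            print(f"{server_name}: backup complete, drive released")

# Example: three servers share a two-drive library.
library = SharedTapeLibrary(num_drives=2)
servers = [
    threading.Thread(target=library.run_backup, args=(f"server-{i}", lambda: None))
    for i in range(3)
]
for t in servers:
    t.start()
for t in servers:
    t.join()
```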

Server-Free Backup

Server-free backup takes the LAN-free backup concept a step further. Not only are all backup operations relocated from the LAN to the SAN, but SAN bandwidth is maximized by enabling direct data movement between SAN devices. In the case of backup, this means that data moves directly from Redundant Array of Independent Disks (RAID) storage to the tape library, thus removing the server bottleneck.

Server-free backup leverages two key technical developments: the Small Computer System Interface-3 (SCSI-3) block copy command (also known as third-party copy) and Network Data Management Protocol (NDMP)-compliant software to manage communications between the server and the tape library. For this application, the term “server-free” backup is actually somewhat of a misnomer. Because the NDMP is used to manage communications between the SAN storage devices, the server still plays a role in the backup operation and ensures that the backups complete successfully. However, unlike in traditional backup operations, server intervention is minimized and all data is sent directly over the SAN rather than via the server. This significantly increases performance while improving the reliability of automated backup processes. By contrast, conventional LAN backup operations, in which data moves from a server-attached RAID device to a local or network-attached tape library, consume a vast amount of resources, including server CPU cycles, I/O buses, and LAN bandwidth. All aspects of server and network operations are affected, imposing a significant performance hit.
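The division of labor can be pictured with the following Python sketch. The object and method names are hypothetical and only illustrate the control flow described above; they are not an actual SCSI-3 or NDMP API. The server builds a copy request naming source and destination and verifies completion, while a copy agent in the SAN moves the blocks.

```python
class ThirdPartyCopyAgent:
    """Models a SAN device (e.g., a router or bridge) that executes block copies
    directly between disk and tape, without the data passing through a server."""

    def copy_blocks(self, source_lun, extents, target_drive):
        # In a real SAN this would be a SCSI-3 extended copy command; here we
        # only report what would be moved.
        total = sum(length for _, length in extents)
        print(f"copying {total} blocks from LUN {source_lun} to tape drive {target_drive}")
        return {"status": "complete", "blocks_moved": total}

def server_free_backup(copy_agent, source_lun, extents, target_drive):
    # The server's role: describe the copy and confirm it completed.
    # It never touches the data blocks themselves.
    result = copy_agent.copy_blocks(source_lun, extents, target_drive)
    if result["status"] != "complete":
        raise RuntimeError("backup failed, rescheduling required")
    return result

# Example: back up two extents of LUN 5 to tape drive 0.
server_free_backup(ThirdPartyCopyAgent(), source_lun=5,
                   extents=[(0, 4096), (8192, 2048)], target_drive=0)
```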

Because data is transferred over the high-speed Fibre Channel SAN directly between the source and target storage devices, server-free backup architectures remove virtually all of this processing overhead. Server-free backup solves the backup dilemma by:

  • Leveraging Fibre Channel bandwidth to dramatically increase the rate at which data can be moved.
  • Eliminating repeated data movement by enabling direct transfers between SAN storage devices.
  • Reducing the server resources required to move the data.
  • Delivering this functionality on live production systems.

Zero Backup Window

The combination of NDMP-compliant backup applications and SCSI copy technology has the potential to enable another powerful SAN-based application: the zero backup window. A “snapshot” of the data to be backed up is created–in effect a point-in-time virtual mirror that requires only a small fraction of the disk space needed to create an actual mirror of the data. Instead of remaining unavailable for the duration of the backup operation, applications can be returned to production status almost immediately. The snapshot directs the backup software to the disk location of the original data for backup. If a write command is issued to update the data set being backed up, it is intercepted and the update is written to a new section of the disk, thus maintaining the integrity of the original data. When the backup is completed, the snapshot is deleted, freeing up that disk space.
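A minimal sketch of one common snapshot variant (copy-on-write) makes the idea concrete. This Python example is hypothetical and greatly simplified; real implementations operate on disk blocks and metadata, not dictionaries.

```python
class CopyOnWriteSnapshot:
    """Point-in-time view of a volume: reads of unchanged blocks go to the
    original disk location, while blocks changed after the snapshot keep a
    preserved copy of their original contents."""

    def __init__(self, volume):
        self.volume = volume     # live production data, block -> contents
        self.preserved = {}      # original contents of blocks changed since the snapshot

    def write(self, block, data):
        # First write to a block after the snapshot: preserve the original copy.
        if block not in self.preserved:
            self.preserved[block] = self.volume[block]
        self.volume[block] = data      # production updates continue normally

    def read_snapshot(self, block):
        # The backup application reads the frozen point-in-time image.
        return self.preserved.get(block, self.volume[block])

# Example: applications keep writing while the backup reads a frozen image.
volume = {0: "jan-data", 1: "feb-data"}
snap = CopyOnWriteSnapshot(volume)
snap.write(1, "mar-data")                    # production write after the snapshot
assert snap.read_snapshot(1) == "feb-data"   # backup still sees the original
assert volume[1] == "mar-data"               # live volume sees the update
```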

One Large Library versus Multiple Small/Medium Libraries

While the traditional architecture has been to choose one large library, a SAN also enables universal access to multiple libraries. The advantages of multiple smaller libraries are redundancy (and therefore uptime) and cost savings. Large silos often cost hundreds of thousands of dollars, take up significant floor space, and are costly to maintain. And if one fails, backups cannot continue. With multiple smaller libraries, you can save space and cost while avoiding downtime. Having multiple small/medium libraries also allows for cost-effective redeployment of a number of units outside of the SAN.

Implementing Integrated SANs

Until now, customers preparing SAN implementations have been faced with two broad choices: a standards-based multi-vendor solution that forces the customer to select each piece and then attempt to integrate a total solution, or an integrated proprietary SAN solution from a single vendor. A third option is rapidly evolving: an integrated open standards SAN.

Although implementing a SAN is clearly not a plug-and-play exercise, the open integrated SAN has the potential to provide the needed interoperability along with the advantages of multi-vendor market competition. This allows customers to pick and choose the individual SAN components that best meet their unique needs. Thus, the open standards concept, based on inter-vendor product testing and certification, is critical to the growth of SANs.

Depending on the size of the installation, the SAN hardware opportunity is not limited to Fibre Channel switches, routers, and Fibre cabling; these components will complement Fibre-ready disk arrays and tape libraries. On the software side, traditional backup applications will be used in first-generation SAN deployments, but they will be SAN-aware in order to take advantage of LAN-free and server-free backup.

Now, let’s look at how to implement SANs for clusters in a variety of configurations, including consolidated configurations and those with SAN appliances.

Implementing SANs for Clusters

Dual fabrics are separate storage networks between the servers and the storage enclosures. The use of two separate fabrics ensures that no component within the storage subsystem (Host Bus Adapter (HBA), cable, or RAID controller) is a single point of failure for the cluster. Dual fabrics effectively double throughput from the server to the storage, while also protecting the system against component failures. Implementing a switched SAN adds a significant number of capabilities to the cluster:

  • Backup over SAN: Servers share tape backup devices and perform backup operations over the SAN rather than over the network.
  • Many-to-one cluster configurations: Cluster consolidation with up to 10 clusters sharing a single storage system and optional tape backup device(s).
  • Mixed storage and cluster consolidation configurations: Multiple clusters and stand-alone servers with heterogeneous supported operating systems sharing the same storage system and optional tape backup device(s).
  • Multipathing: Redundant, active HBA-to-controller connections (see the sketch after this list).
  • One-to-many cluster configurations: One cluster with multiple storage systems.
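Multipathing in particular lends itself to a simple illustration. The Python sketch below is hypothetical (real multipath failover is handled by host drivers and the storage stack); it only shows the basic idea that I/O is issued down a preferred HBA-to-controller path and retried on the alternate path if the first one fails.

```python
class PathFailedError(Exception):
    """Raised when an HBA-to-controller path is unavailable."""

def issue_io(paths, request):
    """Try each redundant path in order; fail over transparently on error.

    `paths` is a list of callables, each representing one HBA-to-controller
    connection that accepts an I/O request.
    """
    last_error = None
    for path in paths:
        try:
            return path(request)
        except PathFailedError as err:
            last_error = err      # path down: fall through to the next one
    raise RuntimeError("all storage paths failed") from last_error

# Example: the primary path is down, so the I/O completes on the secondary HBA.
def primary(request):
    raise PathFailedError("HBA 0 link down")

def secondary(request):
    return f"completed {request} via HBA 1"

print(issue_io([primary, secondary], "write block 42"))
```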

Several different storage technologies, including logical unit number (LUN) masking and Fibre Channel switch zoning, enable these configurations for clusters. Because various operating systems claim all accessible disks, zoning and LUN masking techniques ensure that each server or cluster attached to the SAN does not see or have access to disks belonging to other servers or clusters.
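As a rough illustration of LUN masking, the following sketch keeps a per-LUN list of authorized host identifiers and hides every other LUN from discovery. It is hypothetical Python; actual masking is enforced by array controllers or switch firmware, not host-side code.

```python
# Hypothetical masking table: LUN id -> set of host WWNs allowed to see it.
LUN_MASKS = {
    0: {"wwn-cluster-a-node1", "wwn-cluster-a-node2"},   # shared cluster volume
    1: {"wwn-standalone-server"},                        # dedicated volume
}

def visible_luns(host_wwn):
    """Return only the LUNs this host is authorized to discover.

    Every other LUN in the enclosure stays hidden, so an operating system that
    tries to claim all accessible disks never even sees its neighbors' storage.
    """
    return [lun for lun, allowed in LUN_MASKS.items() if host_wwn in allowed]

print(visible_luns("wwn-cluster-a-node1"))    # [0]
print(visible_luns("wwn-standalone-server"))  # [1]
print(visible_luns("wwn-unknown-host"))       # [] -- masked from everything
```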

Storage and Cluster Consolidation

In the storage environment, consolidation relates to multiple servers using resources within a single enclosure. Consolidation examples include:

  • Cluster consolidation: Shared storage for multiple clusters coexists within the same Fibre Channel enclosure attached to a SAN.
  • SAN-consolidated backup: Tape backup changers and autoloaders accessed by multiple servers over a storage area network.
  • Storage consolidation: Two or more servers each have dedicated RAID volumes inside a single storage enclosure.

With clusters, all of these configurations should be tested, documented, and supported simultaneously on the same SANs. Each storage enclosure should support stand-alone nodes and cluster pairs. The cluster pairs should be running the same operating system on both nodes, but clusters can coexist in the same SAN, and even share the same storage enclosure.

Integrating the SAN Appliance Implementation

A SAN appliance implementation should enhance the availability and capabilities of the SAN. The SAN appliance is a central point of intelligence for the SAN that enables storage replication, snapshots, and virtualization in an integrated appliance rather than distributing the implementation of these features across a variety of different hardware components and software tools. Furthermore, by off-loading the processing required to execute these solutions from the servers and storage devices, the SAN appliance is appropriate for high-performance configurations and mission-critical data connected to stand-alone servers and clusters.

A SAN appliance usually sits logically “between” the host servers and the storage, but adds only minimal latency to the overall storage system. The SAN appliance should be able to assign “virtual” storage units to each host server. The host server is unaware of the specific physical disk, array, or enclosure that it is actually using since the SAN appliance should be able to “map” the storage to any available target ID and LUN.
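The mapping the appliance maintains can be pictured as a simple translation table. The Python sketch below is hypothetical; a real appliance performs this translation in its data path for every I/O, but the principle is the same.

```python
# Hypothetical virtualization table kept by the SAN appliance:
# (host, virtual target id, virtual LUN) -> (physical enclosure, array, physical LUN)
VIRTUAL_MAP = {
    ("server-1", 0, 0): ("enclosure-A", "array-2", 7),
    ("server-2", 0, 0): ("enclosure-B", "array-1", 3),
}

def route_io(host, target_id, lun, block):
    """Translate a host's virtual address into the physical storage location.

    The host only ever sees its virtual target/LUN; which enclosure or array
    actually holds the data is the appliance's concern and can change without
    the host noticing.
    """
    enclosure, array, physical_lun = VIRTUAL_MAP[(host, target_id, lun)]
    return f"I/O for block {block} routed to {enclosure}/{array}/LUN {physical_lun}"

print(route_io("server-1", 0, 0, block=1024))
print(route_io("server-2", 0, 0, block=1024))
```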

Each redundant pair of SAN appliances should be able to virtualize up to four storage systems. As with the SAN-attached configurations, each redundant pair of SAN appliances should be able to support up to 10 cluster pairs or a combination of up to 20 clustered and nonclustered hosts.

Partitioning can be useful for separating logical data units or for enforcing quotas. When configuring an active/active cluster in which separate disk resources must be available for both servers, the creation of multiple RAID volumes is a requirement.

A SAN appliance can be configured to provide data replication either locally (over a Fibre Channel network), remotely (over an IP network), or both. This process is transparent to the cluster nodes and the applications.

For combined local and remote mirroring, also referred to as three-way mirroring, the source RAID volume is replicated to two different targets: one locally using Fibre Channel (packetized SCSI over Fibre Channel) and one remotely using an IP network. The availability of a local copy of the storage protects the cluster from a complete failure of the primary storage system. The SAN appliance can detect the failure of the primary storage system and fail over to the local replica instantly, without interfering with the cluster’s operations. If both copies of the local storage fail, all I/Os are routed to the remote storage.

Local mirroring via Fibre Channel Protocol (FCP) is always synchronous and is supported at distances up to 500 meters over multimode fiber; if Fibre Channel switches with long-wave gigabit interface converters (GBICs) and single-mode fiber are used, one or more storage systems may be located up to 10 kilometers from the SAN appliance. Remote mirroring over an IP network can be either synchronous or asynchronous and supports distances greater than 10 kilometers.
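One way to picture the synchronous/asynchronous distinction is the write path below. This is a hypothetical Python sketch, not any vendor's replication engine: the local Fibre Channel mirror is written before the write is acknowledged, while the remote IP mirror is updated later from a queue so WAN latency never delays the acknowledgement.

```python
import queue
import threading

class ThreeWayMirror:
    """Illustrative write path: synchronous local mirror, asynchronous remote mirror."""

    def __init__(self):
        self.primary, self.local_copy, self.remote_copy = {}, {}, {}
        self._remote_queue = queue.Queue()
        threading.Thread(target=self._remote_worker, daemon=True).start()

    def write(self, block, data):
        # Synchronous: primary and local Fibre Channel replica are both updated
        # before the write is acknowledged to the host.
        self.primary[block] = data
        self.local_copy[block] = data
        # Asynchronous: the remote (IP) replica is updated later from a queue.
        self._remote_queue.put((block, data))
        return "acknowledged"

    def _remote_worker(self):
        while True:
            block, data = self._remote_queue.get()
            self.remote_copy[block] = data
            self._remote_queue.task_done()

mirror = ThreeWayMirror()
mirror.write(0, "payroll-record")
mirror._remote_queue.join()   # in practice the remote copy simply lags behind
assert mirror.local_copy[0] == mirror.remote_copy[0] == "payroll-record"
```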

Finally, let’s look at why the data that applications depend on, in ever-increasing amounts, is now seen as a strategic resource and, in some cases, a key source of revenue for organizations. In the past 11 years or so, there has been a virtual renaissance in the information and computing field–particularly around storage.

Implementing SANs for Database Applications

Gone are the days when only the most essential information was kept on computer storage or disk drives (DASD, for those who have been around for a while). This was due, in large part, to the high cost of storage and associated interfaces. Also gone are the days of double-digit dollar-per-megabyte costs, proprietary and expensive interfaces, and the costs associated with managing storage (well, almost).

Open systems and open interfaces, including parallel SCSI, and non-proprietary storage devices, including parallel SCSI RAID, have helped drive storage prices down to the current range of well under a dollar per megabyte. Today, new requirements including disaster tolerance, extended distances, worldwide applications, and content-based web applications and databases have placed an emphasis on storage being scalable, modular, highly available, fast, open, and cost effective. While existing parallel SCSI storage, and RAID arrays in particular, have gone a long way towards addressing these requirements, the storage industry is still held back from the “virtual data center” vision by the limitations of its storage interfaces and by proprietary solutions.

New Storage Interfaces

The storage interface or “plumbing” that sits between the host computer systems and storage devices (such as parallel SCSI) is now nearing, or in some cases has become, a hindrance to growth. That is not to say that open interfaces like parallel SCSI are dead or dying; far from it. Rather, a new storage interface is needed for growth-oriented, high-bandwidth distributed applications and for applications that need to move large amounts of data among systems. In fact, parallel SCSI will continue to co-exist in many environments and is part of an overall storage area network (SAN) implementation strategy.

Keep in mind that the key to configuring storage for performance and database applications is to avoid contention or choke points. When implementing a SAN for database environments, avoid the mistake of trying to use a single, fast Fibre Channel interface or loop to support all of the storage. Instead, use multiple Fibre Channel host bus adapters along with existing parallel SCSI adapters, spreading I/O devices such as RAID arrays across different interfaces to avoid contention.

For example, a “SAN Box” is the simplest and easiest way to implement a SAN and gain experience that can be applied to implementing a full-blown SAN. A “SAN Box” is simply a small SAN made up of a host system with a Fibre Channel adapter, a storage device, and either copper or fiber optic cabling. The hardware components for a SAN include Fibre Channel host bus adapters, cabling (copper or optical), hubs or switches, and storage devices such as RAID arrays. Software components you may need to include are device drivers for the host bus adapters, management software, and optional special-function host software. Special-function software includes host mirroring software for remote or disaster-tolerant mirroring of storage, backup software that can access SAN devices, data sharing or file replication software, clustering software, and distributed locking for messaging and database applications.

As a next step, you might want to implement some small production SANs based upon hubs or switches that enable groups of systems to share storage and resources. A subsequent step would be to interconnect various sub-SANs and implement zoning or volume mapping to isolate storage to specific host systems for data integrity. Volume mapping enables a shared storage device, such as a LUN on a RAID array, to be mapped to a specific host system. In a shared storage environment, volume mapping ensures that only the authorized or mapped host can access the LUN.

Furthermore, for a different application or for performance reasons, you should consider implementing a second SAN that is isolated from the first. Up to now, cost and availability have been the main advantages of using hubs for simple or small SANs. During 2003, you should expect to see a shift in the industry, with more switches being deployed to connect multiple sub-SANs together. This shift towards increased switch use will be driven by the reduced cost per switch port, increased functionality, better management tools, and improved interoperability.

Depending on the needs of your SAN, you may need more bandwidth than a single loop or hub provides. There are a couple of methods for increasing bandwidth for host-to-SAN, intra-SAN, and storage-device-to-SAN connections. For example, additional host bus adapters may be added and attached to separate hubs, loops, or switch ports to increase the bandwidth between a host and a SAN. Simply interconnecting two or more hubs without a switch will not increase bandwidth, since the result is still a single shared loop. However, interconnecting two hubs with a switch provides 100MB/second on each of the two hub loops (as well as on each of the switch ports), increasing performance. A switch can be used in the same way to increase bandwidth between other points in the SAN.
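The arithmetic behind that point is simple enough to check in a few lines of Python (the 100MB/second figure is the nominal gigabit Fibre Channel rate used throughout this article):

```python
LOOP_BANDWIDTH_MB_S = 100   # nominal bandwidth of one Fibre Channel loop or port

def aggregate_bandwidth(hubs, interconnected_by_switch):
    """Aggregate bandwidth across hubs.

    Hubs daisy-chained without a switch still form a single shared loop, so the
    total stays at one loop's worth. A switch gives each hub loop (and each
    switch port) its own 100MB/s, so bandwidth scales with the number of hubs.
    """
    if interconnected_by_switch:
        return hubs * LOOP_BANDWIDTH_MB_S
    return LOOP_BANDWIDTH_MB_S

print(aggregate_bandwidth(2, interconnected_by_switch=False))  # 100 MB/s
print(aggregate_bandwidth(2, interconnected_by_switch=True))   # 200 MB/s
```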

Finally, hubs will continue to be a popular and easy method for implementing small or simple SANs. Comparisons can be drawn between the use of hubs and switches in SAN and LAN environments. A hub serves a similar function in a SAN as it does in a LAN: it acts as a concentrator and a device to simplify cabling. Likewise, just as a switch is deployed in a LAN to interconnect various sub-LANs or segments, a switch plays the same role in a SAN. The choice between switch and hub does not have to be binary; rather, hubs and switches complement each other, and both are key components in a SAN. In fact, hubs can also be used to front-end switches, enabling more devices to attach to a given number of ports and helping to offset switch port costs.

Summary And Conclusions

Storage Area Networks are the next great leap forward in storage. The ability to locate storage resources on a dedicated gigabit-speed network with shared access and centralized management functions holds the potential to revolutionize enterprise storage. SANs promise higher performance, improved storage reliability and availability, as well as lower total cost of ownership.

Faster, scalable, and highly available storage interfaces are needed to meet growing application and enterprise needs and to support the ever-increasing amounts of data storage and information retrieval. Fibre Channel is a key enabling technology, or the “plumbing,” for implementing Storage Area Networks (SANs) that will support tomorrow’s needs. Fibre Channel and SANs are being used today to enhance storage connectivity by overcoming distance, performance, scalability, and availability issues. SANs will evolve over the next few years from being a storage interface to a robust storage network with enhanced capabilities including LAN-free backup; server-free backup; data and storage sharing; shared file systems; remote data copy and mirroring; and data distribution. Up to now, Fibre Channel has been deployed as loops using hubs. However, over the next year or two these loops will be interconnected or networked using Fibre Channel switches to create fabrics. Switches will therefore enable many of the promised features of Fibre Channel without the headaches and issues associated with loops.

Over the next few years, Fibre Channel can be expected to undergo improvements and enhancements, including faster performance. Today, not all host or system vendors have Fibre Channel products, and interoperability issues are still being addressed. Some older host systems may not support the industry-standard Fibre Channel Arbitrated Loop (FC-AL) protocol, so it is important to verify which Fibre Channel protocol a vendor is talking about. For all but the smallest systems, storage networks consisting of hubs and switches will be needed to ensure adequate performance and redundancy.

Given the vast numbers of installed parallel SCSI peripherals, it will be a couple of years before Fibre Channel overtakes parallel SCSI as a dominant storage interface. It is safe to say that parallel SCSI will be around for some time to come. However, as a new storage interface, Fibre Channel is here and is gaining momentum. Now is the time to start planning and making decisions regarding storage interfaces.

There are steps that can be taken to prepare for a SAN, whether you are implementing a SAN today or investigating the technology for a future implementation. First of all, Fibre Channel and SANs should be seen as an enhanced storage interface to replace or supplement existing parallel SCSI interfaces. Although faster than parallel SCSI, Fibre Channel should be treated as an I/O interface and thus should not be overloaded. For example, as with a parallel SCSI interface, it is not a good idea to place too many devices on a single Fibre Channel interface; instead, spread the I/O over multiple interfaces.

Second, SANs can be implemented in phases and may include some of your existing storage devices. Costs for SAN components are dropping while features and functions are increasing. This allows you to implement Fibre Channel JBOD today and migrate it to Fibre Channel-to-Fibre Channel RAID controllers tomorrow.

Third, you can configure your SAN with multiple sub-SANs or switched segments, where certain systems and storage can be isolated and mapped to specific hosts. This is similar to your network environment, which may include sub-nets or switched segments.

Fourth, over the next couple of years, SAN technology developments will include performance enhancements from 100MB/second to 400MB/second. Interoperability will continue to improve, and SAN hardware components will continue to evolve. SAN software for data sharing, file replication, mirroring, SAN backup, and other applications will also continue to evolve.

Finally, the storage industry has the challenge to deliver open interoperable SANs to meet the huge demand for this innovative storage architecture. Overland is convinced that the best way to deliver these new capabilities is with a partnership approach. Open standards are always the preferred option, but the market won’t wait for bureaucratic organizations to issue their proclamations. The combination of vendor certification and an adoption of de-facto market standards will speed the development of SANs with a ready supply of hardware and software products and the channel partners to deliver them.


About the Author: John Vacca is an information technology consultant and author. Since 1982, John has authored 36 technical books including The Essential Guide To Storage Area Networks, published by Prentice Hall. John was the computer security official for NASA’s space station program (Freedom) and the International Space Station Program, from 1988 until his early retirement from NASA in 1995. John can be reached at jvacca@hti.net.

