Virtualizing SAN Management

Enterprise Storage Forum content and product recommendations are editorially independent. We may make money when you click on links to our partners.

Enterprises are continually seeking cost-effective ways to manage the explosion of information created by e-business and other initiatives. They are turning to Storage Area Networks (SANs) in droves to help bring that growth under control.

As you likely already know, a SAN is a networked storage infrastructure designed to provide a flexible environment that decouples servers from their storage devices. SANs accomplish this by providing any-server-to-any-storage connectivity through Fibre Channel switch fabric technology (commonly referred to as the SAN fabric). SANs address today’s most challenging business requirements: how to protect and access critical data, how to utilize computing resources more efficiently, and how to ensure the highest levels of business continuity.

As information systems have become more tightly integrated, enterprises have also adopted applications that span multiple servers and maintain multiple and complex relationships with the enterprise’s data. Infrastructure servers have also begun migrating to the blade architecture, and infrastructure storage has started moving from relatively simple direct attached storage to SANs. These technology and organizational trends have given rise to increasingly complex connectivity solutions.

SANs are proving to be a more scalable and manageable way of organizing your storage devices and servers. Information technology (IT) planners can now separate server decisions from storage decisions by using a SAN. This simplifies and streamlines infrastructure planning. In order to support their business needs, customers can now buy the right servers and the right storage, with the SAN providing connectivity as needed.

SANs have their historical roots in connection-oriented technologies such as Enterprise Systems Connection (ESCON) and the Small Computer Systems Interface (SCSI). SANs consist of dedicated resources that enable the consolidation of storage and provide switched data pathways across high-performance networks connecting servers to their data sets. The business benefits of consolidated SANs include:


  • Better enterprise business integration,
  • Greater utilization and flexibility in storage systems,
  • Reduced total cost of ownership (TCO), and
  • A streamlined approach to disaster recovery.

SANs are the foundation for business continuity plans, and they provide the backbone for high-availability environments in the datacenter. And as more storage and processing devices are connected, SANs will need to span different processor and operating environment technologies and protocols. With all of the preceding in mind, let’s begin looking at how the SAN fabric virtualization process delivers a key goal for storage administrators — SANs that are easier to manage.


Management of the SAN Fabric Virtualization Process

The process of simplifying the management of a SAN by isolating physical descriptions of storage, server, and network functionality — and replacing these descriptions with logical designators — is known as fabric virtualization. For example, in a virtualized SAN, system administrators will be able to provision storage for use by a server or a cluster of servers without needing to know which storage device is being utilized or exactly what network paths are being used. Policies for backup and restore and other data protection/data movement procedures will also be applied automatically in a virtualized SAN. SAN virtualization products are expected to reach the enterprise market in three stages:


  1. Visualization.
  2. Assisted management.
  3. Automatic provisioning.

The storage industry expects SAN virtualization and storage device virtualization products to evolve concurrently but independently, because the two perform different functions in the IT infrastructure. For example, the functionality expected from network and switch providers is the automated discovery of new network nodes and policy-based routing of data through the network switch. In contrast, the virtualization of logical units (LUNs), volumes, and storage addresses will form new products and functions offered by storage device providers.

The virtualization of storage devices and networks provides the technology substrate upon which ease-of-use functionality will be delivered. The value proposition to users is that simpler, self-managing SANs will deliver the aforementioned benefits of consolidation:


  • Efficient mechanisms to ensure business continuity planning,
  • Flexibility in meeting enterprise storage needs,
  • Higher utilization of enterprise-wide business integration of storage resources, and
  • Lower Total Cost of Ownership (TCO).

The preceding benefits will be tightly integrated so that the network can automatically respond to changes in business requirements by making needed adjustments to the network and storage configurations in real time. In other words, a consolidated SAN provides direct business benefits.


Efficient Mechanisms to Ensure Business Continuity Planning

Some enterprises have simply accepted the risk of losing data when storage is distributed. Why? Because providing backup and recovery for direct attached storage has been a difficult challenge for datacenter managers. Consolidated SANs provide the opportunity to consider the level of redundancy needed for each category of enterprise data, and they help reduce IT operational risk in proportion to the value of the information being stored.


Flexibility in Meeting Enterprise Storage Needs

SANs can be reconfigured quickly, so managers can reallocate consolidated storage to meet shifting requirements: adding storage in response to seasonal increases in sales, for example, and reclaiming it when a marketing campaign concludes. New applications can be connected to multiple sources of data that reach across enterprise boundaries (engineering, inventory, sales, and customer care, for instance). Furthermore, datacenter managers will be more aggressive about improving the utilization of SANs as they realize that reallocation is a low-risk operation.


Higher Utilization of Enterprise-wide Business Integration of Storage Resources

In general, better enterprise-wide planning requires consolidated IT systems — SAN subsystems in particular. Thus, enterprise planners in finance, marketing, and operations need access to unified data sets as quickly as possible, especially during business transitions such as mergers and acquisitions. Unified corporate information leads to more informed critical enterprise decisions in uncertain times.


Lower Total Cost of Ownership (TCO)

A lower TCO primarily comes from the ability to manage more storage with fewer people as well as a higher utilization of shared storage resources. In other words, the efficiencies of consolidated storage are similar to the economies of scale associated with consolidations of any kind.

So, given the benefits of self-managing SANs, what can you do with virtualization, and does it actually simplify SAN management? Let’s take a look.


Simplifying SAN Management with Virtualization

The virtualization process simplifies the management of complex systems. Virtualization technologies (which include both hardware and software) achieve this simplicity by separating the host (or server) view of a SAN from its physical implementation. System administrators can then manage an aggregated pool of storage through attributes that are logical designators of storage, server, and network resources. With virtualization, system managers will be able to do the following:


  • Declare storage, processing, and network requirements for a business application according to predefined policies, and the virtualized network will support those requirements automatically.
  • Automate business continuity plans through explicit policies for mirroring, preparing and storing point-in-time replicas of data, and invoking failover procedures.
  • Provision storage, processing, and network resources independently and on demand, without the need to specify exactly which storage device is allocated, which network path is utilized, or which server is processing the data.
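The declarative style described above can be sketched in code. The following is a minimal, hypothetical model (none of these class or field names come from a real vendor API): an administrator declares capacity and a QoS class, and the virtualization layer, not the administrator, selects a backing device from the pool.

```python
from dataclasses import dataclass

# Hypothetical sketch of policy-driven provisioning in a virtualized SAN.
# All names (StorageRequest, StorageDevice, qos_class tiers) are invented
# for illustration, not taken from any real product.

@dataclass
class StorageRequest:
    app_name: str
    capacity_gb: int
    qos_class: str        # e.g. "gold", "silver", "bronze"

@dataclass
class StorageDevice:
    name: str
    free_gb: int
    qos_class: str

def provision(request: StorageRequest, pool: list) -> str:
    """Pick a backing device that satisfies the declared policy.

    The administrator never names a device; the virtualization layer
    selects one that meets the capacity and QoS requirements.
    """
    for device in pool:
        if device.qos_class == request.qos_class and device.free_gb >= request.capacity_gb:
            device.free_gb -= request.capacity_gb
            return f"LUN for {request.app_name} on {device.name}"
    raise RuntimeError("no device satisfies the requested policy")

pool = [
    StorageDevice("array-a", free_gb=500, qos_class="silver"),
    StorageDevice("array-b", free_gb=2000, qos_class="gold"),
]
print(provision(StorageRequest("billing-db", 250, "gold"), pool))
```

The point of the sketch is the separation of concerns: the request names only the application's requirements, while the device selection and capacity accounting happen inside the virtualization layer.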

Full virtualization integrates three distinct subsystems: storage, servers, and the SAN. This is a compelling trend in the evolution of SANs.


  • For servers, virtualization means the ability to adjust quality of service (QoS) for storage devices and the network while the server is executing on behalf of the application that it hosts.
  • For storage devices, virtualization means hiding the details about the exact locations of data, the precise performance characteristics of storage devices, and available capacity.
  • For the SAN, virtualization means hiding the details about the physical connections of network cabling, the allocation of ports, and the provisioning of appropriate bandwidth.

This trend toward higher-level server, storage, and network virtualization rests firmly on a foundation of lower-level hiding: virtualization means the automation processes remain aware of the details while hiding them from the administrator. For example, different classes of devices with different performance attributes can be properly allocated to deliver differentiated QoS where needed.
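To make the differentiated-QoS idea concrete, here is a small sketch of how an automation layer might classify devices into service tiers from measured performance attributes. The thresholds and tier names are assumptions chosen for the example, not industry standards.

```python
# Illustrative sketch: mapping device performance attributes to QoS tiers
# so an automation layer can allocate the right class of device.
# The latency/IOPS thresholds and tier names are invented for this example.

def qos_tier(latency_ms: float, iops: int) -> str:
    """Classify a storage device into a service tier from measured attributes."""
    if latency_ms < 1.0 and iops >= 100_000:
        return "gold"      # e.g. flash arrays for transaction processing
    if latency_ms < 5.0 and iops >= 20_000:
        return "silver"    # mixed general-purpose workloads
    return "bronze"        # archival and backup targets

devices = {
    "flash-array-1": (0.5, 250_000),
    "hybrid-array-1": (3.0, 40_000),
    "nearline-1": (12.0, 2_000),
}
tiers = {name: qos_tier(lat, iops) for name, (lat, iops) in devices.items()}
print(tiers)
```

Once devices carry tier labels like these, a provisioning policy can request a tier rather than a device, which is exactly the kind of detail-hiding the paragraph above describes.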

Network management tools build on the “instrumentation” provided by the underlying network, server, and storage devices. In this case, instrumentation means that network switches, servers, and storage devices signal each other with critical, real-time operating data. These data are then standardized and made available to management tools.
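The standardization step can be pictured as a thin translation layer. In this hedged sketch, two devices report readings in their own (invented) formats, and a `normalize` function maps each reading into one common record that a management tool could consume; the raw formats and field names are assumptions for illustration.

```python
import json

# Sketch of the "instrumentation" idea: heterogeneous devices report
# operating data in device-specific formats, and a thin layer normalizes
# the readings into one schema for management tools.
# The raw formats and metric names below are invented for the example.

def normalize(source: str, raw: dict) -> dict:
    """Translate a device-specific reading into a common record."""
    if source == "switch":
        return {"node": raw["port"], "metric": "throughput_mbps", "value": raw["mbps"]}
    if source == "array":
        return {"node": raw["volume"], "metric": "used_pct",
                "value": raw["used"] / raw["total"] * 100}
    raise ValueError(f"unknown source: {source}")

events = [
    ("switch", {"port": "fc1/7", "mbps": 820}),
    ("array", {"volume": "vol-12", "used": 350, "total": 500}),
]
for source, raw in events:
    print(json.dumps(normalize(source, raw)))
```

A management tool that only understands the common record format can then monitor switches and arrays alike, which is what makes the standardized data "available to management tools" in practice.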

In addition, a collection of application programming interfaces (APIs), metadata, and other abstractions that map to the SAN’s elements will effectively constitute a SAN operating system. This operating system will support financial accounting for storage services as well as SAN-wide services such as automated provisioning of capacity and bandwidth.

Finally, as progress toward virtualized SANs continues, new and more powerful administration functions will emerge to mask complexity and give datacenter managers greater leverage. Managers will be able to declare policies that regulate network and storage behavior.


Summary and Conclusions

Critical business requirements are placing new demands on today’s enterprise storage. The number of business applications is increasing, and the amount of content those applications are expected to deliver is increasing even faster. For example, enterprises are deploying new service offerings in areas such as customer care and e-commerce. These offerings depend on rich reservoirs of content that are increasingly multimedia in nature, and customer satisfaction frequently depends on consistent transaction response times to this rich media. Therefore, as enterprises deploy applications that generate revenue and provide customer care, the risk addressed by business continuity planning increases.

Thus, in tough economic times, tight management of IT costs and continued monitoring of returns on IT investments are critical. Next-generation applications will require SANs with a wider geographic reach, greater capacity, greater availability, and higher performance to support the enterprise appetite for “rich and plentiful” content. SAN management technologies will also be increasingly important in making enterprise storage affordable.

In response to enterprise demands, datacenter managers must continue their migration to consolidated SANs that use dedicated networks and high-performance switches to link a cloud of servers to a shared pool of storage. Compared with the previous server-oriented storage model (direct attached storage), the cost of managing consolidated SANs can be dramatically reduced. Datacenter managers can now route storage services to application servers by monitoring and managing the network that connects the two, rather than having to take down a server to add more storage or recable a server to a different storage device.

With the preceding in mind, the most important trend in SAN technology is virtualization. Virtualization masks the detailed characteristics of network and storage devices in order to make consolidated SANs easier for administrators to manage.

Virtualization will provide the foundation for SANs that automatically adjust to shifting storage needs over time. Atop virtualized systems, users can provision storage without needing to know exactly which device is providing the additional capacity, and network paths can be used to deliver the data with assurance that the necessary storage QoS will be achieved. Explicit policies for managing the SAN can also be automated in a virtualized SAN environment.

In realizing the benefits of consolidated storage, virtualization is a necessary step. Virtualized network services will be most critical for SANs that must be reconfigured frequently and for large SANs with thousands or tens of thousands of nodes. Additionally, consolidated SANs can and will provide lower cost, higher-quality, dependable storage services on demand with greater management precision.

Finally, a consolidation of shared and dedicated networks within the enterprise is expected as more effective tools emerge to manage quality of service (QoS) within the network infrastructure. In other words, in the future, QoS will be woven deeply into the network infrastructure, and within a consolidated network that integrates different connection technologies, network bandwidth and latency will be provisioned as needed.

John Vacca is an information technology consultant and internationally known author based in Pomeroy, Ohio. Since 1982, John has authored 39 books and more than 485 articles in the areas of advanced storage, computer security, and aerospace technology. John was also a configuration management specialist, computer specialist, and the computer security official for NASA’s space station program (Freedom) and the International Space Station Program, from 1988 until his early retirement from NASA in 1995. John was also one of the security consultants for the MGM movie titled “AntiTrust,” which was released on January 12, 2001. John can be reached on the Internet at
