Enterprises are continually seeking cost-effective ways to manage the explosion of information created by e-business and other initiatives, and they are turning to Storage Area Networks (SANs) in droves to help solve the problem.
As you likely already know, a SAN is a networked storage infrastructure designed to provide a flexible environment that decouples servers from their storage devices. SANs accomplish this by providing any-server-to-any-storage connectivity through Fibre Channel switch fabric technology (commonly referred to as the SAN fabric). SANs address today’s most challenging business requirements: how to protect and access critical data, how to utilize computing resources more efficiently, and how to ensure the highest levels of business continuity.
As information systems have become more tightly integrated, enterprises have also adopted applications that span multiple servers and maintain multiple and complex relationships with the enterprise’s data. Infrastructure servers have also begun migrating to the blade architecture, and infrastructure storage has started moving from relatively simple direct attached storage to SANs. These technology and organizational trends have given rise to increasingly complex connectivity solutions.
SANs are proving to be a more scalable and manageable way of organizing your storage devices and servers. Information technology (IT) planners can now separate server decisions from storage decisions by using a SAN. This simplifies and streamlines infrastructure planning. In order to support their business needs, customers can now buy the right servers and the right storage, with the SAN providing connectivity as needed.
SANs have their historical roots in connection-oriented technologies such as enterprise system connection (ESCON) and the small computer systems interface (SCSI). A SAN consists of dedicated resources that enable the consolidation of storage and provide switched data pathways across high-performance networks connecting servers to their data sets. Thus, consolidated SANs deliver business benefits that include reduced risk, increased flexibility, better planning, and a lower total cost of ownership (TCO), all discussed below.
SANs are the foundation for business continuity plans, and they provide the backbone for high-availability environments in the datacenter. And as more storage and processing devices are connected, SANs will need to span different processor and operating environment technologies and protocols. With all of the preceding in mind, let’s begin looking at how the SAN fabric virtualization process delivers a key goal for storage administrators — SANs that are easier to manage.
Fabric virtualization is the process of simplifying the management of a SAN by isolating the physical descriptions of storage, server, and network functionality and replacing them with logical designators. For example, in a virtualized SAN, system administrators can provision storage for use by a server or a cluster of servers without needing to know which storage device is being utilized or exactly which network paths are in use. Policies for backup and restore, and for other data protection and data movement procedures, are applied automatically. As a result, SAN virtualization products are expected to reach the enterprise market in three stages.
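The provisioning idea above can be sketched in a few lines. This is a minimal illustration, not any vendor's API: the class, device names, and capacities below are all hypothetical. The point is that the administrator asks for capacity by logical designator only, and the virtualization layer chooses the physical device.

```python
class VirtualizedFabric:
    """Toy model of fabric virtualization (all names hypothetical)."""

    def __init__(self):
        # Physical detail hidden behind the layer: device -> free GB.
        self._devices = {"array-a": 500, "array-b": 200}
        self._volumes = {}  # logical name -> (device, size_gb)

    def provision(self, logical_name, size_gb):
        """Allocate capacity by logical name; the caller never sees the device."""
        for device, free in self._devices.items():
            if free >= size_gb:
                self._devices[device] = free - size_gb
                self._volumes[logical_name] = (device, size_gb)
                return logical_name  # only the logical designator is returned
        raise RuntimeError("no device has enough free capacity")


fabric = VirtualizedFabric()
vol = fabric.provision("erp-db", 300)
print(vol)  # the server works with "erp-db", never with "array-a"
```

A real fabric virtualization layer would also hide path selection and apply data protection policies at provisioning time; the sketch shows only the device-hiding step.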
The storage industry expects SAN virtualization and storage device virtualization products to evolve concurrently but independently, because networks and storage devices perform different functions in the IT infrastructure. For example, the functionality expected from network and switch providers is the automated discovery of new network nodes and policy-based routing of data through the network switch. In contrast, the virtualization of logical units, volumes, and storage addresses will form new products and functions offered by storage device providers.
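The two switch-side functions named above, automated discovery and policy-based routing, can be illustrated with a small sketch. All class, port, and attribute names here are invented for illustration; this is a conceptual model, not a switch API.

```python
class FabricSwitch:
    """Toy model of a fabric switch with discovery and policy routing."""

    def __init__(self):
        self.nodes = {}     # node id -> attributes discovered at login
        self.policies = {}  # data class -> required node attributes

    def discover(self, node_id, **attrs):
        """A node announcing itself to the fabric is registered automatically."""
        self.nodes[node_id] = attrs

    def set_policy(self, data_class, require):
        """Declare what a class of data requires from its path."""
        self.policies[data_class] = require

    def route(self, data_class):
        """Pick any registered node satisfying the policy for this data class."""
        require = self.policies.get(data_class, {})
        for node_id, attrs in self.nodes.items():
            if all(attrs.get(k) == v for k, v in require.items()):
                return node_id
        return None


switch = FabricSwitch()
switch.discover("port-1", speed="8gbit", redundant=True)
switch.discover("port-2", speed="4gbit", redundant=False)
switch.set_policy("tier1", {"redundant": True})
print(switch.route("tier1"))  # -> port-1
```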
The virtualization of storage devices and networks provides the technology substrate upon which ease-of-use functionality will be delivered. The value proposition to users is that simpler, self-managing SANs will deliver the aforementioned benefits of consolidation.
The preceding benefits will be tightly integrated so that the network can automatically respond to changes in business requirements by making needed adjustments to the network and storage configurations in real time. In other words, a consolidated SAN provides direct business benefits.
Some enterprises have simply accepted the risk of losing data when storage is distributed. Why? Because providing backup and recovery for direct attached storage has been a difficult challenge for datacenter managers. Consolidated SANs provide the opportunity to match the level of redundancy to different categories of enterprise data, and they help reduce IT operational risk in proportion to the value of the information being stored.
SANs can be reconfigured quickly, so managers can reallocate consolidated storage to meet shifting requirements: for example, by adding storage in response to seasonal increases in sales and reclaiming it when a marketing campaign reaches its conclusion. New applications can be connected to multiple sources of data that reach across enterprise boundaries (engineering, inventory, sales, and customer care, for instance). Furthermore, datacenter managers will become more aggressive about improving the utilization of SANs as they realize that reallocation is a low-risk operation.
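The grow-then-reclaim cycle described above can be sketched against a shared pool. The names and numbers below are hypothetical; the sketch only shows why reallocation is low risk: reclaimed capacity simply returns to the pool.

```python
class SharedPool:
    """Toy model of a consolidated storage pool (illustrative only)."""

    def __init__(self, total_gb):
        self.free_gb = total_gb
        self.allocations = {}  # application -> allocated GB

    def grow(self, app, gb):
        """Add capacity to an application from the shared pool."""
        if gb > self.free_gb:
            raise RuntimeError("pool exhausted")
        self.free_gb -= gb
        self.allocations[app] = self.allocations.get(app, 0) + gb

    def reclaim(self, app, gb):
        """Return unneeded capacity to the shared pool."""
        taken = self.allocations.get(app, 0)
        returned = min(gb, taken)
        self.allocations[app] = taken - returned
        self.free_gb += returned


pool = SharedPool(1000)
pool.grow("web-store", 400)     # seasonal increase in sales
pool.reclaim("web-store", 400)  # campaign over: capacity back in the pool
print(pool.free_gb)  # -> 1000
```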
In general, better enterprise-wide planning requires consolidated IT systems — SAN subsystems in particular. Thus, enterprise planners in finance, marketing, and operations need access to unified data sets as quickly as possible, especially during business transitions such as mergers and acquisitions. Unified corporate information leads to more informed critical enterprise decisions in uncertain times.
A lower TCO primarily comes from the ability to manage more storage with fewer people as well as a higher utilization of shared storage resources. In other words, the efficiencies of consolidated storage are similar to the economies of scale associated with consolidations of any kind.
So, given the benefits of self-managing SANs, what can you do with virtualization, and does it actually simplify SAN management? Let’s take a look.
Virtualization simplifies the management of complex systems. Virtualization technologies, which include both hardware and software, achieve this simplicity by separating the host (or server) view of a SAN from its physical implementation. System administrators can manage an aggregated pool of storage through attributes that are logical designators of storage, server, and network resources. Thus, virtualization gives system managers a new set of capabilities.
Full virtualization integrates three distinct subsystems (storage, servers, and the SAN), and it is a compelling trend in the evolution of SANs.
This trend toward higher-level server, storage, and network virtualization rests firmly on a foundation of lower-level hiding: the automation processes remain aware of the details, but those details are hidden from the administrator. For example, different classes of devices with different performance attributes can be allocated appropriately to deliver differentiated quality of service (QoS) where needed.
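The differentiated-QoS example can be made concrete with a short sketch. The device names, latency figures, and service-class labels below are all assumptions for illustration; the idea is that the allocator, not the requester, maps a service class onto an appropriate device class.

```python
# Hypothetical inventory: each device class carries performance attributes.
DEVICES = [
    {"name": "ssd-array",  "latency_ms": 1,  "free_gb": 100},
    {"name": "sata-array", "latency_ms": 12, "free_gb": 2000},
]

# Illustrative service classes: maximum acceptable latency in milliseconds.
QOS_CLASSES = {"gold": 2, "bronze": 20}

def allocate(qos, size_gb):
    """Return a device meeting the QoS class and capacity, or None."""
    limit = QOS_CLASSES[qos]
    for dev in DEVICES:
        if dev["latency_ms"] <= limit and dev["free_gb"] >= size_gb:
            dev["free_gb"] -= size_gb
            return dev["name"]
    return None


print(allocate("gold", 50))     # -> ssd-array
print(allocate("bronze", 500))  # -> sata-array
```

The requester names only "gold" or "bronze"; which physical array satisfies the request is exactly the detail the virtualization layer hides.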
Network management tools are built on the “instrumentation” provided by a foundation of network, server, and storage devices. In this context, instrumentation means that network switches, servers, and storage devices signal each other with critical, real-time operating data. These data are then standardized and made available to management tools.
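The standardization step can be sketched as a thin normalization layer. Every field and device kind below is invented for illustration: each device type reports operating data in its own shape, and the layer maps each reading onto one common record before handing it to management tools.

```python
def normalize(raw):
    """Map a device-specific reading onto one standard record shape."""
    if raw["kind"] == "switch":
        return {"source": raw["id"], "metric": "port_util", "value": raw["util"]}
    if raw["kind"] == "storage":
        return {"source": raw["id"], "metric": "capacity_used", "value": raw["used_pct"]}
    raise ValueError("unknown device kind")


# Heterogeneous raw signals from two device types (hypothetical fields).
readings = [
    {"kind": "switch",  "id": "sw-1",  "util": 0.73},
    {"kind": "storage", "id": "arr-9", "used_pct": 0.61},
]

standardized = [normalize(r) for r in readings]
for rec in standardized:
    print(rec["source"], rec["metric"], rec["value"])
```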
In addition, a collection of application programming interfaces (APIs), metadata, and other abstractions that map to the SAN’s elements will effectively constitute a SAN operating system. This operating system will support financial accounting for storage services as well as SAN-wide services such as automated provisioning of capacity and bandwidth.
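The accounting role mentioned above amounts to metering and pricing storage as a utility. The rate card and usage figures below are purely illustrative assumptions.

```python
# Hypothetical rate card: price per unit of each storage service.
RATES = {"capacity_gb": 0.10, "bandwidth_gbps": 5.00}

def monthly_charge(usage):
    """Sum metered usage (service -> amount) against the rate card."""
    return sum(RATES[service] * amount for service, amount in usage.items())


bill = monthly_charge({"capacity_gb": 500, "bandwidth_gbps": 2})
print(bill)  # -> 60.0
```

In a SAN operating system, the usage figures would come from the instrumentation layer rather than being supplied by hand.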
Finally, as progress toward virtualized SANs continues, new and more powerful administration functions will emerge to mask complexity and give the datacenter manager more leverage. Managers will be able to declare policies that regulate network and storage behavior.
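Declared policy can be sketched as predicates over live state rather than step-by-step commands. The policy names, state fields, and actions below are hypothetical; the sketch shows only the shape of policy-driven management: the manager states the desired condition, and the system derives the corrective action.

```python
# Each policy: (name, predicate the state must satisfy, corrective action).
POLICIES = [
    ("keep-headroom", lambda s: s["free_pct"] >= 0.20, "expand pool"),
    ("mirror-tier1",  lambda s: s["tier1_mirrored"],   "re-enable mirroring"),
]

def evaluate(state):
    """Return the actions needed to bring the state back into policy."""
    return [action for name, ok, action in POLICIES if not ok(state)]


state = {"free_pct": 0.12, "tier1_mirrored": True}
print(evaluate(state))  # -> ['expand pool']
```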
New demands are placed on today’s enterprise storage by critical business requirements. The number of business applications is increasing, and the amount of content those applications are expected to deliver is increasing even more rapidly. For example, enterprises are deploying new service offerings in areas such as customer care and e-commerce. These offerings depend on rich reservoirs of content that are increasingly multimedia in nature, and customer satisfaction frequently depends on consistent transaction response times for this rich media. Therefore, as enterprises deploy applications that generate revenue and provide customer care, the stakes for business continuity planning increase.
Thus, in tough economic times, the tight management of IT costs and the continued monitoring of returns on IT investments are critical. Next-generation applications will require SANs with a wider geography, greater capacities, greater availability, and higher performance to support the enterprise appetite for rich and plentiful content. SAN management technologies will also be increasingly important in making enterprise storage affordable.
In response to enterprise demands, datacenter managers must continue their migration to consolidated SANs that use dedicated networks and high-performance switches to link a cloud of servers to a shared pool of storage. Compared to the previous server-oriented storage model (direct attached storage), the cost of managing consolidated SANs can be dramatically lower. Rather than taking down a server to add more storage, or recabling a server to a different storage device, datacenter managers can route storage services to application servers by monitoring and managing the network that connects the two.
With the preceding in mind, the most important trend in SAN technology is virtualization. Virtualization masks the detailed characteristics of network and storage devices in order to make consolidated SANs easier for administrators to manage.
Virtualization will provide the foundation for SANs that adjust automatically to shifting storage needs over time. Atop virtualized systems, users can provision storage without needing to know exactly which device provides the additional capacity, and network paths can be chosen to deliver the data with assurance that the necessary storage QoS will be achieved. Explicit policies for managing the SAN can also be automated in a virtualized SAN environment.
Virtualization is a necessary step in realizing the benefits of consolidated storage. Virtualized network services will be most critical for SANs that must be reconfigured frequently and for large SANs with thousands or tens of thousands of nodes. Additionally, consolidated SANs can and will provide lower-cost, higher-quality, dependable storage services on demand with greater management precision.
Finally, a consolidation of shared and dedicated networks within the enterprise is expected as more effective tools emerge to manage quality of service (QoS) within the network infrastructure. In other words, in the future, QoS will be woven deeply into the network infrastructure, and within a consolidated network that integrates different connection technologies, network bandwidth and latency will be provisioned as needed.
John Vacca is an information technology consultant and internationally known author based in Pomeroy, Ohio. Since 1982, John has authored 39 books and more than 485 articles in the areas of advanced storage, computer security, and aerospace technology. John was also a configuration management specialist, computer specialist, and the computer security official for NASA’s space station program (Freedom) and the International Space Station Program, from 1988 until his early retirement from NASA in 1995. John was also one of the security consultants for the MGM movie “AntiTrust,” which was released on January 12, 2001. John can be reached on the Internet at jvacca@hti.net.
Enterprise Storage Forum offers practical information on data storage and protection from several different perspectives: hardware, software, on-premises services and cloud services. It also includes storage security and deep looks into various storage technologies, including object storage and modern parallel file systems. ESF is an ideal website for enterprise storage admins, CTOs and storage architects to reference in order to stay informed about the latest products, services and trends in the storage industry.