In conventional IP networks, the end systems or hosts are the active participants in data communications. The network that links hosts together has one primary goal -- to quickly switch or route host-generated messages from source to destination. Advanced network-based services such as Quality of Service or data encryption may require processing power or "intelligence" in the switch or router, but these services only enhance the transport and are transparent to the end systems. The intelligence to run applications resides at the end systems; the network itself is simply a transport.
SANs, by contrast, have active end systems such as servers, but they also have passive recipients of server requests such as disk arrays and tape subsystems. The active initiator and passive target relationship of storage transactions requires assistance from the network in the form of switch-based intelligence to provide login, device discovery, and zoning services. Storage initiators and targets must each communicate with the network infrastructure independently before they can establish sessions with one another.
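The login, discovery, and zoning services described above can be illustrated with a toy model. This is a minimal sketch, not Fibre Channel protocol code: the class names, WWPN strings, and data structures are all hypothetical, and real fabric login (FLOGI) and name-server queries are far richer. It shows only the essential idea that each port must first register with the fabric, and that the name server filters discovery results through the zoning configuration.

```python
from dataclasses import dataclass, field

@dataclass
class Fabric:
    """Toy model of switch-based fabric services: login, name-server
    discovery, and zoning (all names hypothetical)."""
    zones: dict = field(default_factory=dict)        # zone name -> set of WWPNs
    name_server: dict = field(default_factory=dict)  # WWPN -> port role
    _next_id: int = 1

    def flogi(self, wwpn, role):
        """Fabric login: the fabric assigns an address and registers
        the port with the name server before any sessions exist."""
        fcid = self._next_id
        self._next_id += 1
        self.name_server[wwpn] = role
        return fcid

    def discover(self, wwpn):
        """Name-server query: return only the targets that the zoning
        configuration allows this initiator to see."""
        visible = set()
        for members in self.zones.values():
            if wwpn in members:
                visible |= members - {wwpn}
        return {w for w in visible if self.name_server.get(w) == "target"}

fabric = Fabric(zones={"zoneA": {"srv1", "disk1"}, "zoneB": {"srv2", "tape1"}})
fabric.flogi("srv1", "initiator")
fabric.flogi("disk1", "target")
fabric.flogi("tape1", "target")
print(fabric.discover("srv1"))   # srv1 sees only disk1, not the tape in zoneB
```

The point of the sketch is the ordering: both initiator and target talk to the fabric first, and only then can the initiator learn which targets it is permitted to reach.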
In addition, Fibre Channel technology is designed to be largely self-configuring in terms of addressing and fabric building, as well as self-monitoring through State Change Notifications. Compared to conventional IP networked devices, storage end systems require additional network-based intelligence for discovery and monitoring, along with traditional switch transport services.

Vendors Seek Value-added, Network-based Intelligent Services
The fact that the SAN infrastructure already provides intelligent services to assist storage transactions has encouraged SAN manufacturers to seek additional value-added services and thus command higher margins. A few years ago, for example, the SCSI-3 extended copy (third-party copy) standard defined the capability for an intelligent agent to perform direct disk-to-tape block copy processes. This removed the server from the backup process (i.e. serverless backup), streamlined data movement, and helped accelerate the backup routine.
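The data path of third-party copy can be sketched in a few lines. This is an illustrative model only, with made-up block data and helper names, not the actual SCSI-3 EXTENDED COPY command format: it shows just the division of labor, where the server supplies only a copy descriptor (a list of source extents) and the copy agent moves the blocks from disk to tape itself.

```python
def extended_copy(agent_read, agent_write, segments):
    """Sketch of third-party copy: the server hands the agent a list of
    (source_lba, length) segments; the agent moves the blocks itself,
    keeping the server out of the data path."""
    moved = 0
    for lba, length in segments:
        block = agent_read(lba, length)   # agent reads from the disk target
        agent_write(block)                # agent streams to the tape target
        moved += length
    return moved

disk = bytearray(b"ABCDEFGHIJ")   # pretend disk blocks (hypothetical data)
tape = bytearray()

def read_blocks(lba, n):
    return disk[lba:lba + n]

def write_blocks(data):
    tape.extend(data)

# The server's only role is building the descriptor; the agent does the I/O.
moved = extended_copy(read_blocks, write_blocks, [(0, 4), (6, 4)])
print(moved, bytes(tape))  # 8 b'ABCDGHIJ'
```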
At the time, the industry debated whether third-party copy should be embedded in the fabric switch, in a SCSI-to-Fibre Channel bridge, or in a tape subsystem. Although a few vendors attempted to market switch-based third-party copy, the technology eventually gravitated to SAN bridges (a.k.a. storage routers) or to embedded functionality within the tape target.
Why? Partly because customers were already burdened with the relatively high cost of fabric switches and did not want to pay a premium for switch-based extended copy. In the end, the market determined that the extended copy function should reside close to the tape target or within the tape library, not dispersed within the network.
This historical example, however, has not dissuaded SAN vendors from claiming new territory for network-based intelligence. Storage virtualization, for example, masks the complexity of hardware storage assets and presents a simplified and more readily manageable view of disparate storage arrays as a single storage pool.
This both streamlines storage administration and enables more efficient use of storage capacity. Storage pooling, for example, allows storage to be allocated dynamically: if a particular application is exceeding its capacity, it can tap into unused space in the pool, balancing allocation among multiple servers. Although storage virtualization is still in its infancy, it represents opportunity for a variety of SAN vendors to enhance their product offerings.
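The pooling idea can be made concrete with a small sketch. The array names and capacities below are invented for illustration; a real virtualization layer also handles striping, mapping tables, and failures. The sketch shows only the core behavior: disparate arrays are presented as one capacity figure, and a volume allocation may transparently span whichever arrays have free space.

```python
class StoragePool:
    """Minimal sketch of storage pooling: disparate arrays appear as a
    single pool, and volumes draw extents from any array with free space
    (array names and sizes are hypothetical)."""
    def __init__(self, arrays):
        self.free = dict(arrays)   # array name -> free GB
        self.volumes = {}          # volume name -> [(array, GB), ...]

    def total_free(self):
        return sum(self.free.values())

    def allocate(self, volume, size_gb):
        if size_gb > self.total_free():
            raise ValueError("pool exhausted")
        extents = []
        # Draw from the emptiest arrays first; a real system would balance
        # on performance and policy, not just raw free space.
        for array in sorted(self.free, key=self.free.get, reverse=True):
            take = min(size_gb, self.free[array])
            if take:
                self.free[array] -= take
                extents.append((array, take))
                size_gb -= take
            if size_gb == 0:
                break
        self.volumes[volume] = extents
        return extents

pool = StoragePool({"emc1": 500, "hds1": 300, "xio1": 200})
pool.allocate("oltp_vol", 600)   # silently spans two arrays
print(pool.total_free())         # 400 GB remain in the single visible pool
```

The administrator (or the server) sees only the pool and the volume; which physical arrays back the volume is the virtualization layer's concern.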
Storage Virtualization Adrift in the SAN Sea
Currently, storage virtualization technology is adrift in the SAN space. Vendors of host software such as VERITAS argue that storage virtualization is best anchored as a server-based technology. This provides independence from both the storage arrays and the SAN interconnect type.
Virtualization appliance vendors such as FalconStor, on the other hand, argue that virtualization should sit in the storage network itself, and so are creating black boxes that attach to the SAN, intercepting server requests and redirecting data to dispersed storage resources. This offloads the host systems and facilitates heterogeneous storage environments.
Meanwhile, storage array manufacturers such as EMC, HDS, and XIOtech are either promoting their own flavors of array-based virtualization or allying themselves with other solutions that they can control. Array-based virtualization tends to be more tightly integrated with RAID and other storage functions, but these are largely single-vendor solutions.
And finally, fabric switch vendors such as Cisco and Brocade are developing switch-based virtualization engines, highlighting the central position of switches in a SAN and the ability of switches to redirect between any storage targets and any servers.
All of these individual solutions have their respective merits and demerits, but collectively they indicate that SANs are becoming more intelligent in terms of automating and simplifying complex storage processes. Whether that intelligence resides in hosts, in appliances, in storage targets, within SAN switches, or is distributed throughout various SAN components, the net result is that fewer administrators will be required to manage additional storage capacity more efficiently and more cost-effectively.
The end-user value of SAN technology in general, which is already well-established in the market, will be raised to a more productive level once the complexity of SANs has been hidden from view. This in turn will expand the market for SAN technology, enabling it to penetrate the vast small and medium business arena, which thus far has been hesitant to adopt SANs.
Current Virtualization Just the Beginning
Although virtualization in the form of storage pooling is a good beginning, the potential for automating storage processes is much greater. Representing different types and capacities of storage as a single resource avoids the labor-intensive configuration of individual storage devices. With additional intelligence, however, it should also be possible to dynamically assign resources based on the unique requirements of specific applications.
Application-aware virtualization requires monitoring of storage transactions to identify the most appropriate form of storage that should be applied. Streaming video, for example, is best allocated to the outer tracks of disks so that minimal head movement will result in more consistent streaming. Online transaction processing is best stored on high availability RAID with immediate synchronous data replication to provide a readily accessible copy of the most current business orders.
Additionally, policy engines may automatically determine which hierarchical storage management (HSM) routines should be applied to particular data types. In financial enterprises, for example, high-availability storage for stock transactions may need synchronous disk-to-disk replication, secondary disk-to-tape, and, as the data ages, tape-to-optical for long term storage. Less critical applications may only require periodic backup to tape. By examining the data type issued by a host or server, policy intelligence can apply predefined rules on how that data should be treated and which conduits within the SAN should be enabled to expedite policy enforcement.
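A policy engine of the kind described above reduces, at its simplest, to a rule table. The data types and treatment names below are hypothetical placeholders, and a real engine would classify traffic dynamically rather than rely on explicit labels; the sketch captures only the lookup step, where predefined rules map a data type to an ordered chain of HSM treatments.

```python
# Hypothetical policy table: data type -> ordered HSM treatment chain,
# mirroring the financial-enterprise example in the text.
POLICIES = {
    "stock_transaction": ["sync_disk_replica", "disk_to_tape", "tape_to_optical"],
    "office_document":   ["nightly_tape_backup"],
}
DEFAULT_POLICY = ["nightly_tape_backup"]   # less critical data: periodic backup only

def hsm_actions(data_type):
    """Policy lookup: return the predefined rules for how this data type
    should be treated as it ages through the storage hierarchy."""
    return POLICIES.get(data_type, DEFAULT_POLICY)

print(hsm_actions("stock_transaction"))
```

In an integrated SAN implementation, the interesting part is not the lookup but the enforcement: the fabric would open the appropriate replication and migration conduits so that each stage happens with minimal data handling.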
Although some vendors are currently offering limited versions of application-aware and policy-based HSM, the missing component is tight integration into the SAN infrastructure to minimize data handling and movement. SAN management applications in general, and virtualization in particular, have trailed behind the evolution of SAN hardware, and the next several years will see increasing intelligence through software and optimized hardware to fully exploit the power of networked storage. This in turn will make storage networking far more productive and ubiquitous as an obvious choice for enterprises large and small.