DataCore. SANOne. Veritas. With all of the attention that storage area network (SAN) virtualization gets, you'd think that the more than 50 companies crowding this space pioneered the SAN movement or, better still, the concept of storage virtualization. They didn't. In fact, to date, a standard API for SAN virtualization doesn't even exist.
Storage virtualization provides users with a logical view, rather than a physical configuration, of the storage devices. In essence, users don't need to know how the storage devices are configured, where they're located, or what their characteristics are.
Likewise, SAN virtualization offers a logical view in which each server appears to own dedicated physical storage. In reality, no single server has any dedicated storage. SAN virtualization creates a single point of management, enabling physical storage devices to be added, upgraded, or replaced without disrupting application or server availability. SAN virtualization, however, adds several new methods to the existing pool of storage virtualization methods, as well as a new organizational model for the people who manage storage. But new methods don't mean you should forget about the old ones. To make informed storage decisions, you need to put into perspective the role each storage virtualization method serves and the way storage is managed around it.
The roots of overall storage virtualization go back to the early 1980s. In 1983, a mainframe user's group alluded to virtualization in a white paper about the future requirements of storage management. The paper noted that users should be aware of the data attributes of the storage, not its physical aspects.
The paper formed the basic value proposition that underscores all forms of storage virtualization -- users shouldn't have to care about the layout, vendor, or media of the storage. They should care about the capabilities that storage provides. So the goal of storage virtualization became abstracting the physical characteristics up to the level users actually care about.
Why storage virtualization? Storage media has attributes like disk size, number of disks, seek time, and cache hit rate. Systems administrators, not users, care about these low-level physical statistics. Users, on the other hand, care more about the higher-level abstracted characteristics, such as the application requirement for growth potential.
Actually, the entire RAID storage concept is based on this logical-to-physical abstraction in a storage array. The 1987 University of California, Berkeley white paper, The Case for Redundant Arrays of Inexpensive Disks (RAID), presented the concept of abstracting the physical disks to a higher layer of capacity. Each of the five RAID levels defined in the paper used a different algorithm to map the blocks of virtual storage to the underlying blocks of physical storage.
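To make the mapping idea concrete, here is a minimal sketch (illustrative only, not the exact algorithms from the Berkeley paper) of how a RAID-0-style stripe translates a virtual block address into a physical disk and offset. Real arrays layer parity (RAID 5) or mirroring (RAID 1) on top of a mapping like this:

```python
def raid0_map(virtual_block: int, num_disks: int) -> tuple[int, int]:
    """Map a virtual block number to (disk index, block offset on that disk)."""
    disk = virtual_block % num_disks      # stripe round-robin across the disks
    offset = virtual_block // num_disks   # each disk holds every Nth block
    return disk, offset

# Four-disk array: consecutive virtual blocks land on consecutive disks,
# so large sequential I/O is spread across all spindles.
assert raid0_map(0, 4) == (0, 0)
assert raid0_map(5, 4) == (1, 1)
```

The point of the abstraction is that the host addresses one linear block space; the array decides where each block physically lives.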
Although each RAID level used a different algorithm, each had the same goal as the others -- to increase the online availability of data and to minimize the impact of a disk failure.
Host-based logical volume managers let vendors take the logical-to-physical abstraction a step further. Like the RAID concept, logical volume managers abstract the physical characteristics of the disk to a higher level. For example, a logical volume can actually span host bus adapters or multiple physical arrays. As a result, you have more flexibility with capacity and with failure tolerance.
These two storage virtualization methods raised a classic question: Which one is better -- subsystem-based virtualization or host-based logical volume virtualization? Ironically, they both attempt to do the same thing. So, you're better off looking at the synergy between them, not the competition. As it turns out, you can layer both types of virtualization to get a combination of the two. For example, some users harness the synergies between the two by slicing the subsystem's virtualized LUNs to create logical volumes.
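The layering described above can be sketched as follows. This is a hypothetical model (the `Slice` and `LogicalVolume` names are illustrative, not any vendor's API): a host-based volume manager concatenates slices of LUNs that the RAID subsystems have already virtualized, so a single volume can span arrays:

```python
from dataclasses import dataclass

@dataclass
class Slice:
    lun: str      # a LUN exported by a RAID subsystem
    start: int    # starting block within that LUN
    length: int   # number of blocks in this slice

class LogicalVolume:
    """A host-side volume built from slices of one or more subsystem LUNs."""
    def __init__(self, slices: list[Slice]):
        self.slices = slices

    def resolve(self, block: int) -> tuple[str, int]:
        """Translate a volume-relative block to (LUN, block within that LUN)."""
        for s in self.slices:
            if block < s.length:
                return s.lun, s.start + block
            block -= s.length
        raise IndexError("block beyond end of volume")

# A volume spanning LUNs from two different arrays:
vol = LogicalVolume([Slice("array1:lun0", 0, 100), Slice("array2:lun3", 50, 200)])
assert vol.resolve(10) == ("array1:lun0", 10)    # lands in the first slice
assert vol.resolve(150) == ("array2:lun3", 100)  # spills into the second
```

Each layer only needs to understand the layer directly beneath it, which is why the two methods stack cleanly rather than compete.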
These storage virtualization methods had one downside -- their system-centric approach physically tied the storage to one or more servers. Because storage became captive to specific servers, you needed a separate systems administrator to manage each server platform. You ended up with platform-based policies.
The emergence of SANs in the mid 1990s introduced the concept of making central storage pools available on an any-to-any basis. The server-centric storage model became a handmaiden to the network-centric storage model. Now storage didn't have to be tied to a small set of servers. All of the servers could have potentially equal access to this central storage.
The SAN model also added a new method for managing storage. With the server-centric model, a systems administrator managed end-to-end from the application down to the storage. SANs, on the other hand, require groups of storage administrators who specialize in different areas of storage management. For example, you might have one group in charge of managing and provisioning storage across the entire enterprise or across platforms and applications.
At the same time, the SAN model has added several new methods to storage virtualization. These two new infrastructure-based virtualization methods include the following:

-- In-band virtualization, which places the virtualization engine directly in the data path between the servers and the storage, translating every I/O as it passes through.

-- Out-of-band virtualization, which keeps the virtualization intelligence outside the data path, typically on a metadata server that hands mapping information to the hosts, which then access the storage directly.
These two types of SAN virtualization bring up the classic question again: Which one is better, in-band or out-of-band?
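The architectural difference between the two is easier to see in a sketch. This is an illustrative contrast under assumed names (the `MAP` table and the functions are hypothetical, not any product's interface): in-band virtualization translates every I/O inside the data path, while out-of-band virtualization hands the host a mapping once and then gets out of the way:

```python
# Shared mapping: logical volume -> (physical LUN, base block).
MAP = {"vol0": ("array1:lun2", 0)}

def read_physical(lun: str, block: int) -> bytes:
    return f"data@{lun}:{block}".encode()   # stand-in for a real SCSI read

# In-band: the appliance sits in the data path and translates every I/O.
def inband_read(volume: str, block: int) -> bytes:
    lun, base = MAP[volume]                 # lookup on every request
    return read_physical(lun, base + block)

# Out-of-band: the host asks a metadata server for the mapping, caches it,
# then issues subsequent I/Os straight to the storage itself.
def oob_resolve(volume: str) -> tuple[str, int]:
    return MAP[volume]                      # metadata-only exchange

lun, base = oob_resolve("vol0")             # one control-path round trip
assert inband_read("vol0", 7) == read_physical(lun, base + 7)
```

The trade-off follows directly: in-band concentrates control (and potential bottlenecks) in the data path, while out-of-band keeps the data path lean but requires host-side cooperation to honor the mapping.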
Instead, you need to look at the synergy among all of the storage virtualization methods. You might need RAID subsystem virtualization for storing Windows documents, and in-band SAN virtualization for production databases on different platforms.
To this end, you need to ask yourself what problems you are trying to solve and how the solutions will contribute to the business value proposition. Take a look at your environment's requirements, and then map them to the best mix of all the virtualization methods. Factor in the service-level requirements of your applications to determine your best virtualization choices.