A SAN (storage area network) is a network of data storage devices. By taking storage devices and storage traffic off the Local Area Network (LAN), another network is created specifically for storage data. SAN storage solutions can range from a few servers accessing a central pool of storage devices to thousands of servers accessing many terabytes of storage or more.
In a SAN, data is presented from storage devices to a host so that the storage looks like it is locally attached. This is achieved through various types of data virtualization. SAN storage, then, is a high-speed network that provides network access to storage. In some cases, SANs can be so large that they span multiple sites, as well as internal data centers and the cloud.
Storage Area Networks differ from Direct Attached Storage (DAS). In DAS, the data is directly attached to one server. A SAN, on the other hand, presents storage devices to a host such that the storage appears to be locally attached. This simplified presentation of storage to a host is accomplished through the use of different methods of virtualization.
SANs are also different from Network Attached Storage (NAS). While NAS also takes storage devices away from the server to create a central pool of data, NAS storage connects directly to the network (LAN). In SAN storage, capacity is pooled and provided with a dedicated network. This enables faster communication over faster media.
The advantages of a SAN are numerous, hence its popularity in the enterprise. They include:
- Elimination of bandwidth bottlenecks associated with LAN-based server storage
- No scalability limitations imposed by SCSI bus-based implementations
- High availability and greater fault tolerance
- Centralized storage management
- Faster backups
- Global file systems
- Rapid data migration
- Better data security
- Improved storage utilization
- Greater scalability
- Improved application availability through multiple data paths
- Enhanced application performance by offloading storage functions and segregating networks
- Better data protection and Disaster Recovery (DR)
To see these advantages in practice, take the case of storage migration. If data sits on many servers, it is a laborious process for the storage administrator to take it off each server and transfer it to a new home. This might involve unmounting the file systems that use the storage, unplugging the unit, moving it, connecting it to a different host, and bringing up the file systems on the new machine. In a SAN storage solution, moving an entire large storage array from one host to another is a simple matter: unmount the file system, quickly reconfigure the SAN, then bring up the data on the new host. This saves the storage administrator an enormous amount of time.
This architecture becomes more and more vital as the amount of storage grows. It is far too cumbersome to manage multiple terabytes of data on a server-by-server basis. It takes a storage network to do the heavy lifting and remove the drudgery. For instance, if you need to add storage from another array to a server, a SAN-attached architecture enables you to allocate logical unit numbers (LUNs) from multiple arrays to that one server.
What is a LUN? A LUN is a unique identifier that designates an individual storage device, or a collection of physical or virtual storage elements, that executes I/O commands with a host computer. The logical unit that is identified might be a block of capacity on a storage drive, the entire drive, or part of several hard drives, SSDs or tape storage residing on one or several storage systems. As such, a LUN might refer to an entire RAID set, a single hard drive, or a partition on a drive.
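On a Linux host, for example, SCSI devices attached over a SAN are commonly addressed as Host:Channel:Target:LUN, where the last field is the LUN described above. The sketch below (an illustrative helper, not part of any vendor toolkit) parses such an address into its components:

```python
# Illustrative sketch: parse a Linux-style SCSI address "H:C:T:L".
# The final field is the logical unit number (LUN).
from typing import NamedTuple

class ScsiAddress(NamedTuple):
    host: int      # HBA (initiator) host number
    channel: int   # bus/channel on that host
    target: int    # target (storage controller port) ID
    lun: int       # logical unit number

def parse_scsi_address(addr: str) -> ScsiAddress:
    parts = addr.split(":")
    if len(parts) != 4:
        raise ValueError(f"expected H:C:T:L, got {addr!r}")
    return ScsiAddress(*(int(p) for p in parts))

print(parse_scsi_address("2:0:1:7"))  # ScsiAddress(host=2, channel=0, target=1, lun=7)
```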
The storage area network combines an array of storage devices and enables faster communication.
There is another key difference between data storage on a server (or NAS box), and how data is stored in a SAN. The former uses file level storage and the latter uses block level storage.
· File level storage is found in hard drives and NAS systems. The storage disk is configured with a protocol such as NFS or CIFS so that files are stored and accessed as whole files, on a file-by-file basis. This approach is simple and easy to implement.
· Block Level Storage creates raw volumes of storage where each block of data is controlled by the operating system as though it were an individual hard drive. These blocks of data are not tied to specific files. Block storage, then, manages LUNs as opposed to the individual files that are managed in NAS systems.
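The distinction can be made concrete: block storage addresses fixed-size blocks by number, with no notion of files; any file structure is imposed on top by the host's operating system. Below is a simplified sketch in which an ordinary file stands in for a raw volume (the block size is an assumption for illustration only):

```python
# Illustrative sketch: a regular file stands in for a raw block volume.
# Block-level access reads/writes fixed-size blocks by block number;
# it knows nothing about files.
import os
import tempfile

BLOCK_SIZE = 512  # assumed block size for this sketch

def write_block(fd: int, block_no: int, data: bytes) -> None:
    assert len(data) == BLOCK_SIZE
    os.pwrite(fd, data, block_no * BLOCK_SIZE)

def read_block(fd: int, block_no: int) -> bytes:
    return os.pread(fd, BLOCK_SIZE, block_no * BLOCK_SIZE)

# Create a tiny 8-block "volume" and address block 3 directly.
fd, path = tempfile.mkstemp()
os.truncate(path, BLOCK_SIZE * 8)
write_block(fd, 3, b"A" * BLOCK_SIZE)
print(read_block(fd, 3)[:4])  # b'AAAA'
os.close(fd)
os.unlink(path)
```

A filesystem such as ext4 or NTFS is essentially a layer that maps file names and offsets onto exactly this kind of block addressing.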
The advantages of block level storage include:
· Management flexibility
· Easy storage management of databases
· Faster and more reliable data transportation
· The ability to treat each storage volume as an independent disk drive controlled by an external server operating system.
· Access and control privileges are easier to manage.
It used to be an either/or proposition – block storage or file storage. But more recently, storage systems have been developed which can deal with file storage and block level within a single appliance. These unified storage systems and hyperconverged systems are becoming more common.
Many SANs use the Fibre Channel (FC) standard, which is a high-performance data communications technology supporting very fast data rates. FC SAN switches are used to connect devices within a SAN to create what is called the SAN fabric. These switches are somewhat similar to those functioning on regular Ethernet networks in that they act as points of connectivity for the network. By using FC SAN switch technology, dedicated paths are established between devices in the fabric to harness high bandwidth.
SANs are typically composed of elements such as:
· Fiber optic cables
· Disk or solid state drive (SSD) arrays
· Disk array (or flash) controllers
· Host Bus Adapters (HBAs). An HBA is basically an I/O adapter sitting between the bus of the host computer and the FC fabric. It is there to manage information transfer, and reduces the impact of the SAN on the performance of the host processor.
· FC switches
Fibre Channel networks often have core and edge switches. Core switches are generally known as Director switches. They are often rack-mounted chassis and have no single points of failure. Edge switches are smaller, simpler and don't usually have as many redundancy features.
The problem with huge networks, of course, is that one small problem can impact the whole network. SANs get around this by creating smaller fabrics within the larger SAN. This is done using a variety of routing methods, some of which are vendor specific. SAN switches, for example, typically use Inter Switch Linking (ISL) to enable data to be transferred from one switch to another.
In addition to FC, some SANs utilize Fibre Channel over Ethernet (FCoE). This enables FC traffic to be moved across high speed Ethernet infrastructures. The advantages of this approach include the ability to converge storage and IP protocols onto a single cable.
As well as the FC SAN, there is also the Internet Small Computer System Interface (iSCSI) SAN (also sometimes known as the IP SAN). iSCSI storage enables data to be transported to and from storage devices over an IP network by serializing traffic from a SCSI connection. Normally used in small and medium-sized businesses as a cheaper alternative to FC, the iSCSI SAN has grown in prominence over the past decade. This is due to FC SANs having a reputation as being complex, difficult to manage, requiring highly trained (and well paid) specialists and overall being expensive. By using Ethernet, the iSCSI SAN can transmit SCSI commands in IP packets so that there is no longer any need for an FC connection.
· Advantages: no need to learn, build and manage an FC network; using the same cabling for both the Ethernet-based LAN and storage.
· Disadvantages: the potential to clog up the LAN with too much traffic – thus iSCSI storage is used more by small and mid-sized organizations.
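The core idea behind iSCSI is that a standard SCSI command descriptor block (CDB) is serialized into a header plus payload and carried over TCP/IP. The sketch below illustrates that wrapping with a deliberately simplified toy header; the real iSCSI PDU layout is defined in RFC 7143 and is considerably more involved:

```python
# Simplified illustration only (NOT the real iSCSI wire format, which
# RFC 7143 specifies): a SCSI command descriptor block (CDB) is
# serialized and framed for transport over TCP/IP.
import struct

READ_10 = 0x28  # standard SCSI READ(10) opcode

def wrap_scsi_command(lun: int, cdb: bytes) -> bytes:
    # Toy framing: 1-byte LUN, 1-byte CDB length, then the CDB itself.
    return struct.pack("!BB", lun, len(cdb)) + cdb

# A 10-byte READ(10) CDB requesting 8 blocks starting at LBA 2048:
# opcode, flags, LBA (4 bytes), group, transfer length (2 bytes), control.
cdb = struct.pack("!BBIBHB", READ_10, 0, 2048, 0, 8, 0)
pdu = wrap_scsi_command(lun=0, cdb=cdb)
print(len(pdu))  # 2-byte toy header + 10-byte CDB = 12
```

The point is that nothing in this framing requires special hardware: any host with an Ethernet NIC and a software initiator can speak to the storage target.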
FC SANs, on the other hand, are mostly used by large organizations, as well as those who have distributed applications requiring fast local network performance. By offering multiple data paths, FC SANs offer better application performance and offload storage functions from IP networks.
Whether the organization uses an iSCSI SAN or an FC SAN, the effectiveness and use of storage is improved as administrators can consolidate resources and establish tiered storage:
· A top tier of super-fast storage for smaller datasets and mission-critical applications.
· Followed by successive tiers of slower storage and higher capacity.
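A tiering scheme like the one above amounts to a placement policy. The toy policy below shows the shape of such a decision; the tier names and thresholds are illustrative assumptions, not vendor recommendations:

```python
# Hedged sketch: a toy tiering policy that places a dataset on the
# fastest tier it qualifies for. Thresholds and tier names are
# illustrative assumptions only.
def assign_tier(accesses_per_day: int, mission_critical: bool) -> str:
    if mission_critical or accesses_per_day > 1000:
        return "tier1-flash"      # small, fast, expensive
    if accesses_per_day > 100:
        return "tier2-sas"        # mid performance and capacity
    return "tier3-nearline"       # slow, high capacity, cheap

print(assign_tier(5000, False))   # tier1-flash
print(assign_tier(10, False))     # tier3-nearline
```

Real arrays typically automate this, demoting cold data to cheaper tiers and promoting hot data without administrator intervention.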
SAN software, of course, is also needed to organize the servers, storage devices and network for functions such as data transfer. To move LUNs and add storage to existing file systems, volume management software is required. RAID is also used in SANs; for example, software-level RAID can provide a RAID 0 stripe across LUNs.
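A RAID 0 stripe is just an address mapping: logical blocks are spread round-robin across member disks in fixed-size stripe units. The sketch below (an illustrative model, not any particular product's implementation) computes which disk and offset a logical block lands on:

```python
# Sketch of RAID 0 address mapping: logical blocks are distributed
# round-robin across member disks in fixed-size stripe units.
def raid0_map(lba: int, stripe_blocks: int, num_disks: int):
    """Return (disk index, block offset on that disk) for a logical block."""
    stripe_no = lba // stripe_blocks
    disk = stripe_no % num_disks
    offset = (stripe_no // num_disks) * stripe_blocks + (lba % stripe_blocks)
    return disk, offset

# 4-disk stripe with a 128-block stripe unit:
print(raid0_map(0, 128, 4))     # (0, 0)
print(raid0_map(128, 128, 4))   # (1, 0)
print(raid0_map(512, 128, 4))   # (0, 128)
```

Because consecutive stripe units sit on different disks, large sequential I/O is serviced by all members in parallel, which is where RAID 0's speed comes from (at the cost of zero redundancy).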
Since its early days in the 1990s and 2000s, SAN technology has evolved considerably.
· Unified Storage: Instead of separate islands of file and block storage, they are united in a single storage device.
· Virtual SAN: The virtual SAN or VSAN is the result of software-defined storage. It is implemented in conjunction with virtualization software such as a VMware or Microsoft Hyper-V hypervisor. For those in heavily virtualized server environments, VSANs offer simplified management and far greater scalability.
The SAN used to be limited by the number of storage arrays that could be hooked together within one physical data center. But virtual SANs with a simple, hypervisor-converged storage design provide a way to set up storage for huge numbers of VMs. This has streamlined storage provisioning and management in virtual server environments.
Virtual SANs can also be clustered together to provide the enterprise with greater scalability. By creating larger virtual SANs, it becomes possible to pool massive amounts of storage that can be managed centrally with relative ease. Further, the latest virtual SANs are moving away from proprietary hardware by using industry-standard server components to reduce storage CapEx and provide a vendor agnostic architecture.
Enterprises are using virtual SANs to:
- Pool internal disks for virtual server environments.
- Stretch their storage networks across data centers lying within metropolitan distances.
- Synchronously replicate data between two geographically separate sites to provide a Recovery Point Objective (RPO) of a few minutes.
- Set per-VM policies and automate provisioning.
- Create software-defined data centers that extend on-premises storage and management services across different public clouds to give a more consistent experience.