The SAN Cookbook: Best Practices for Storage Virtualization


More than 90% of virtual machines (VMs) reside on SAN storage. The rise in popularity of VMs, and the speed at which they can multiply, are forcing storage teams to find ways to automate a provisioning process that, in many shops, is still manual. At the same time, many organizations tier storage across different drive types, so storage administrators need to plan carefully, assessing performance metrics and capacity utilization to decide which storage tier best fits which application and thereby avoid application downtime.
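
That tiering decision can be reduced to a simple policy check. The sketch below is purely illustrative, assuming a made-up tier catalog and the invented names `Tier` and `pick_tier`; real IOPS and latency figures would come from your own arrays:

```python
from dataclasses import dataclass

@dataclass
class Tier:
    name: str
    max_latency_ms: float  # worst-case latency the tier can sustain
    min_iops: int          # sustained IOPS the tier can deliver

# Hypothetical tier catalog, fastest first; real figures come from your arrays.
TIERS = [
    Tier("flash", max_latency_ms=1.0, min_iops=50_000),
    Tier("sas-15k", max_latency_ms=8.0, min_iops=10_000),
    Tier("nl-sas", max_latency_ms=20.0, min_iops=2_000),
]

def pick_tier(required_iops: int, latency_budget_ms: float) -> Tier:
    """Return the cheapest tier that still meets the application's needs."""
    for tier in reversed(TIERS):  # walk from cheapest to fastest
        if tier.min_iops >= required_iops and tier.max_latency_ms <= latency_budget_ms:
            return tier
    return TIERS[0]  # nothing cheaper fits; fall back to the fastest tier

print(pick_tier(required_iops=8_000, latency_budget_ms=10.0).name)  # -> sas-15k
```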

Creating and managing VMs in a properly designed SAN environment is the key to cost savings, sustained performance, and the future growth of business-critical applications. A complete storage virtualization strategy should consider:

  • Reliability
  • Availability
  • Scalability

Why SANs?

Besides required features, performance, and cost, the criteria that typically drive customer choices are the reliability, availability, and scalability of a given storage solution. SANs are specifically designed to meet these criteria and satisfy the requirements of mission-critical business applications. The data center infrastructures built to run these applications typically handle large volumes of important data, so they must operate reliably and continuously and be able to grow to meet increasing business volume, peak traffic, and an ever-expanding number of applications and users. The key capabilities that SANs provide to meet these requirements include:

  • Storage clustering, data sharing, and flexibility of storage planning (central versus distributed): providing highly available compute systems for maintaining operational excellence and reliability
  • Ease of connectivity: industry standards such as those governed by T10 (which standardizes SCSI storage interfaces) and T11 (which standardizes Fibre Channel, HIPPI, and IPI interfaces for high-performance mass storage peripherals and networks) ensure compatibility of industry products
  • Storage consolidation: SAN architectures provide for the sharing of storage resources among multiple servers
  • LAN-free backup: data is no longer sent over the local area network (LAN), but directly to shared resources on the SAN
  • Serverless backup (disk-to-tape): backup operations and procedures are no longer managed by the server but by an intelligent agent using the Network Data Management Protocol (NDMP)
  • Ease of scalability: enabling control for ease of administration and management when new compute resources are added
  • Storage and server expansion: grouping of physical compute resources into logical service pools to provide central management of memory, CPU, and storage resources
  • Bandwidth on demand: I/O control mechanisms that allow policies to be set for various workload conditions
  • Load balancing: multipathing software, either native to the operating system or supplied as a third-party plug-in, intelligently aggregates I/O workload across multiple available paths to the SAN (see the sketch after this list)
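
To make the load-balancing idea concrete, here is a minimal sketch of round-robin path selection of the kind multipathing software performs. The `Path` and `MultipathDevice` names and the skip-dead-paths policy are our own illustrative assumptions, not any particular vendor's implementation:

```python
import itertools
from dataclasses import dataclass

@dataclass
class Path:
    """One physical route to a LUN (HBA port -> fabric -> storage port)."""
    name: str
    alive: bool = True

class MultipathDevice:
    def __init__(self, paths):
        self.paths = paths
        self._cycle = itertools.cycle(paths)

    def next_path(self) -> Path:
        """Round-robin over live paths, skipping any that have failed."""
        for _ in range(len(self.paths)):
            path = next(self._cycle)
            if path.alive:
                return path
        raise IOError("all paths to the device are down")

# Four paths: two HBAs x two storage ports, the typical dual-fabric layout.
dev = MultipathDevice([Path(f"hba{h}-sp{s}") for h in (0, 1) for s in ("a", "b")])
for _ in range(4):
    print(dev.next_path().name)  # I/O spreads evenly across all four paths
```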

Reliability

The traditional definition of reliability in a SAN means that the system must remain fault tolerant through fabric disruptions such as port login and logout anomalies, FC switch failures, or other conditions that cause a Registered State Change Notification (RSCN) storm. A storage virtualization environment must be well suited to error recovery and must guard against I/O subsystem malfunctions that could impact the underlying applications. Because VMs are shielded from SAN errors by SCSI emulation, the applications they run are likewise protected from failures of the physical SAN components.

Reliability concerns and what to look for in storage virtualization solutions:

  • Fabric disruptions: automatic failover path detection that hides the complexity of SAN multipathing
  • Data integrity and performance: a file system that includes rescan logic, auto-discovery, hiding of SAN errors, and a distributed journal for faster crash recovery
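
The "hiding of SAN errors" behavior above amounts to retrying transient faults below the guest's SCSI layer so applications never see them. Here is a minimal sketch of that idea, with a made-up `rescan_paths()` hook standing in for whatever rediscovery the platform actually performs:

```python
import time

class TransientFabricError(Exception):
    """Stands in for an RSCN storm, port logout, or similar recoverable fault."""

def rescan_paths():
    # Hypothetical hook: re-discover targets and paths after a fabric event.
    pass

def issue_io(op, retries=5, backoff_s=0.5):
    """Retry transient fabric errors so the caller (the VM) never sees them."""
    for attempt in range(retries):
        try:
            return op()
        except TransientFabricError:
            rescan_paths()                         # pick up the post-event fabric state
            time.sleep(backoff_s * (attempt + 1))  # back off before retrying
    raise IOError("fabric error persisted after all retries")
```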

Availability

Availability generally refers to the readiness of a system or application to perform tasks on request. For SAN storage, availability means that data must be accessible again in the shortest possible time after a SAN error condition, which makes redundancy a key factor in highly available I/O subsystems. A storage virtualization environment must also include a built-in multipathing algorithm that automatically detects an error condition and chooses an alternate path to continue servicing data and application requests.

Availability concerns and what to look for in storage virtualization solutions:

  • Link failures: HBA multipathing that auto-detects an alternate path
  • Storage port failures: storage port multipathing that auto-detects alternate storage ports
  • Dynamic load performance: distributed resource management
  • Fault tolerance and disaster recovery: high-availability features
  • Storage clustering: clustering support
  • Higher bandwidth: support for 4Gbps FC and higher
  • LAN-free backup: LAN-free backup enabling technologies
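
A rough sketch of the auto-failover behavior described above: when the active path errors out, I/O moves to the next healthy path without the error ever surfacing to the caller. All names here (`FailoverDevice`, `PathFailedError`) are illustrative assumptions, not a real driver interface:

```python
class PathFailedError(Exception):
    """Raised by an I/O attempt when its path has gone away."""

class FailoverDevice:
    """Active/passive multipathing: prefer one path, fail over on error."""

    def __init__(self, paths):
        self.paths = list(paths)  # ordered by preference
        self.active = 0           # index of the path currently in use

    def submit(self, io_fn):
        start = self.active
        for i in range(len(self.paths)):
            idx = (start + i) % len(self.paths)
            try:
                result = io_fn(self.paths[idx])
                self.active = idx  # keep using the path that just worked
                return result
            except PathFailedError:
                continue           # failure detected; try the next path
        raise IOError("no usable path remains")

# The error never reaches the caller as long as one path survives.
dev = FailoverDevice(["hba0-spA", "hba1-spB"])
print(dev.submit(lambda path: f"wrote block via {path}"))
```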

Scalability

In traditional terms, SAN scalability means the ability to grow your storage infrastructure with minimal or no disruption to underlying data services. In a virtualized environment, scalability means being able to grow your virtualization infrastructure by adding more virtual machines as workloads increase.

Scalability concerns and what to look for in storage virtualization solutions:

  • Server expansion: template deployment
  • Storage expansion: volume spanning; rescan or auto-detect features; volume hot-add to virtual machines
  • Storage I/O bandwidth on demand: I/O load balancing
  • Heterogeneous environments: extensive QA testing for heterogeneous support
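
The volume hot-add and auto-detect items lend themselves to a simple policy loop: watch datastore utilization and add a volume online once a threshold is crossed. The functions `utilization` and `hot_add_volume` below are invented placeholders for whatever your management platform actually exposes:

```python
GROW_THRESHOLD = 0.80  # expand when a datastore is 80% full
GROW_STEP_GB = 500     # size of each hot-added volume

def utilization(datastore: str) -> float:
    # Hypothetical stand-in for a management-API call; returns fraction used.
    return 0.85  # stubbed value so the sketch runs end to end

def hot_add_volume(datastore: str, size_gb: int) -> None:
    # Hypothetical stand-in: provision a LUN and span it into the datastore.
    print(f"hot-adding {size_gb} GB to {datastore}")

def expand_if_needed(datastores):
    """Grow any datastore that has crossed the utilization threshold."""
    for ds in datastores:
        if utilization(ds) >= GROW_THRESHOLD:
            hot_add_volume(ds, GROW_STEP_GB)  # online: VMs keep running

expand_if_needed(["ds-prod-01", "ds-prod-02"])
```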

A Senior Storage Technologist in VMware's Global Strategic Alliance Organization, Lucas Nguyen has years of experience working with leading storage vendors on storage best practices and future storage technologies. Lucas is also a veteran VMworld speaker who has led a super session on storage deployment strategies. Prior to VMware, Lucas worked as a Senior Test Architect, designing, testing, and deploying large SAN infrastructures for Brocade's scalability laboratory.
