I/O, I/O, It's Off to Virtual Work We Go

Virtualization for servers, storage and networks is not new, with years, if not decades, of proprietary implementations. It can be used to emulate, abstract or aggregate physical resources such as servers, storage and networks. What is new, and growing in popularity, are open systems-based technologies that address the sprawl of open servers, storage and networks to contain cost, ease power and cooling constraints, boost resource utilization and improve infrastructure resource management.

With the growing awareness of server virtualization (VMware, Xen, Virtual Iron, Microsoft), not to mention traditional server platform vendor hypervisors and partition managers and storage virtualization, the terms virtual I/O (VIO) and I/O virtualization (IOV) are coming into vogue as a way to reduce the I/O bottlenecks created by all that virtualization. Are IOV and VIO a server topic, a network topic or a storage topic? The answer is that, like server virtualization, IOV spans servers, storage, networks, operating systems and other infrastructure resource management domains and disciplines.

You Say VIO, I Say IOV

Not surprisingly, given how terms like grid and cluster are interchanged, mixed and tuned to meet different needs and product requirements, IOV and VIO have also been used to mean various things. They're being used to describe functions ranging from reducing I/O latency and boosting performance to virtualizing server and storage I/O connectivity.

Virtual I/O acceleration can boost performance, improve response time and latency, and essentially make an I/O operation appear to the user or application as though it were virtualized. Examples of I/O acceleration techniques, in addition to Intel processor-based technologies, include memory or server-based RAM disks and PCIe card-based flash/NAND solid state disk (SSD) devices such as those from FusionIO, which are accessible only to the local server unless exported via NFS or through a Microsoft Windows Storage Server-based iSCSI target or NAS device. Other examples include shared external flash or DDR/RAM-based SSDs such as those from Texas Memory Systems (TMS), SolidData or Curtis, along with caching appliances for block- or file-based data from Gear6 that accelerate NFS-based storage systems from EMC, Network Appliance and others.
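
To make the local versus networked latency gap concrete, here is a minimal Python sketch that times small reads against a RAM-backed file and an NFS-mounted file, the kind of quick measurement used to judge whether server-side memory or SSD acceleration is paying off. The mount points (/mnt/ramdisk/testfile and /mnt/nfs/testfile) are hypothetical stand-ins and not tied to any of the products mentioned above.

    import os
    import time

    def read_latency(path, block_size=4096, blocks=1000):
        """Time sequential reads and return the average latency per block in microseconds."""
        start = time.perf_counter()
        with open(path, "rb", buffering=0) as f:
            for _ in range(blocks):
                if not f.read(block_size):
                    f.seek(0)  # wrap around if the file is shorter than the test run
        elapsed = time.perf_counter() - start
        return (elapsed / blocks) * 1_000_000

    # Hypothetical mount points: a tmpfs/RAM disk-backed file vs. an NFS-mounted one.
    for label, path in (("RAM disk", "/mnt/ramdisk/testfile"),
                        ("NFS share", "/mnt/nfs/testfile")):
        if os.path.exists(path):
            print(f"{label}: {read_latency(path):.1f} microseconds per 4 KB read")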

Another form of I/O virtualization (IOV) is virtualizing server-to-server and server-to-storage I/O connectivity. Components for implementing IOV to address server and storage I/O connectivity include virtual adapters, switches, bridges or routers, also known as I/O directors, along with physical networking transports, interfaces and cabling.


Figure 1: Traditional separate interconnects for LANs and SANs

Virtual N_Port and Virtual HBAs

Virtual host bus adapters (HBAs) and virtual network interface cards (NICs), as their names imply, are virtual representations (Figure 2 below) of a physical HBA (Figure 1 above) or NIC, much as a virtual machine emulates or represents a physical machine on a virtualized server. With a virtual HBA or NIC, physical adapter resources are carved up and allocated in the same way virtual machines are, but instead of hosting a guest operating system such as Windows, UNIX or Linux, each allocation presents a Fibre Channel HBA or Ethernet NIC.

On a traditional physical server, the operating system would see one or more instances of Fibre Channel and Ethernet adapters, even if only a single physical adapter, such as an InfiniBand-based HCA, were installed in a PCI or PCIe slot. In the case of a virtualized server such as VMware ESX, the hypervisor can see and share a single physical adapter (or multiple adapters for redundancy and performance) with guest operating systems, which in turn see what appears to be a standard Fibre Channel adapter or Ethernet NIC using standard plug-and-play drivers.
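
As a purely conceptual sketch of that carve-up, the short Python example below models a hypervisor-style mapping of per-guest virtual adapters onto one shared physical adapter. The PhysicalAdapter and VirtualAdapter names, guests and slot labels are invented for illustration and do not correspond to any hypervisor API.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class PhysicalAdapter:
        """A shared physical port, e.g., an InfiniBand HCA or converged NIC in a PCIe slot."""
        name: str
        virtual_adapters: List["VirtualAdapter"] = field(default_factory=list)

    @dataclass
    class VirtualAdapter:
        """What a guest OS sees: a plain Fibre Channel HBA or Ethernet NIC with standard drivers."""
        guest: str
        kind: str  # "FC" or "Ethernet"
        backing: PhysicalAdapter

        def __post_init__(self):
            # Register this virtual adapter with the physical port that backs it.
            self.backing.virtual_adapters.append(self)

    # One physical adapter carved into per-guest virtual FC and Ethernet adapters.
    hca = PhysicalAdapter("pcie-slot-1")
    VirtualAdapter("vm-web01", "Ethernet", hca)
    VirtualAdapter("vm-db01", "FC", hca)
    VirtualAdapter("vm-db01", "Ethernet", hca)

    for va in hca.virtual_adapters:
        print(f"{va.guest} sees a virtual {va.kind} adapter backed by {va.backing.name}")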

Not to be confused with a virtual HBA, N_Port ID Virtualization (NPIV) is essentially a fan-out (or fan-in) mechanism that enables shared access to an adapter's bandwidth. NPIV is supported by Brocade, Cisco, Emulex and QLogic adapters and switches to enable LUN and volume masking or mapping to a unique virtual server or VM initiator when using a shared physical adapter (N_Port). NPIV works by presenting multiple virtual N_Ports with unique IDs so that different virtual machine initiators can have access and path control to a storage target while sharing a common physical N_Port on a Fibre Channel adapter.
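
The fan-out idea can be shown in miniature with a hedged Python sketch: one physical N_Port WWPN fronts several virtual WWPNs, and LUN masking is keyed on the virtual initiator rather than on the shared physical port. All WWPNs, VM names and LUN assignments below are made up for illustration.

    import itertools

    # Shared physical N_Port WWPN (made-up value).
    PHYSICAL_NPORT = "20:00:00:25:b5:00:00:01"

    # Each VM is assigned its own virtual N_Port WWPN behind the shared physical port.
    _counter = itertools.count(0x10)
    virtual_ports = {vm: f"20:00:00:25:b5:00:00:{next(_counter):02x}"
                     for vm in ("vm-db01", "vm-web01", "vm-app01")}

    # LUN masking is keyed on the virtual initiator WWPN, not the shared physical one,
    # so each VM can be granted its own path to storage targets.
    lun_masking = {
        virtual_ports["vm-db01"]:  ["LUN 0", "LUN 1"],
        virtual_ports["vm-web01"]: ["LUN 2"],
    }

    for vm, wwpn in virtual_ports.items():
        luns = lun_masking.get(wwpn, [])
        print(f"{vm} logs in as {wwpn} (behind {PHYSICAL_NPORT}) -> {luns or 'no LUNs masked'}")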

The business and technology value proposition of converged I/O networks and virtual I/O is similar to that of server and storage virtualization. Benefits of IOV include:

  • Doing more with the resources (people and technology) you have, or reducing costs
  • A single (or pair for high availability) interconnect for networking and storage I/O
  • Reduction of power, cooling, floor space and other green friendly benefits
  • Simplified cabling and reduced complexity of server to network and storage interconnects
  • Boosting clustered and virtualized server performance, maximizing PCI or mezzanine I/O slots
  • Rapid re-deployment to meet changing workload and I/O profiles of virtual servers
  • Scaling I/O capacity to meet high-performance and clustered server or storage applications
  • Leveraging common cabling infrastructure and physical networking facilities


Figure 2: Example of a unified or converged data center fabric or network

In Figure 2, you see an example of virtual HBAs and NICs attached to a switch or I/O director that in turn connects to Ethernet-based LANs and Fibre Channel SANs for network and storage access. Figure 3 shows a comparison of various I/O interconnects, transports and protocols to help put into perspective where various technologies fit. You can learn more about storage networks, interfaces and protocols in chapters 4 (Storage and I/O Networks), 5 (Fiber Optic Essentials) and 6 (Metropolitan and Wide Area Networks) in my book, “Resilient Storage Networks” (Elsevier).


Figure 3: Positioning of data center I/O protocols, interfaces and transports

(Continued on Page 2: Data Center Ethernet and FCoE)

