Storage Basics: Deciphering SESAs (Strange, Esoteric Storage Acronyms), Part 2

In the first article in our Storage Basics: Deciphering Strange, Esoteric Storage Acronyms (SESAs) series, we examined some of the more common but often little-understood terms bandied about in the storage industry, including FCIP, iFCP, SoIP, NDMP, and SMI-S. In this article, we'll continue with a look at four more acronyms: IB (InfiniBand), FSPF, VI, and DAFS.

First, A Bit on Bus Architecture

Developments in I/O architecture are not difficult to trace. In the early 1980s we saw the introduction of 8-bit expansion slots on the 8088-based IBM PC XT. Since that time we have seen the 16-bit Industry Standard Architecture (ISA) bus, the 32-bit Micro Channel Architecture (MCA) bus on IBM PS/2 systems, the Extended Industry Standard Architecture (EISA) bus, and the Video Electronics Standards Association (VESA) local bus that followed in 1992. All of this brings us to where we are today: the 32-bit Peripheral Component Interconnect (PCI) bus and the more recent 64-bit PCI eXtended (PCI-X) bus.

What does the history of I/O architecture have to do with modern storage needs? Quite a bit, actually. Since 1992, PCI has been the standard bus architecture used in servers. Reviewing the history of I/O bus development, however, it is clear that the I/O bus has advanced at a sedate pace compared with other technologies such as CPU power and memory bandwidth. With CPU advancements outpacing I/O bus development, the result has been a performance mismatch and a system bottleneck.

In many organizations today, the PCI bus is overwhelmed by the use of InterProcess Communication (IPC) or clustering cards, Fibre Channel cards, and a host of other high-bandwidth cards installed in a single server. Because PCI uses a shared bus architecture, all devices connected to the bus must share a fixed amount of bandwidth, which means that as more devices are added to the bus, the bandwidth available to each device decreases.
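
To make that bandwidth sharing concrete, here is a rough back-of-envelope sketch in C. It simply divides the theoretical peak of a 32-bit/33 MHz PCI bus, roughly 133 MB/s, among a varying number of active cards. The device counts are illustrative assumptions, not figures from any particular server, and real sustained throughput falls short of the theoretical best case.

    #include <stdio.h>

    /* Rough illustration of shared-bus bandwidth division.
     * A 32-bit/33 MHz PCI bus peaks at about 133 MB/s in theory;
     * the device counts below are arbitrary examples. */
    int main(void)
    {
        const double pci_peak_mb_s = 133.0;  /* 4 bytes x 33 MHz */

        for (int devices = 1; devices <= 4; devices++) {
            printf("%d active device(s): ~%.0f MB/s each, best case\n",
                   devices, pci_peak_mb_s / devices);
        }
        return 0;
    }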

It's not only speed that limits the usefulness of the PCI bus in modern servers, but also its scalability and fault tolerance. Scaling PCI requires expensive and awkward bridge chips on the board. As for fault tolerance, in most situations a failed PCI expansion card means the server must be taken offline to replace it. This introduces a single point of failure and the potential for server downtime.

Enter InfiniBand (IB)

This brings us to InfiniBand, an I/O architecture designed to address the shortcomings of the PCI bus and meet the I/O needs of modern organizations. InfiniBand's switched fabric architecture takes a completely different approach from the limited capabilities of the shared bus.

InfiniBand uses a point-to-point switched I/O architecture. Each InfiniBand link connects only two devices, giving the devices at either end of the link exclusive access to the full data path. To expand beyond point-to-point communication, switches are used.

Interconnecting these switches creates a communication fabric, and as more switches are added, the aggregate bandwidth of the InfiniBand fabric increases. Adding switches also provides a greater level of redundancy, because multiple data paths become available between devices. The following table highlights the differences between the PCI shared bus and the InfiniBand switched fabric.

Shared Bus vs. Switched Fabric

                        Shared Bus (PCI)    Switched Fabric (InfiniBand)
    Topology            Shared              Switched
    Connection Points   Minimal             High
    Signal Length       Inches              Kilometers
    Fault Tolerant      No                  Yes
    Scalable            No                  Yes
    Reliability         Minimal             Excellent
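
To ground the fabric model in something runnable, the short C sketch below shows how a host application might enumerate its InfiniBand adapters and check the state of each adapter's first port using the libibverbs verbs API. This is an illustrative assumption on my part rather than anything covered in the article, and error handling is kept to a bare minimum (compile with -libverbs).

    #include <stdio.h>
    #include <infiniband/verbs.h>

    /* Minimal sketch: list local InfiniBand devices and report the
     * state of port 1 on each. Assumes libibverbs is installed. */
    int main(void)
    {
        int num_devices = 0;
        struct ibv_device **devs = ibv_get_device_list(&num_devices);

        if (devs == NULL || num_devices == 0) {
            fprintf(stderr, "No InfiniBand devices found\n");
            return 1;
        }

        for (int i = 0; i < num_devices; i++) {
            struct ibv_context *ctx = ibv_open_device(devs[i]);
            if (ctx == NULL)
                continue;

            struct ibv_port_attr port;
            if (ibv_query_port(ctx, 1, &port) == 0) {
                printf("%s: port 1 %s, LID %u\n",
                       ibv_get_device_name(devs[i]),
                       port.state == IBV_PORT_ACTIVE ? "is ACTIVE" : "is not active",
                       (unsigned)port.lid);
            }
            ibv_close_device(ctx);
        }

        ibv_free_device_list(devs);
        return 0;
    }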
