Industry Interview - Asaf Somekh, Voltaire
There are many emerging technologies that promise to improve the storage environment, but few are anticipated as much as InfiniBand, an interconnect architecture which boasts greater speed, efficiency, and reliability than existing interconnect technologies such as PCI.
As products that use InfiniBand start to become available, many are predicting that the technology will become the standard interconnect in a range of computing and storage devices. However, as with other emerging technologies, InfiniBand is not without its hurdles. To get the low-down on InfiniBand, and what it will mean to the storage networking landscape, we talked to Asaf Somekh, director of marketing for Voltaire, a leading InfiniBand developer and a member company of the InfiniBand Trade Association.
[ESF] Hello Asaf, thanks for taking the time to talk with us.
[Asaf Somekh] You are welcome.
[ESF] Voltaire is regarded as one of the leading companies in InfiniBand development. What, in layman's terms, is InfiniBand, and what does it mean to us in terms of performance increases?
[Asaf Somekh] The InfiniBand architecture is a new I/O industry standard developed by all the leading system vendors. It is a high-speed switched fabric interconnect that offers links from 2.5 Gb/s to 30 Gb/s - link widths are multiples of a 2.5 Gb/s lane, and 10 Gb/s links (InfiniBand 4X, or four lanes) are available now. InfiniBand is specifically designed to solve the many inefficiencies that exist in data centers today, many of which result from the use of Ethernet and TCP/IP for high-speed links.
TCP/IP, which is implemented in software run by the operating system, creates huge overhead on servers' CPUs, sometimes consuming 80%-90% of the CPU cycles. Because InfiniBand is designed for this particular environment, the InfiniBand protocol stack was simplified and implemented in the InfiniBand silicon, freeing the CPU from the TCP/IP overhead. An additional advantage of InfiniBand is its RDMA capability, which essentially allows a server to directly access the memory of another server 10 times faster than Ethernet offers. These fundamentals enable InfiniBand to boost performance dramatically.
Recently, Intel and IBM published a DB2 benchmark in which a DB2 configuration showed a 100% performance increase when moved to InfiniBand, while CPU utilization went down from 100% to 8% - meaning the CPU could perform other tasks in parallel. Another very important virtue of InfiniBand is that it is the only technology that enables building mid-range and high-end server systems from commodity servers as building blocks, which reduces the cost of such configurations dramatically (60%-80%) and provides a much more flexible and scalable solution than classic large monolithic SMP servers.
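The RDMA capability Somekh describes is exposed to applications through a verbs-style API. The sketch below is a minimal illustration using the open libibverbs interface rather than anything Voltaire-specific; it assumes the queue pair has already been connected and that the remote buffer's address and access key were exchanged out of band, and the helper function name is ours, not part of any vendor API.

    #include <infiniband/verbs.h>
    #include <stdint.h>

    /* Illustrative helper (not a vendor API): post a one-sided RDMA
     * write.  The channel adapter copies local_buf straight into the
     * remote server's memory with no remote CPU involvement, which is
     * why RDMA avoids the TCP/IP overhead described above. */
    static int post_rdma_write(struct ibv_qp *qp, struct ibv_mr *mr,
                               void *local_buf, uint32_t len,
                               uint64_t remote_addr, uint32_t rkey)
    {
        struct ibv_sge sge = {
            .addr   = (uintptr_t)local_buf,
            .length = len,
            .lkey   = mr->lkey,              /* local key from ibv_reg_mr() */
        };
        struct ibv_send_wr wr = {
            .wr_id      = 1,
            .sg_list    = &sge,
            .num_sge    = 1,
            .opcode     = IBV_WR_RDMA_WRITE, /* one-sided: remote CPU is not notified */
            .send_flags = IBV_SEND_SIGNALED, /* request a local completion event */
        };
        wr.wr.rdma.remote_addr = remote_addr; /* learned out of band */
        wr.wr.rdma.rkey        = rkey;        /* remote memory registration key */

        struct ibv_send_wr *bad_wr = NULL;
        return ibv_post_send(qp, &wr, &bad_wr); /* returns 0 on success */
    }

The completion would then be reaped locally with ibv_poll_cq(); the point of the pattern is that the kernel and the remote host's CPU stay out of the data path entirely, which is where the CPU-utilization savings in the benchmark above come from.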
[ESF] What impact will InfiniBand have on storage networking?
[Asaf Somekh] The impact on SANs will be gradual. Initially, InfiniBand will be deployed in server environments and NAS solutions. In later phases we will see InfiniBand penetrate the SAN as well, but it will coexist with Fibre Channel (FC) solutions in the short term.
[ESF] How will it change the equipment we already use?
[Asaf Somekh] NAS vendors have already started using InfiniBand as a backplane technology for their NAS appliances (which are clusters of servers). Initially, these appliances may maintain their existing external interface, which is Ethernet based, but as InfiniBand islands of servers appear in data centers, it will make a lot of sense to add InfiniBand ports to the NAS appliances, allowing the server clusters to communicate with their storage directly over InfiniBand.
In the SAN environment the process will be slower, but we see a similar path, with InfiniBand evolving from a backplane technology into an external one. By that time, InfiniBand will be used as a unified fabric in environments that also rely on it for I/O.