Industry Interview – Asaf Somekh, Voltaire


There are many emerging technologies that promise to improve the storage environment, but few are anticipated as much as InfiniBand, an interconnect architecture which boasts greater speed, efficiency, and reliability than existing interconnect technologies such as PCI.

As products that use InfiniBand start to become available, many are predicting that the technology will become the standard interconnect in a range of computing and storage devices. However, as with other emerging technologies, InfiniBand is not without its hurdles. To get the low-down on InfiniBand and what it will mean for the storage networking landscape, we talked to Asaf Somekh, director of marketing for Voltaire, a leading InfiniBand developer and a member company of the InfiniBand Trade Association.

[ESF] Hello Asaf, thanks for taking the time to talk with us.

[Asaf Somekh] You are welcome.

[ESF] Voltaire is regarded as one of the leading companies in InfiniBand development. What, in layman’s terms, is InfiniBand, and what does it mean to us in terms of performance increases?

[Asaf Somekh] InfiniBand architecture is a new I/O industry standard developed by all the leading system vendors. It is a high-speed switched-fabric interconnect that offers links from 2.5 Gb/s to 30 Gb/s; 10 Gb/s links (InfiniBand 4X) are available now. InfiniBand is a technology specifically designed to solve the many inefficiencies that exist in data centers today. Many of these problems are the result of using Ethernet and TCP/IP for high-speed links.

TCP/IP, which is implemented in software run by the operating system, creates huge overhead on the servers' CPUs, sometimes consuming 80%-90% of the CPU cycles. Because InfiniBand is designed for this particular environment, the InfiniBand protocol stack was simplified and implemented in the InfiniBand silicon, freeing the CPU from the TCP/IP overhead. An additional advantage of InfiniBand is its RDMA capability, which essentially allows a server to directly access the memory of another server 10 times faster than Ethernet offers. These fundamentals enable InfiniBand to boost performance dramatically.
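The contrast Somekh draws between the kernel TCP/IP path and RDMA can be sketched conceptually. The Python below is purely illustrative (real RDMA goes through a verbs API and InfiniBand hardware, not Python): it counts the intermediate buffer copies a software TCP/IP stack performs on each transfer versus RDMA's single direct placement of data into pre-registered remote memory.

```python
# Conceptual illustration only -- NOT real RDMA code. It models why a
# software TCP/IP stack burns CPU cycles (repeated buffer copies) while
# an RDMA write places data into remote memory in one step.

def tcp_style_send(payload: bytes) -> tuple[bytes, int]:
    """Simulate the kernel TCP/IP path and count buffer copies."""
    copies = 0
    socket_buf = bytes(payload)                  # user buffer -> kernel socket buffer
    copies += 1
    wire = bytes(socket_buf)                     # socket buffer -> NIC / wire
    copies += 1
    remote_socket_buf = bytes(wire)              # wire -> remote kernel socket buffer
    copies += 1
    remote_user_buf = bytes(remote_socket_buf)   # socket buffer -> remote user buffer
    copies += 1
    return remote_user_buf, copies

def rdma_style_write(payload: bytes, remote_memory: bytearray, offset: int) -> int:
    """Simulate an RDMA write: one placement into pre-registered remote memory."""
    remote_memory[offset:offset + len(payload)] = payload
    return 1  # single placement, no intermediate copies, no remote CPU involvement

remote = bytearray(64)                 # stands in for a registered remote memory region
data = b"block-42"
_, tcp_copies = tcp_style_send(data)
rdma_copies = rdma_style_write(data, remote, 0)
print(tcp_copies, rdma_copies)         # the TCP path needs several copies; RDMA needs one
```

Each copy in the TCP path consumes CPU cycles and memory bandwidth on both hosts, which is the overhead Somekh says can eat 80%-90% of a server's CPU; the RDMA path bypasses the remote CPU entirely.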

Recently Intel and IBM DB2 published a benchmark in which a DB2 configuration saw a 100% performance increase when moved to InfiniBand, while CPU utilization dropped from 100% to 8%, meaning it could perform other tasks in parallel. Another very important virtue of InfiniBand is that it is the only technology that enables building mid-range and high-end server systems from commodity servers as building blocks, which reduces the cost of such configurations dramatically (60%-80%) and provides a much more flexible and scalable solution than classic large monolithic SMP servers.

[ESF] What impact will InfiniBand have on storage networking?

[Asaf Somekh] The impact on SANs will be gradual. Initially InfiniBand will be deployed in server environments and NAS solutions. In later phases we will see InfiniBand penetrate the SAN as well, but InfiniBand will coexist with Fibre Channel (FC) solutions in the short term.

[ESF] How will it change the equipment we already use?

[Asaf Somekh] NAS vendors have already started using InfiniBand as a backplane technology for their NAS appliances (which are clusters of servers). Initially these appliances may maintain their existing Ethernet-based external interface, but as InfiniBand islands of servers appear in data centers, it will make a lot of sense to add InfiniBand ports to the NAS appliances, allowing the server clusters to communicate with their storage directly over InfiniBand.

In the SAN environment the process will be slower, but we see a similar path, with InfiniBand evolving from a backplane technology into an external one. By that time, InfiniBand will serve as a unified fabric in some environments, handling I/O traffic as well.

[ESF] Realistically, how long will it be before we see InfiniBand implemented in production environments?

[Asaf Somekh] Q2 2003.

[ESF] Where do you think InfiniBand fits with other technologies such as iSCSI, Fibre Channel and so on?

[Asaf Somekh] InfiniBand can and should coexist with these technologies. Products like our nVigor family of routers will allow this to happen. Only in the second phase of InfiniBand adoption will you see environments where InfiniBand replaces these technologies.

[ESF] You have recently participated in a series of road shows run by the InfiniBand Trade Association. How did you find people's understanding of and approach to InfiniBand technology?

[Asaf Somekh] In many cases people were surprised to see how far along the technology is. Most thought it was still "whiteboard hype," but seeing the list of companies with actual products coming to market had its impact. In general, there is great receptiveness to InfiniBand, even though it is a new technology, simply because InfiniBand enables IT managers to run their data centers more efficiently.

[ESF]Both Microsoft and Intel have recently been seen to back away from InfiniBand as a technology. How do you interpret these moves and what does it mean to the InfiniBand industry?

[Asaf Somekh] Intel is still very committed to making the InfiniBand market happen through software initiatives (dozens of software engineers are working on InfiniBand) and marketing initiatives. The recent Intel Developer Forum (IDF) in September had a dedicated InfiniBand track, and InfiniBand was the most visible technology at the show. Intel's decision to stop production of its silicon was due to the fact that its 4X silicon was over a year behind the IBM and Mellanox silicon. The decision had no practical impact on the companies developing InfiniBand systems, because they did not rely on Intel silicon; it was too late for their plans.

Microsoft was planning to support InfiniBand only in one of the later versions of .NET. With .NET availability slipping, this caused difficulties for the adoption of InfiniBand in Windows environments. The latest announcement from Microsoft did say they are pushing back InfiniBand support, but at the same time they also said they would certify third-party InfiniBand solutions for W2K and .NET.

[ESF] With the newest versions of PCI-X offering InfiniBand-like performance, is there not a risk that InfiniBand will become a niche technology rather than a mainstream one?

[Asaf Somekh] PCI-Express is designed for on-board communication, not for communication between systems, which is what InfiniBand is for.

[ESF] Do you think the success of InfiniBand may end up having more to do with the state of the economy than the viability and advantages of the technology?

[Asaf Somekh] The state of the economy has a lot of impact.

[ESF] On a broader note, some analysts have branded the storage industry as simply "hype." What would you say to that?

[Asaf Somekh] Storage needs keep doubling with little impact from the economy; I wouldn't call that hype.

[ESF] Finally, over the next few years, what do you see as the main challenges facing the storage industry?

[Asaf Somekh] Solutions for a distributed environment are still far from being optimal.

[ESF] Thanks for your time Asaf.

[Asaf Somekh] Thank you.
