Mellanox Marries InfiniBand, Ethernet
Top InfiniBand chipmaker Mellanox Technologies unveiled its fourth-generation adapter architecture at Supercomputing 2006 in Tampa, Fla., this week.
Mellanox, which supplies silicon to Cisco and Voltaire, among others, took the wraps off ConnectX, which adds connectivity to 1 and 10 Gigabit Ethernet fabrics alongside support for the SCSI, iSCSI and Fibre Channel storage protocols.
Thanks to server virtualization, storage growth and clustered multi-core servers, demand is growing for faster and more reliable data center communications fabrics, Mellanox says.
While Mellanox chose SC06, a high-performance computing show, to unveil its latest architecture, Dan Tuchler, senior director of product management, notes that InfiniBand "is really starting to stick as a mainstream technology."
For enterprise customers, the advantages that come from a performance boost can be dramatic. Goldman Sachs, for example, notes that every millisecond gained in its program trading applications is worth $100 million a year.
"Customers porting applications to 1U servers and server blades with little room for network connectivity are deploying InfiniBand adapters in increasing numbers," said Brian Garrett, an analyst with Enterprise Strategy Group. "As the market for high-bandwidth, low-latency Mellanox silicon expands out of the high-performance computing market and into the commercial computing and storage markets, ESG believes that Mellanox has made a brilliant move by incorporating 10 Gigabit Ethernet support."
The new ConnectX architecture will enable products that support up to 40Gb/s InfiniBand data rates (initial products will support 10Gb/s and 20Gb/s data rates), with latencies as low as 1 microsecond, in addition to 1 and 10 Gigabit Ethernet. It supports protocols using IPv4 or IPv6, as well as popular data networking (TCP/IP, sockets, MPI) and storage protocols. The broad support allows maximum flexibility in equipment choices for enterprise data centers, high-performance computing and embedded applications, Mellanox says.
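As a rough illustration of what those figures mean on the wire, a message's transfer time is approximately the base latency plus its size divided by the link's data rate. The sketch below is a back-of-the-envelope model using the rates and 1-microsecond latency cited above, not a benchmark of the actual hardware; the 4 KB message size is an arbitrary illustrative choice.

```python
# Back-of-the-envelope model: transfer time ~ base latency + size / rate.
# The 10/20/40 Gb/s rates and 1-microsecond latency come from the article;
# the 4 KB message size is an illustrative assumption.

def transfer_time_us(msg_bytes, rate_gbps, latency_us=1.0):
    """Approximate one-way transfer time in microseconds."""
    wire_time_us = (msg_bytes * 8) / (rate_gbps * 1e9) * 1e6
    return latency_us + wire_time_us

for rate in (10, 20, 40):  # InfiniBand data rates cited for ConnectX
    t = transfer_time_us(4096, rate)
    print(f"4 KB message at {rate} Gb/s: {t:.2f} microseconds")
```

Note how the fixed latency term comes to dominate as the data rate climbs, which is why the 1-microsecond figure matters as much to small-message workloads (such as program trading) as raw bandwidth does to large transfers.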
"While low-power multi-core CPUs and virtualization promise to reduce power and thermal loads, they are only part of the solution," says Gartner research vice president Joe Skorupa. "Servers, storage and networking must be integrated into a single design if data center scalability and agility are to be achieved."
In stateless offload mode, ConnectX 10 Gigabit Ethernet products integrate seamlessly with existing operating systems and applications, while protocols such as Remote Direct Memory Access (RDMA) can bring some of the performance advantages of InfiniBand to the Ethernet environment. And with an architecture designed to support 40Gb/s InfiniBand interfaces, data center managers have an upgrade path for the foreseeable future.
Next-generation InfiniBand products using the ConnectX architecture (including dual-port silicon devices and adapter cards supporting 10 and 20Gb/s InfiniBand speeds) will be available in the first quarter of 2007. Also available will be dual-port, multi-protocol silicon devices and adapter cards supporting 20Gb/s InfiniBand and RDMA and transport acceleration over 1 or 10 Gigabit Ethernet. 40Gb/s products are expected to reach the market in 2008.