Network convergence promises to be a good thing for data centers. Connecting SAN and LAN traffic on a common network would reduce cabling and other connection components, saving money and simplifying management. A common, standardized fabric would also facilitate the dynamic allocation of pooled storage resources for virtualization.
Both Brocade and Cisco have introduced data center switching platforms that promise Fibre Channel/Ethernet convergence.
Brocade’s DCX Backbone can field as many as 768 8Gbps Fibre Channel ports, with 10Gbps Ethernet ports to come. Cisco’s Nexus 7000 is a high-capacity core switch with up to 512 10Gbps Ethernet ports. Both platforms provide throughput measured in terabits per second, and both will soon support Fibre Channel over Ethernet (FCoE), the still-developing ANSI standard that enables SAN/LAN convergence.
Though still a work in progress, FCoE is clearly going to happen, and the standard should be complete this summer. It has the full support and involvement of the big storage networking players and is advancing fast enough to support products by the end of this year. What’s not clear is who will adopt converged data center network topologies, how soon, and to what extent.
Fibre Channel Extended
FCoE is Fibre Channel Protocol (FCP) carried on Ethernet cabling and switches. It differs from Fibre Channel (FC) only in the bottom protocol layers (physical and link). To match Fibre Channel’s reliability and throughput, FCoE depends on a set of enhancements to the IEEE 802.3 Ethernet standard referred to as Converged Enhanced Ethernet (CEE) or Data Center Ethernet (DCE). The Ethernet standard changes are also still in development, but are progressing in tandem with FCoE. The enhancements will give Ethernet quality of service (QoS) levels, priority groups, efficient multipathing, lower latencies, and a flow-control mechanism that prevents dropped packets. (Dropping packets in response to network congestion is one of Ethernet’s biggest failings from a storage networking point of view.)
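The layering can be pictured in a few lines of code. The sketch below is conceptual, not a spec-accurate implementation of the draft standard: a complete Fibre Channel frame simply rides as the payload of an ordinary Ethernet frame, identified by the FCoE EtherType 0x8906 (the real encapsulation adds version, SOF, and EOF fields around the FC frame).

```python
import struct

FCOE_ETHERTYPE = 0x8906  # assigned EtherType identifying FCoE traffic


def encapsulate(fc_frame: bytes, src_mac: bytes, dst_mac: bytes) -> bytes:
    """Wrap a Fibre Channel frame in an Ethernet header (simplified:
    the real FCoE header also carries version, SOF, and EOF fields)."""
    eth_header = dst_mac + src_mac + struct.pack("!H", FCOE_ETHERTYPE)
    return eth_header + fc_frame


# A maximum-size FC frame (2,148 bytes including FC headers and CRC)
# exceeds the standard 1,500-byte Ethernet payload, which is why FCoE
# needs jumbo-capable gear -- plain Ethernet can't carry it unchanged.
frame = encapsulate(b"\x00" * 2148, b"\xaa" * 6, b"\xbb" * 6)
```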
One of the biggest and most immediate benefits of connecting servers to storage with FCoE rather than FC is that it cuts cabling in half. An FCoE converged network adapter (CNA) acts as both an Ethernet NIC and an FC HBA, so a server accessing storage over FCoE needs only a pair of CNAs instead of redundant pairs of NICs and FC HBAs. The cost and cable reduction is a big win, particularly for blade server setups.
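The back-of-the-envelope arithmetic is straightforward. A sketch, assuming the redundant-pair configuration described above (the rack density of 40 servers is purely illustrative):

```python
def cables_per_server(converged: bool) -> int:
    """Cable count per server under each adapter configuration."""
    if converged:
        return 2       # one redundant pair of FCoE CNAs
    return 2 + 2       # a pair of Ethernet NICs plus a pair of FC HBAs


servers = 40  # hypothetical rack density, for illustration only
legacy_cables = servers * cables_per_server(converged=False)  # 160
fcoe_cables = servers * cables_per_server(converged=True)     # 80
```

Halving the cable count also halves the associated switch ports, transceivers, and adapters, which is where the cost savings come from.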
Not An ‘iSCSI Killer’
Though initially branded as an “iSCSI killer,” FCoE is not aimed directly at iSCSI. FCoE is a low-level protocol designed for efficiency and low latency, and it is not routable: it doesn’t leave the data center, whereas iSCSI runs over TCP/IP and can travel long distances. FCoE is an adjunct to, and potentially a replacement for, straight Fibre Channel.
FCoE proponents are well aware of the resources data centers have invested in Fibre Channel SAN technology. Thus the mantra: Fibre Channel over Ethernet does not replace Fibre Channel; it extends it. You will be able to connect an FCoE network to an FC SAN, and they will talk. Switches that provide both types of connection will be among the first products available, providing the necessary protocol translation at the interface. The idea is that you can start adding FCoE-enabled switches at the edges of the data center and work in over time as existing servers and storage units serve their normal life span. Existing FC SANs and FCoE SANs can coexist and be managed with the same software.
Both Brocade and Cisco provide for a graduated path to convergence by supporting coexistence of Fibre Channel and FCoE-enhanced Ethernet. The DCX platform will support in the same chassis both high-speed Fibre Channel and, when they become available, 10-Gbps FCoE ports. Cisco will continue to support Fibre Channel with its existing MDS platform, and envisions convergence with a combination of Nexus and MDS hardware.
The Road Ahead
Deepak Munjal, Cisco’s Senior Marketing Manager for Data Center Solutions, expects FCoE plug-in modules for the Nexus 7000 to arrive early next year, with cards for the smaller Nexus units (rack and blade models) even before the end of this year. He sees high-end data centers as the sweet spot for FCoE implementation. “Convergence is something customers are asking for. If you can simplify and consolidate, you save money,” he said.
However, data centers are not going to rip out perfectly good Fibre Channel equipment and replace it with FCoE equivalents just for the sake of simplification, and array vendors may stay with FC for a long time. Munjal sees it as an evolutionary process, starting with units like FCoE-enabled Nexus racks at the SAN edges — where the cabling densities are highest — then moving FCoE switching into the aggregation layer, and eventually to the storage units themselves. It’s a matter of time before disk array vendors come on board with FCoE interfaces.
There is no one right answer, but even if SAN storage units remain Fibre Channel, a data center will still benefit from having FCoE just at the SAN edges. According to Munjal, “as you move the FC/Ethernet interface closer to the storage units, you reduce cost and complexity.”
Price may also prove attractive. Said Munjal, “Ethernet, even with FCoE, will come at a price point that’s much more attractive than Fibre Channel. Historically, Ethernet has always found a way to compete and win the price war.”