Storage companies began rolling out their first 8 Gigabit per second Fibre Channel products this summer, but customers won’t get their hands on the devices until early next year, and complete systems composed of host bus adapters (HBAs), switches and storage arrays will take a lot longer than that.
One reason for the long lag is the rigorous process imposed on new products before they reach users, said Tam Dell’Oro, founder and president of Dell’Oro Group. “The testing process typically takes six months or more,” she said. “It’s lengthy and thorough.”
“This equipment has to be highly robust — super, super reliable,” Dell’Oro explained, “and it has to be able to operate with a bunch of other stuff.”
As a result, adoption of new technology like 8Gbps Fibre Channel can take years. For example, according to Dell’Oro, switches and HBAs incorporating the technology’s predecessor, 4Gbps, began reaching users in 2004, but only this year has it begun to dominate shipments of new equipment. In 2007, 97 percent of Fibre Channel switches and 80 percent of HBAs shipped will use 4Gbps technology, she said.
Storage arrays, she added, are usually slower than other system components when it comes to falling in line with an evolving Fibre Channel technology. “We didn’t see the first four-gig storage arrays come to market until the end of 2006,” she said, “and at that time, Hewlett-Packard, which is a pretty significant manufacturer of storage equipment, still did not have a four-gig product out.”
Historically, new generations of Fibre Channel technology have been shipped every three to four years. “That’s the cycle we’re on again,” observed Scott McIntyre, vice president for software marketing at Emulex, which announced several new 8Gbps products last summer, including a family of HBAs, custom mezzanine cards for server blades, and an embedded I/O controller. Emulex’s main competitor, QLogic, has also rolled out 8-gig components, and Brocade has unveiled 8-gig blades for its 48000 Director.
McIntyre noted that the ramp-up for 4Gbps was the fastest in the history of Fibre Channel. “That indicates that there’s a strong and consistently growing demand for I/O throughput,” he said.
Virtualization Spurs ‘Throughput Hunger’
One of the drivers of that throughput hunger is the spread of virtualization technology. “What we’re seeing is very strong adoption of server virtualization technologies by our enterprise customers,” McIntyre said. “That means they’re stacking up more and more virtual machines and more and more applications on a single server, and in many cases driving them to larger servers to accommodate many more virtual machines, and that’s obviously creating a higher demand for I/O throughput on each server.”
Virtualization’s popularity is feeding a phenomenon called “fan-out,” in which more servers connect through the same port on a switch, explained Brian Garrett, an analyst at Enterprise Strategy Group. “Companies are consolidating lots of servers to share the same storage subsystem, either physically or virtually,” he said.
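To make the arithmetic behind that throughput hunger concrete, here is a minimal back-of-envelope sketch in Python. The server count, VM count, per-VM traffic figure and fan-out ratio are hypothetical assumptions for illustration only (they do not come from the article), and nominal link rates are treated as raw capacity, ignoring encoding overhead.

```python
# Back-of-envelope model of "fan-out": several consolidated servers,
# each hosting multiple virtual machines, sharing one switch port.
# All figures below are illustrative assumptions, not vendor data.

GBPS_TO_MBPS = 1000  # treat nominal link rate as raw capacity (ignores encoding overhead)

def port_utilization(servers, vms_per_server, mbps_per_vm, link_gbps):
    """Return aggregate demand (Mbps) and utilization of a shared switch port."""
    demand = servers * vms_per_server * mbps_per_vm
    capacity = link_gbps * GBPS_TO_MBPS
    return demand, demand / capacity

# Hypothetical scenario: 4 consolidated servers fanned out to one switch port,
# 10 VMs per server, each VM averaging 120 Mbps of storage traffic.
for link in (4, 8):
    demand, util = port_utilization(servers=4, vms_per_server=10,
                                    mbps_per_vm=120, link_gbps=link)
    print(f"{link} Gbps link: {demand} Mbps demanded, {util:.0%} utilized")

# Expected output (illustrative):
# 4 Gbps link: 4800 Mbps demanded, 120% utilized  -> saturated
# 8 Gbps link: 4800 Mbps demanded, 60% utilized   -> headroom restored
```

Under these assumed numbers, the same consolidated workload that oversubscribes a 4Gbps port fits comfortably on an 8Gbps one, which is the pressure Garrett and McIntyre describe.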
Also, demand for online data continues to grow rapidly, he said. “Growth of online data is doubling every 18 months to two years,” said Garrett. “So there’s more data to manage within the storage area network, more data that needs to be backed up and replicated, so that creates a lot of demand for higher throughput.”
“We expect — based on that and based on our experience with the transition to four-gig a couple of years ago — that there’ll be a healthy demand for eight gig as it arrives next year,” he said.
Early Adopters
Who’s likely to jump first on the 8Gbps express train when it pulls into the station next year? Among the Fortune 100, the most likely candidates will be financial services and pharmaceutical companies, McIntyre predicted. “However,” he added, “the great thing about storage is there’s demand for it across every kind of enterprise and every kind of industry.”
An organization’s pain threshold will also influence the speed at which it embraces the new Fibre Channel technology, according to Greg Schulz, senior analyst and founder of the StorageIO Group. SAN operators struggling with bottlenecks, bandwidth constraints and network consolidation projects are all likely early adopters of 8Gbps, he predicted.
High-performance users will also be quick to pull the trigger on the new technology, as will bandwidth-sensitive environments such as video production and distribution and large-scale backups.
“There are a lot of guys who spend a lot of time constantly tuning their backup network to do their backups quicker to reduce the risk of losing data and increase the availability of applications,” said Garrett. “This extra bandwidth is the kind of tool they like to use to do that.”
That thirst for extra bandwidth has driven some industries to turn to 10Gbps Fibre Channel as a solution. That technology, though, is outside the 2-4-8 Gbps evolutionary path, so it offers no backwards compatibility.
“It’s pretty much a non-starter because it’s such an oddball niche,” said Schulz. The technology is primarily used for tying switches together to form a backbone. “Ten-gig Fibre Channel doesn’t interoperate with any other of the Fibre Channel modes,” Schulz said. “It’s very closed.”
FCoE a Wild Card
Although network watchers predict that 8Gbps adoption will follow historical patterns, there is a wild card in play: Fibre Channel over Ethernet (FCoE), which should make waves on the network scene at about the same time Fibre Channel takes its next evolutionary step.
“Assuming Fibre Channel over Ethernet does get approved, which should be in about three years, maybe sooner,” Schulz said, “it will arrive about the time people have to make the decision to go from eight-gig to 16-gig Fibre Channel.”
Further complicating matters could be the emergence of 10Gbps Ethernet iSCSI. Both FCoE and Ethernet iSCSI are potential ways to wean data centers away from pure Fibre Channel and onto Ethernet.
“I think it’s still a very long ways away before enterprise data center managers who rely on Fibre Channel today are going to be switching over to those technologies,” said Garrett.
Fibre Channel over Ethernet is still in its standards phase, said Garrett, and 10Gbps iSCSI systems are just emerging in the market, so they’re still relatively expensive — thousands of dollars per port versus hundreds of dollars for Fibre Channel. In the long run, though, both technologies hold the promise of simplifying data center operations. “They can potentially provide a way so you don’t have to have two separate sets of networking gear — one for Ethernet and one for Fibre Channel — in your data center,” Garrett said.
Cost, though, is only one of the challenges these technologies must surmount if they’re to make any headway in the network storage arena. “You have these environments where there are turf wars,” Schulz said. “People who like Fibre Channel don’t like IP and Ethernet. Those that like IP and Ethernet don’t like Fibre Channel.”
“Network people look at storage people and can’t figure out why they pay so much for a Fibre Channel port when they could do it for a fraction of the cost with Ethernet,” said Schulz. “Storage people look at Ethernet and can’t quite figure out why an adapter loses so many frames and why there are so many layers of software to make up for performance deficiencies.”
“When you think about it,” he noted, “Fibre Channel over Ethernet is the best of both worlds. It’s a way for both groups to come together and move forward collectively, rather than winner takes all.”