1, 2, 4, 8, 10: The Evolution of Fibre Channel


1, 2, 4, 8, 10: Those numbers aren’t some high school football cheer; they are the Fibre Channel options users have faced over the last five years and will face into the future. These are technology choices we will have to make, so it is worth exploring the various technologies, the implications of using them, and the decisions ahead of us over the next few years. A few issues frame the discussion:

  • Storage performance has not improved much, and rising disk density has become a performance issue, since the long-term trend has been toward less bandwidth per GB of storage.
  • Sites moving to 10 Gb/sec Fibre Channel will have to change their infrastructure to support the 10 Gb standard.
  • Going from 1 Gb/s to 2 Gb/s Fibre Channel, and even to 4 Gb, 8 Gb and 10 Gb, might not improve your performance if you are only making small I/O requests.

The Fibre Channel of just a few years ago is changing. Arbitrated loop devices are becoming a thing of the past. New tape drives, disks and RAID arrays from vendors now support full fabric connectivity. Those 1 Gb devices are on their way out, and in the not too distant future they will be gone. Everything becomes obsolete in our industry, but the length of time it takes varies depending on the requirements of the major stakeholders of that technology, the time it takes to get a new technology to market, and the impact that new technology has on the market.

1 Gb FC arbitrated loop had some, but not a lot of, impact on the market. It was only three times faster than the other standard technology of the time, Ultra-SCSI. It was not on the market for very long before it was superseded by 1 Gb fabric technology, and the number of products and the competition for early adopters of the technology were not great.

1 Gb full fabric was not well adopted compared with 2 Gb technology, since it was not on the market that long either. We have been living with 2 Gb technologies for about three years now, which is far longer than either of the 1 Gb technologies lasted before they were superseded.

We are now on the verge of 4 Gb technology with RAID controllers and switches, since we already have 4 Gb HBAs, and much of the technology that allows for 4 Gb will allow for 8 Gb. That means you will be able to plug 8 Gb technology into the RAID controllers and switches that support 4 Gb. It does not mean that all of them will support 8 Gb, but it does mean that you do not need a complete infrastructure change to move from 4 Gb to 8 Gb. This is unusual for our industry, but it did happen with 1 Gb to 2 Gb technology.

10 Gb Means Big Changes

10 Gb technology, on the other hand, requires a complete infrastructure change and is not backwards-compatible with 8 Gb technology. I am aware of a few vendors building 10 Gb RAID products that provide two interfaces and two different interface chipsets to hedge their bets on which technology will be deployed. This is a smart move, since I believe the race between these technologies could be close. There are advantages and disadvantages to each of the choices.

Another point to consider is how long 2 Gb technology will be supported and how long it will be useful for many environments.

Tradeoffs and Choices

For the most part, no one streams data at 200 MB/sec full duplex (2 Gb hardware translated into storage numbers). The exceptions include video and imaging, along with the editing and delivery of that data; real-time data capture from satellites for weather and other applications; and real-time data capture for financial services and point of sale.
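As a rough check on that translation, here is a back-of-the-envelope sketch converting Fibre Channel speed grades into usable megabytes per second. The line rates and encodings (8b/10b for 1/2/4/8 Gb FC, 64b/66b for 10 Gb FC) are the standard published figures, but framing and other protocol overhead are ignored, so treat the output as approximate rather than definitive.

```python
# Back-of-the-envelope conversion of Fibre Channel speed grades into
# usable payload bandwidth. Frame headers and other protocol overhead
# are ignored, so results run slightly high of the nominal figures.

LINE_RATE_GBAUD = {       # signalling rate on the wire
    "1 Gb FC": 1.0625,
    "2 Gb FC": 2.125,
    "4 Gb FC": 4.25,
    "8 Gb FC": 8.5,
    "10 Gb FC": 10.51875,
}
ENCODING = {              # payload bits per line bit
    "1 Gb FC": 8 / 10, "2 Gb FC": 8 / 10, "4 Gb FC": 8 / 10,
    "8 Gb FC": 8 / 10, "10 Gb FC": 64 / 66,
}

for name, gbaud in LINE_RATE_GBAUD.items():
    mb_per_s = gbaud * 1e9 * ENCODING[name] / 8 / 1e6  # bytes per second
    print(f"{name}: ~{mb_per_s:.0f} MB/s in each direction (full duplex)")
```

The 2 Gb and 4 Gb rows come out near the 200 MB/sec and 400 MB/sec figures used in this article; the small excess is the framing overhead the sketch ignores.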

Bandwidth-type applications are few and far between compared to the high I/O-per-second (IOPS) workloads required by most applications, and many file systems randomize the data so that bandwidth applications are not really running at full rate anyway. The point here is that in many cases, 2 Gb Fibre Channel is not even close to running at the 2 Gb full duplex rate. Add to this the fact that we are getting less bandwidth per gigabyte of disk space, and what does 4 Gb Fibre Channel buy in the real world?

The following chart shows bandwidth per gigabyte of disk space from 1977 to 2005:

The performance per gigabyte of storage has also been going in the wrong direction. With more data stored on each device and less performance from each device, this can present a significant problem.

Clearly the rate of increase in storage density has not matched the rate of CPU increase, and neither has the bandwidth per GB of data. Seek and latency times have improved even less than bandwidth, since they have not changed very much and are not going to anytime soon with current technology. Why is this an important consideration for choosing between 1, 2, 4, 8 and 10 Gb Fibre Channel?
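To make the trend concrete, here is a small illustration of bandwidth per gigabyte across a few drive generations. The capacities and sustained transfer rates below are round, representative numbers chosen purely for illustration; they are not figures from the chart above or from any specific product.

```python
# Illustrative drive generations (round, hypothetical numbers) showing why
# bandwidth per GB keeps falling: capacity grows far faster than the
# sustained transfer rate of a single spindle.
drives = [
    # (label, capacity in GB, sustained transfer rate in MB/s)
    ("late-1990s 9 GB drive",     9, 15),
    ("early-2000s 73 GB drive",  73, 50),
    ("mid-2000s 300 GB drive",  300, 75),
]

for label, capacity_gb, rate_mb_s in drives:
    print(f"{label}: {rate_mb_s / capacity_gb:.2f} MB/s per GB")
```

Even with generous transfer-rate assumptions, each generation delivers a fraction of the bandwidth per gigabyte of the one before it, which is the direction the chart describes.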

New 4 Gb, 8 Gb and 10 Gb technologies will increase the number of I/O requests that can be outstanding to most devices. Since RAID controllers often sort I/O requests based on the order of the block addresses of the data, having a larger command queue allows more commands for more devices and improved seek order optimization. This reduces the seek distance and the total time spent seeking. Take the following example for 2 Gb Fibre Channel:

Let’s say the RAID controller can support a command queue of 512 commands, and that controller is managing 100 disk drives. If the controller is running all disks with RAID-1, then you can look at the 100 disks as 50 disk drives in terms of the number of commands to devices. Given this, you could have on average 10.24 commands per disk outstanding for the controller (512 commands for the controller/50 disk drives). This is not a great deal of commands to sort for seek order optimization given the time to do a seek. On the other hand, if the command queue was much larger, say 2048 commands, as is expected with 4 Gb, you would be sorting 40.96 commands per drive, which could improve the seek ordering, reducing the total number of seeks and improving overall performance. That is not to say that we will now be running IOPS at 4 Gb, but the total channel usage will likely improve, given the reduced time to do seeks.
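Here is a minimal sketch of the effect described above: requests are sorted elevator-style by block address in batches that match the per-drive queue depths from the 2 Gb and 4 Gb examples (roughly 10 and 41 commands per drive). The disk size and the random request stream are purely illustrative.

```python
# Minimal sketch: deeper per-drive command queues let the controller sort
# more requests at once (elevator-style, by block address), which shortens
# the average head movement between consecutive requests.
import random

random.seed(0)
DISK_BLOCKS = 1_000_000      # hypothetical addressable blocks on one drive
REQUESTS = 10_000            # random requests sent to that drive

def average_seek_distance(queue_depth: int) -> float:
    """Sort each batch of queue_depth requests by block address and return
    the average head movement between consecutive requests."""
    pending = [random.randrange(DISK_BLOCKS) for _ in range(REQUESTS)]
    head, total = 0, 0
    for i in range(0, REQUESTS, queue_depth):
        for block in sorted(pending[i:i + queue_depth]):  # elevator order
            total += abs(block - head)
            head = block
    return total / REQUESTS

for depth in (10, 41):       # ~512/50 vs. ~2048/50 commands per drive
    print(f"queue depth {depth}: average seek ~"
          f"{average_seek_distance(depth):,.0f} blocks")
```

Running this shows the deeper queue cutting the average seek distance by a large margin; that reduction in head movement is where the improved channel usage comes from.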

Which Technology To Choose?

I am not sure that 10 Gb technology will be available at the server level for a while, given the complexity of developing interfaces. Developing 10 Gb for a complex server with many memory interconnects is no easy task.

I do believe that 10 Gb from the switch to the RAID device is going to be used, because it allows you to aggregate bandwidth from a number of different systems. Sure, you can get PCI Express for desktop machines from Dell today, but how many people need that much bandwidth from a single desktop to storage? And sure, there are people selling 10 Gb interconnects, but how many people need them?

A Linux cluster interconnect in an HPC environment is one example where it might be needed, but surely that type of bandwidth to storage is not needed from each node in the cluster. Does a single node, or each node in a cluster, have per-node bandwidth requirements that need 2, 4, 8 or 10 Gb worth of storage bandwidth? In almost every case the answer is no. Since 1 Gb FC is out of the question given the age of the technology, what is the best bet for today? Is it to stick with 2 Gb? From what I can tell, 2 Gb will be supported for at least eight more years in HBAs, switches and RAID controllers, since it will be compatible with 4 and 8 Gb technology.

Clearly, the big leap is 10 Gb, but who really needs that? A related topic is how many outstanding I/Os are supported on these new RAID controllers? Sure, you can have a 10 Gb interface, and support 1,000 300 GB FC drives, but if the command queue in the controllers is only 1,024 commands and you are using the controller for random IOPS, your performance is in trouble. On the other hand, if the RAID is used for streaming I/O, 1,024 commands is far more than is needed to saturate the 10 Gb of bandwidth. At least with early 2 Gb RAID products, the size of the command queue was an issue. Some of them only had 128 commands that could be queued and supported 256 disk drives. The problem took years to fix for some vendors.
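The arithmetic behind that concern is simple enough to sketch. The 1,024-command queue and 1,000-drive figures come from the example above; the 1 MB streaming request size and the roughly 1,200 MB/sec usable rate of a 10 Gb link are assumptions for illustration.

```python
# Rough arithmetic on controller command queues: how many commands can each
# drive see for random I/O, and how much data does the queue keep in flight
# for streaming I/O?
QUEUE_COMMANDS = 1024          # controller command queue (from the example)
DRIVES = 1000                  # drives behind the controller
STREAM_REQUEST_MB = 1.0        # assumed streaming request size
LINK_MB_PER_S = 1200           # assumed usable rate of a 10 Gb FC link

print(f"Commands available per drive: {QUEUE_COMMANDS / DRIVES:.2f}")

in_flight_mb = QUEUE_COMMANDS * STREAM_REQUEST_MB
print(f"Streaming data in flight: {in_flight_mb:.0f} MB "
      f"(~{in_flight_mb / LINK_MB_PER_S:.2f} s of 10 Gb FC traffic)")
```

Roughly one outstanding command per drive leaves nothing to sort for random I/O, while a gigabyte or so in flight is far more than a streaming workload needs to keep the link busy, which matches the asymmetry described above.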

Conclusions

Sometime over the next year or so, there will be two issues to weigh as you consider whether to begin the upgrade path from 2 Gb to 4 Gb: What are your performance requirements, and does your bus interface (PCI/PCI-X) support the increased performance?

The performance requirement can be broken down into streaming and IOPS. Just because you do not need 400 MB/sec full duplex streaming bandwidth (this is the equivalent of 4 Gb FC) does not mean that you will not benefit from the increase in the command queue that you will find in 4 Gb HBAs and RAID controllers. If you are able to go to 10 Gb RAID controllers, you will likely find that your command queue is even larger and that you can support more IOPS and streaming I/O.

The problem is that much of the streaming performance is limited by our current PCI and PCI-X bus structures. At the low end, some smaller PCs have newer buses, but redesigning the memory interfaces of high-end servers for a new I/O bus takes time.
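For a sense of where the bus becomes the bottleneck, here is a quick comparison of theoretical peak bandwidth for common PCI variants against a single 4 Gb FC HBA running full duplex. The bus figures are the standard theoretical peaks, which are shared across the bus and rarely achieved in practice, and the FC figure assumes roughly 400 MB/sec in each direction.

```python
# Theoretical peak bus bandwidth versus the demand of one 4 Gb FC HBA
# running full duplex (~400 MB/s in each direction). Peaks are shared
# across the bus and are rarely reached in practice.
BUS_PEAK_MB_PER_S = {
    "PCI 32-bit/33 MHz":    133,
    "PCI 64-bit/66 MHz":    533,
    "PCI-X 64-bit/133 MHz": 1066,
}
FC_4GB_FULL_DUPLEX = 2 * 400   # MB/s with both directions active

for bus, peak in BUS_PEAK_MB_PER_S.items():
    verdict = "enough" if peak >= FC_4GB_FULL_DUPLEX else "not enough"
    print(f"{bus}: {peak} MB/s peak -> {verdict} "
          f"for ~{FC_4GB_FULL_DUPLEX} MB/s")
```

Only the fastest PCI-X slot has theoretical headroom for a single 4 Gb HBA at full duplex, which is why the bus, not the Fibre Channel link, often sets the streaming ceiling.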

I am almost always in favor of bigger, better, faster, but it comes at a price, and you need to be able to justify the price given your infrastructure.

