1, 2, 4, 8, 10: The Evolution of Fibre Channel Page 3


Which Technology To Choose?

I am not sure that 10 Gb technology will be available at the server level for a while, given the complexity of developing the interfaces. Developing 10 Gb support for a complex server with many memory interconnects is no easy task.

I do believe that 10 Gb from the switch to the RAID device will be used, because it allows you to aggregate bandwidth from a number of different systems. Sure, you can get PCI Express in desktop machines from Dell today, but how many people need that much bandwidth from a single desktop to storage? And yes, there are people selling 10 Gb interconnects, but how many people need them?

The Linux cluster interconnect in an HPC environment is one example where it might be needed, but surely that kind of bandwidth to storage is not needed from each node in the cluster. Does a single node, or each node in a cluster, really have per-node storage bandwidth requirements that call for 2, 4, 8 or 10 Gb? In almost every case the answer is no (see the rough conversion sketched below for what those rates actually translate to). Since 1 Gb FC is out of the question given the age of the technology, what is the best bet for today? Is it to stick with 2 Gb? From what I can tell, 2 Gb will be supported for at least eight years from now for HBAs, switches and RAID controllers, since it will be compatible with 4 and 8 Gb technology.
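For a rough sense of what those link rates mean in practice, here is a back-of-the-envelope sketch. The signaling rates and 8b/10b line coding are the commonly published figures for 1/2/4/8 Gb FC, not anything specific to a given product; after protocol overhead the payload rates land near the familiar 100/200/400/800 MB/sec marks.

```python
# Rough per-direction throughput for Fibre Channel generations, assuming
# the published signaling rates and 8b/10b line coding (80% efficiency)
# used by 1/2/4/8 Gb FC. Illustrative only; protocol overhead trims these
# toward the familiar 100/200/400/800 MB/sec figures.
SIGNALING_GBAUD = {"1 Gb": 1.0625, "2 Gb": 2.125, "4 Gb": 4.25, "8 Gb": 8.5}

def usable_mb_per_sec(gbaud: float, coding_efficiency: float = 0.8) -> float:
    """Convert a raw signaling rate to usable payload bandwidth in MB/sec."""
    return gbaud * 1e9 * coding_efficiency / 8 / 1e6

for name, gbaud in SIGNALING_GBAUD.items():
    print(f"{name} FC: ~{usable_mb_per_sec(gbaud):.0f} MB/sec per direction")
```

Very few individual cluster nodes sustain anywhere near even the 2 Gb number to storage, which is the point of the question above.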

Clearly, the big leap is 10 Gb, but who really needs that? A related question is how many outstanding I/Os these new RAID controllers support. Sure, you can have a 10 Gb interface and support 1,000 300 GB FC drives, but if the command queue in the controller holds only 1,024 commands and you are using the controller for random IOPS, your performance is in trouble. On the other hand, if the RAID is used for streaming I/O, 1,024 commands is far more than is needed to saturate 10 Gb of bandwidth. At least with early 2 Gb RAID products, the size of the command queue was an issue: some of them could queue only 128 commands while supporting 256 disk drives, and the problem took years for some vendors to fix.
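To put the queue-depth point in concrete terms, here is a minimal sketch using the figures above. The 1,024-command queue and 1,000 drives come from the example in the text; the 10 ms per-I/O service time and 1 MB transfer size for the streaming case are illustrative assumptions, not measured numbers.

```python
# Back-of-the-envelope look at command-queue depth versus drive count.
QUEUE_DEPTH = 1024   # commands the RAID controller can keep outstanding
DRIVE_COUNT = 1000   # FC drives behind the controller

print(f"Random I/O: ~{QUEUE_DEPTH / DRIVE_COUNT:.1f} outstanding commands per drive")
# ~1.0 per drive -- the controller's queue, not the drives or the 10 Gb
# link, becomes the limit on random IOPS.

# Streaming: by Little's law, outstanding I/Os = throughput x service time.
LINK_MB_PER_SEC = 1200    # roughly a 10 Gb FC link, one direction
IO_SIZE_MB = 1.0          # assumed streaming transfer size
SERVICE_TIME_SEC = 0.010  # assumed time to complete one transfer

ios_per_sec = LINK_MB_PER_SEC / IO_SIZE_MB
print(f"Streaming: ~{ios_per_sec * SERVICE_TIME_SEC:.0f} I/Os in flight saturate the link")
# ~12 outstanding I/Os -- far below 1,024, which is why streaming saturates
# the link easily while random IOPS does not.
```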

Conclusions

Sometime over the next year or so, there will be two issues to weigh as you consider whether to begin the upgrade path from 2 Gb to 4 Gb: What are your performance requirements, and does your bus interface (PCI/PCI-X) support the increased performance?

The performance requirement can be broken down into streaming and IOPS. Just because you do not need the roughly 400 MB/sec per direction (800 MB/sec full duplex) of streaming bandwidth that 4 Gb FC provides does not mean that you will not benefit from the larger command queue that you will find in 4 Gb HBAs and RAID controllers. If you are able to go to 10 Gb RAID controllers, you will likely find that your command queue is even larger and that you can support more IOPS and streaming I/O.

The problem is that much of the streaming performance is limited by our current PCI and PCI-X bus structures. At the low end, some smaller PCs already have newer buses, but redesigning the memory interfaces of high-end servers around a new I/O bus takes time.
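As a rough illustration of why the bus matters, here is a sketch comparing theoretical bus peaks against a hypothetical dual-port 4 Gb HBA running full duplex. The bus numbers are the standard theoretical figures; the HBA configuration and worst-case duty cycle are assumptions, not a measured workload.

```python
# Rough comparison of host-bus headroom against HBA demand.
BUS_PEAK_MB_PER_SEC = {
    "PCI 64-bit/66 MHz": 533,     # theoretical peak
    "PCI-X 64-bit/133 MHz": 1066, # theoretical peak
}

def hba_demand_mb_per_sec(ports: int, mb_per_direction: int) -> int:
    """Worst-case demand of an HBA with every port running full duplex."""
    return ports * mb_per_direction * 2

demand = hba_demand_mb_per_sec(ports=2, mb_per_direction=400)  # dual-port 4 Gb FC
for bus, peak in BUS_PEAK_MB_PER_SEC.items():
    verdict = "fits" if demand <= peak else "exceeds the bus"
    print(f"{bus}: {peak} MB/sec peak vs {demand} MB/sec demand -> {verdict}")
# 1,600 MB/sec of worst-case demand exceeds even PCI-X 133, which is why
# the bus, not the 4 Gb link, is often the streaming bottleneck.
```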

I am almost always in favor of bigger, better, faster, but it comes at a price, and you need to be able to justify the price given your infrastructure.
