FCoE Gets Lost in Vendor Stupidity

I have long been a proponent of FCoE (Fibre Channel over Ethernet). At one time, before the 2008 economic meltdown, I thought that paradigm-shifting technology changes such as object storage and FCoE were on our doorstep.

It hasn't worked out exactly as I envisioned. I'm sad to say T10-compliant object storage was a total flop. There are no object disk drives today, and we are still dealing with block storage with no end in sight. FCoE held the promise of taking Fibre Channel out of the market, but so far it has not lived up to market expectations.

iSCSI, however, started slowly about 11 years ago, and until recently there were few iSCSI targets other than low-end storage; today, both mid-range and enterprise storage support iSCSI. I even use an iSCSI target on my home NAS, because it was easy to set up and is significantly faster than mounting it as a network drive. My online Internet backup service now works with an iSCSI target, but not with a network drive.

FCoE vs. iSCSI

In this article, I'll compare and contrast FCoE and iSCSI based on my own research as well as research and insight from some of my colleagues. That is, we'll explore the advantages and disadvantages of both FCoE and iSCSI. Of course, as is often the case, I'll take a different tack than many of my peers.

Also, one note of caution: There are all kinds of benchmarks and performance tests available on both sides of this issue. I have reviewed many of them. Some of the things I have seen are benchmarks comparing 4Gb Fibre Channel to 10Gb Ethernet years after 8Gb FC was released. I've also seen research using drivers or other parts of the software stack that are specifically tuned to prove a point. And I've seen tests that didn't use known features and functions that are generally available. My approach is to assume that you have efficient software and an optimized hardware environment, and that you do not employ any SBT (slimy benchmark tricks). Remember, I was a benchmarker.

Baseline Facts

I want to review two important areas that I think have been left out of discussions that I have seen:
  1. IP header size
  2. Real file system issues, not just streaming data or I/Os per second (IOPS)

IP Header Size

As we all know, the "I" in iSCSI stands for IP. The IPv4 header size is 20 octets, or 20 bytes, and the world is moving from IPv4 to IPv6. There are many who might say that in the future we will use IPv6 between machines across the WAN, but IPv4 locally. I know of a number of companies -- as well as the U.S. government and foreign governments -- that say if you refer to the "I word," it will be IPv6. This, of course, is not true today, but it will become more true later this year and in 2012 and 2013. What this means is going from a 20-byte header to a 40-byte header. The point here is that FCoE does not require this extra 20 or 40 bytes of IP encapsulation, and remember that, given the coming IPv6 requirements, it will more than likely be 40 bytes.
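To put that header tax in rough perspective, here is a minimal back-of-the-envelope sketch. It counts only the IP header discussed above, deliberately ignoring Ethernet, TCP/iSCSI and FC/FCoE framing, and the payload sizes are assumptions chosen for illustration (roughly a standard Ethernet frame and a jumbo frame):

    # Back-of-the-envelope view of the IP header overhead discussed above.
    # Only the IP header is counted; Ethernet, TCP/iSCSI and FC/FCoE framing
    # are deliberately ignored. Payload sizes are illustrative assumptions.
    IPV4_HEADER = 20  # bytes, header with no options
    IPV6_HEADER = 40  # bytes, fixed header

    def ip_overhead_pct(payload: int, header: int) -> float:
        """Percent of each packet consumed by the IP header alone."""
        return 100.0 * header / (payload + header)

    for payload in (1460, 8960):  # assumed standard vs. jumbo-frame payloads
        print(f"{payload:>5}-byte payload: "
              f"IPv4 {ip_overhead_pct(payload, IPV4_HEADER):.2f}%, "
              f"IPv6 {ip_overhead_pct(payload, IPV6_HEADER):.2f}%, "
              f"FCoE 0% (no IP header)")

The per-frame cost looks small, but it is paid on every frame, and FCoE simply does not carry it.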

Real File System Issues

Whether we like it or not, testing any I/O performance in the real world must take into account file systems, and some file systems are more efficient than others. Measuring IOPS, streaming, or running any other test against just a LUN does not provide a realistic comparison to real I/O operations through a file system. I and others have pointed out for years that the bottleneck for most modern file systems is I/O for file system metadata. I will define metadata to include the following:
  • File system superblock, which is read at mount time to understand the file system topology and needs to be updated occasionally for some file systems.
  • File system allocation maps, which are updated regularly as files are written and deleted. Maps can be represented in a variety of ways, including bitmaps and b-trees.
  • File system inodes and extended attributes, which include the location of the file and its attributes, such as UID, GID, access time, creation time and many others, some of which are file system dependent.
My premise is that metadata performance is often the limiting factor in file system performance. The metadata bottleneck is well known and is getting worse given the massive growth in file counts. Updating metadata is almost always a small-block random I/O problem and, in some cases, must be done synchronously to meet various POSIX and file system requirements.
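As a concrete illustration of the inode portion of that metadata, here is a minimal Python sketch; the directory path is just a placeholder, and the attribute names shown are the standard POSIX stat fields:

    import os

    # Walk a directory tree and touch the inode metadata of every file.
    # Each os.stat() is a small metadata lookup; on a tree with millions of
    # files these small, scattered accesses, not streaming bandwidth, are
    # what the file system spends its time on.
    def scan_metadata(root: str) -> int:
        count = 0
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                st = os.stat(os.path.join(dirpath, name))
                # The attributes listed above live here: owner, group,
                # access/modification times, size, and so on.
                _ = (st.st_uid, st.st_gid, st.st_atime, st.st_mtime, st.st_size)
                count += 1
        return count

    print(scan_metadata("/var/log"))  # placeholder path

Each of those stat calls that misses the cache turns into a small, effectively random read of on-disk metadata, which is exactly the I/O pattern the premise above is concerned with.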
