How iSCSI Lost the War


It wasn’t too long ago that iSCSI vendors were claiming they were going to take over the world and leave Fibre Channel (FC) storage and SANs in the dust, and many industry pundits agreed. Corporations, they said, wanted a single network infrastructure, a single management method, and the chance to eliminate redundant positions.

Several years later, iSCSI has grown nicely, yet has only garnered about 3 percent of the storage market, according to IDC (see iSCSI Rides Virtualization Wave). I never predicted the end of FC or said that iSCSI would become the dominant storage networking topology, but I tend to look at things from an HPC point of view, not at the commodity level.

Storage appears to remain a growth market any way you slice it. Companies are hiring more storage people, not fewer, even though the rate of growth for networking staff appears to have slowed. Storage is also growing in terms of money spent and as a percentage of the IT budget. The complexity of management and of planning for backup and restoration is growing faster than the tools being developed to manage the data.

So why didn’t iSCSI bury the world of Fibre Channel SANs? Why were the industry pundits wrong, and how did I avoid inserting my foot into my mouth for a change? Is iSCSI still out there waiting to take over the world or make a comeback like InfiniBand (IB) has? I don’t think so, and here are the reasons why.

Overhead and Performance

For starters, iSCSI has more overhead than Fibre Channel. iSCSI can run over any IP-based network, such as 10 GbE or even IB, but encapsulating SCSI traffic in TCP/IP packets carries more overhead than using the SCSI protocol natively.

When an operating system writes data to a device, it builds a packet with the communications protocol that the devices in the path will understand. For example, if you write to an FC RAID device, you will build a SCSI packet and write the SCSI command or data to the device, and that will be translated to Fibre Channel packets by the HBAs and reassembled by the disk or RAID controller. If you have a SATA target, the same process occurs: the SATA device driver takes the operating system commands to do I/O and builds a SATA command, which is broken up into SATA packets.

With iSCSI, the same thing happens, but this time the command, whether SATA or SCSI, is encapsulated in a TCP/IP packet and then translated into the low-level hardware packet, such as Ethernet: a command packetized within a command. This requires more CPU time in the operating system to build commands within commands, and the extra TCP/IP headers take up space and consume network bandwidth.

So from the standpoint of the system, you have to create an additional type of command to encapsulate the storage command, and then send that whole, larger payload down the channel.
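To put rough numbers on that encapsulation cost, here is a minimal Python sketch comparing bytes on the wire for iSCSI over standard Ethernet versus native Fibre Channel framing. The header sizes (48-byte iSCSI Basic Header Segment, 20-byte IPv4 and TCP headers, roughly 36 bytes of FC framing per frame) are nominal textbook figures, and the model ignores TCP acknowledgments, retransmission and CPU cost entirely.

```python
# Back-of-the-envelope framing overhead: iSCSI-over-Ethernet vs. native FC.
# Header sizes are nominal figures, not measurements.

ETH_OVERHEAD = 14 + 4   # Ethernet header + FCS, bytes
IP_HEADER = 20          # IPv4 header, no options
TCP_HEADER = 20         # TCP header, no options
ISCSI_BHS = 48          # iSCSI Basic Header Segment, once per PDU
ETH_MTU = 1500          # standard (non-jumbo) Ethernet payload

FC_FRAME_OVERHEAD = 36  # SOF + frame header + CRC + EOF, roughly
FC_MAX_PAYLOAD = 2112   # maximum FC frame data field

def iscsi_wire_bytes(transfer: int) -> int:
    """Bytes on the wire to move `transfer` bytes of data as one iSCSI PDU."""
    payload = transfer + ISCSI_BHS                # PDU header rides with the data
    per_frame = ETH_MTU - IP_HEADER - TCP_HEADER  # TCP payload per Ethernet frame
    frames = -(-payload // per_frame)             # ceiling division
    return payload + frames * (ETH_OVERHEAD + IP_HEADER + TCP_HEADER)

def fc_wire_bytes(transfer: int) -> int:
    """Bytes on the wire to move the same data in native FC frames."""
    frames = -(-transfer // FC_MAX_PAYLOAD)
    return transfer + frames * FC_FRAME_OVERHEAD

for size in (4 * 1024, 64 * 1024, 1024 * 1024):
    i, f = iscsi_wire_bytes(size), fc_wire_bytes(size)
    print(f"{size:>8}-byte transfer: iSCSI {i / size:.3f}x  FC {f / size:.3f}x")
```

Even this simplified model shows the pattern: the fixed headers hurt most on small transfers and wash out somewhat on large ones, and none of this counts the CPU cycles spent building the extra headers in the first place.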

Low-Level Communications Technology

One of the supposed advantages of iSCSI was that it could use existing, underutilized network infrastructure. That may or may not be the case. Currently, the fastest ubiquitous networking technology is GigE, which on a good day with a strong tail wind might achieve about 100 MB/sec full duplex, given overhead and request size issues. The equivalent FC technology today is 2 Gb FC, which can achieve 200 MB/sec full duplex. SATA technology runs at 3 Gb/sec, or about 300 MB/sec, but it is half duplex. Any way you look at the problem, iSCSI is far behind today's standard technology from a performance point of view.
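For illustration, here is a quick sketch of what those link rates mean for a hypothetical bulk transfer. The rates are the rough per-direction figures cited above, not benchmark results.

```python
# Rough transfer-time comparison for the link speeds cited above.
# Rates are nominal per-direction figures, not benchmark results.

LINKS_MB_PER_SEC = {
    "GigE (iSCSI)": 100,  # ~1 Gbit/sec minus protocol overhead
    "2 Gb FC": 200,       # full duplex, per direction
    "SATA 3 Gb": 300,     # half duplex
}

TRANSFER_MB = 10000       # a hypothetical 10 GB bulk copy

for link, rate in LINKS_MB_PER_SEC.items():
    print(f"{link:>13}: {TRANSFER_MB / rate:6.1f} sec for {TRANSFER_MB} MB")
```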

Many claim that 10 Gbit Ethernet will change things, but at least for now, I don’t see this technology becoming standard on the desktop. Some desktops will have these types of connections, but most will not, given the cost not just of new NICs, but of 10 Gbit Ethernet switch ports and the requirement for a PCI Express bus (see A Historic Moment for Storage I/O).

CPU Usage

As we already noted, CPU usage will be higher with iSCSI than when moving data with SCSI, SATA, SAS or IB, given the extra TCP/IP packet data that must be added. That overhead translates to CPU usage, and the amount depends heavily on the size of the request, since the relative overhead is much higher when sending small bits of data than when doing large transfers. Your mileage will vary with operating systems, drivers and iSCSI cards, so the potential range of performance is wide.
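A toy model makes the request-size effect concrete. The per-command and per-kilobyte costs below are invented placeholders, not measurements; only the shape of the curve is the point: fixed per-command overhead dominates small requests and amortizes away on large ones.

```python
# Toy model: fixed per-command cost vs. per-byte cost. The numbers are
# invented placeholders, not measurements; only the trend is the point.

FIXED_US_PER_CMD = 50.0  # hypothetical fixed CPU cost per iSCSI command (us)
US_PER_KB = 0.5          # hypothetical per-kilobyte copy/checksum cost (us)

for kb in (4, 64, 512, 4096):
    total = FIXED_US_PER_CMD + kb * US_PER_KB
    print(f"{kb:>5} KB request: {FIXED_US_PER_CMD / total:6.1%} of CPU time "
          f"is fixed per-command overhead")
```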

Division of Labor

Let’s face it, storage management and network management are very different, and this has been one more obstacle for iSCSI.

Storage management includes managing the storage network along with provisioning storage to the appropriate groups. In most cases (the network people are not going to like this), that provisioning is far different from, and far more complex than, network management. Storage management people need to understand operating systems, file systems, storage connectivity and the physical storage itself. Call me a storage snob, but from what I see it is more complex, and even if it isn’t, it is very different from network management. You cannot take a networking person and say, "Here is the storage, manage it."

Like it or not, there is a division of labor. The people who are experts in managing both storage and networking are few and far between, and at large, complex sites you need experts. Some of the sites I have visited even split the file system and storage people into different groups. This, in my opinion, is a bad idea, but it happens all too often. The point is that network and storage management remain two very different tasks, and that makes unseating Fibre Channel tough.

The promise that iSCSI would revolutionize the industry, simplify management and save money is likely not going to be fulfilled, with one caveat: SAN/WAN storage connectivity. I don’t believe even 10 Gbit Ethernet will change that, since the current total cost of the technology makes it a bit too expensive and not really necessary for most desktops. I have yet to see Seagate or HDS build disks with Ethernet interfaces, and that is another limitation on the ubiquitous acceptance and use of iSCSI.

For a new technology to revolutionize an industry, there have to be very good reasons for the changes that technology requires. I believe that iSCSI’s performance and the complexity of storage management doomed it to failure as the replacement for Fibre Channel storage networking. iSCSI has a place at the storage table as a tool for extending storage to the WAN, and I believe this is a good tool. The question that might be asked is whether this tool is better used as a host-based NIC or as a port on a switch, like many Fibre Channel switch vendors have done.

So when the next great new technology comes down the road, remember why iSCSI failed to live up to the hype and don’t believe everything you read. Just because it is on the Internet doesn’t make it true.

Henry Newman, a regular Enterprise Storage Forum contributor, is an industry consultant with 26 years of experience in high-performance computing and storage.
See more articles by Henry Newman.
