Shared File Systems, Part 2: Why Pay More?


In the first part of this series, we discussed the significant cost differences between NAS devices and Fibre Channel RAID-based shared file systems.

Clearly, the cost differences between NAS and shared file systems can be great, but as you will see in this article, the performance differences can be just as big, and reliability should also play a large part in analyzing your requirements.

Performance Matters

If you are streaming video at 500 KB/sec, that is a far different performance requirement than having multiple people editing high-definition digital video. At 500 KB/sec per stream, a NAS device with a standard Gigabit Ethernet NIC and current technology can easily meet your needs, but high-definition video editing might require as much as 40 MB/sec per user, almost 82 times greater performance (a quick back-of-the-envelope calculation follows the list below). With a single Gigabit Ethernet link you can theoretically get about 100 MB/sec, but you face a number of potential issues:

  • With TCP/IP, the per-packet overhead, the packetization data required to send each packet over the network, is greater than the equivalent overhead for SCSI and Fibre Channel.
  • If you do not have a TOE (TCP Offload Engine) card, TCP/IP data movement within the server likely incurs far greater system overhead on most systems, given the number of memory copies involved.
  • Depending on the NAS device, you may or may not have direct memory access (DMA); if you do, the performance will be much better.

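The arithmetic behind those numbers is easy to sanity-check. The sketch below uses the figures quoted above (500 KB/sec per streaming client, 40 MB/sec per editing seat, roughly 100 MB/sec theoretical for a single gigabit link); the "real-world" gigabit figure is an illustrative assumption, not a measurement.

```python
# Bandwidth sizing using the figures above. The "real-world" gigabit number
# is an illustrative assumption, not a measurement.
STREAMING_KB_PER_SEC = 500          # per streaming-video client (from the article)
HD_EDIT_MB_PER_SEC = 40             # per HD editing seat (from the article)
GIGE_THEORETICAL_MB_PER_SEC = 100   # rough theoretical payload of one gigabit link
GIGE_ASSUMED_REAL_MB_PER_SEC = 70   # assumed real-world payload after TCP/IP overhead

ratio = (HD_EDIT_MB_PER_SEC * 1024) / STREAMING_KB_PER_SEC
print(f"One HD editing seat needs ~{ratio:.0f}x the bandwidth of one streaming client")

for label, link_mb in (("theoretical", GIGE_THEORETICAL_MB_PER_SEC),
                       ("assumed real-world", GIGE_ASSUMED_REAL_MB_PER_SEC)):
    seats = link_mb // HD_EDIT_MB_PER_SEC
    clients = link_mb * 1024 // STREAMING_KB_PER_SEC
    print(f"{label} gigabit link: {seats} editing seat(s) or ~{clients} streaming clients")
```

Even under the generous theoretical number, one gigabit link supports only a couple of HD editing seats, which is why the per-user requirement, not just the aggregate, drives the technology choice.
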
Some say you can solve these problems by using Gigabit Ethernet trunking. Trunking groups multiple channels together and uses them essentially as a striped device. This might have merit for a few Gigabit Ethernet channels, but given 2 Gb/sec Fibre Channel and the much higher system overhead of TCP/IP compared with SCSI, Fibre Channel still has a big advantage.

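To see how far trunking gets you on paper, the sketch below compares a few bonded gigabit links against 2 Gb/sec Fibre Channel. The overhead fractions are illustrative assumptions, not measured values, and they ignore the host CPU cost of TCP/IP processing, which is the larger problem in practice.

```python
# Trunked gigabit Ethernet vs. 2 Gb/sec Fibre Channel. The overhead
# fractions are illustrative assumptions, not measured values.
TCP_IP_OVERHEAD = 0.30   # assumed fraction of gigabit payload lost to TCP/IP
FC_OVERHEAD = 0.05       # assumed fraction lost to Fibre Channel/SCSI framing

def effective_mb_per_sec(raw_mb_per_sec: float, overhead: float) -> float:
    """Payload throughput left after subtracting protocol overhead."""
    return raw_mb_per_sec * (1.0 - overhead)

for links in (1, 2, 4):
    trunk = effective_mb_per_sec(100.0 * links, TCP_IP_OVERHEAD)  # ~100 MB/sec raw per link
    print(f"{links} trunked gigabit link(s): ~{trunk:.0f} MB/sec payload")

fc = effective_mb_per_sec(200.0, FC_OVERHEAD)  # 2 Gb/sec FC is roughly 200 MB/sec raw
print(f"2 Gb/sec Fibre Channel:   ~{fc:.0f} MB/sec payload")
```
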
Others might argue that 10 Gb/sec Ethernet will solve your problems: just plug a 10 Gb/sec NIC into a PCI-X slot and you are done. That might be true in some cases, but from what I have seen, many PCI-X buses will not run anywhere near their rated speed. Just because the slot says PCI-X does not mean you will get 1.064 GB/sec, and even that is less than the 1.25 GB/sec line rate of 10 Gb/sec Ethernet.

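A quick calculation shows why the bus, not the NIC, can become the ceiling. The 1.064 GB/sec figure is the theoretical PCI-X rate quoted above; the 60 percent efficiency number is an illustrative assumption about how far below rated speed a real bus might run.

```python
# Can a 64-bit/133 MHz PCI-X slot feed a 10 Gb/sec NIC? The bus efficiency
# figure is an illustrative assumption about real-world behavior.
PCI_X_THEORETICAL_GB_PER_SEC = 1.064   # 64 bits x 133 MHz (the figure quoted above)
TEN_GBE_LINE_RATE_GB_PER_SEC = 10 / 8  # 10 Gb/sec is 1.25 GB/sec of raw line rate
ASSUMED_BUS_EFFICIENCY = 0.60          # assumed fraction of rated speed actually delivered

deliverable = PCI_X_THEORETICAL_GB_PER_SEC * ASSUMED_BUS_EFFICIENCY
print(f"10 GbE line rate:          {TEN_GBE_LINE_RATE_GB_PER_SEC:.3f} GB/sec")
print(f"PCI-X theoretical maximum: {PCI_X_THEORETICAL_GB_PER_SEC:.3f} GB/sec")
print(f"PCI-X at 60% efficiency:   {deliverable:.3f} GB/sec (assumed)")
```
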
You need to fully understand your performance requirements and determine which connection technology will meet them in a real-world scenario.

A NAS box might have aggregate bandwidth that meets your total needs, but not enough bandwidth to meet the needs of a single stream of data, so you will need to determine what the real performance requirements are for your environment. Single-stream performance should be one of the most important considerations.

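One way to keep the distinction straight is to check both numbers explicitly. The helper below is a hypothetical illustration, not any vendor's specification: it accepts a device only if it can sustain both the total load and each individual stream.

```python
# Checking both aggregate and single-stream limits. The device numbers
# here are hypothetical, not any vendor's specification.
def meets_requirements(per_stream_mb: float, streams: int,
                       device_aggregate_mb: float, device_per_stream_mb: float) -> bool:
    """True only if the device can sustain the total load AND each single stream."""
    total_ok = per_stream_mb * streams <= device_aggregate_mb
    single_ok = per_stream_mb <= device_per_stream_mb
    return total_ok and single_ok

# Hypothetical NAS: 200 MB/sec aggregate, but only 30 MB/sec to any one client.
print(meets_requirements(40, 4, 200, 30))  # False: the single-stream limit fails
print(meets_requirements(40, 4, 200, 60))  # True: both checks pass
```
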
Capacity Issues

Most NAS systems, especially lower-cost ones, currently support only a few TB of storage, and the limit is often 2 TB per file system. Both limits usually trace back to the Linux 2.4 kernel. Linux is commonly used in low-end NAS products because of the development and deployment savings: vendors get an operating system and file systems without having to create everything from scratch. Once the 2 TB file system limit is lifted with the 2.6 kernel, if Linux file systems follow the same track as Unix file systems, it will take a few years before the performance issues of larger file systems are found and fixed in NAS devices. So on the low end, you are limited to 2 TB for now. Large NAS systems have different limits, which you will need to check.

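If you are working under a 2 TB per-file-system ceiling, the practical question is how many file systems your planned capacity would have to be split across. The sketch below is a trivial illustration; the planned capacities are arbitrary examples.

```python
import math

# How many file systems does a planned capacity require under a 2 TB
# per-file-system limit? The planned capacities are arbitrary examples.
FS_LIMIT_TB = 2

for planned_tb in (1.5, 10, 50):
    count = math.ceil(planned_tb / FS_LIMIT_TB)
    note = "fits in one file system" if count == 1 else f"needs {count} separate file systems"
    print(f"{planned_tb:>5} TB: {note}")
```
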
On the other hand, most large shared file systems have production installations that far exceed 50 TB, so if you need a great deal of storage in a single file system, the choice may have already been made for you.

Reliability and Recovery

Reliability claims for the different systems are difficult to confirm, but shared file systems are complex to implement and involve a great deal of hardware and software. Some shared file systems offer the option of a failover metadata server; that server can be costly and difficult to manage, and testing the failover is time-consuming and expensive.

Another issue is the reliability of the data itself. We could debate the reliability of SATA and SCSI drives and communication channels ad nauseam, but what about backup and off-site disaster recovery? What are your recovery requirements after a failure? Restoring 20 TB from AIT tape drives is very time-consuming.

Most shared file systems have hierarchical storage management (HSM) systems that provide automatic backup and restore capability, and since most also support multiple tape copies, making off-site copies is not difficult. If you use HSM, your system can be up and running in minutes, since you do not have to bring all of the data back from tape before the file system is available. If you are using the backup/restore method, in most cases you have to wait until the whole system is restored.

Most shared file systems that support HSM also support higher-end tape drives that run at 70 MB/sec or more with compression. Among NAS products, support for enterprise tape drives with high-performance load, position, read/write and compression is available only on high-end devices; one NAS device is not the same as another. It should be noted that high-end enterprise tapes often have a much lower bit error rate and a longer shelf life than lower-end tape drives.

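The tape-speed difference matters mostly because of what it does to restore time. The sketch below estimates a full restore of the 20 TB mentioned earlier; the 70 MB/sec rate is the figure above, while the AIT-class rate and the drive counts are illustrative assumptions.

```python
# Rough full-restore time from tape. The 70 MB/sec rate is from the article;
# the AIT-class rate and the drive counts are illustrative assumptions.
DATA_TB = 20

def restore_hours(data_tb: float, drive_mb_per_sec: float, drives: int = 1) -> float:
    """Hours to stream the data back at the given per-drive rate, drives in parallel."""
    total_mb = data_tb * 1024 * 1024
    return total_mb / (drive_mb_per_sec * drives) / 3600

print(f"One AIT-class drive at ~10 MB/sec:  {restore_hours(DATA_TB, 10):.0f} hours")
print(f"One high-end drive at 70 MB/sec:    {restore_hours(DATA_TB, 70):.0f} hours")
print(f"Four high-end drives in parallel:   {restore_hours(DATA_TB, 70, drives=4):.0f} hours")
```

Either way, a full restore is measured in many hours or days, which is the gap HSM closes by making the file system available before all of the data has been staged back from tape.
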
The Choice Should Be Clear

Determining which is the better choice, a shared file system or NAS, should not be difficult if you have a clear understanding of your requirements. The tradeoffs between the two approaches to sharing data are fairly clear, and stepping through the process of matching them against your requirements should not be hard.

In the long term, it is my hope that the two technologies will converge. Current file systems and block-based devices have been around for decades, and we need a paradigm shift to make NAS and shared file systems scalable both in performance and in the number of connections. I believe that shift is coming. The T10 group (www.t10.org) has put forth a new standard called Object-based Storage Devices, or OSD (see A Storage Framework for the Future).

If the standard is adopted a few years from now, and I certainly hope it will be, file systems and storage devices as we know them will fundamentally change. The concepts of NAS and shared file systems will merge, and HSM and backup as we know them will change. A number of vendors are working on object storage implementations, and one vendor, Panasas, has gotten ahead of the standard and built a product based on what it expects the standard to be, not much different from what others did with Fibre Channel. You'll be hearing more about OSD in the coming months.

For more storage features, visit Enterprise Storage Forum Special Reports

Henry Newman
Henry Newman has been a contributor to TechnologyAdvice websites for more than 20 years. His career in high-performance computing, storage and security dates to the early 1980s, when Cray was the name of a supercomputing company rather than an entry in Urban Dictionary. After nearly four decades of architecting IT systems, he recently retired as CTO of a storage company’s Federal group, but he rather quickly lost a bet that he wouldn't be able to stay retired by taking a consulting gig in his first month of retirement.
