Choosing the Right High-Performance File System


There are a lot of high-performance file systems out there: Sun QFS, IBM GPFS, Quantum StorNext, Red Hat GFS and Panasas, to name a few. So which is best? It depends on who you ask and what your needs are.

“We typically compete with NetApp OnTap or OnTap GX, EMC, IBM GPFS, HP Polyserve or Sun’s open source research project called Lustre,” said Len Rosenthal, chief marketing officer of Panasas Inc. “Although we have replaced systems running Sun’s QFS, we have never really competed with them in sales situations.”

Rosenthal claims that Quantum StorNext and HP Polyserve can only deal with a maximum of 16 clustered NFS servers, so they don’t tend to compete in scale-out NAS bids. Similarly, he said that IBM GPFS and Sun Lustre, which are both parallel file systems like Panasas PanFS, are mainly used by universities and government research organizations for scratch storage, as they don’t provide high enough I/O rates or a sufficient range of data management tools such as snapshots.

Tough talk indeed from Panasas. So how do its rivals respond to these claims?

Todd Neville, GPFS offering manager at IBM (NASDAQ: IBM), said the GPFS installation base is diverse, including HPC, retail, media and entertainment, financial services, life sciences, healthcare, Web 2.0, telco, and manufacturing. Neville is also dismissive of the I/O rate claims.

Greg Nuss, director of the software business line at Quantum (NYSE: QTM), is more emphatic, calling the Panasas statement about StorNext’s capabilities completely false.

“Each node in a StorNext cluster can act as an NFS server, each presenting the common file system namespace at the back end,” he said. “Today our stated node support is 1,000 nodes and we support both SAN-attached as well as LAN-attached nodes into the cluster. We have practical installations in the 300-400 node range deployed today. We don’t typically run into Panasas in the market because StorNext is not typically deployed in scale-out NAS configurations, but rather in high-performance workflow and archive configurations.”

HP (NYSE: HPQ), meanwhile, also took umbrage at the Panasas claims. The company said that HP Scalable NAS does not have an architectural limit on the number of NAS File Services server nodes that a customer can use in their clusters.

“The stated 16 server node limit is a test limit only,” said Ian Duncan, director of marketing for NAS for HP StorageWorks. “HP has a number of NAS File Services customers using clusters with more than 16 server nodes.”

Duncan said Panasas, Sun QFS, IBM GPFS and Quantum StorNext are not true symmetrical file systems, but are cluster file systems based on master servers — whether for metadata operations, locking operations, or both — which are relatively easy to implement as an extension of traditional, single-node systems. However, Duncan believes they suffer from performance and availability limitations inherent in the master server’s singular role.

“As servers are added, the load on the master server increases, undercutting performance and subjecting more nodes to loss of functionality in the event of a master server’s failure,” said Duncan. “By contrast, the 4400 Scalable NAS File Services uses the HP Clustered File System (CFS), which exploits multiple, independent servers to provide better scalability and availability, insulating the cluster from any individual node’s failure or performance limitation.”
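
Duncan’s scaling argument is simple enough to sketch. The short Python model below is a back-of-the-envelope illustration only: the 5,000 metadata ops per second per node figure is an arbitrary assumption, not a benchmark of HP, Panasas or anyone else.

```python
# Toy model of asymmetric (single master) vs. symmetric metadata handling.
# All numbers are illustrative assumptions, not vendor measurements.

def master_server_load(nodes, metadata_ops_per_node=5_000):
    """Asymmetric design: every node's metadata traffic lands on one master."""
    return nodes * metadata_ops_per_node

def symmetric_load_per_node(nodes, metadata_ops_per_node=5_000):
    """Symmetric design: each node handles roughly its own share."""
    return metadata_ops_per_node  # per-node load stays flat as the cluster grows

for n in (4, 16, 64):
    print(f"{n:>2} nodes: master handles {master_server_load(n):>7,} ops/s, "
          f"a symmetric node handles {symmetric_load_per_node(n):,} ops/s")
```

The point of the toy model is simply that an asymmetric design concentrates load on one machine as the cluster grows, while a symmetric design spreads it across all of them.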

With that out of the way, let’s take a closer look at some of these file systems.

Panasas PanFS

The Panasas PanFS parallel file system is an object-based file system designed for scale-out applications that require high performance in both I/O and bandwidth. Unlike NFS or CIFS, which Panasas also supports, PanFS uses the parallel DirectFLOW protocol, the foundation of the pNFS (Parallel NFS) standard that is the major advance in the upcoming NFS version 4.1. The key benefit of Panasas parallel storage is said to be superior application performance.

Where NFS servers require that all I/O requests go through a single NAS filer head, PanFS enables parallel transfer of data directly from the clients or server nodes into the storage system. With Panasas, the NAS head is removed from the data path and is no longer the I/O bottleneck. Case in point: Panasas parallel storage is installed with the world’s highest-performing computer system, the Roadrunner system at Los Alamos National Lab in New Mexico, where it delivers close to 100 GB/s to a single shared file system.
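
For readers unfamiliar with the parallel I/O model, here is a minimal sketch of the data path described above: the client asks a metadata service for a striping layout once, then pulls the stripes straight from the storage nodes in parallel rather than funneling every byte through a NAS head. The get_layout() and read_stripe() functions are hypothetical stand-ins for illustration, not the actual DirectFLOW or pNFS API.

```python
# Conceptual sketch of a parallel (pNFS-style) read: one metadata round trip
# for the layout, then data flows directly from the storage nodes in parallel.
# Not Panasas's protocol or API; the functions below are hypothetical.

from concurrent.futures import ThreadPoolExecutor

def get_layout(path, stripe_count=8):
    """Hypothetical metadata call: which storage node holds which stripe."""
    return [{"node": f"osd{i:02d}", "stripe": i} for i in range(stripe_count)]

def read_stripe(extent):
    """Hypothetical direct read from one object storage node."""
    return f"<data from {extent['node']} stripe {extent['stripe']}>"

def parallel_read(path):
    layout = get_layout(path)                        # one metadata round trip
    with ThreadPoolExecutor(max_workers=len(layout)) as pool:
        stripes = list(pool.map(read_stripe, layout))  # stripes fetched in parallel,
    return "".join(stripes)                            # never through a NAS head

print(parallel_read("/panfs/scratch/results.dat"))
```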

“As a result of this architecture, Panasas parallel storage systems scale to thousands of users/servers, tens of petabytes and can generate over 100 GB/s in bandwidth,” said Rosenthal. “Other key features include its software-based RAID architecture that enables parallel RAID reconstructions that are 5X to 10X faster than most storage systems.”

PanFS also includes Panasas Tiered Parity technology, which automatically detects and corrects unrecoverable media errors, a capability that is particularly important during RAID reconstructions. Finally, the file system is optimized for use with many simulation and modeling applications.
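
The value of per-block integrity checks during a rebuild is easier to see with a toy example. The sketch below uses simple XOR parity and CRC32 checksums to show the general idea: a latent media error on a surviving disk is caught by its checksum and the block is regenerated from parity instead of being propagated into the rebuild. This is a generic illustration, not Panasas’s Tiered Parity implementation.

```python
# Toy XOR-parity stripe with per-block checksums. Illustrative only.
import zlib
from functools import reduce

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: [x ^ y for x, y in zip(a, b)], blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]          # three data blocks on three disks
parity = xor_blocks(data)                    # parity block on a fourth disk
checksums = [zlib.crc32(b) for b in data]    # stored per-block checksums

# Simulate a latent media error on block 1 discovered during a rebuild.
data[1] = b"B\x00BB"
if zlib.crc32(data[1]) != checksums[1]:
    # Recover the bad block from parity plus the remaining good blocks.
    data[1] = xor_blocks([data[0], data[2], parity])
assert zlib.crc32(data[1]) == checksums[1]
print("block 1 repaired from parity:", data[1])
```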

Note, though, that Panasas systems are designed for file storage, not block storage. Therefore, it is typically not installed for transaction-oriented applications such as ERP, order entry or CRM. Instead, it tends to be deployed in applications where a large number of users or server nodes need shared access to a common pool of large files.

HP File Services

HP claims superiority by pushing symmetry over parallelism. The product is aimed at medium-sized customers who need to increase application throughput well beyond what traditional NAS products can deliver and to grow storage capacity online without service disruption. HP StorageWorks 4400 Scalable NAS File Services includes an HP StorageWorks 4400 Enterprise Virtual Array with dual array controllers and 4.8 TB of storage, three file serving nodes, management and replication software, and support for Windows or Linux. With three file serving nodes and dual array controllers, the 4400 Scalable NAS File Services has no single point of failure.

Downsides?

“The 4400 Scalable NAS File Services is less suitable for high-performance computing applications that require more than 6 GB/sec of throughput,” said Duncan.

Quantum StorNext

StorNext is certainly the platform of choice for Apple-based environments. Further, in media-rich environments where Apple, Windows and other systems must interact, StorNext appears to have the market cornered. For example, StorNext is commonly used in demanding video production and playback applications because of its ability to handle the large capacities and frame rates of high-definition content. How does it do beyond that niche?

“The key differentiators between StorNext and other shared file systems are our tight level of integration with the archive tier (StorNext/StorageManager) along with the robust tape support, as well as the broad OS platform support,” said Nuss. “No other file system can support varieties of Linux, Unix, Apple and Windows within a single cluster environment.”

The StorNext file system is a heterogeneous, shared file system with integrated archive capability. It enables systems to share a high-speed pool of images, media, content, analytical data and other files so they can be processed and distributed rapidly, whether SAN or LAN connected. According to Nuss, it excels at both high-performance data rates and high capacity in terms of the file size as well as number of files in the file system.

IBM GPFS

The General Parallel File System (GPFS) from IBM has been out now for a few years.

“GPFS is a high-performance, shared disk, clustered file system for AIX and Linux,” said John Webster, an analyst at Illuminata Inc.

Originally designed for technical high-performance computing (HPC), it has since expanded into environments that require performance, fault tolerance and high capacity, such as relational databases, CRM, Web 2.0 and media applications, engineering, financial applications and data archiving.

“GPFS is built on a SAN model where all the servers see all the storage,” said Neville. “To allow data access from systems not attached to the SAN, GPFS provides a software simulation of a SAN, allowing access to the data using general purpose networks such as Ethernet.”

Data is striped across all the disks in each file system, which allows the bandwidth of every disk to be brought to bear on a single file or aggregated across multiple files. This performance can be delivered to all the nodes that make up the cluster. GPFS can also be configured so that there are no single points of failure. On top of the core file service features, GPFS provides functions such as the ability to share data between clusters and a policy-based information lifecycle management (ILM) tool that migrates data among different tiers of storage, which can include tape.
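
A minimal sketch of the striping idea Neville describes, assuming an illustrative 256 KB block size and eight disks (illustrative values, not GPFS defaults being asserted here): file blocks are laid out round-robin across the disks, so a single large sequential read fans out over every spindle at once.

```python
# Round-robin block-to-disk mapping, as a conceptual illustration of striping.
BLOCK_SIZE = 256 * 1024
DISKS = [f"disk{i}" for i in range(8)]   # stand-ins for the disks in one file system

def block_location(offset):
    """Which disk holds the block containing this byte offset."""
    block_index = offset // BLOCK_SIZE
    return DISKS[block_index % len(DISKS)], block_index

# A 2 MB sequential read touches every disk in the stripe group once.
for off in range(0, 8 * BLOCK_SIZE, BLOCK_SIZE):
    disk, blk = block_location(off)
    print(f"block {blk} -> {disk}")
```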

In addition, GPFS can be used at the core of a file-serving NAS cluster where all the data is served via NFS, CIFS, FTP or HTTP from all nodes of the cluster simultaneously. Nodes or storage devices can be added to or removed from the cluster as demands change. The IBM Scale Out File Services (SoFS) offering, based on GPFS, adds further functionality.

“As file-centric data and storage continues to expand rapidly, NAS is expected to follow the trend of HPC, Web serving, and other similar industries into a scale-out model based on standard low-cost components, which is a core competency for GPFS,” said Neville.

More to Come

While most of the vendors above claim global superiority on multiple fronts, most are willing to admit some areas of weakness. The bottom line appears to be that on-site testing and liberal use of free trial periods are required to see just how these various file systems behave in your environment.

Further, we have only scratched the surface. A follow-up article will cover NetApp (NASDAQ: NTAP) and Sun (NASDAQ: JAVA), as well as some of the traditional file system protocols like CIFS and NFS. After all, not everyone needs super-high performance.


Drew Robb
Drew Robb is a contributing writer for Datamation, Enterprise Storage Forum, eSecurity Planet, Channel Insider, and eWeek. He has been reporting on all areas of IT for more than 25 years. He has a degree from the University of Strathclyde in the UK and lives in the Tampa Bay area of Florida.
