Lustre Buying Guide


Lustre is a high-performance open source file system that is particularly popular on storage platforms used in high-performance computing (HPC) and supercomputing environments. While it may not be on everyone’s lips across the storage universe in general, it generated a lot of buzz at this month’s SC14 supercomputing conference in New Orleans.

Lustre runs on Linux. It was originally designed and developed by Cluster File Systems, which was later acquired by Sun Microsystems (and subsequently Oracle), with input from many individuals and companies in the open source community.

The basic design is massively parallel, enabling I/O performance and scaling beyond the limits of traditional file systems. What makes it the file system of choice for supercomputing is that it scales to tens of petabytes, hundreds of GB per second and thousands of clients with very high levels of reliability.
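That parallelism comes largely from striping: a single file is split into chunks spread across many object storage targets (OSTs) so that clients can read and write different chunks at the same time. The sketch below is a conceptual illustration only, not Lustre's actual implementation; the function name and round-robin layout are simplifying assumptions.

```python
# Conceptual sketch (NOT Lustre's implementation): how a parallel file
# system stripes one file across several object storage targets (OSTs)
# so clients can move the chunks in parallel.

def stripe_layout(file_size, stripe_size, stripe_count, start_ost=0):
    """Return (ost_index, offset_within_file, length) for each chunk,
    assigning chunks to OSTs round-robin."""
    chunks = []
    offset = 0
    i = 0
    while offset < file_size:
        length = min(stripe_size, file_size - offset)
        ost = (start_ost + i) % stripe_count
        chunks.append((ost, offset, length))
        offset += length
        i += 1
    return chunks

# A 10 MB file striped in 1 MB chunks over 4 OSTs: chunks 0-3 land on
# OSTs 0-3, chunk 4 wraps back to OST 0, and so on.
layout = stripe_layout(10 * 2**20, 2**20, 4)
print(layout[:5])
```

On a real Lustre file system the stripe size and count are set per file or per directory by the administrator or user, which is how a single large file can saturate many servers at once.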

That’s why the bulk of the top 100 systems on the list of the world’s fastest supercomputers utilize Lustre. This includes the Tianhe-2 system at the National Supercomputer Center in Guangzhou, China, which tops the list, with a custom interconnect that is much faster than InfiniBand.

“Lustre has been doing quite well and is very much alive both in the traditional high-performance computing (HPC) area as well as high-productivity compute and high-profit compute, not to mention in and around big-data as well as object and scale-out storage,” said Greg Schulz, an analyst at StorageIO Group. “Where Lustre has had success is for those environments or scenarios that need to read or write very large datasets or files requiring parallel access to files compared to general purpose and most scale-out NAS solutions that are targeted for many concurrent access of small files.”

Some of the bigger names in Lustre today, said Schulz, include DataDirect Networks (DDN), Seagate, OpenSFS and Intel. In addition, companies like Cray, HP, Dell, Groupe Bull, IBM, NetApp and SGI partner with some of these organizations as part of their own Lustre offerings.


OpenSFS

OpenSFS is the home of the open-source Lustre file system. The Lustre Community Portal supports developers, admins, and users with downloadable Lustre releases, documentation, development tree access, issue reporting, working groups, mailing lists, and more.

OpenSFS is a nonprofit organization founded in 2010 to advance Lustre development, ensuring it remains vendor-neutral, open, and freely downloadable. OpenSFS participants include vendors and customers. The community has a roadmap for Lustre development.


Intel

Update: Intel shuts down its Lustre file system business.

Intel has been a big contributor to Lustre in general.

“On average, more than 80 percent of new development in the open source version of Lustre is done by Intel – working closely with other source contributors,” said Brent Gorda, general manager of the High Performance Data Division, Intel.

Intel offers a suite of feature-enhanced ‘superset’ versions of the Lustre software, which includes:

Enterprise Edition for Lustre – this couples Lustre with features that help mid-tier HPC and commercial users exploit the scalable performance of Lustre, such as Intel Manager for Lustre, as well as software connectors that lower the barriers to adopting Hadoop MapReduce applications on HPC storage configurations. These connectors allow Lustre to replace HDFS, the default Hadoop file system, and improve the Hadoop resource and job scheduler when used on HPC systems.

Cloud Edition for Lustre – this provides parallel storage software for HPC applications that are using dynamic “pay as you go” resources via Amazon Web Services.

Pricing, said Gorda, is based on the number of object storage servers used within a specific or discrete storage solution. Object storage servers run Lustre server software layered over a distribution of Linux, either Red Hat or SUSE.

Key markets served span from the legacy HPC segment, such as oil and gas exploration and reservoir modeling, to newer markets whose problems have become larger, more complex and more urgent to solve. Examples are genomics (within life sciences) and commercial enterprises that run technical applications and use HPC-class storage software like Lustre as part of their overall application mix.


Seagate

Seagate acquired Xyratex, which had developed the ClusterStor Lustre-based solution and had earlier acquired Lustre assets from Oracle. Seagate just announced version 2.0 of the ClusterStor Engineered Solution for Lustre, which scales from small workgroup clusters to large compute clusters requiring storage support of up to 1 TB/sec of performance and up to 90 PB of capacity from a single file system.

Reported benefits include improved metadata performance and scalability through the Distributed Namespace (DNE) feature in Lustre 2.5. In addition to the base metadata management server, ClusterStor users have the option to add up to sixteen Lustre Distributed Namespace metadata servers per file system. This is said to offer a metadata performance leap of up to 700 percent and up to 16 billion files per file system.
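The idea behind DNE is that different directories can live on different metadata targets (MDTs), so metadata-heavy workloads in separate directories no longer contend on a single server. The sketch below is a simplified model of that placement decision; the hashing scheme and function names are assumptions for illustration, not Lustre's actual DNE code.

```python
# Conceptual sketch (assumption: simplified model, not Lustre's DNE code):
# spread directories across up to sixteen metadata targets (MDTs) by
# hashing the directory path, so lookups and creates in different
# directories hit different metadata servers.

import hashlib

def mdt_for_directory(path, mdt_count=16):
    """Deterministically pick an MDT index for a directory path."""
    digest = hashlib.sha1(path.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % mdt_count

for d in ("/scratch/projA", "/scratch/projB", "/scratch/projC"):
    print(d, "-> MDT", mdt_for_directory(d))
```

In practice the administrator chooses which MDT a directory lands on rather than relying purely on a hash, but the effect is the same: metadata load fans out across servers instead of funneling through one.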

“ClusterStor 2.0 is the only fully engineered solution integrating all aspects of hardware, software, management and full Lustre support for the latest 2.5 version of the Lustre parallel file system,” said Torben Kling Petersen, Principal Engineer, Seagate Technology. “Our design concentrates the highest attainable performance from each individual disk drive to achieve the fullest effect at the system level and eliminate unnecessary overhead. Add to this the tight integration of the object store servers with the storage, and ClusterStor delivers the best performance, lowest energy consumption and the most reliable solution on the market.”


DDN

DDN has a long history of selling, installing and supporting Lustre. It was the first company to provide a Lustre support contract to a user, back in 2002. In addition, DDN has a tight partnership with Intel and delivers Intel’s Enterprise Edition for Lustre v2.5 in conjunction with its own technology.

“DDN has leveraged its experience to build its EXAScaler parallel file system appliance, which delivers Lustre on DDN’s HPC storage performance platform, the DDN SFA,” said Laura Shepard, Director of HPC & Life Sciences Marketing, DDN. “DDN writes clients for Lustre that optimize performance, and offers read/write caching through two software solutions called ReACT and SFX.”

The EXAScaler Lustre appliance comes with support from DDN and Intel. This version offers high sustained Lustre metadata performance, capable of exceeding 100,000 file creates per second, even when creating millions of files under intensive load. The EXAScaler appliance is said to be attractive to academic and scientific research organizations, as well as enterprises that are scaling at HPC levels such as those in the oil and gas, financial services and manufacturing industries.
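Figures like “file creates per second” are a standard way to quote metadata performance. The snippet below is a minimal, hypothetical sketch of how such a rate might be measured from a single client; the function name and counts are arbitrary, and real benchmarks (such as mdtest) run many clients in parallel to reach numbers like those quoted above.

```python
# Hypothetical sketch: time how many empty files one client can create
# per second. Real metadata benchmarks run many clients in parallel and
# point them at a directory on the file system under test.

import os
import time
import tempfile

def measure_create_rate(count=10_000):
    """Create `count` empty files and return the creates-per-second rate."""
    with tempfile.TemporaryDirectory() as d:
        start = time.perf_counter()
        for i in range(count):
            # open + close an empty file: the basic metadata "create" op
            open(os.path.join(d, f"f{i:07d}"), "w").close()
        elapsed = time.perf_counter() - start
    return count / elapsed

print(f"{measure_create_rate():,.0f} creates/sec")
```

Run against a local disk this mostly measures the client kernel; run against a parallel file system mount, it exercises the metadata server path that DNE and similar features are designed to scale.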


Drew Robb
Drew Robb is a contributing writer for Datamation, Enterprise Storage Forum, eSecurity Planet, Channel Insider, and eWeek. He has been reporting on all areas of IT for more than 25 years. He has a degree from the University of Strathclyde in the UK, and lives in the Tampa Bay area of Florida.
