
Test Plan for Linux File System Fsck Testing, Fs_mark and Hardware

Fs_mark

fs_mark is used to fill the file system for each test. fs_mark has a large number of options, but we will focus on only a few of them. An example command line for creating 100 million files is the following:

# fs_mark -n 10000000 -s 400 -L 1 -S 0 -D 10000 -N 1000 -d /mnt/home -t 10 -k

where the options are:

  • -n 10000000: The number of files to be created per thread (more on that below)
  • -s 400: Each file will be 400KB
  • -L 1: Loop the test once
  • -S 0: Issue no sync() or fsync() calls during file creation. Since fs_mark is being used to fill the file system rather than to benchmark it, we just want the files created quickly
  • -D 10000: Create 10,000 subdirectories under the root directory
  • -N 1000: Allocate 1,000 files per directory
  • -d /mnt/home: The root directory for testing; this particular test uses only one root directory
  • -t 10: Use 10 threads to build the file system
  • -k: Keep the files after the test completes

With these options, each directory holds 1,000 files and there are 10,000 directories, which works out to 10 million files. Note, however, that the "-n" option specifies the number of files per thread, not the total. Since there are 10 threads, each creating 10 million files, the test produces a total of 100 million files.
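To make that arithmetic explicit, here is a minimal shell sketch (the variable names are ours, not fs_mark options) that reproduces the file-count and capacity totals for this configuration:

THREADS=10                 # -t
FILES_PER_THREAD=10000000  # -n
FILE_SIZE_KB=400           # -s, treated here as 400KB per file

TOTAL_FILES=$(( THREADS * FILES_PER_THREAD ))
TOTAL_KB=$(( TOTAL_FILES * FILE_SIZE_KB ))

echo "Total files: ${TOTAL_FILES}"                  # 100000000
echo "Total data:  $(( TOTAL_KB / 1000000000 ))TB"  # 40 (decimal TB)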

Since we have 100 million files and each file is 400KB, the files consume a total of 40TB. This is about half of the 80TB of the largest file system. With the goal of filling at least 50 percent of each file system for each specified number of files, the resulting file sizes are listed below.

  1. 80TB XFS File System
    • 100 Million files: 400KB file size
    • 50 Million files: 800KB file size
    • 10 Million files: 4,000KB (4MB) file size
  2. 40TB XFS File System
    • 100 Million files: 200KB file size
    • 50 Million files: 400KB file size
    • 10 Million files: 2,000KB (2MB) file size
  3. 10TB ext4 File System
    • 100 Million files: 50KB file size
    • 50 Million files: 100KB file size
    • 10 Million files: 500KB file size
  4. 5TB ext4 File System
    • 100 Million files: 30KB file size
    • 50 Million files: 60KB file size
    • 10 Million files: 300KB file size

All of the tests except the last one use 50 percent of the space. The last one uses 60 percent of the space.
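For reference, the file sizes above follow from a simple calculation: the target capacity (file system size times fill fraction) divided by the number of files. A small shell sketch along those lines, using decimal units and a helper name of our own choosing:

# print the required file size in KB to fill a given percentage of a file system
# usage: filesize <fs_size_in_TB> <fill_percent> <number_of_files>
filesize() {
    local fs_tb=$1 pct=$2 nfiles=$3
    echo $(( fs_tb * pct * 10000000 / nfiles ))
}

filesize 80 50 100000000   # 400  (KB) -- 80TB XFS, 100 million files
filesize 40 50 10000000    # 2000 (KB) -- 40TB XFS, 10 million files
filesize 5  60 100000000   # 30   (KB) -- 5TB ext4, 100 million files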

When possible, we will repeat the tests several times so we can report the average and the standard deviation. Between tests, the file system will be remade and fs_mark will be rerun to fill it. Because filling the file system and running fsck can take a very long time, it is possible that only a few repetitions will be run.
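As a rough sketch, one iteration of an XFS test might look like the following. The device path is a placeholder for the LVM volume described in the next section, and we use xfs_repair in its no-modify (-n) mode as the check; for the ext4 runs, mkfs.ext4 and e2fsck -f -n would take the place of the corresponding commands:

DEV=/dev/vg_nss/lv_nss   # placeholder name for the LVM logical volume
MNT=/mnt/home

mkfs.xfs -f "$DEV"       # remake the file system between runs
mount "$DEV" "$MNT"

# fill the file system (80TB / 100-million-file case from above)
fs_mark -n 10000000 -s 400 -L 1 -S 0 -D 10000 -N 1000 -d "$MNT" -t 10 -k

umount "$MNT"

# time the file system check
time xfs_repair -n "$DEV"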

Hardware

Dell has been kind enough to supply us with hardware for the testing. The hardware used is its NSS solution, which uses XFS. The configuration consists of a single NFS gateway with two Intel 4-core processors, either 24 or 48GB of memory, and two 6Gbps RAID cards, each connected to a daisy-chained series of JBODs. Each JBOD holds twelve 3.5" drives; the drives used are 7.2K rpm, 2TB NL-SAS drives. Each JBOD is configured as a RAID-6 set with 10 drives of capacity and two drives of parity, so each JBOD provides 20TB of capacity. Each RAID card runs RAID-60 across its particular set of JBODs (RAID-6 within each JBOD and RAID-0 to combine them). LVM is then used to combine the capacity of the two RAID cards into a single device on which the file system is built.
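As a sketch of that last step, combining the two RAID-60 virtual disks with LVM and building the file system might look roughly like the following (the device names /dev/sdb and /dev/sdc and the volume group and logical volume names are placeholders of our own choosing):

pvcreate /dev/sdb /dev/sdc               # one virtual disk per RAID card
vgcreate vg_nss /dev/sdb /dev/sdc        # combine them into one volume group
lvcreate -l 100%FREE -n lv_nss vg_nss    # one logical volume spanning both cards
mkfs.xfs /dev/vg_nss/lv_nss              # build the XFS file system on top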

For the 80TB configuration, a total of 48 drives is used (40 for capacity and 8 for parity). For the 40TB configuration, a total of 24 drives is used (20 for capacity and 4 for parity). The smaller configurations used in the ext4 testing simply use part of the 40TB configuration's capacity via LVM.

Summary and Invitation for Comments

This article is really the test plan that executes the ideas embodied in Henry's article. The focus is on testing the fsck time for Linux file systems, particularly xfs and ext4, on current hardware. To gain an understanding of how fsck time varies, several combinations of file system size and number of files will be tested.

We will be using fs_mark to fill the file systems and then run the appropriate file system check, timing how long it takes to complete. It's a pretty straightforward test that should give us some insight into how fsck performs on current Linux file systems.

We want to encourage feedback on the test plan. Is something critical missing? Is there perhaps a better way to fill the file system? Is there another important test point? Speak now or forever hold your peace.

Jeff Layton is the Enterprise Technologist for HPC at Dell, Inc., and a regular writer of all things HPC and storage.

Henry Newman is CEO and CTO of Instrumental Inc. and has worked in HPC and large storage environments for 29 years. The outspoken Mr. Newman initially went to school to become a diplomat, but was firmly told during his first year that he might be better suited for a career that didn't require diplomatic skills. Diplomacy's loss was HPC's gain.
