
Linux File System Fsck Testing -- The Results Are In

After an extended delay, the Linux file system fsck testing results can now be presented. The test plan has changed slightly since our kickoff article. We will review it at the beginning of this article, followed by the actual results. Henry Newman will review the results and offer some observations in the next article in this series. As always, we welcome reader feedback and comments.

FSCK Testing Plan

It has been a while since we started this project to test fsck (file system check) times on Linux file systems. The lengthy delay in obtaining results was due to a lack of hardware: the original vendor could not spare any for testing, and of the other vendors we contacted, none could provide the needed hardware for many months, if at all. In the end, Henry used his diplomatic skills to save the day, persuading Data Direct Networks (DDN) to help us out. Paul Carl and Randy Kreiser from DDN contacted me and agreed to provide remote access to the hardware (thank you, DDN!).

Paul used a DDN SFA10K-X with 590 disks: 450GB, 15,000 rpm SAS drives. From these disks he created a number of RAID-6 pools in an 8+2 configuration (8 data disks and 2 parity disks) with a 128KB chunk size. Each pool is a LUN that is 3.6TB in size before formatting. The LUNs were presented to the server as disk devices such as /dev/sdb1, /dev/sdc1, /dev/sdd1, ..., /dev/sdx1, for a total of 23 LUNs of 3.6TB each, or 82.8TB raw. The LUNs were combined using mdadm and RAID-0 to create a RAID-60 configuration with the following command:

mdadm --create /dev/md1 --chunk=1024 --level=0 --raid-devices=23 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1 /dev/sdm1 /dev/sdn1 /dev/sdo1 /dev/sdp1 /dev/sdq1 /dev/sdr1 /dev/sds1 /dev/sdt1 /dev/sdu1 /dev/sdv1 /dev/sdw1 /dev/sdx1
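
Once the array is assembled, a quick way to sanity-check the RAID level, chunk size and member count is with the standard md tools (this is purely a verification step, not part of the timed testing):

cat /proc/mdstat              # active arrays and their state
mdadm --detail /dev/md1       # RAID level, chunk size and member devices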

The result was a file system of about 72TB using "df -h", or 76,982,232,064 1KB blocks (roughly 71.7 TiB) from "cat /proc/partitions". A second set of tests was run on storage that used only 12 of the 23 LUNs. The mdadm command is:

mdadm --create /dev/md1 --chunk=1024 --level=0 --raid-devices=12 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1 /dev/sdm1

The resulting file system for this configuration was about 38TB using "df -h".

The server used in the study is a dual-socket Intel Xeon system with Nehalem processors (E5520) running at 2.27 GHz with an 8MB L3 cache. The server has a total of 24GB of memory and was connected to the storage via a QLogic FC8 Fibre Channel card attached to an FC switch that was, in turn, connected to the storage. The server ran CentOS 5.7 (2.6.18-274 kernel). The stock configuration was used throughout the testing except for one component: the e2fsprogs package was upgraded to version 1.42, enabling ext4 file systems larger than 16TB to be created. This allows the fsck performance of XFS and ext4 to be contrasted.
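
CentOS 5.7 ships a much older e2fsprogs, so the 1.42 tools have to come from elsewhere. One common route (not necessarily how it was done for this setup) is building 1.42 from source; the tarball location and install paths below are illustrative only:

tar xzf e2fsprogs-1.42.tar.gz     # assumes the 1.42 source tarball has already been downloaded
cd e2fsprogs-1.42
./configure                       # standard autoconf-style build
make
make install                      # install paths depend on the configure options used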

The file systems were built much the way many system admins will build them: using the defaults. The commands for building the file systems are:

  • XFS: /sbin/mkfs.xfs -f /dev/md1
  • EXT4: /sbin/mke2fs -t ext4 -F /dev/md1
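
Since the whole point is to rely on the defaults, it is worth recording what those defaults actually chose. One illustrative way to do that after each file system is created (not something the test plan requires) is to dump the superblock:

dumpe2fs -h /dev/md1                      # ext4: block count, block size and feature flags picked by mke2fs
xfs_db -r -c "sb 0" -c "print" /dev/md1   # XFS: superblock geometry, read-only and without mounting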

Mounting the file systems involved a little more tuning. For XFS, I used the mount options recommended by Dell: rw,noatime,attr2,nobarrier,inode64,noquota. For ext4, the mount options were defaults,data=writeback,noatime,barrier=0,journal_checksum.

Journal checksumming was turned on for ext4 since I like the added protection it provides.
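
Putting the two option sets together, the mount commands look roughly like the following. The mount point is purely illustrative; the original setup doesn't specify one:

# XFS, using the Dell-recommended options
mount -t xfs -o rw,noatime,attr2,nobarrier,inode64,noquota /dev/md1 /mnt/fsck_test

# ext4, with barriers off, writeback journaling and journal checksumming
mount -t ext4 -o defaults,data=writeback,noatime,barrier=0,journal_checksum /dev/md1 /mnt/fsck_test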

