We failed. Jeff and I tried and tried to get the hardware to run the file system tests we had written about. Originally, Jeff had all of the hardware lined up, but it was taken away at the last minute because it was needed for a benchmark. Jeff tried some of his contacts and got nowhere. Finally, he called me and asked for my help. I called a number of storage vendors asking for 100 TB of RAIDed storage connected to a single Linux server with at least two 8-lane PCIe buses and connectivity to the storage over IB, FC or SAS. We also needed access to the RAID controller configuration for LUNs and tuning settings.
Jeff and I estimated we would need the hardware and remote access for about five days, which we thought was a conservative estimate. I called vendor after vendor, and when push came to shove, no one was willing to loan us this equipment. I told each vendor that Jeff and I had our QuinStreet writer hats on, not those of our day jobs, as we did not want the vendors to feel pressured by our professional roles. Jeff had completed most of the scripts and was ready to go on a moment’s notice.
Our quest began in June with at least four unnamed storage vendors. We continued this quest in July, August and September, and finally gave up in October. Have any of you ever seen the show Man vs. Food? Jeff’s and my eyes were bigger than our stomachs, so to speak, as we thought we could get the hardware, but clearly that was not going to happen.
My question now to the community is: Why would four vendors not want to provide a small amount of hardware for testing by reasonable people who have spent years benchmarking and thus know the difference between a benchmark and a real-world test, which is what we were interested in running?
Industry/Vendor Challenge
As Jeff and I were unsuccessful in getting hardware, we challenge the industry to do what we were going to do and publish the data here on Enterprise Storage Forum. The vendors all offered the same seemingly good excuse: the hardware was in use for potential revenue projects. Jeff clearly described what we were going to do and why. Here is what he said:
The basic plan for the testing has only three steps:
- Create the file system
- Fill the file system
- Run fsck and time how long it takes
That’s pretty much it. Not too complicated at this level, but the devil is always in the details.
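To make those three steps concrete, here is a bare-bones shell sketch of the flow. The device path, mount point, file counts and file sizes below are placeholders of my own for illustration, not the values from Jeff’s test plan; his article and scripts define the actual file system size, file counts and directory layout.

```bash
#!/bin/bash
# Minimal sketch of the three-step fsck test flow.
# /dev/sdX, the mount point, and the file counts/sizes are hypothetical
# placeholders; use the sizes and counts from Jeff's test plan.

DEV=/dev/sdX            # block device for the RAIDed LUN (placeholder)
MNT=/mnt/fscktest       # mount point (placeholder)

# Step 1: create the file system (ext4 shown; rerun with mkfs.xfs for xfs)
mkfs.ext4 -F "$DEV"
mkdir -p "$MNT"
mount "$DEV" "$MNT"

# Step 2: fill the file system -- many directories and files to build up
# the metadata that fsck will have to walk
for d in $(seq 1 1000); do
    mkdir -p "$MNT/dir.$d"
    for f in $(seq 1 1000); do
        dd if=/dev/zero of="$MNT/dir.$d/file.$f" bs=64k count=1 2>/dev/null
    done
done

# Step 3: unmount and time a full, forced check
umount "$MNT"
time fsck.ext4 -f -n "$DEV"     # for xfs, time xfs_repair -n "$DEV"
```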
If you are going to step up to the plate and do this, I recommend reading Jeff’s entire article, “Test Plan for Linux File System Fsck Testing”.
Please read both pages, as there is important setup information on page 2. The test can be set up such that you can start it, walk away, and come back hours or days later when it has completed. We may even be able to provide Jeff’s scripts, but they will need to be verified first, as we, of course, never had any hardware on which to test them at scale.
I am also more than willing to help anyone accomplish this task by providing phone assistance or even on-site support if it happens to work into my schedule.
The way I see it is we have issued a challenge to the block storage community, including, but not limited to: Adaptec and its RAID cards, DDN, EMC, HDS, HP, IBM, LSI and its RAID cards, NetApp, Nexsan, Oracle, Xyratex, and anyone else I have not listed. Remember, this is not about testing the performance of your *system* (which might be one of the reasons why no one would provide hardware to Jeff); this is about testing the performance and scaling of fsck for both ext4 and xfs. Jeff’s article details the whole test procedure.
If anyone, whether a vendor or not (we just figure that vendors have access to more hardware than do users, but at this point beggars cannot be choosers), has a question or wants to help, please click the send email button at the top of this article. I will reply as soon as possible.
Additionally, if someone actually does run the tests and writes an article on this topic, and it passes editorial review by Enterprise Storage Forum editor Amy Newman and me, you will get a small remuneration for the article. You cannot beat that, can you?
There is not much more to say about this other than that I was very disappointed in the vendor community for not providing the required hardware. Yes, each vendor had good reasons, but is it really possible that four major storage vendors all had no hardware available for a five-day test over four months? Being the conspiracy theorist that I am (we all know the shot came from the grassy knoll), I speculate that the vendors believed the results would be interpreted by the community as a storage vendor benchmark rather than what it was: a file system test. If this is true, then we have a communications problem, because no way, no how was that the intent, and we are sorry if there was any misunderstanding.
So call this Storage vs. File System. The issue is clear in our minds: something must be done about the ext4 and xfs file system repair tools, and about metadata design in general. Use whatever tuning you want, including superblock alignment, RAID stripe tuning and volume manager tuning. Create the number of files and the file system size that Jeff documented and run the tests. The first person to do this gets their article published and a small remuneration, but we expect the test plan to be run according to standard industry practices for a real system, not some SBT (slimy benchmarking trick). As a reminder, the goal is to challenge the file system community to optimize the file systems, metadata access and, most importantly, recovery.
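As one example of the kind of tuning that is fair game, here is a sketch of aligning the file system to the RAID geometry at mkfs time. The 8+2 RAID 6 layout, the 128 KB segment size and the /dev/sdX device are assumptions of mine for illustration; plug in your controller’s actual geometry.

```bash
# Hypothetical geometry: an 8+2 RAID 6 LUN with a 128 KB segment (chunk) size.
# Adjust the numbers to match your controller's actual configuration.

# ext4: stride and stripe-width are given in 4 KB file system blocks.
#   stride       = 128 KB chunk / 4 KB block = 32
#   stripe-width = stride * 8 data disks     = 256
mkfs.ext4 -E stride=32,stripe-width=256 /dev/sdX

# xfs: su is the per-disk stripe unit, sw is the number of data disks.
mkfs.xfs -d su=128k,sw=8 /dev/sdX
```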
As I said, I am here to help and answer any questions. It is my extreme hope that someone can succeed where Jeff and I have failed. I think this test is important for the Linux community and the storage industry. If anyone is interested in talking face to face about the tests and happens to be at Supercomputing 11 in Seattle next week, please look me up.
If additional tests are done, we, of course, will publish them, but there will be no remuneration.
Henry Newman is CEO and CTO of Instrumental Inc. and has worked in HPC and large storage environments for 29 years. The outspoken Mr. Newman initially went to school to become a diplomat, but was firmly told during his first year that he might be better suited for a career that didn’t require diplomatic skills. Diplomacy’s loss was HPC’s gain.
Jeff Layton is the Enterprise Technologist for HPC at Dell, Inc., and a regular writer of all things HPC and storage.