While the technology is still in its infancy, there are several options for creating shared storage in the cloud for HPC workloads.
Understanding the IO patterns of your applications is good for end users and good for vendors.
Metadata repositories are growing so large that metadata itself is becoming a "big data" problem.
This final look at the top of the stack discusses the role of Hadoop in Big Data and how it all ties into the analytical tools MapReduce and R.
There are eight types of database applications for Big Data. Learn what they are and what they mean for data storage.
Big Data is everywhere, but what is it, really? We define it and discuss how to get 'data' into 'Big Data.'
The results of our Linux fsck testing were posted last month. Now it's time to address the big questions that remain: What do the results tell us, what do they mean, and is the performance expected?
Our Linux file system fsck testing is finally complete. Just how bad is the Linux file system scaling problem?
The only constant about storage technology is the fact that it is constantly changing. But where is it headed? Storage devices and tiering software are two areas ripe for speculation.
Our examination of the ever-growing Linux file system scaling problem continues. In part 2 of our State of File Systems Technology series, Jeff Layton describes the approach and specs to be used in running the fsck wall clock time benchmark/test.
SandForce controllers are found in a wide range of SSDs. The most interesting and unique feature of SandForce SSD controllers is that they use real-time data compression to improve performance and SSD longevity. The impact, however, depends on the compressibility of the data.
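To see why compressibility matters so much for a compressing controller, consider a quick sketch (this uses zlib purely as an illustration; it is not SandForce's proprietary algorithm): repetitive data shrinks dramatically, so fewer bytes hit the flash, while random data barely compresses at all.

```python
import os
import zlib

# Two 1 MiB buffers: one highly repetitive, one effectively incompressible.
repetitive = b"ABCD" * (256 * 1024)      # repeating 4-byte pattern
random_data = os.urandom(1024 * 1024)    # random bytes, near-maximal entropy

for label, buf in (("repetitive", repetitive), ("random", random_data)):
    compressed = zlib.compress(buf, 6)
    ratio = len(buf) / len(compressed)
    print(f"{label}: {len(buf)} -> {len(compressed)} bytes ({ratio:.1f}x)")
```

The repetitive buffer compresses by orders of magnitude, while the random buffer stays roughly the same size. A controller that writes the compressed form sees the same asymmetry: fewer program/erase cycles (better longevity) and less data moved (better throughput) for compressible workloads, and essentially no benefit for encrypted or already-compressed data.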
Do you have data that you haven't touched in a long time? Some fairly recent studies have shown that data is getting colder.
In part two of his look at solid state disk performance degradation, Jeff Layton puts an enterprise SSD through its paces to understand its performance characteristics before, during and after heavy use.
In part one of this two-part look at SSDs, Jeff Layton examines why SSD performance degrades over time and offers some potential solutions to the problem.
Solid state disk (SSD) development could easily stagnate at the 20-25 nm mark without changes to today's materials and techniques for storing and retrieving data.