We’ve all probably heard more than we want to hear about clouds this week, thanks to EMC World, but there are some things you need to think about if you’re considering adopting a cloud model as part of your storage networking architecture.
Clouds have a place in data storage architecture planning, as do applications that might use clouds, such as Hadoop. The standard cloud approach is to use low-cost hardware and replicate the data; the theory is that replication gives you reliability in the event of a failure. Because most of the work I do is in large storage environments, and given what I know about drive failure rates, I have some huge misgivings about using this method to manage petabytes of data that need to be highly reliable.
So what I want to do is take you through a step-by-step analysis of the low-cost hardware used in most clouds. I did not look at the failure rates of the server blades, just the storage. As part of this analysis, I went to the Web sites of all the major disk manufacturers and used the best values across all vendors, so my analysis is likely best case and your mileage may vary. Let's go through this thought process step by step.
Hard Errors Per Petabyte of Data Moved
The hard error rate, also known as BER (bit error rate), has a big effect on reliability. All the disk vendors I reviewed specified the BER in terms of non-recoverable read errors per bits read (one sector per 10^XX bits).
[Table: non-recoverable read error rates (BER) by drive type]
Enterprise SAS drives are not, as far as I am aware, being used by anyone in a cloud or Hadoop architecture, given the huge cost difference between enterprise SAS and SATA drives. Most installations are using the cheapest hardware.
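To put the BER spec in concrete terms, here is a minimal sketch. The exponents are my own assumptions of commonly published values (1 error in 10^15 bits for consumer SATA, 1 in 10^16 for enterprise SATA), chosen because they land near the 111 TB and 1.1 PB figures used later in the article; substitute the actual vendor specs as needed.

```python
# Rough sketch: how much data can you read before expecting one hard error?
# Assumed BER specs (typical published values, NOT from a specific datasheet):
#   consumer SATA:   1 non-recoverable read error per 1e15 bits read
#   enterprise SATA: 1 non-recoverable read error per 1e16 bits read
ASSUMED_BER_BITS = {"consumer SATA": 1e15, "enterprise SATA": 1e16}

for drive_type, bits_per_error in ASSUMED_BER_BITS.items():
    tib = bits_per_error / 8 / 2**40   # bits -> bytes -> TiB
    print(f"{drive_type}: ~{tib:,.0f} TiB read per expected hard error")
```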
Time to Read a 2 TB Drive
You will see why this is important later in the article; for now, just note the time required to read the data on a drive.
[Table: time to read a full 2 TB drive, by drive type]
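For reference, the read-time math is straightforward; this sketch assumes the ~82 MB/sec sustained per-drive rate used in the bandwidth calculations later in the article.

```python
# Time to stream a full 2 TB drive at a sustained rate of ~82 MB/sec
# (the per-drive rate assumed in the bandwidth math later in this article).
capacity_mb = 2_000_000           # 2 TB expressed in MB (decimal)
sustained_rate_mb_s = 82          # assumed sustained streaming rate
seconds = capacity_mb / sustained_rate_mb_s
print(f"~{seconds:,.0f} seconds (~{seconds / 3600:.1f} hours) to read one 2 TB drive")
# -> roughly 24,390 seconds, the figure used in the failure math below
```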
Number of Drives to Saturate a Channel
It is important to understand the number of drives needed to saturate SONET channels of various speeds. I have estimated channel performance by derating each channel for TCP/IP packetization and retry overhead, conservatively assuming 90 percent of the channel rate, operating at full duplex in both directions.
[Table: number of drives needed to saturate various SONET channel speeds]
Clearly, it does not take a large number of drives to saturate the network bandwidth with failed disk drives.
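As a quick sketch of the saturation math, scaling from the 276 MB/sec derated OC-48 figure used later in the article (adjacent SONET rates scale by roughly 4x) and again assuming ~82 MB/sec per drive:

```python
# How many failed drives' worth of streaming traffic fills a SONET channel?
# The 276 MB/sec OC-48 figure is the derated (90 percent) value used later in
# the article; OC-12 and OC-192 are scaled from it by the usual 4x steps.
DERATED_CHANNEL_MB_S = {"OC-12": 276 / 4, "OC-48": 276, "OC-192": 276 * 4}
PER_DRIVE_MB_S = 82   # assumed sustained rate per failed drive

for channel, channel_rate in DERATED_CHANNEL_MB_S.items():
    drives = channel_rate / PER_DRIVE_MB_S
    print(f"{channel}: ~{drives:.2f} drives saturate the channel")
# OC-48 -> ~3.37 drives, the figure used in the failure math below
```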
Disk Drive Failure Per Year
There are two parts to the drive failure formula. The first part is based on the hard error rate: if you move 111 TB of data on consumer SATA drives, you can expect to hit data that cannot be read. The equivalent number for enterprise SATA is 1.1 PB. The other component of failure is the annualized failure rate (AFR). This is a yearly percentage of the total number of drives and is an estimate provided by the drive vendor. It should be noted that very few drive vendors publish AFR for consumer SATA drives. The next table shows the number of 2 TB SATA drives required for various storage capacities and the expected number of failures per year.
[Table: number of 2 TB SATA drives and expected AFR-based failures per year, by storage capacity]
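As a sketch of the AFR arithmetic (drive count for a given capacity times a yearly failure percentage): the 3 percent AFR below is purely an illustrative placeholder, not a vendor figure.

```python
# AFR-based failures per year: number of 2 TB drives needed for a given
# capacity, multiplied by an annualized failure rate.
DRIVE_TB = 2                      # 2 TB drives, as in the article
ASSUMED_AFR = 0.03                # 3 percent AFR -- illustrative assumption only

for capacity_pb in (1, 5, 10, 20):
    drives = capacity_pb * 1000 / DRIVE_TB        # PB -> TB -> drive count
    afr_failures_per_year = drives * ASSUMED_AFR
    print(f"{capacity_pb:>2} PB: {drives:,.0f} drives, "
          f"~{afr_failures_per_year:,.0f} AFR-based failures/year")
```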
The other aspect is failure based on the BER. Since this depends on data movement, I will again choose a conservative usage number and estimate that each drive uses 5 percent of its total bandwidth year-round.
[Table: expected BER-based failures per year at 5 percent bandwidth usage, by storage capacity]
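The BER component can be sketched the same way. At 5 percent of an assumed ~82 MB/sec per-drive bandwidth, each drive moves roughly 130 TB per year, and with one hard error expected per ~111 TB, that works out to a bit more than one BER-driven failure per drive per year.

```python
# BER-based failures per year at 5 percent of per-drive bandwidth.
SECONDS_PER_YEAR = 365 * 24 * 3600
PER_DRIVE_MB_S = 82               # assumed sustained per-drive rate
USAGE = 0.05                      # 5 percent of total bandwidth, year-round
TB_PER_HARD_ERROR = 111           # consumer SATA figure from the article

tb_moved_per_drive_year = USAGE * PER_DRIVE_MB_S * SECONDS_PER_YEAR / 1e6
ber_failures_per_drive_year = tb_moved_per_drive_year / TB_PER_HARD_ERROR

for capacity_pb in (1, 5, 10, 20):
    drives = capacity_pb * 1000 / 2               # 2 TB drives
    print(f"{capacity_pb:>2} PB: ~{drives * ber_failures_per_drive_year:,.0f} "
          f"BER-driven failures/year")
```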
To determine total failures, you need to add the BER-based failures to the AFR-based failures, using the 5 percent usage figure.
[Table: total expected failures per year (AFR plus BER) at 5 percent usage]
If you take the total failures at 5 percent usage and divide by 365, you get this number of failures per day:
[Table: expected failures per day at 5 percent usage]
A small increase to 7.5 percent usage of total bandwidth yields this number of failures per day for each of the storage volumes:
[Table: expected failures per day at 7.5 percent usage]
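Pulling the AFR and BER components together, here is a sketch of total failures per day at both usage levels. The 3 percent AFR remains an illustrative assumption only; with these inputs the 10 PB, 5 percent case lands in the same ballpark as the roughly 15 failures per day used in the next section.

```python
# Total expected failures per day = (AFR failures + BER failures) / 365.
SECONDS_PER_YEAR = 365 * 24 * 3600
PER_DRIVE_MB_S = 82               # assumed per-drive rate
TB_PER_HARD_ERROR = 111           # consumer SATA figure from the article
ASSUMED_AFR = 0.03                # illustrative assumption only

for usage in (0.05, 0.075):
    tb_per_drive_year = usage * PER_DRIVE_MB_S * SECONDS_PER_YEAR / 1e6
    ber_per_drive_year = tb_per_drive_year / TB_PER_HARD_ERROR
    for capacity_pb in (1, 5, 10, 20):
        drives = capacity_pb * 1000 / 2           # 2 TB drives
        failures_per_year = drives * (ASSUMED_AFR + ber_per_drive_year)
        print(f"usage {usage:.1%}, {capacity_pb:>2} PB: "
              f"~{failures_per_year / 365:.1f} failures/day")
```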
Total Amount of Data to be Moved for Failures
Now to the meat of the issue: For the 5 percent use case and 10 PB of storage, you will have an average of 15 consumer-grade SATA drives failing per day. Each drive takes, best case, approximately 24,390 seconds to be read and rewritten over the network. At most, an OC-48 channel gives you the full bandwidth of 3.37 drives, for a total of 276 MB/sec of bandwidth over 24 hours. Using some simple math, 276 MB/sec * 3,600 * 24 gives the total MB available per day. Doing the same math on the disk drives, each failed drive needs 82 MB/sec for 24,390 seconds, multiplied by 15 drive failures. Here is how that math works out for a few scenarios:
[Table: replication bandwidth required vs. available channel bandwidth for various scenarios]
Any negative number means that the drive replication requirement exceeds the channel bandwidth. So, for example, if you have 10 PB on an OC-48 channel with 5 percent drive usage, the replication traffic exceeds the channel by 6,167,659 MB per day, or about 71 MB/sec over the 24-hour period. Obviously, this becomes a bigger and bigger problem over time, as you cannot replicate the data as fast as it is lost. Statistically, you are going to eventually lose data if you have 10 PB, and it will not take long. The only architectural option is a third copy of the data, which is very costly. The crossover point for an OC-48 channel at 5 percent usage of the storage system is between 5 PB and 10 PB, and at 7.5 percent usage you have only 42 MB/sec (3,652,149/(3600*24)) of spare bandwidth at 5 PB of storage. What is needed is much faster networking, which comes at a cost, or more reliable storage, which also isn't cheap.
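For the 10 PB, OC-48, 5 percent case just described, the daily comparison can be sketched as follows; with 15 failures per day it lands within rounding of the 6,167,659 MB shortfall (about 71 MB/sec) quoted above.

```python
# Daily replication demand vs. daily channel capacity (10 PB, OC-48, 5% usage).
CHANNEL_MB_S = 276                # derated OC-48 rate from the article
PER_DRIVE_MB_S = 82               # sustained rate per failed drive
SECONDS_PER_REBUILD = 24_390      # time to stream one 2 TB drive
FAILURES_PER_DAY = 15             # article's figure for 10 PB at 5% usage

available_mb_per_day = CHANNEL_MB_S * 3600 * 24
needed_mb_per_day = PER_DRIVE_MB_S * SECONDS_PER_REBUILD * FAILURES_PER_DAY
shortfall_mb = needed_mb_per_day - available_mb_per_day

print(f"available: {available_mb_per_day:,.0f} MB/day")
print(f"needed:    {needed_mb_per_day:,.0f} MB/day")
print(f"shortfall: {shortfall_mb:,.0f} MB/day "
      f"(~{shortfall_mb / (3600 * 24):.0f} MB/sec)")
```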
I am sure cloud companies trade these costs off every day and figure out the best method for optimizing them. Is it possible that some of them don't understand some of the basic hardware issues? I sure hope that is not the case. Clearly, cloud storage works just fine below 5 PB on an OC-48 channel with consumer SATA storage. How many clouds have more than that much storage today? I have no idea, but certainly some do, and 10 to 20 PB archives are common for large storage users.
Cloud architecture is far more complex than architecting for local storage. Cloud storage could be designed with a RAID back end, eliminating much of the problem, but most clouds I see do not use RAID because of the cost. The bottom line is that cloud architecture and design is not easy, and for large data volumes I cannot see how clouds can be cheaper than local storage.
Drive reliability and bandwidth will limit cloud adoption, and it’s a problem that may never get solved. Bandwidth will continue to get cheaper, but drive reliability hasn’t improved much, and data will likely continue to grow faster than bandwidth anyway. Perhaps network-based deduplication could help — assuming the data can be deduped. But for now at least, there doesn’t seem to be much of an alternative to good old-fashioned data centers for very large data stores.
Henry Newman, CTO of Instrumental Inc. and a regular Enterprise Storage Forum contributor, is an industry consultant with 28 years' experience in high-performance computing and storage.