The Real Cost of Storage, Part 2


In the first part of this series, we began examining the real cost differences between Fibre Channel (FC) and SATA.

My assumption was that in environments that require performance, SATA is more costly than FC or the new SAS technology. While SATA may be anywhere from one-third to 90 percent cheaper than FC, depending on whether you measure performance or capacity, there are some other issues to consider, and that’s what we’ll touch on here.

There were two good articles on drive reliability that came out of the USENIX FAST conference earlier this year: one on disk failures in the real world ("Disk Failures in the Real World: What Does an MTTF of 1,000,000 Hours Mean to You?") and the other on Google’s experience ("Failure Trends in a Large Disk Drive Population").

There are two important things that I took away from these articles. The first is that disk drives behind RAID controllers do not come close to the reliability specifications the drive manufacturers publish; the second is that SATA drives are far less reliable than FC or SAS drives when used in the same way.

The likely reason for this is that RAID vendors declare a drive failed at the first hint of a problem, such as not responding within the vendor’s specified timeout, even though the drive could likely run significantly longer and recover from the error. For example, if a drive does not respond within a few seconds, some vendors I know will declare the drive failed and mark it for reconstruction, but more than likely that drive will eventually respond. Disk drive vendors do not count that as a failure, because the drive did eventually respond.

Another thing to consider is IOPS. The ratio of random IOPS between SATA and FC is about the same as the ratio of their MB/sec performance, given FC drives’ higher sustained transfer rates and better seek and latency figures. So for the same reasons, I believe FC drives remain a better choice than SATA drives, and the same IOPS formulas can be applied.
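To put rough numbers on that ratio, here is a minimal Python sketch using the standard approximation that random IOPS is about the reciprocal of average seek time plus rotational latency. The seek times below are my own illustrative assumptions, not vendor specifications.

```python
# Rough random-IOPS estimate: IOPS ~ 1 / (average seek time + rotational latency).
# The seek times used here are illustrative assumptions, not vendor figures.

def rotational_latency_ms(rpm: float) -> float:
    """Average rotational latency is half a revolution, in milliseconds."""
    return 0.5 * 60_000.0 / rpm

def est_iops(avg_seek_ms: float, rpm: float) -> float:
    return 1_000.0 / (avg_seek_ms + rotational_latency_ms(rpm))

fc_15k = est_iops(avg_seek_ms=3.5, rpm=15_000)    # roughly 180 IOPS
sata_7k2 = est_iops(avg_seek_ms=8.5, rpm=7_200)   # roughly 79 IOPS
print(f"FC 15K: {fc_15k:.0f} IOPS, SATA 7.2K: {sata_7k2:.0f} IOPS, "
      f"ratio: {fc_15k / sata_7k2:.1f}x")
```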

Numbers and Requirements

I am not going to use the data from the FAST paper, since the disk drives used in the study are mostly old technology, but I will use its key concept: drives do not last as long as drive vendors say, because RAID vendors hold drives to far tighter performance and response margins than the drive vendors do, as Garth Gibson stated in the FAST paper.

On Seagate’s Web site they state that for 15K FC drives, the Annualized Failure Rate (AFR) is 0.62 percent.

Not long ago, the same site expressed reliability as a mean time between failures (MTBF), and the figure was 1.2 million hours. For argument’s sake, let’s use 500,000 hours, which is what many RAID vendors use. The AFR then becomes 1.49 percent.

For SATA drives, the AFR is 0.73 percent, based on Seagate data.

Let’s assume that translates into about 1 million hours, based on what used to be on the site, and let’s also assume that the RAID vendors see 500,000 hours and 300,000 hours. I think it is far more likely that drives fail at a 300,000-hour rate, based on what I have seen under heavy loads, but to be fair I will use both numbers. So the AFR is 1.46 percent for 500,000 hours and 2.44 percent for 300,000 hours.
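For readers who want to check the arithmetic, here is a small sketch of the AFR calculation, assuming AFR is annual power-on hours divided by MTBF. The percentages above imply roughly 7,300 to 7,450 power-on hours per year rather than a full 8,760; that hours figure is my inference from the numbers, not a published one.

```python
# AFR approximation: annual power-on hours / MTBF. The hours-per-year value is
# an assumption inferred from the article's percentages (1.46% at 500,000
# hours implies about 7,300 power-on hours per year, not a full 8,760).

def afr_percent(mtbf_hours: float, annual_hours: float = 7_300.0) -> float:
    return 100.0 * annual_hours / mtbf_hours

for mtbf in (500_000, 300_000):
    print(f"MTBF {mtbf:,} hours -> AFR {afr_percent(mtbf):.2f}%")
```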

Adding It All Up

Remember, my conjecture is that SATA is more costly in performance environments, whether the workload is IOPS or streaming.

The following information came from vendor Web sites in March and April of 2007:

Drive Vendors

Drive Type  Vendor   GB   Form Factor  RPM (K)  Watts (Idle)  Watts (Running)  Avg. MB/sec  MB/sec per Watt
FC          Hitachi  147  3.5 inch     15       8.92          12.02            93           7.7
FC          Hitachi  300  3.5 inch     10       10.8          13.4             91           6.8
FC          Seagate  300  3.5 inch     15       13.7          18.8             99           5.3
FC          Seagate  147  3.5 inch     15       10.7          16.5             99           6.0
SAS         Seagate  73   2.5 inch     15       5.8           8.2              95.5         11.6
SAS         Seagate  146  2.5 inch     10       5.2           7.8              72           9.2
SATA        Seagate  750  3.5 inch     7.2      9.3           13               78           6.0
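As a sanity check on the last column, this snippet recomputes streaming efficiency (MB/sec per running watt) from the table’s own numbers.

```python
# Recompute the table's last column: average MB/sec divided by running watts.
drives = [
    ("FC   Hitachi 147 GB 15K",  12.02, 93.0),
    ("FC   Hitachi 300 GB 10K",  13.40, 91.0),
    ("FC   Seagate 300 GB 15K",  18.80, 99.0),
    ("FC   Seagate 147 GB 15K",  16.50, 99.0),
    ("SAS  Seagate  73 GB 15K",   8.20, 95.5),
    ("SAS  Seagate 146 GB 10K",   7.80, 72.0),
    ("SATA Seagate 750 GB 7.2K", 13.00, 78.0),
]
for name, watts_running, mb_per_sec in drives:
    print(f"{name}: {mb_per_sec / watts_running:.1f} MB/sec per watt")
```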
RAID Failure Rates: 300 GB FC 15K
AFR: 1.49%

                            4+1      8+1
Number of drives            17067    15360
Failures per year           254      229
Likelihood of 2 in a day    69.67%   62.70%
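A note on the last row: it behaves like expected failures per day expressed as a percentage (failures per year divided by 365), which is why the SATA values later in this article can exceed 100 percent. A quick sketch of that arithmetic:

```python
# Failures per year = drive count * AFR; the "2 in a day" row matches
# failures per year / 365, expressed as a percentage.

def failure_stats(drive_count: int, afr: float) -> tuple[float, float]:
    per_year = drive_count * afr
    return per_year, 100.0 * per_year / 365.0

for raid, count in (("4+1", 17_067), ("8+1", 15_360)):
    per_year, per_day_pct = failure_stats(count, 0.0149)
    print(f"RAID {raid}: {per_year:.0f} failures/year, {per_day_pct:.2f}% per day")
```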

Since performance requires power for HBAs and switch ports, here are some power requirements for the data path technology:

A QLogic dual-port 4 Gbit HBA requires 6.5 watts. BTU figures were not provided on the company’s Web site.

On Cisco 9500 series director switches, each 4 Gbit port on a 48-port blade uses 16.46 watts, and 3 BTUs are required to cool the heat it generates.

Here is some information provided by a RAID vendor for 16-drive trays and their controller:

  • Each controller draws approximately 60 watts
  • Drive tray with 16 Seagate 500 GB Tonka-2 SATA drives = 375 watts
  • Drive tray with 16 4 Gbit 146 GB 15K.4 FC drives = 382 watts
  • The number of BTUs required for cooling varies with the drives used and their capacity (larger capacities typically draw more power)
    • 16 500 GB Seagate SATA drives = 1283 BTUs
    • 16 146 GB Seagate 4 Gbit FC drives = 1300 BTUs

As you can see from simple arithmetic, the electronics in an FC tray use 118 watts (382 - (16 * 16.5)) and those in a SATA tray use 167 watts (375 - (16 * 13)). This is likely because of the additional SATA controllers needed compared with the optical FC connection. That overhead amounts to 31 percent of total tray power for FC and 45 percent for SATA.
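Here is that arithmetic as a short sketch, using the tray and per-drive running watts quoted above.

```python
# Tray-electronics overhead: total tray watts minus 16 drives' running watts,
# then expressed as a share of total tray power.
trays = {"FC": (382.0, 16.5), "SATA": (375.0, 13.0)}
for name, (tray_watts, drive_watts) in trays.items():
    overhead = tray_watts - 16 * drive_watts
    print(f"{name}: {overhead:.0f} W overhead, "
          f"{100.0 * overhead / tray_watts:.0f}% of tray power")
```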

An Example

Let’s say you need 100 GB/sec of sustained performance around the clock for your enterprise. That is not really that much performance today. It might be at the upper end for some of you reading this article, but take a bank, insurance company, auto maker, pharmaceutical firm or any other large business, and this is not a large performance requirement. Let’s add up what you will need in disk drives.

For FC drives, I am assuming, based on vendor data, a minimum transfer rate of 75 MB/sec per drive on inner cylinders; for both drive types, my estimates use about 55 percent of maximum performance. Some of this is based on information that used to be on various disk drive vendors’ Web sites. I have put in drive counts for RAID 4+1 and 8+1 and am assuming the failure rates above.

SATA Failure Rates: 750 GB SATA 7.2K
AFR: 1.46%

                            4+2      8+2
Number of drives            35721    29767
Failures per year           522      435
Likelihood of 2 in a day    142.88%  119.07%

Based on this information and on rebuild times, which I estimate at 4 to 12 hours depending on load and performance requirements, I think you would need two extra 4+1s or 8+1s to ensure you meet your performance requirement, bringing the total drive count to 17,077 for RAID 4+1 and 15,378 for RAID 8+1. Note that this does not include hot spares.

The two extra LUNs will require two additional HBAs and two additional switch ports, and likely part of an additional RAID controller; I am not adding these in, but please keep that fact in mind.
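For the record, the drive-count arithmetic looks like this:

```python
# Base drive counts plus the two extra LUNs added for rebuild headroom.
def total_drives(base: int, extra_luns: int, drives_per_lun: int) -> int:
    return base + extra_luns * drives_per_lun

print(total_drives(17_067, 2, 5))   # RAID 4+1: 17,077 drives
print(total_drives(15_360, 2, 9))   # RAID 8+1: 15,378 drives
```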

For SATA, here are the numbers:

SATA Failure Rates: 750 GB SATA 7.2K
AFR: 2.44%

                            4+2      8+2
Number of drives            35721    29767
Failures per year           872      726
Likelihood of 2 in a day    238.79%  198.99%

Given the likelihood of multiple failures in a day, and to reach the same point of failure protection as FC, I estimate you would need 5 extra LUNs at the 1.46 percent failure rate and 8 extra LUNs at 2.44 percent, based on the performance requirement.

For 1.46 percent, the disk totals come to 35,751 for RAID 4+2, and 29,817 for RAID 8+2.

For 2.44 percent, the disk totals come to 35,801 for RAID 4+2, and 29,847 for RAID 8+2.
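The same extra-LUN arithmetic applies to the SATA counts, where a 4+2 LUN is 6 drives and an 8+2 LUN is 10:

```python
# SATA drive counts with the extra LUNs added.
print(35_721 + 5 * 6)    # 4+2 at 1.46%: 35,751 drives
print(29_767 + 5 * 10)   # 8+2 at 1.46%: 29,817 drives
print(29_767 + 8 * 10)   # 8+2 at 2.44%: 29,847 drives
```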

So from a pure watts perspective, for the drives alone you will need:

Power Consumption

Drive Type, AFR    RAID   Count   Watts per Drive  Total kW for Drives
FC, 1.49% AFR      4+1    17077   18.8             321.05
FC, 1.49% AFR      8+1    15378   18.8             289.11
SATA, 1.46% AFR    4+2    35751   13               464.76
SATA, 1.46% AFR    8+2    29817   13               387.62
SATA, 2.44% AFR    4+2    35801   13               465.41
SATA, 2.44% AFR    8+2    29847   13               388.01
Total Power Costs

                   Count   Total kW  Cost/Hour  Yearly Cost   Yearly Cost     5-Year Cost    5-Year Cost     5-Year Difference  5-Year Difference
                           (Drives)  at $0.09   (Drives)      (Est. in RAID)  (Drives)       (Est. in RAID)  (Drives)           (Est. in RAID)
FC, 1.49% AFR
  RAID 4+1         17077   321.05    $28.89     $253,113.93   $331,579.25     $1,265,569.64  $1,657,896.23
  RAID 8+1         15378   289.11    $26.02     $227,931.49   $298,590.25     $1,139,657.43  $1,492,951.23
SATA, 1.46% AFR
  RAID 4+2         35751   464.76    $41.83     $366,419.15   $531,307.77     $1,832,095.75  $2,656,538.83   $566,526.11        $998,642.60
  RAID 8+2         29817   387.62    $34.89     $305,600.40   $443,120.57     $1,528,001.98  $2,215,602.87   $388,344.55        $722,651.64
SATA, 2.44% AFR
  RAID 4+2         35801   465.41    $41.89     $366,931.61   $532,050.83     $1,834,658.05  $2,660,254.17   $569,088.41        $1,002,357.94
  RAID 8+2         29847   388.01    $34.92     $305,907.87   $443,566.41     $1,529,539.36  $2,217,832.07   $389,881.93        $724,880.84

(FC drives at 18.8 watts each, SATA at 13 watts each; the difference columns are measured against the corresponding FC configuration.)

This does not include the cost of removing the heat (BTUs), which would add about one-third to the power costs and further erode SATA’s remaining price advantage.
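For anyone reproducing the table, here is a sketch of the cost columns, assuming $0.09 per kWh and 8,760 hours per year. The "Est. in RAID" figures are consistent with scaling raw drive power up by the tray overheads computed earlier (31 percent for FC, 45 percent for SATA), though that mapping is my inference from the numbers.

```python
# Power cost columns: kW * $0.09/kWh * 8,760 hours/year, then a 5-year total.
# The "Est. in RAID" column matches scaling yearly cost by the tray overhead.
RATE_PER_KWH = 0.09
HOURS_PER_YEAR = 8_760

def power_costs(count: int, watts_per_drive: float, tray_overhead: float):
    kw = count * watts_per_drive / 1_000.0
    yearly = kw * RATE_PER_KWH * HOURS_PER_YEAR
    return yearly, yearly * (1 + tray_overhead), 5 * yearly

yearly, in_raid, five_year = power_costs(17_077, 18.8, 0.31)
print(f"FC 4+1: ${yearly:,.0f}/year, ${in_raid:,.0f}/year est. in RAID, "
      f"${five_year:,.0f} over 5 years")
```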

Total Five-Year Costs: Drives Plus Power

                      Count   Initial Cost   5-Year Cost       5-Year Cost     FC-to-SATA   FC-to-SATA Ratio
                                             (Drives + Power)  (Est. in RAID)  Cost Ratio   (Est. in RAID)
FC 4+1                17077   $17,077,000    $18,342,570       $20,000,466
FC 8+1                15378   $15,378,000    $16,517,657       $18,010,609
SATA 4+2, 1.46% AFR   35751   $9,617,019     $11,449,115       $14,105,654     1.60         1.42
SATA 8+2, 1.46% AFR   29817   $8,020,773     $9,548,775        $11,764,378     1.73         1.53
SATA 4+2, 2.44% AFR   35801   $9,630,469     $11,465,127       $14,125,381     1.60         1.42
SATA 8+2, 2.44% AFR   29847   $8,028,843     $9,558,382        $11,776,214     1.73         1.53
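The final columns follow from simple addition and division: the five-year total is the initial drive purchase plus five years of drive power, and the ratio is the FC total divided by the corresponding SATA total. A quick check against the first rows:

```python
# Five-year totals (drive purchase + 5 years of drive power) and the
# FC-to-SATA cost ratio, checked against the table's first SATA row.
fc_total = 17_077_000 + 1_265_570      # FC 4+1
sata_total = 9_617_019 + 1_832_096     # SATA 4+2 at 1.46% AFR
print(f"FC: ${fc_total:,}, SATA: ${sata_total:,}, "
      f"ratio {fc_total / sata_total:.2f}x")   # about 1.60x
```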

The cost of power and cooling, along with the additional drives, controllers and switch ports, makes the cost savings of SATA negligible at best over time. SATA drives may be up to 90 percent cheaper to buy, but after all costs are factored in for performance environments (and we did not even include the cost of heat removal or the additional controllers and switch ports, plus the power and cooling for those items), the SATA cost advantage rapidly dwindles.


The drive costs were taken from my last article and the best prices I found on froogle.com for each drive type. What you see does not take into account the cost of extra RAID controllers for the increased SATA drive count and the extra rebuilds that will be required, nor the extra switch ports, HBAs and the power needed to remove the heat from all of this extra equipment.

Remember, this series has been about performance environments. My initial gut feeling appears to have been correct: the cost benefit of SATA does not pan out for performance environments, and FC or the coming SAS drives are a better choice. My calculations did not include the cost of cooling for the heat generated, extra HBAs, switch ports and the associated power cost of each, nor did they take into account the future direction of energy costs.

I will let you draw your own conclusions, but I for one will be strongly recommending against SATA drives for performance environments.

Henry Newman, a regular Enterprise Storage Forum contributor, is an industry consultant with 26 years of experience in high-performance computing and storage.