My assumption was that in environments that require performance, SATA is more costly than FC or the new SAS technology. While SATA may be anywhere from one-third to 90 percent cheaper than FC, depending on whether you measure performance or capacity, there are some other issues to consider, and that's what we'll touch on here.
There are two important things that I took away from these articles. One is that disk drives behind RAID controllers do not achieve reliability anywhere near what the drive manufacturers specify, and that SATA drives are far less reliable than FC or SAS when used in the same way.
The likely reason for this is that RAID vendors declare a drive failed at the first hint of a problem, such as not responding within the vendor's specified timeout, even though the drive could likely run significantly longer and recover from the error. For example, if a drive does not respond within a few seconds, some vendors I know will report the drive as failed and mark it for reconstruction, but more than likely that drive will eventually respond. Disk drive vendors do not count that as a failure because the drive eventually did respond.
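The policy gap described above can be sketched as a toy model. The timeout value and function names here are illustrative assumptions, not any vendor's actual firmware behavior:

```python
# Toy model of the policy gap: a RAID controller that fails a drive on
# the first slow response, versus a drive vendor that counts a failure
# only when the drive never responds at all.

RAID_TIMEOUT_S = 8.0  # assumed controller command timeout (illustrative)

def raid_marks_failed(response_time_s):
    """Controller view: any response slower than the timeout = failed."""
    return response_time_s is None or response_time_s > RAID_TIMEOUT_S

def vendor_counts_failed(response_time_s):
    """Drive-vendor view: only a drive that never responds has failed."""
    return response_time_s is None

# A drive that takes 30 seconds to recover from an internal retry:
print(raid_marks_failed(30.0))     # True  -> marked for reconstruction
print(vendor_counts_failed(30.0))  # False -> not a failure to the vendor
```

The same event is thus counted as a failure by one party and a recoverable hiccup by the other, which is one way the field failure rate can diverge from the drive vendor's specification.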
Another thing to consider is IOPS. The ratio of random IOPS between FC and SATA drives is about the same as the ratio of their MB/sec performance, given FC drives' higher sustained MB/sec and better seek and rotational latency. So for the same reasons, I believe that FC drives remain a better choice than SATA drives, and the same IOPS formulas can be applied.
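The standard service-time estimate makes the IOPS gap concrete. The drive specs below are assumed, typical values for the era (3.5 ms average seek for a 15K FC drive, 8.5 ms for a 7.2K SATA drive), not figures from the article:

```python
# Rough per-drive random IOPS from average seek time and rotational
# latency, using the standard service-time model.

def avg_rotational_latency_ms(rpm):
    """Average latency is half a revolution: 30000 / RPM milliseconds."""
    return 30000.0 / rpm

def random_iops(avg_seek_ms, rpm):
    """One random I/O costs roughly one seek plus half a rotation."""
    service_time_ms = avg_seek_ms + avg_rotational_latency_ms(rpm)
    return 1000.0 / service_time_ms

fc_iops = random_iops(3.5, 15000)   # ~182 IOPS
sata_iops = random_iops(8.5, 7200)  # ~79 IOPS
print(f"FC 15K:    {fc_iops:.0f} IOPS")
print(f"SATA 7.2K: {sata_iops:.0f} IOPS")
print(f"ratio:     {fc_iops / sata_iops:.1f}x")
```

With these assumed specs the FC drive delivers roughly 2.3 times the random IOPS of the SATA drive, in the same ballpark as the streaming MB/sec gap the article describes.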
Numbers and Requirements
I am not going to use the data from the FAST paper, since the disk drives used in the study are mostly older technology. But I will use its concept that drives do not last as long as drive vendors say, since RAID vendors apply far more stringent performance and response margins than drive vendors do, as Garth Gibson stated in the FAST paper.
On Seagate's Web site, the company states that for 15K FC drives, the Annualized Failure Rate (AFR) is 0.62 percent.
Not long ago, the same site expressed this as a mean time between failures (MTBF), which was 1.2 million hours. For argument's sake, let's use 500,000 hours, which is what many RAID vendors use. The AFR then becomes 1.49 percent.
For SATA drives, the AFR is 0.73 percent, based on Seagate data.
Let's assume that translates into about 1 million hours, based on what used to be on the site, and let's also assume that the RAID vendors say 500,000 hours and 300,000 hours. I think it is far more likely that drives fail at 300,000 hours based on what I have seen under heavy loads, but to be fair I will use both numbers. So the AFR is 1.46 percent for 500,000 hours and 2.44 percent for 300,000 hours.
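These AFR figures are consistent with scaling the vendor's published AFR by the ratio of published to assumed MTBF, which keeps the implied annual power-on hours constant. The formula is my reconstruction of the arithmetic, not something the vendors publish:

```python
# Scale a vendor's published AFR to a shorter assumed MTBF:
# AFR_assumed = AFR_vendor * (MTBF_vendor / MTBF_assumed)

def scaled_afr(vendor_afr_pct, vendor_mtbf_hours, assumed_mtbf_hours):
    return vendor_afr_pct * vendor_mtbf_hours / assumed_mtbf_hours

# FC: Seagate's 0.62% at 1.2M hours, rescaled to 500K hours
print(round(scaled_afr(0.62, 1_200_000, 500_000), 2))  # 1.49

# SATA: Seagate's 0.73% at an assumed 1M hours, at 500K and 300K hours
print(round(scaled_afr(0.73, 1_000_000, 500_000), 2))  # 1.46
print(round(scaled_afr(0.73, 1_000_000, 300_000), 2))  # ~2.43 (article: 2.44)
```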
Adding It All Up
Remember my conjecture is that SATA is more costly in performance environments, whether IOPS or streaming.
The following information came from vendor Web sites in March and April of 2007:
[Table: drive specifications by drive type and vendor: GB, form factor, RPM in K, watts idle, watts running, avg. MB/sec, and MB/sec per watt]
| 300 GB FC 15K | RAID 4+1 | RAID 8+1 |
| --- | --- | --- |
| Failures per year | 254 | 229 |
| Likelihood of 2 in a day | 69.67% | 62.70% |
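The table's figures follow from drive count times AFR, and the "likelihood of 2 in a day" values match expected failures per day (annual failures divided by 365) expressed as a percentage, which is why some later tables show values above 100 percent. The drive counts below are back-solved from the article's totals and are my assumption:

```python
# Reconstruct the failure table from drive count x AFR. Drive counts
# (~17,067 for RAID 4+1 and ~15,360 for RAID 8+1 at 1.49% AFR) are
# back-solved from the article's totals, not stated directly.

def failures_per_year(drive_count, afr_pct):
    return drive_count * afr_pct / 100.0

def expected_per_day_pct(drive_count, afr_pct):
    return failures_per_year(drive_count, afr_pct) / 365.0 * 100.0

print(round(failures_per_year(17_067, 1.49)))          # 254
print(f"{expected_per_day_pct(17_067, 1.49):.2f}")     # 69.67
print(round(failures_per_year(15_360, 1.49)))          # 229
print(f"{expected_per_day_pct(15_360, 1.49):.2f}")     # 62.70
```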
Since performance requires power for HBAs and switch ports, here are some power requirements for data path technology:
A QLogic dual-port 4 Gbit HBA requires 6.5 watts; BTU figures are not provided on the company's Web site.
Cisco 9500 series director switches use 16.46 watts per 4 Gbit port on a 48-port blade, and 3 BTUs are required to cool the heat generated.
Here is some information provided by a RAID vendor for 16 drive trays and their controller:
- Each controller draws approximately 60 watts
- Drive Tray with 16 Seagate 500GB Tonka-2 SATA drives = 375 watts
- Drive tray with 16 4Gbit 146GB 15K.4 drives = 382 watts
- The number of BTUs required for cooling varies with the drives used and their capacity (larger-capacity drives typically draw more power)
- 16 500 GB Seagate SATA drives = 1283 BTUs
- 16 146GB Seagate 4Gbit FC drives = 1300 BTUs
As you can see from simple arithmetic, the electronics in an FC tray draw 118 watts beyond the drives themselves (382 - (16 * 16.5)), and a SATA tray 167 watts (375 - (16 * 13)). This is likely because of the additional SATA controllers needed compared with the optical FC connection. Tray overhead thus accounts for 31 percent of the FC tray's power draw and 45 percent of the SATA tray's.
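The arithmetic above can be checked directly, using the figures as given in the article:

```python
# Tray overhead = tray draw minus the sum of per-drive draw, using the
# article's figures: FC tray 382 W with 16 drives at 16.5 W each, SATA
# tray 375 W with 16 drives at 13 W each.

def tray_overhead_watts(tray_watts, drives, watts_per_drive):
    return tray_watts - drives * watts_per_drive

fc_overhead = tray_overhead_watts(382, 16, 16.5)  # 118 W
sata_overhead = tray_overhead_watts(375, 16, 13)  # 167 W

print(fc_overhead, round(fc_overhead / 382 * 100))      # 118 W, 31% of tray
print(sata_overhead, round(sata_overhead / 375 * 100))  # 167 W, 45% of tray
```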
Let's say you need 100 GB/sec of sustained performance around the clock for your enterprise. This is not really that much performance today. It might be at the upper end for some of you reading this article, but for a bank, an insurance company, an automaker, a pharmaceutical firm or any other large business, this is not a huge performance requirement. Let's add up what you will need in disk drives.
For FC drives, I am assuming, based on vendor data, a minimum transfer rate of 75 MB/sec per drive on the inner cylinders. My estimates for both drive types use about 55 percent of maximum performance, based in part on information that used to be on various disk drive vendors' Web sites. I have put in drive counts for RAID 4+1 and 8+1 and am assuming the failure rates above.
| 750 GB FC 15K | RAID 4+1 | RAID 8+1 |
| --- | --- | --- |
| Failures per year | 522 | 435 |
| Likelihood of 2 in a day | 142.88% | 119.07% |
Based on this information and the rebuild times, which I estimate at 4 to 12 hours depending on the performance requirements, I think you would need two extra 4+1 or 8+1 LUNs to ensure you meet your performance requirement, bringing the total drive count to 17,077 for RAID 4+1 and 15,378 for RAID 8+1. Note that this does not include hot spares.
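The general sizing method can be sketched as follows. The throughput target, per-drive rate and spare-LUN count below are illustrative parameters chosen to show the mechanics, not a reproduction of the article's exact totals:

```python
# Sizing sketch: data drives needed = target throughput / per-drive
# rate, rounded up to whole RAID LUNs, plus spare LUNs for rebuild
# headroom. All inputs here are illustrative assumptions.
import math

def drives_needed(target_mb_s, per_drive_mb_s, data_per_lun,
                  drives_per_lun, extra_luns):
    luns = math.ceil(target_mb_s / (per_drive_mb_s * data_per_lun))
    return (luns + extra_luns) * drives_per_lun

# Example: 100,000 MB/sec target with 75 MB/sec FC drives in RAID 4+1
# (4 data + 1 parity per LUN), plus two spare LUNs.
print(drives_needed(100_000, 75, 4, 5, 2))
```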
The two extra LUNs will require two additional HBAs, two additional switch ports, and likely part of an additional RAID controller. I am not adding these in, but please keep that fact in mind.
For SATA, here are the numbers:
| 750 GB SATA 7.2K | RAID 4+2 | RAID 8+2 |
| --- | --- | --- |
| Failures per year | 872 | 726 |
| Likelihood of 2 in a day | 238.79% | 198.99% |
Given the likelihood of multiple failures in a day, and to reach the same point of protection as FC, I estimate you would need 5 extra LUNs at the 1.46 percent failure rate and 8 extra LUNs at 2.44 percent, based on the performance requirement.
For 1.46 percent, the disk totals come to 35,751 for RAID 4+2, and 29,817 for RAID 8+2.
For 2.44 percent the disk totals come to 35,801 for RAID 4+2, and 29,847 for RAID 8+2.
So from a pure watts perspective, for the drives alone you will need:
[Table, FC vs. SATA: drive count, watts per drive, total kW for drives, cost at $0.09 per kWh, yearly cost (and estimate including RAID controller), five-year power cost (and estimate including RAID), and five-year cost difference]
This does not include the cost of removing the BTU, which would add about one-third to the power costs and thus close the gap further.
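As a sketch of how these power-cost figures are derived, using the article's $0.09/kWh rate and the roughly one-third cooling surcharge just mentioned (the 250 kW load is a hypothetical example, not a figure from the article):

```python
# Five-year electricity cost: kW x hours x $/kWh, with cooling added
# as roughly one-third on top of the raw power cost.

def power_cost(total_kw, years=5, dollars_per_kwh=0.09, cooling_factor=1/3):
    hours = years * 365 * 24
    raw = total_kw * hours * dollars_per_kwh
    return raw * (1 + cooling_factor)

# Example: a hypothetical 250 kW drive farm over five years.
print(round(power_cost(250)))  # ~$1.31M
```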
[Table, per drive type: drive count, initial cost, five-year cost of drives with power (and estimate including RAID), and real cost difference as a multiple (and including RAID)]
The cost of power and cooling, along with additional drives, controllers and switch ports, makes the cost savings of SATA negligible at best over time. SATA drives may be 90 percent cheaper up front, but after all costs are factored in for performance environments, the SATA cost advantage rapidly dwindles, and that is before counting the cost of BTU removal or the additional controllers and switch ports (plus power and cooling for those items).
The drive costs were taken from my last article and the best prices I found on froogle.com for each drive type. The figures do not take into account the cost of extra RAID controllers needed for the larger SATA drive count and the extra rebuilds that will be required, nor the extra switch ports, HBAs and the power to remove the heat from all of this extra equipment.
Remember, this series has been about performance environments. My conclusion is that my initial gut feeling was correct: the cost benefit of SATA does not pan out for performance environments, and FC or the coming SAS drives are a better choice. My calculations did not include the cost of removing the heat (BTUs) generated, extra HBAs, switch ports and the associated power cost for each, nor did they take into account the future direction of energy costs.
I will let you draw your own conclusions, but I for one will be strongly recommending against SATA drives for performance environments.
Henry Newman, a regular Enterprise Storage Forum contributor, is an industry consultant with 26 years experience in high-performance computing and storage.