Brother, Can You Spare a Petabyte?


Let’s face it: Times are tough and there’s a lot of pressure to cut costs. I hear it all the time from my customers.

But it’s not as simple as choosing the cheapest data storage technology. If you care about your data — and if you’re reading this, you probably do — then you need to consider the technology and reliability tradeoffs, whether you’re an enterprise, a small business or even an individual home user (my own home backup and data protection scheme borders on the paranoid). Storage costs aren’t just about the price of the hardware or software; they’re about operating and maintenance costs — and the cost of lost or corrupt data.

When I am trying to help customers understand the technology tradeoffs, the first thing I do is to try to understand what their requirements are. Usually I get a glazed look or get told to just solve the problem, and sometimes I’m told that the requirement is for storage that’s as cheap as possible. Very few people actually understand their requirements, and even fewer know how to apply them.

 

SATA, SAS and Tape

Let’s look at the example of choosing between different types of disk and tape drives. You might say these can all be taken care of by RAID, but there are some important things to consider; I think that even the bean counters don’t want you to put the company’s data at risk.

The biggest issue is the hard error rate of the technology. Every disk and tape drive has a hard error rate, specified as the average number of bits that can be read or written before the device returns an error saying the data cannot be accessed. There are lots of reasons for hard errors, such as media errors, head errors and media failures. It doesn’t matter what the cause is; what matters is how often it happens for each device.

If you have a hard error with a RAID-5 LUN, the LUN will need to be rebuilt, and if you hit another hard error during the rebuild, the data will be lost. With RAID-6, a second hard error is still not catastrophic, as you have two parity devices.
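
To make that exposure concrete, here is a rough, back-of-envelope sketch in Python; the drive capacity, drive count and error rate are illustrative assumptions, not figures from any vendor’s spec sheet. It estimates the chance of hitting at least one hard error while reading every surviving drive end to end during a RAID-5 rebuild.

```python
# Rough sketch: probability of hitting at least one unrecoverable read
# error while rebuilding a RAID-5 set. The capacity, drive count and
# bit error rate are illustrative assumptions, not vendor figures.

DRIVE_CAPACITY_BYTES = 1 * 10**12      # assume 1 TB drives
DRIVES_SURVIVING = 7                   # 8-drive RAID-5 set minus the failed drive
BITS_PER_HARD_ERROR = 10**14           # e.g., a consumer-class SATA rating

def rebuild_failure_probability(capacity_bytes, surviving_drives, bits_per_error):
    """Probability that at least one hard error occurs while reading
    every surviving drive end to end during a rebuild."""
    bits_to_read = capacity_bytes * 8 * surviving_drives
    p_no_error_per_bit = 1 - 1 / bits_per_error
    return 1 - p_no_error_per_bit ** bits_to_read

p = rebuild_failure_probability(DRIVE_CAPACITY_BYTES, DRIVES_SURVIVING,
                                BITS_PER_HARD_ERROR)
print(f"Chance of a hard error during rebuild: {p:.1%}")
```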

You know what they say about lies and statistics, but the hard error rates below come from drive manufacturers for both disk and tape.

Device | Hard error rate in bits | Equivalent in bytes | PB equivalent | Days to hit at 120 MB/sec | Days to hit at 200 MB/sec
Consumer SATA | 10E+14 | 12.5E+13 | 0.89 | 92 | 55
Enterprise SATA | 10E+15 | 12.5E+14 | 8.88 | 920 | 552
Enterprise SAS/FC | 10E+16 | 12.5E+15 | 88.82 | 9,198 | 5,519
LTO | 10E+17 | 12.5E+16 | 888.18 | 91,982 | 55,189
T10000B | 10E+19 | 12.5E+18 | 88,817.84 | 9,198,247 | 5,518,949
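
If you want to sanity-check figures like these against your own drives and duty cycles, the arithmetic is simple enough to script. The sketch below (Python, with example values that are assumptions rather than any vendor’s spec) converts a “one hard error per N bits” rating into an expected interval at a sustained transfer rate; published tables don’t always agree on unit conventions (bits versus bytes, decimal versus binary prefixes), so treat the output as an order-of-magnitude estimate rather than an exact match for the numbers above.

```python
# Back-of-envelope sketch: turn a "one hard error per N bits" rating
# into a rough expected interval between errors at a sustained transfer
# rate. Example values are assumptions for illustration only.

SECONDS_PER_DAY = 86_400

def days_between_hard_errors(bits_per_error: float, mb_per_sec: float) -> float:
    """Expected days of continuous transfer before hitting one hard error."""
    bytes_per_error = bits_per_error / 8
    bytes_per_day = mb_per_sec * 10**6 * SECONDS_PER_DAY
    return bytes_per_error / bytes_per_day

# e.g., a drive rated at one error per 10^16 bits, streaming at 120 MB/sec
print(f"{days_between_hard_errors(1e16, 120):,.0f} days of continuous transfer")
```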

You have to remember that the bit error rate (BER), also known as the hard error rate, is completely different from the annualized failure rate (AFR) of the device. One way to look at it is the failure of a single access compared to the failure of the whole device. With some RAID controllers, the failure of a single access is treated as the failure of the device, but remember that BER is measured in bits transferred while AFR is measured in hours. A device can fail just sitting there doing nothing, but the BER only comes into play when the device is used. If you care about your data, this is a critical distinction.

Some lower-end storage systems use consumer-level SATA drives, which can fail fairly quickly under heavy use. The problem is that in RAID devices, if one drive fails, other drives sometimes fail during the rebuild. The bottom line is that you need to consider the disk drives and your exposure to data loss as part of any storage decision. Buying the cheapest stuff on the market might get you the storage you want, but it might wind up costing you your data.

The cost per GB for SAS and Fibre Channel drives is much higher than for SATA, but few people realize that for important data you should include reliability in the decision-making process. If your data is critically important to your organization, a BER that’s ten times better is an important consideration, and the cost difference per GB between SATA and SAS/FC isn’t nearly as great as that reliability gap. Even in tough times, it is important to consider not just the initial cost, but the cost of losing what is important.

 

Tape versus Deduped Disk

I have seen no reputable study showing that disk and tape costs per GB are even close. Tape always wins on cost, but do you have to write everything to tape?

Data deduplication has become one of the fastest-growing segments of the storage market, if not the fastest. Many companies provide dedupe technology; some sell integrated hardware platforms, while others are software only. Claims of a 50 to 1 reduction in the amount of data backed up are realistic in environments such as VMware (NYSE: VMW), but other environments, such as media files, do not get anywhere near that ratio, and plain compression often does as well or better.
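
To see why some workloads dedupe well and others barely dedupe at all, here is a minimal, purely illustrative sketch of block-level deduplication using content hashes. Real products use variable-size chunking, much larger indexes and their own integrity machinery, so this shows only the principle.

```python
# Minimal illustration of block-level deduplication: split data into
# fixed-size chunks, hash each chunk, and store only chunks not seen
# before. Real dedupe products are far more sophisticated; this is
# only a sketch of the idea.

import hashlib
import os

CHUNK_SIZE = 4096  # bytes; an assumed, illustrative chunk size

def dedupe(data: bytes):
    store = {}     # hash -> chunk (the unique data actually kept)
    recipe = []    # ordered list of hashes needed to rebuild the stream
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)
        recipe.append(digest)
    return store, recipe

# Highly redundant data (like many near-identical VM images) dedupes well...
vm_like = (b"A" * CHUNK_SIZE) * 50
store, _ = dedupe(vm_like)
print("VM-like data:", len(vm_like) // CHUNK_SIZE, "chunks stored as", len(store))

# ...while already-compressed media barely dedupes at all.
media_like = os.urandom(CHUNK_SIZE * 50)
store, _ = dedupe(media_like)
print("Media-like data:", len(media_like) // CHUNK_SIZE, "chunks stored as", len(store))
```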

Dedupe can speed up the backup process if there is enough bandwidth to the dedupe device compared with the bandwidth to tape. Given tape latency and other issues, dedupe will likely be a big winner over standard tape backup from a time perspective, and depending on the size of the backup and the number of tapes, tape slots and the cost of the dedupe system, it can even be a cost savings. Of course, the real issue for backup isn’t backing up the data, but restoring it. Keep in mind that on restore, the dedupe platform can likely expand the data faster than it can write it back out to the channel.

One of the biggest complaints I hear about tape is that it is slow. The latency for a tape to load, thread and come ready hasn’t changed much since the advent of the tape cartridge, but that isn’t the real issue. More often than not, the real issue for backup and tape performance is that tape drives today are faster than most of the networks they are attached to. Take the following facts. In 2000, LTO uncompressed data rates were 20 MB/sec and most networks were 1 Gb Ethernet, realistically about 80 MB/sec to 90 MB/sec, so the network was four or more times faster than the drive, and still roughly twice as fast once compression is factored in.

LTO-4 today boasts a 120 MB/sec uncompressed data rate, 240 MB/sec compressed, and with 10GbE networks to the backup server, you have a little more breathing room, but not much. But the problem is that very few people have an end-to-end 10GbE network, and remember you will be bound by the slowest point on the network. The same is true with tape — if you are using FC-2 with LTO-4, for example, FC-2 has a 200 MB/sec limit and LTO-4 with compression is 240 MB/sec. Add to this that most people put multiple tape drives on the same FC connection and you have a performance issue that is again caused by the network.
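
A quick way to see where the bottleneck lands is to compare the drive’s streaming rate against the slowest link feeding it, as in the rough sketch below; the rates are the nominal figures quoted above, and real throughput will be lower.

```python
# Rough bottleneck check: a streaming tape drive is only as fast as the
# slowest link feeding it. Rates below are nominal figures used for
# illustration; real-world throughput is lower.

def effective_rate(drive_mb_s, link_mb_s, drives_per_link=1):
    """MB/sec each drive can actually sustain through a shared link."""
    per_drive_share = link_mb_s / drives_per_link
    return min(drive_mb_s, per_drive_share)

lto4_compressed = 240   # MB/sec, nominal 2:1 compressed rate
fc2_link = 200          # MB/sec, 2 Gb Fibre Channel

print("One LTO-4 on FC-2:", effective_rate(lto4_compressed, fc2_link), "MB/sec")
print("Two LTO-4s on FC-2:", effective_rate(lto4_compressed, fc2_link, 2), "MB/sec")
```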

This is why, if you are going to use tape — which is, after all, not only cheaper than disk but also more reliable if handled and stored properly — you need to stream the device at full rate, including compression, and disk-to-disk-to-tape (D2D2T) is the way to do it. Accomplishing this requires either a VTL or backup software that manages a D2D2T framework, which usually means an added software expense. The tradeoff between D2D2T, VTLs and dedupe, or a combination of them, is a complex decision that depends on how well your data dedupes, the state of your network, the cost of the additional hardware and software, and other factors such as power, training and floor space. One benefit of a D2D2T system could be deduping the data before writing it to tape, saving even more money.
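
As a toy illustration of the D2D2T idea (hold data on disk until you can keep the drive streaming at full rate), here is a trivial staging check; the rate and threshold are assumptions, not recommendations from any backup product.

```python
# Sketch of the D2D2T idea: stage backup data on disk first, and only
# start a tape write once enough data is staged to keep the drive
# streaming at its full (compressed) rate. Rates and thresholds are
# assumptions for illustration.

TAPE_RATE_MB_S = 240          # assumed LTO-4 compressed streaming rate
MIN_STREAM_SECONDS = 600      # don't start unless we can stream for 10 minutes

def ready_to_write_tape(staged_mb: int) -> bool:
    """True once the disk staging area can feed the drive without stalling."""
    return staged_mb >= TAPE_RATE_MB_S * MIN_STREAM_SECONDS

print(ready_to_write_tape(100_000))   # 100 GB staged -> False
print(ready_to_write_tape(200_000))   # 200 GB staged -> True
```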

And another factor to consider: if you’re eliminating multiple copies of data, make sure the one you’re keeping is right. Check with your dedupe vendor to make sure they have proper checks for ensuring data integrity and reliability (see Data Corruption: Dedupe’s Achilles Heel).
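
One basic safeguard, sketched here only as the general idea rather than how any particular product works, is to verify every chunk against the hash it was stored under before it is used in a restore, so a silently corrupted “single remaining copy” is detected instead of quietly restored.

```python
# Sketch of a read-back integrity check for deduplicated chunks: verify
# each chunk against the hash it was stored under before using it in a
# restore. This is the general idea only, not any vendor's implementation.

import hashlib

def restore(recipe, store):
    """Rebuild a stream from a chunk recipe, refusing corrupted chunks."""
    out = bytearray()
    for digest in recipe:
        chunk = store[digest]
        if hashlib.sha256(chunk).hexdigest() != digest:
            raise IOError(f"chunk {digest[:12]}... failed its integrity check")
        out.extend(chunk)
    return bytes(out)

# Tiny demo: one chunk, then simulate silent corruption in the store.
chunk = b"only copy of this data"
digest = hashlib.sha256(chunk).hexdigest()
store, recipe = {digest: chunk}, [digest]
print(restore(recipe, store))          # succeeds
store[digest] = b"bit-rotted copy!!!"  # silent corruption
try:
    restore(recipe, store)
except IOError as err:
    print("Restore refused:", err)
```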

The disk and tape tradeoffs are pretty clear. Tape is cheaper and potentially more reliable than disk, but you need the right infrastructure to make it efficient. Dedupe has promise for saving on storage costs, but cheap disks carry the potential for data loss. With apologies to Rush, you can’t get Something For Nothing in the data storage market, but hopefully you now know something about spending your money wisely.

 

Henry Newman, a regular Enterprise Storage Forum contributor, is an industry consultant with 28 years’ experience in high-performance computing and storage.