RAID Revs for the Future


Seventeen years ago this month, David Patterson, Garth Gibson and Randy Katz published their landmark research paper, A Case for Redundant Arrays of Inexpensive Disks (RAID), which helped establish the modern RAID industry. Nearly two decades later, RAID technology remains as vibrant as ever, and in fact, its best days may be yet to come.

In The Beginning

Although RAID is pervasive today, it was by no means an overnight success story. Gibson, who is now CTO at Panasas, began work on RAID in 1986 while a grad student at the University of California at Berkeley. He co-wrote the paper the following year and saw it published in 1988. Among the concepts in the paper were the formal definitions of RAID levels 1 through 5. Yet by 1990, Gibson still wasn’t sure that the future was all that bright for RAID.

“I thought there was no chance for RAID; we had given it all sorts of papers and visibility,” Gibson told Enterprise Storage Forum. “By 1993 to 1994, it was really obvious that it was just a lag time for the industry to internalize it and develop R&D and find a market for it.”

Although Gibson and his co-authors effectively defined the five RAID levels, they didn’t patent their definitions and didn’t trademark the term RAID either. In Gibson’s view, those decisions helped fuel the eventual growth and pervasiveness of RAID.

“If we had patented the name and taxonomy for, say, RAID level 5, then the majority of companies would not have used it,” Gibson said. “The benefits that it has provided as an organizing principle would simply have not happened.”

Virtualization Began With RAID

Among the hottest trends in storage today is virtualization, a trend that began with RAID.

“RAID subsystems were the first forms of virtualization, but when people talk about virtualization today, they usually want to talk about something new, so they don’t consider RAID to be that,” Gibson said.

At its most basic level, virtualization is about putting an abstraction layer on top of something physical.

“One of the principles of RAID is the ability to group multiple drives together and present them as a logical volume,” said Evaluator Group analyst Greg Schulz. “There’s a low level of virtualization and abstraction happening there.”
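In code terms, that low-level abstraction is just an address translation. The sketch below is illustrative only, with hypothetical names and a simple RAID 0-style striping layout rather than any vendor's actual scheme; it shows how a logical block address on the presented volume can map to a physical location on one of several member drives:

```python
# Sketch of the logical-to-physical mapping behind a striped volume.
# The round-robin layout and names are illustrative, not any vendor's
# actual implementation.

def logical_to_physical(lba: int, num_drives: int, stripe_blocks: int):
    """Map a logical block address to a (drive index, physical block)."""
    stripe = lba // stripe_blocks   # which stripe unit the block falls in
    offset = lba % stripe_blocks    # position within that stripe unit
    drive = stripe % num_drives     # stripe units rotate across drives
    physical_block = (stripe // num_drives) * stripe_blocks + offset
    return drive, physical_block

# Example: 8-block stripe units spread across 4 drives.
for lba in (0, 8, 16, 24, 32):
    print(lba, "->", logical_to_physical(lba, num_drives=4, stripe_blocks=8))
```

The host sees one contiguous volume; the data actually lives across several disks, which is the abstraction Schulz describes.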

According to Arun Taneja, founder and consulting analyst at the Taneja Group, users and analysts did not apply the term ‘virtualization’ to RAID until recently.

“It was applied first and foremost to the ability of certain products to actually allow me to use different companies’ disk systems,” Taneja said. “RAID is a form of virtualization within a single box. It’s come kind of backwards, but fundamentally, RAID is 100% a virtualization concept.”

Virtualizing RAID

Now RAID itself is becoming virtualized, abstracting data from a disk-based concept into a much broader object-based one. One way RAID is evolving as both a storage strategy and a virtualization concept is the idea of placing data parity into the files themselves: RAID in a file.

“What RAID in the file does is it allows you to customize RAID to every file,” Gibson said. “It allows you to have non-redundant data in the midst of redundant data, and for us, it allows properties so we can reconstruct faster.”
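Conceptually, per-file RAID means the redundancy scheme is an attribute of the file rather than of the whole array. The toy sketch below illustrates that idea with invented policy names; it is not Panasas's or EMC's actual design:

```python
# Conceptual sketch of per-file RAID: each file carries its own
# redundancy policy, so non-redundant scratch data can sit alongside
# parity-protected data on the same storage. All names are hypothetical.

from dataclasses import dataclass

@dataclass
class FilePolicy:
    scheme: str        # "none", "mirror", or "parity"
    stripe_width: int  # how many storage units the file is spread over

POLICIES = {
    "/scratch/tmp.dat":   FilePolicy(scheme="none",   stripe_width=1),
    "/db/orders.tbl":     FilePolicy(scheme="parity", stripe_width=8),
    "/etc/critical.conf": FilePolicy(scheme="mirror", stripe_width=2),
}

def redundancy_overhead(policy: FilePolicy) -> float:
    """Fraction of extra capacity consumed by redundancy."""
    if policy.scheme == "mirror":
        return 1.0                        # full second copy
    if policy.scheme == "parity":
        return 1.0 / policy.stripe_width  # one parity unit per stripe
    return 0.0

for path, pol in POLICIES.items():
    print(path, pol.scheme, f"overhead={redundancy_overhead(pol):.0%}")
```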

Gibson’s firm Panasas is doing RAID in a file today, and he noted that EMC’s Centera system does it as well.

According to Schulz, another concept gaining momentum is the idea of linking RAID nodes together into a larger storage mechanism, sometimes referred to as RAIN (Redundant Array of Independent Nodes). The current industry buzzword for RAIN is “storage grid.”

Schulz explained that a RAIN setup consists of multiple servers, each with its own disk drives and RAID functionality, working together as a parity or mirrored implementation across nodes.

A RAID Standard?

Even though the RAID concept has been around since 1986 and is deployed in countless millions of systems, it is a technology that lacks a standard implementation.

“There is no RAID standard,” said Taneja. “At the fundamental concept level there is agreement. The implementations are proprietary and unique to every vendor. If an application wanted to look inside a RAID system, the only way to do that is if the storage vendor gives the application vendor the APIs to look inside the box; there is no standard way of doing that.”

A new effort underway at the Storage Networking Industry Association (SNIA) called Disk Data Format (DDF) may be the answer. At the moment, if you’re running a server with a RAID card to control the disks, and the RAID card fails and you’re unable to get the same type of card, you may be in for a rude awakening.

“You should be able to get that RAID card from possibly a different vendor, pop it in and be able to continue to get access to your data,” Gibson said. “You really can’t do that right now, as the data is a proprietary format of the RAID implementation.”

DDF is an attempt to standardize where the bits are on the disk so users can replace one RAID controller with another.
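The underlying idea is a self-describing, vendor-neutral descriptor at an agreed-on location on each disk, recording enough configuration for any compliant controller to reassemble the array. The toy sketch below illustrates only that concept; the field names are invented and do not reproduce the actual DDF structures that SNIA defines:

```python
# Illustrative only: a toy, vendor-neutral on-disk descriptor in the
# spirit of DDF. The real SNIA DDF specification defines its own binary
# structures; none of these field names come from it.

import json

descriptor = {
    "magic": "TOY-RAID-META",    # fixed signature a controller can probe for
    "raid_level": 5,
    "stripe_size_kb": 64,
    "disk_count": 4,
    "disk_position": 2,          # where this disk sits in the array ordering
    "array_guid": "9f1c2b7e",    # identifies disks that belong together
}

# Any controller that understands the format could read this back and
# reassemble the array, regardless of who made the failed card.
print(json.dumps(descriptor).encode())
```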

“That is going to create opportunities for people that would have lost data to not lose data,” Gibson said. “It’s also going to commoditize the RAID controller market.”

Growing Data Protection Needs

There are a number of next steps in the continuing evolution of RAID. In Taneja's view, RAID data protection needs to continue to improve. In a typical RAID 5 implementation, one drive can go down and the others will take over. However, Taneja argues that in an era with increasing numbers of SATA drives, the possibility of two drives failing is more of a reality than it was with Fibre Channel or SCSI.
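That single-failure tolerance comes from XOR parity: the parity block is the XOR of the data blocks in a stripe, so any one missing block can be recomputed from the survivors. A minimal sketch:

```python
# RAID 5's recovery principle in miniature: parity = XOR of the data
# blocks, so any single lost block is the XOR of everything that survives.
from functools import reduce

def xor_blocks(blocks):
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

data = [b"disk0data", b"disk1data", b"disk2data"]  # toy 3-drive stripe
parity = xor_blocks(data)

# Simulate losing drive 1, then rebuild it from the survivors plus parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
print("rebuilt:", rebuilt)
```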

One potential solution to the data protection issue is what Network Appliance is doing with RAID DP (double parity). RAID DP adds a second parity disk to each RAID group in a volume, which offers protection against two drives failing at once.
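Surviving two failures requires a second, independent parity equation. NetApp's RAID DP computes it as diagonal parity; the sketch below instead uses the Reed-Solomon-style P and Q parities found in many RAID 6 implementations, purely to illustrate the two-equations, two-unknowns principle:

```python
# Two independent parities over the same stripe (RAID 6 style): P is
# plain XOR; Q weights each disk by a distinct power of the GF(2^8)
# generator. Losing any two blocks leaves two equations in two unknowns.
# Note: NetApp's RAID DP uses diagonal parity instead; this Reed-Solomon
# flavor is shown only to illustrate the principle.

def gf_mul(a: int, b: int) -> int:
    """Multiply in GF(2^8) with polynomial x^8+x^4+x^3+x^2+1 (0x11d)."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        b >>= 1
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1D
    return p

def gf_pow(base: int, exp: int) -> int:
    r = 1
    for _ in range(exp):
        r = gf_mul(r, base)
    return r

data = [0x11, 0x22, 0x33, 0x44]          # one byte per data disk

P = 0
Q = 0
for i, d in enumerate(data):
    P ^= d                               # P: plain XOR parity
    Q ^= gf_mul(gf_pow(2, i), d)         # Q: weighted by g^i, g = 2

print(f"P={P:#04x} Q={Q:#04x}")
# With any two of {data disks, P, Q} lost, the two equations above can
# be solved in GF(2^8) to recover both missing values.
```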

“NetApp is shipping storage in a dual drive RAID DP as a default,” Taneja said. “They’ve already gone away from shipping as RAID 4, where everything they used to ship would always go out as RAID 4. That’s a very telling sign.”

The other issue that needs to be fixed as part of the evolution of RAID, according to Taneja, is RAID rebuild times, which he feels are far too long today.

“This is more important because SATA drives are so much bigger in capacity and the rebuild time is directly proportional to the amount of capacity on the drive,” he said.

While rebuilding, the RAID group takes a performance hit, which can be an issue for many users. If another drive fails during the rebuild, the issue can snowball into a crisis.

“With SATA, the probability of second failure is much higher,” Taneja said. “So users are highly vulnerable during rebuild.”
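A back-of-envelope model shows why. The drive capacities, rebuild rate, and MTBF below are illustrative assumptions rather than measured figures, and the exponential failure model is a simplification:

```python
# Back-of-envelope: how drive capacity stretches the rebuild window,
# and how that window raises the odds of a second failure.
# All numbers are illustrative assumptions, not measurements.
import math

def rebuild_hours(capacity_gb: float, rate_mb_s: float) -> float:
    return capacity_gb * 1024 / rate_mb_s / 3600

def p_second_failure(surviving_drives: int, window_h: float,
                     mtbf_h: float) -> float:
    """P(at least one survivor fails in the window), exponential model."""
    return 1 - math.exp(-surviving_drives * window_h / mtbf_h)

for capacity_gb in (73, 250, 400):       # FC-class vs. SATA-class sizes
    window = rebuild_hours(capacity_gb, rate_mb_s=25)  # rebuild under load
    risk = p_second_failure(surviving_drives=7, window_h=window,
                            mtbf_h=500_000)
    print(f"{capacity_gb:>4} GB: rebuild ~{window:.1f} h, "
          f"second-failure risk ~{risk:.4%}")
```

The absolute probabilities depend entirely on the assumed rates, but the proportionality Taneja describes holds: doubling capacity doubles the rebuild window and, to first order, the exposure to a second failure.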

In Taneja's view, significant improvements need to happen, and no current implementation is quite adequate.

“All the vendors are working on the issue using all kinds of techniques, but as far as I’m concerned, none of them is adequate enough for the risks that SATA brings into the picture,” Taneja said.

Here to Stay

After more than 17 years, it would appear as if RAID is here to stay, at least for the foreseeable future.

“RAID is a term that people can latch on to,” Schulz said. “They may not fully understand it, but a lot of people know about it. We’ll see new things come about that are RAID — like RAID 6 and enhanced RAID — but RAID will remain a basic building block.”

Indeed, RAID 6 and RAID DP offer some hope for SATA drive users, with their ability to withstand a second drive failure.

Gibson also believes that RAID terminology will stick around for a while still.

“The term will last well into a major change in the way it’s done,” Gibson said. “It implies a notion of reliability and tradeoff of reliability against performance. It’s a checklist item for storage, so users don’t want to see it go away.”


Sean Michael Kerner
Sean Michael Kerner is an Internet consultant, strategist, and contributor to several leading IT business web sites.
