As long as cyber attacks and viruses continue to infiltrate computer networks, and as long as systems continue to fail for a myriad of other reasons as well, anxiety over data protection and recovery will remain at or near the top of the list of major concerns for IT managers across the globe.
The costs associated with data loss are staggering. According to the National Archives and Records Administration in Washington, DC, 93 percent of companies that lost their data center for 10 days or more due to a disaster filed for bankruptcy within one year of the disaster, and 50 percent of businesses left without data management for that same period filed for bankruptcy immediately.
In addition, according to the 7th Annual ICSA Labs Virus Prevalence Survey (conducted in 2002), recovery from desktop-oriented disasters costs the average company between $100,000 and $1,000,000 per year. These figures include both hardware and software costs.
Even with all of the backup and recovery solutions available in today’s market, the question that keeps popping up is whether real-time backup and recovery exists today.
The Debate over Real-Time Backup and Recovery
Phil Goodwin, senior program director for the META Group, seems to think that it does not. He believes that real-time backup and recovery will not arrive until storage virtualization matures, which he foresees as not occurring until 2005 or 2006.
Jon Toor, director of marketing at ONStor, believes that Mr. Goodwin is correct in that some form of virtualization is required for real-time backup and recovery. After all, he says, the intent is to provide uninterrupted access to data, irrespective of where the data is located. “But virtualization is a broad term, and it implies a level of complexity that is not essential – and perhaps not even desirable – for a backup and recovery role,” asserts Toor.
An example, he says, would be the ONStor filer. In this environment, servers and clients on the LAN store file-based data to a SAN filer, which then stores it to SAN-attached disk. “On a scheduled basis, the SAN filer replicates the data to a second disk array. With a second data copy to draw on, data accesses may be transitioned from the primary to the secondary array at any time. The client may be unaware of this transition. In this sense, it is a form of virtualization, but a simplified form that is technically accessible now,” says Toor.
Zophar Sante, vice president of marketing for SANRAD, believes that real-time replication does exist today through virtualization. “Real-time replication is done through advanced mirroring techniques from the network layer, meaning that intelligent SAN gateways have the ability to intercept incoming writes and replicate them in a synchronous real-time fashion to independent storage systems located within a SAN,” he says.
Sante adds that when a write packet comes into a SANRAD virtualization switch, it is converted to a Fibre Channel (FC) write request, which the switch then replicates and sends to individual storage systems within an FC SAN architecture. “We are doing this today even if the storage systems are in different buildings or even across town connected using FC direct or FC tunneling. This technique means that data is replicated in real time to a remote storage system, and because these remote systems are generally disk based, their performance is equal to that of the primary system,” he concludes.
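The synchronous mirroring Sante describes boils down to one rule: the client's write is acknowledged only after every independent storage system holds the data. A minimal Python sketch of that rule follows; it is not SANRAD's code, and the in-memory dictionaries merely stand in for independent SAN storage systems.

```python
class MirroredVolume:
    """Sketch of synchronous write mirroring at the network layer: an
    incoming write is forwarded to every backing store, and the client
    is acknowledged only once all copies succeed."""

    def __init__(self, *stores: dict):
        # Each dict models an independent storage system (block -> data).
        self.stores = stores

    def write(self, block: int, data: bytes) -> bool:
        # Replicate the incoming write to every storage system.
        for store in self.stores:
            store[block] = data
        # Synchronous semantics: acknowledge only when all mirrors match.
        return all(store.get(block) == data for store in self.stores)

    def read(self, block: int) -> bytes:
        # Any mirror can serve reads; use the first one holding the block.
        for store in self.stores:
            if block in store:
                return store[block]
        raise KeyError(block)
```

Because the acknowledgment waits on every mirror, the remote copy is never behind the primary, which is why a disk-based remote system can take over with no performance penalty.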
Data Protection Initiatives Remain a Top Priority
A recent survey of 400 IT managers revealed that data protection initiatives are among their top priorities, and that backup and recovery remains a major pain point. The IT managers surveyed cited excessive resource drain, high costs, and overall poor reliability as the major issues. But beyond seeing the problems, says Toor, these managers also see the opportunities for improvement. With the costs of disk-based processes falling and new technologies emerging, users now believe that viable solutions are near. “In fact,” says Toor, “the same survey indicated that 60 percent of IT managers plan to invest in disk-to-disk solutions within the next year.”
Sante says the majority of customers contacting SANRAD are looking for solutions that address three major concerns: the ability to eliminate single points of failure caused by individual storage systems, multi-pathing capabilities, and disk-based remote backup.
“In the new era, IT professionals want to replicate their data in real time to two independent storage systems located in independent physical locations,” says Sante. Multi-pathing means that any server on the network has an alternate path to the primary and secondary storage systems. “If the path or storage system is down, the server automatically re-routes to an alternate path and/or alternate storage system,” he explains.
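The automatic re-routing Sante describes can be sketched as a client that tries each route in order and fails over when one is down. This is a simplified illustration, not a real multi-pathing driver; the paths here are just callables that either return data or raise an error.

```python
class MultipathClient:
    """Sketch of multi-pathing: a server holds an ordered list of routes
    (path + storage system combinations) and transparently fails over to
    the next route when one is unreachable."""

    def __init__(self, paths):
        # Each path is a callable: key -> data, raising OSError if the
        # path or its storage system is down.
        self.paths = paths

    def read(self, key):
        errors = []
        for path in self.paths:
            try:
                return path(key)            # first healthy route wins
            except OSError as exc:          # path or storage system down
                errors.append(exc)
        raise OSError(f"all paths failed: {errors}")
```

The application sees a single `read()` call; the failover from a dead primary route to a live alternate happens inside the client, which is exactly the behavior Sante attributes to multi-pathing.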
In addition, he says, most popular backup applications already have the ability to back up to virtual tape on disk or directly to disk volumes. “By using iSCSI, these backup disk systems can be remotely located across town or across the country.” And, he adds, disk backup is 10 times faster than most tape backup systems.
Backup Is a Major Resource Drain
Data loss can be very costly, not only in dollars and downtime, but also in productivity. So at what point does downtime associated with backup and recovery put an organization at risk? According to a 2001 survey conducted by Ontrack Data Recovery Services, 40 percent of respondents said 72 hours, 21 percent said 48 hours, 15 percent said 24 hours, 8 percent said 8 hours, 9 percent said 4 hours, 3 percent said 1 hour, and 4 percent said within the hour.
It's also been said that backup and recovery comprises roughly 60 to 70 percent of the cost and effort associated with storage management. Toor believes that the percentage varies widely, but whatever the percentage is, he does believe that IT managers are likely to agree that backup continues to be a major resource drain.
“Too much of the process requires manual intervention, and too much monitoring is required to detect failure,” he says. “These are low value-add activities that do nothing to improve immediate service levels or long-term infrastructure.” Consequently, he believes much of the time currently devoted to backup and recovery management could be more constructively applied elsewhere.
Sante agrees that the percentages are probably correct, but thinks that a better term would be high availability and business continuity. “Backup and recovery take time, but they are only a part of the data availability challenge,” he says. “What if the storage system dies (RAID controller failure, theft, fire, etc.) or the network connection dies (switch goes down, cable breaks, etc.)? These failures are just as bad, and having a backup on tape does not get clients connected to the data. Having the data is meaningless if the server can’t get to it or read it,” he concludes.
“Does Real-Time Backup and Recovery Exist Today?” is a two-part feature. The second article will answer the following questions:
The majority of organizations still use tape backup systems and probably will for some time, but will today's backup and recovery processes be obsolete in the next 24 months?
As disk drives – particularly ATA devices – get cheaper, will more organizations move to disk-based backup?
Is tape still the preferred method for customers whose servers reside in data centers?
Are organizations now looking toward "hot" database backups, which work while the database is still running, because they cannot afford to shut down their databases?
Are organizations showing a greater interest in data-replication services?
Although disk-based backup will continue to make inroads into tape's territory, will most organizations use it as an adjunct for tape rather than as a replacement? And as vendors introduce cheaper disk arrays, will organizations use "staging to disk," where disk storage is used for a couple of months and then the data is moved to tape for longer-term storage?
Will tape continue to be used because it enables data not replicated to disk to be stored offsite?