Book Excerpt: SAN Backup and Recovery, Page 9


By W. Curtis Preston

Establish the backup mirror

As shown in Figure 4-8, you must establish (reattach) the backup mirror to the primary disk set (1), causing the mirror to copy to the backup mirror any regions that have been modified since the backup mirror was last established (2).

Figure 4-8. Establishing the backup mirror


The following list shows how this works for the various products:

Compaq
This functionality is available on the RAID Array 8000 (RA8000) and the Enterprise Storage Array 12000 (ESA12000) using HSG80 controllers in switch or hub configurations. The SANworks Enterprise Volume Manager and Command Scripter products support Windows NT/2000, Solaris, and Tru64.

Although Compaq uses the term BCV to refer to the set of disks that comprise a backup mirror, their arrays don't have any commands that operate on the entire BCV as one entity. All establishing and splitting of mirrors takes place at the individual disk level. Therefore, if you have a striped set of mirrored disks to which you want to attach a third mirror, you need to assign a third disk to each mirrorset that makes up that stripe. To do this, you issue the following commands:

set mirrorset-name nopolicy

set mirrorset-name members=3

set mirrorset-name replace=disk-name

First, the nopolicy flag tells the controller not to add members to the mirrorset automatically; you will do so manually. Next, you add a member to the mirrorset by setting its member count to one more than it currently has. (The number 3 in this example assumes there was already a mirrored pair to which you are adding a third member.) Finally, you specify the name of the disk that is to be the third mirror.

Once this relationship is established for each disk in the stripe, it will take some time to copy the data from the existing mirror to the backup mirror. To check the status of this copy, issue the command show mirrorset mirrorset-name.

These commands can be issued via a terminal connected directly to the array, or via the Compaq Command Line Scripter tool discussed earlier.
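Because the establish happens disk by disk, the sequence is easy to script. Below is a minimal sketch that generates the three-command sequence for each mirrorset in a stripe; the mirrorset and disk names are made-up placeholders, and in practice you would feed the output to the array console or the Command Scripter rather than just printing it.

```shell
#!/bin/sh
# Emit the HSG80 CLI commands needed to attach a third member to a
# mirrorset.  Names are hypothetical; adapt to your configuration.
emit_establish_cmds() {
  # $1 = mirrorset name, $2 = spare disk to add as the third member
  printf 'set %s nopolicy\n' "$1"
  printf 'set %s members=3\n' "$1"
  printf 'set %s replace=%s\n' "$1" "$2"
}

# Example: a stripe built from two mirrorsets, each getting one spare.
emit_establish_cmds mirr_1 disk30100
emit_establish_cmds mirr_2 disk30200
```

Generating the commands in one place makes it harder to forget a mirrorset in the stripe, which would leave that slice of the backup mirror stale.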

EMC
On EMC, establishing the BCV (i.e., backup mirror) to the standard (i.e., primary disk set) is done with the symbcv establish command, part of the EMC TimeFinder package. (TimeFinder is available on both Unix and Windows.) When issuing this command, you need to tell it which BCV to establish. Since a BCV is actually a set of devices that are collectively referred to as "the BCV," EMC uses the concept of device groups to tell TimeFinder which BCV devices to synchronize with which standard devices. Therefore, prior to issuing the symbcv establish command, you need to create a device group that contains all the standards and the BCV devices with which they are associated. To establish the BCV to the standard, issue the following command:

# symbcv establish -g group_name [-i]

The -g option specifies the name of the device group you created earlier. If the BCV has previously been established and split, you can also specify the -i flag, which tells TimeFinder to perform an incremental establish: it compares the BCV and standard devices and copies over only those regions that have changed since the BCV was last established. You can even modify the BCV while it's split. If you modify any regions on the BCV devices (such as when you overwrite each device's private regions with Veritas Volume Manager so you can import them on another system), those regions are also refreshed from the standard, even if they have not changed on the primary disk set.

Once the BCV is established, you can check the progress of the synchronization with the symbcv verify -g group_name command. This shows the number of "BCV invalids" and "standard invalids" that still have to be copied. It also sometimes lists a percent-complete column, but I have not found that column to be particularly reliable or useful.
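A pre-backup script typically has to establish and then wait for synchronization before proceeding. The following is only a sketch: the assumption that symbcv verify exits zero once the group is synchronized should be checked against your TimeFinder documentation, and the SYMBCV_CMD variable exists purely so the logic can be exercised without an array.

```shell
# SYMBCV_CMD defaults to the real CLI; override it for dry runs.
: "${SYMBCV_CMD:=symbcv}"

establish_and_wait() {
  group=$1
  # Incremental establish: copy only regions changed since the split.
  $SYMBCV_CMD establish -g "$group" -i || return 1
  # Poll until verify reports the group synchronized (exit status 0
  # is an assumption; confirm against your TimeFinder version).
  until $SYMBCV_CMD verify -g "$group" >/dev/null 2>&1; do
    sleep 30
  done
}
```

Only after establish_and_wait returns should the script move on to putting the database in backup mode and splitting the mirror.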

HDS
On HDS, establishing the shadow volume (i.e., backup mirror) to the primary mirror is done with the paircreate command, part of the HDS ShadowImage package. (ShadowImage is available on both Unix and NT.) When issuing this command, you need to tell it which secondary volume (S-VOL) to establish. Since an S-VOL is actually a set of devices (drawn from what HDS calls the reserve pool), HDS uses the concept of groups to tell ShadowImage which S-VOL devices to synchronize with which primary volumes (P-VOLs). Therefore, prior to issuing the paircreate -g device_group command, you need to create a device group that contains all the P-VOLs and the S-VOL devices with which they are associated.

In order to establish (i.e., synchronize) the S-VOL to the P-VOL, issue the following command:

# paircreate -g device_group

If the S-VOL has previously been established and split, you can instead use the pairresync command, which tells ShadowImage to resynchronize the pair. ShadowImage applies to the S-VOL the writes that were made to the P-VOL (and logged in cache) while the pair was split. You can even modify the S-VOL while it's split. If you modify any regions on the S-VOL devices (such as when you overwrite each device's private regions with Veritas Volume Manager so you can import them on another system), those regions are also refreshed from the primary, even if they have not changed on the primary mirror.

Once the S-VOL is established, you can check the progress of the synchronization with the pairdisplay -g device_group command (optionally adding -m all). This shows the number of "transition volumes" that still have to be copied and the percentage of the copy already done.
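The CCI command set that ShadowImage uses also includes pairevtwait, which blocks until a pair reaches a given state, so a script can wait for the copy instead of polling pairdisplay in a loop. A sketch follows; the CCI variable is only a seam for testing, and the exact flags should be verified against your CCI version.

```shell
: "${CCI:=}"   # leave empty in production; set to "echo" for a dry run

establish_svol() {
  group=$1
  $CCI paircreate -g "$group" || return 1
  # Block for up to 600 seconds until the pair reaches PAIR state.
  $CCI pairevtwait -g "$group" -s pair -t 600
}
```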

Put the database in backup mode

As shown in Figure 4-9, once the backup mirror is fully synchronized with the primary disk set, you have to tell the data server to put the database in backup mode (1). In most client-server backup environments, this is accomplished by telling the backup software to run a script prior to a backup. The problem here is that the backup client, where such a script normally runs, isn't the host where the script needs to run. The client in Figure 4-9 is actually Backup Server B, not Data Server. This means you need to use something like rsh or ssh to pass the command from Backup Server B to Data Server. (ssh servers are now available for both Unix and Windows.) The commands you need to run on Data Server will, of course, depend on the application.
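A hedged sketch of that pre-backup hook, run on Backup Server B, is shown below. The hostname and script path are placeholders, and the REMOTE_SHELL variable simply lets you substitute rsh for ssh (or a stub when testing).

```shell
# Run a quiesce script on the data server before splitting the mirror.
remote_quiesce() {
  host=$1
  cmd=$2
  ${REMOTE_SHELL:-ssh} "$host" "$cmd"
}

# Example with placeholder names: abort the backup if the database
# cannot be put into backup mode.
# remote_quiesce dataserver /usr/local/bin/begin_backup.sh || exit 1
```

Failing loudly here matters: if the quiesce step silently fails, the split mirror will contain an inconsistent database image.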

Figure 4-9. Backing up the backup mirror


Here are the steps for Exchange, Informix, Oracle, and SQL Server:

Exchange
Exchange must be shut down prior to splitting the mirror. This is done by issuing a series of net stop commands in a batch file:

net stop "Microsoft Exchange Directory" /y

net stop "Microsoft Exchange Event Service" /y

net stop "Microsoft Exchange Information Store" /y

net stop "Microsoft Exchange Internet Mail Service" /y

net stop "Microsoft Exchange Message Transfer Agent" /y

net stop "Microsoft Exchange System Attendant" /y

Informix
Informix is relatively easy. All you have to do is set the appropriate environment variables and issue the command onmode -c block. Unlike Oracle in backup mode, however, once this command is issued, all commits will hang until the onmode -c unblock command is issued.
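Because commits hang while Informix is blocked, the block/split/unblock sequence is best wrapped so the database is unblocked even if the split fails. A sketch, where the ONMODE variable is only a seam for testing and the split command itself is array-specific:

```shell
: "${ONMODE:=onmode}"

with_informix_blocked() {
  # Usage: with_informix_blocked <command that splits the mirror...>
  "$ONMODE" -c block || return 1
  "$@"                  # e.g., the array's split command
  rc=$?
  "$ONMODE" -c unblock  # always unblock, even if the split failed
  return $rc
}
```

Keeping the blocked window as short as possible limits how long application commits are held up.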

Oracle
Putting Oracle databases in backup mode is no easy task. You need to know the name of every tablespace, and place each one into backup mode with the command alter tablespace tablespace_name begin backup. Many people create a script to discover all the tablespaces automatically and place each into backup mode. Putting tablespaces into backup mode causes a minor performance hit, but the database continues to function normally.
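A common approach is to have sqlplus list the tablespaces and turn each name into a begin backup statement. The generator below is split out so it can be fed any name list; the sqlplus invocation shown in the comments is the usual pattern, but the connect details are placeholders.

```shell
# Turn a list of tablespace names (one per line) into SQL that puts
# each tablespace into backup mode.
gen_begin_backup() {
  while read -r ts; do
    [ -n "$ts" ] && printf 'alter tablespace %s begin backup;\n' "$ts"
  done
}

# Typical use (connect string is a placeholder):
#   sqlplus -s "/ as sysdba" <<'SQL' | gen_begin_backup
#   set heading off feedback off pagesize 0
#   select tablespace_name from dba_tablespaces;
#   SQL
# ...then pipe the generated statements back into sqlplus.
```

A matching script issuing end backup for each tablespace is needed after the mirror is split.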

SQL Server
As discussed previously, it isn't necessary to shut down SQL Server prior to splitting the mirror; however, doing so will speed up recovery time. To stop SQL Server, issue the following command:
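Following the net stop pattern used for Exchange above, a sketch for a default SQL Server instance would look like the line below. The service name MSSQLSERVER is an assumption that holds only for a default instance; named instances use a service name of the form MSSQL$InstanceName.

```shell
net stop MSSQLSERVER /y
```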





1. This term may be changed in the near future, since iSCSI-based SANs will, of course, use the LAN. But if you create a separate LAN for iSCSI, as many experts are recommending, the backups will not use your production LAN. Therefore, the principle remains the same, and only the implementation changes.

2. As mentioned later in this chapter, SCSI devices can be connected to more than one host, but it can be troublesome.

3. This is actually a high rate of change, but it helps prove the point. Even with a rate of change this high, the drives still go unused the majority of the time.

4. 1.575 TB ÷ 8 hours ÷ 60 minutes ÷ 60 seconds = 54.6 MB/s

5. There are several tape drives capable of these backup speeds, including AIT-3, LTO, Mammoth, Super DLT, 3590, 9840, and DTF.

6. 20 minutes × 24 hosts = 480 minutes, or 8 hours

7. These are Unix prices. Obviously, Windows-based cards cost much less.

8. Although it's possible that some software products have also implemented a third-party queuing system for the robotic arm as well, I am not aware of any that do this. As long as you have a third-party application controlling access to the tape library and placing tapes into drives that need them, there is no need to share the robot in a SCSI sense.

9. Network Appliance filers appear to act this way, but the WAFL filesystem is quite a bit different. They store a "before" image of every block that is changed every time they sync the data from NVRAM to disk. Each time they perform a sync operation, they leave a pointer to the previous state of the filesystem. A Network Appliance snapshot, then, is simply a reference to that pointer. Please consult your Network Appliance documentation for details.

10. It was Microsoft's partnership with Veritas that finally made this a reality. The volume manager for Windows 2000 is a "lite" version of Veritas Volume Manager.

11. Prior to 9i, this was done with the svrmgrl command, but this command has been removed from 9i.

12. There are vendors that are shipping gigabit network cards that offload the TCP/IP processing from the server. They make LAN-based backups easier, but LAN-free backups are still better because of the design of most backup software packages.

13. It's not quite 100%, since the second stripe doesn't have to be a RAID 5 set. If it were simply a RAID 0 set, you'd need about 90% more disk than you already have.

14. If your backup software supports library sharing.

