Ensuring Business Continuance via High Availability SANs


The highest priority of any IT organization should be business continuance through effective data protection. During a time of unparalleled growth in the enterprise, centralized data protection in the form of a Storage Area Network (SAN) is the clear answer for safeguarding corporate data. SANs offer an enterprise-wide solution to business continuance challenges, enable more reliable and frequent backups, allow for more rapid access to data across the enterprise, and provide true, enterprise-wide system management at the core.

Over the next decade, the three-trillion-dollar IT industry will begin re-architecting to bring data to the core of the enterprise, facilitating the transformation to information-driven business. According to IDC, a leading IT industry analyst, protection of and access to that data will drive new architecture strategies. The enterprise must take extraordinary measures to ensure that its most valuable asset, its data, is well managed and always available.

The highest priority of any IT organization is business continuance through effective data protection. While even the most innocuous lapse in protection merely impacts careers, in the worst case, data loss due to failed or missing backups devastates profitability and opens the company to potential litigation. An absent data protection strategy, combined with a single defining moment of failure, can undermine the entire enterprise. Until recently, very straightforward solutions were employed for data protection: a tape device or library with sufficient capacity was attached to each server, and well-known procedures were implemented to ensure the execution of the backups.

The volume of data demanded during normal enterprise operations has increased dramatically due to the combination of plunging storage costs and growing online enterprise practices. With the advent of client-server computing, company data has become widely distributed throughout the enterprise, making it nearly impossible for IT management to express absolute confidence that every server is regularly and reliably backed-up. In a 24×7 operation, individually managing each of these distributed backups has become prohibitively expensive, if not impossible.

Dependence on data access for routine operations has become increasingly prevalent as nearly every aspect of the enterprise is now automated. The backup window was traditionally defined as "off hours" (overnight), but application data is now expected to remain available 24 hours a day, seven days a week, effectively closing that window. As enterprises expand their operations globally via the Internet, it becomes increasingly difficult, if not impossible, to shut down any part of the enterprise, which drives the requirement to scale without interruption. And even as the backup window shrinks, or disappears entirely, backups demand more and more time due to ever-increasing data volume.

The Gartner Group has estimated that backup costs comprise 34% to 56% of total storage investments, a figure that is rising as IT organizations increasingly mandate round-the-clock availability. Duplicated hardware purchases and the personnel performing the backups are also significant components of this cost. In most cases, backup is still performed one server at a time.

Solving the Data Protection Challenge

Several technologies must be integrated to solve such complex data protection challenges. An effective, efficient means of centralizing the backup hardware and management is required, and a separate data transport is necessary to isolate the backup traffic from the Local Area Network (LAN). Additionally, sophisticated software is necessary to allow centralized backups of distributed servers to occur while applications remain online. Finally, there are the experienced enterprise-class storage integration professionals who are vital to the success of the solution.

The incorporation of a new means of data transport with speeds greater than those currently supported by the LAN would be the optimal solution and would have the added benefit of being uninhibited by the distance limitations of directly attached Small Computer System Interface (SCSI) devices. A centralized data protection scheme becomes practical through the development of this model.

Furthermore, the centralization of storage and backup tape resources is facilitated by the implementation of a Storage Area Network (SAN). Most significantly, rather than contributing to congestion of the LAN, all data and backup traffic is transported over the high-speed SAN. Ultimately, this improves LAN performance by isolating the data traffic from the command and control traffic and by freeing the LAN from carrying backup data.

During a time of unparalleled growth in the enterprise, centralized data protection is the clear answer for safeguarding corporate data. A central backup mechanism improves reliability, speeds recovery, and ensures better protection, all at a lower total cost of ownership.


The Interoperability Challenge

Designing a complete SAN-based solution is an extremely complex task that requires detailed research, testing, issue resolution, and documentation. To ensure interoperability, each component's software, firmware, and hardware revision levels must be tested. Furthermore, the complete solution must be examined to ensure full compatibility with the intended applications, and detailed processes and methodologies must be developed and documented to provide predictable and repeatable results.

There is also considerable risk that haphazardly integrating new technologies within the data center will lengthen the return-on-investment period, perhaps even rendering the project fiscally unfeasible. As a result, a task as complex as architecting a SAN is rarely attempted solely by IT staff, and when it is, it is only for traditional core mission-critical applications. Consequently, many advantageous applications of new technologies go unexplored until a clear implementation plan has been adopted.

Problems

Distributed data protection is unreliable, difficult to administer, and expensive. To manage the backup, each server requires dedicated tape hardware and personnel. So, while accountable for the protection of all enterprise data, central IT management is entirely dependent on distributed personnel to ensure backup execution. The accumulation of individual backup sites further compounds the complexity and expense of reliably performing best practices, such as maintaining off-site copies of backups for disaster recovery.

Furthermore, while centralizing backups by transporting data over the existing LAN to data center tape libraries was an option in the past, today this approach offers limited scalability: it cannot support storage growth that is outpacing the capacity expansion of the LAN. Additionally, backup traffic critically impacts the performance of other key applications, and the LAN, given the number and diversity of attached devices, cannot provide a stable platform for mission-critical data protection traffic. Finally, routing backup traffic over the LAN may compromise security.


LAN Constraint Freedom

Storage consolidation creates a common pool of storage that is shared among all servers; thus, one or more new or existing tape libraries can easily, and without disruption, be connected to the SAN. These libraries are controlled by a dedicated backup server running backup application software (such as VERITAS NetBackup or Tivoli Storage Manager). All backups are managed from this server via an intuitive graphical user interface, allowing a single administrator to manage a larger number of servers.

Through automation of the entire process, a single administrator can manage the backup of thousands of users connected to numerous servers, with day-to-day work reduced to monitoring and responding to exception conditions. With numerous scheduling options for frequency, administrators can define backups as full, incremental, or differential. Formal data protection policies can also be defined within the backup application to ensure consistent protection across the enterprise.
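The distinction between the three backup types comes down to which cutoff time a file's last modification is compared against. The sketch below illustrates that selection logic; the function name `select_files` and the mtime-based change detection are illustrative assumptions, not the behavior of any particular backup product:

```python
import os


def select_files(root, last_full, last_backup, mode):
    """Pick files for a backup run of the given mode.

    - full:         copy everything
    - incremental:  copy files changed since the last backup of any kind
    - differential: copy files changed since the last *full* backup
    """
    cutoff = {
        "full": 0,                     # epoch: every file qualifies
        "incremental": last_backup,
        "differential": last_full,
    }[mode]
    selected = []
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            # A file is included when it was modified at or after the cutoff.
            if os.path.getmtime(path) >= cutoff:
                selected.append(path)
    return selected
```

The trade-off this logic exposes: incrementals stay small but require replaying the whole chain to restore, while differentials grow over time but restore from just two tape sets (the last full plus the last differential).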

Most backup applications are database-aware, which allows for non-disruptive backups of online databases with minimal impact to CPU performance. Prior to the backup, backup agent processes running on each server establish snapshots or transparent split-mirror copies of each database. The data from servers may then be backed up to the tape library by applications that deliver multiple data streams from all servers simultaneously. So, with only minimal command traffic sent over the LAN, all backup data traffic is routed over the SAN. For server-specific off-site backup or co-location support, some backup applications can also automatically create multiple copies of a backup or de-multiplex a tape.
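The multiplexing of simultaneous data streams described above can be sketched with one thread per server feeding a shared tape queue. This is a simplified illustration under assumed names (`backup_server`, `run_parallel_backup`); real backup applications manage agents and tape drives far more elaborately:

```python
import queue
import threading


def backup_server(server_name, chunks, tape_queue):
    """Per-server backup agent: stream this server's snapshot data
    as chunks labeled with their origin, so the multiplexed tape
    stream can later be de-multiplexed per server."""
    for chunk in chunks:
        tape_queue.put((server_name, chunk))


def run_parallel_backup(servers):
    """Interleave data streams from all servers into one tape feed.

    `servers` maps a server name to its list of data chunks.
    Returns the multiplexed stream as written to 'tape'.
    """
    tape_queue = queue.Queue()
    threads = [
        threading.Thread(target=backup_server, args=(name, data, tape_queue))
        for name, data in servers.items()
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    written = []
    while not tape_queue.empty():
        written.append(tape_queue.get())
    return written
```

Because every chunk carries its server label, the interleaving order on tape does not matter for restore, which is what makes the de-multiplexing mentioned above possible.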

Since each element of the data protection solution is scalable, administration and management of future growth is greatly simplified. Without disruption, each server attached to the SAN may quickly allocate additional storage as required from the existing central pool. In order to accommodate future needs, the storage, tape libraries, control processors, and the SAN itself can be flexibly and extensively scaled as necessary.

For server-less backup, a true data protection solution offers complete hardware and application support. Each disk device receives instructions to copy data directly to the tape device without first sending the data through the production application server's CPU. By moving data directly to the backup device, server-less backup significantly reduces the application server's CPU overhead and accelerates the backup process.
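A minimal sketch of this handoff, with toy `Disk` and `Tape` classes standing in for real devices: the application server contributes only a list of (start, length) extents, and the copy loop plays the role of the SAN data mover, so the data never passes through the application host:

```python
class Disk:
    """Toy block device holding raw bytes."""
    def __init__(self, data):
        self.data = data

    def read(self, start, length):
        return self.data[start:start + length]


class Tape:
    """Toy sequential tape device."""
    def __init__(self):
        self.stream = b""

    def write(self, chunk):
        self.stream += chunk


def serverless_backup(extents, disk, tape):
    """The application server's only contribution is the extent list;
    the copy itself runs here, off the application host's CPU, in the
    spirit of a third-party (extended-copy) command."""
    for start, length in extents:
        tape.write(disk.read(start, length))
```

In a real deployment the copy loop would be a SCSI EXTENDED COPY command executed by a data mover in the SAN fabric, but the division of labor, extent list from the server versus bulk data movement elsewhere, is the same.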

Protection Value

A true data protection solution significantly improves productivity by ensuring higher availability of data throughout the backup process and rapid access to and restoration of lost data. This is in addition to the greater protection realized with more reliable and frequent backups. Shifting backup traffic to the SAN dramatically improves the existing LAN's performance and extends its useful life.

Finally, a robust data protection solution delivers a rapid return on investment that can be measured in just a few months, not years. The need to purchase new backup devices for each new server is eliminated, and existing hardware investments are better utilized. Storage that was once individually connected to each server gains higher utilization when centralized, while the associated cost of management and administration is reduced proportionately. Most importantly, the cost of downtime due to data protection failures or data loss is significantly reduced or virtually eliminated, which directly contributes to the bottom line of the business.


Summary and Conclusions

Over the next decade, the IT industry will begin rebuilding its data management systems to bring data to the core of the enterprise. The result is a transformation to truly information-driven business. However, to ensure that their most valuable and continually growing asset, data, is well managed and continually available throughout this transformation, companies must take extraordinary measures. Enterprises face the daunting challenges of system reliability and availability, fast recovery from any changes or scheduled downtime, data security, scalability and interoperability, and the need to realize a lower total cost of ownership as they evolve into core-to-edge information-driven enterprises.

Business continuance can be assured by giving the highest priority to effective data protection and by aiming directly at the center of these challenges. While even the most innocuous lapse in protection can impact careers, data loss due to failed or missing backups can devastate profitability.

The implementation of a Storage Area Network (SAN) centralizes and protects the critical storage and backup tape resources that house enterprise data. SANs offer an enterprise-wide solution to business continuance challenges, enable more reliable and frequent backups, allow for more rapid access to data across the enterprise, and provide true, enterprise-wide system management at the core.

Finally, centralized data protection through SAN implementation is the clear answer for safeguarding corporate data at a time of unparalleled growth and transformation in the enterprise. By maximizing the enterprise's existing IT investments, a central backup mechanism speeds recovery, improves reliability, and ensures better protection, all at a lower cost of ownership, further assuring the success of business continuance initiatives.


John Vacca is an information technology consultant and internationally known author based in Pomeroy, Ohio. Since 1982, John has authored 39 books and more than 485 articles in the areas of advanced storage, computer security, and aerospace technology. John was also a configuration management specialist, computer specialist, and the computer security official for NASA's space station program (Freedom) and the International Space Station Program, from 1988 until his early retirement from NASA in 1995. John was also one of the security consultants for the MGM movie "AntiTrust," which was released on January 12, 2001. John can be reached on the Internet at jvacca@hti.net.

