In the good old days of corporate computing, the mainframe was all IT had. System users can clearly remember how data center staff brushed them off every time they asked for a change to a critical application or a more up-to-date approach to an application design. Users were told that any request would take 18 person-months of development time to deliver, would need senior-level approval, or some other response from a well-worn litany. To the average worker, it seemed that in the world of corporate computing, control of the mainframe was the ultimate power.
Therefore, it was no surprise that when open systems entered the marketplace, the pendulum began to swing. Departments fled the iron grip of the mainframe-centric data center and started doing things on their own, giving birth to the age of decentralized computing. And it hasn't stopped at the departmental level. According to Fred Moore of Horison Information Strategies, fifty percent of all digital data is kept on personal computers and not integrated into enterprise IT.
Now, however, a change is on the horizon as the pendulum begins to swing back toward a re-centralization of computing. But why would this move ever be contemplated after the problems of the past? As in so many cases, the answer is money. As executives scrutinize and tighten the corporate computing budget, mainframe applications begin to make much more economic sense. And nowhere does it make better sense than in the area of data backup and recovery.
Effective disaster prevention is a new corporate mandate given the events of the past year. Yet, with the volume of data increasing at an exponential rate, running on a myriad of platforms located anywhere in the enterprise, how can a company ensure it is fully protected? According to Horison Information Strategies, the growth of data over the next few years will generate ten times more storage volume than can be managed on open systems. And what corporation can afford to continually add tape libraries as its open systems capacity is maximized?
Many enterprises can limit these expenditures simply by utilizing the mainframe as the primary backup server. Often, the mainframe already has massive amounts of tape storage and, over the last 25 years, it has perfected the management and rotation of tape storage to and from the disaster recovery site. Using the mainframe's tape storage capacities and disaster recovery processes eliminates the need to invest in expensive tape libraries and tape storage that only would be used for server backup.
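The tape management and rotation discipline the article credits to the mainframe can be illustrated with a minimal sketch. The grandfather-father-son scheme below, the function names, and the choice of month-firsts and Sundays for full backups are all illustrative assumptions, not anything described in the article; on a real mainframe this scheduling is handled by the system's own tape management software.

```python
from datetime import date, timedelta

def gfs_label(day: date) -> str:
    """Classify a backup date under an assumed grandfather-father-son
    scheme: monthly fulls on the 1st, weekly fulls on Sundays,
    daily incrementals otherwise."""
    if day.day == 1:
        return "monthly-full"
    if day.weekday() == 6:  # Sunday
        return "weekly-full"
    return "daily-incremental"

def offsite_rotation(start: date, days: int) -> list[tuple[date, str]]:
    """Return the (date, tape class) schedule for a run of days.
    In this sketch, the monthly and weekly fulls are the tapes
    that would be shipped to the disaster recovery site."""
    return [(start + timedelta(i), gfs_label(start + timedelta(i)))
            for i in range(days)]
```

The point of the sketch is only that the policy is mechanical and easy to automate; what the mainframe adds is 25 years of operational practice in actually executing it, including the physical rotation of tapes to and from the recovery site.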
In addition to containing the costs of tape backup storage, using the mainframe as a backup server contains infrastructure and personnel costs as well, because it provides the following benefits:
- Automatically ensures sufficient scalability to handle future storage growth.
- Automatically provides 24x7 availability to backup storage.
- Automatically puts server backup into the hands of experienced storage specialists.
- Ensures server backup tapes are automatically sent to the disaster recovery site.
What's more, by using existing high-bandwidth mainframe technologies such as ESCON or a SAN, IT departments can achieve the fastest single-stream and aggregate LAN-less backup throughput while also achieving fast restore times.
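The effect of channel bandwidth and parallel streams on the backup window is simple arithmetic, sketched below. The 17 MB/s figure is the commonly quoted nominal rate for an ESCON channel, and the assumption that throughput scales linearly across streams is an idealization; neither number comes from the article.

```python
def backup_window_hours(data_gb: float, mb_per_sec: float,
                        streams: int = 1) -> float:
    """Hours needed to move data_gb of backup data, assuming each
    stream sustains mb_per_sec and streams scale linearly
    (an idealization; real aggregate throughput is lower)."""
    total_mb = data_gb * 1024
    return total_mb / (mb_per_sec * streams) / 3600

# One terabyte over a single nominal 17 MB/s ESCON channel takes
# roughly 17 hours; four parallel channels cut that to just over 4.
single = backup_window_hours(1024, 17, streams=1)
aggregate = backup_window_hours(1024, 17, streams=4)
```

Arithmetic like this is why aggregate throughput, not just single-stream speed, determines whether a nightly backup window is achievable.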
Containing costs by leveraging existing IT infrastructure and mainframe technology keeps the total cost of the enterprise backup and recovery solution to a minimum. It's a win-win situation. Server administrators are relieved of the growing burden of backup and the enterprise has minimized the cost of providing a scalable backup and recovery system.
Companies that utilize the mainframe environment in backup and recovery overcome the shortfalls of decentralization without turning back the clock on their end users. Leveraging SAN technology to connect mainframe and open systems allows backup and recovery data to move between the platforms at very high speed. In the mainframe environment, the accumulated terabytes of backup data from all the open systems servers and workstations are not a threat - just another day at the office for the mainframe.
Obviously, the mainframe is a central resource, and even in enterprises with several data centers, there will typically be scores of remote locations compared to very few central sites. How, in such situations, could the mainframe become an integral part of a backup solution? In addition to reliability and scalability, today's mainframe also stands for manageability, and that feature can be made available to the entire enterprise: from central to remote, from servers to workstations, and from individual files to intricate database structures.
As a working example, Wright State University in Dayton, Ohio backs up over 90 different desktops and servers running eight different operating systems located at the main and branch campuses. The day-to-day backup work for this distributed application is managed by the mainframe due to the many advantages it delivers including speed and efficiency. The mainframe includes many standard ease-of-use utilities, which help perform standard tasks such as figuring out which tapes are needed for backup. Many open system applications require the system administrator to manage this task.
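The tape-selection chore the article mentions reduces to a catalog lookup, sketched below. The catalog structure (a mapping from backed-up file path to tape volume serial) and the function name are assumptions for illustration; mainframe tape management systems maintain an equivalent catalog automatically, which is exactly the burden lifted from open systems administrators.

```python
def tapes_for_restore(catalog: dict[str, str],
                      files: list[str]) -> set[str]:
    """Given an assumed catalog mapping file path -> tape volume
    serial, return the set of tape volumes an operator must mount
    to restore the requested files."""
    missing = [f for f in files if f not in catalog]
    if missing:
        raise KeyError(f"not in any backup: {missing}")
    return {catalog[f] for f in files}

# Hypothetical catalog: two files happen to share one volume.
catalog = {
    "/etc/passwd": "VOL001",
    "/home/alice/thesis.doc": "VOL002",
    "/etc/hosts": "VOL001",
}
```

With many hosts and operating systems in play, as at Wright State, automating this lookup is the difference between mounting two tapes and hunting through a library by hand.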
While speed of backup is an important factor, it is often the speed of recovery that can make or break a company. According to a META Group survey, more than 30% of companies that suffer a catastrophic disaster such as fire, flood or earthquake never reopen their doors. What's more, of the companies surveyed, more than 70% had not yet developed a disaster recovery and business continuity plan. Creating a disaster recovery plan centered on the mainframe, with its ability to quickly recover data, simply makes sense in today's world. Add to that the fact that the mainframe offers unrivalled availability and that experts in policy management, practices and procedures are located in the mainframe data center, and the reasons for relying on the mainframe are even more compelling.
Companies need to take a fresh look at the mainframe, as it no longer presents a threat to end users. Instead, the mainframe can provide safety, reliability and peace of mind for every application manager in the enterprise. Data centers now have the opportunity to establish themselves as service providers again, and this time around it can be as backup service providers, for one simple reason: the mainframe is well suited to the task.
About the Author: Christian Traue is the Director of Product Management for Tantia Technologies