In the good old days of corporate computing, the mainframe was all IT had. System users can still remember how data center staff brushed them off every time they asked for a change to a critical application or a more up-to-date approach to an application design. Users were told that any request would take 18 person-months of development time to deliver, would require senior-level approval, or some other response from a well-known litany. To the average worker, it seemed that in the world of corporate computing, control of the mainframe was the ultimate power.
Therefore, it was no surprise that when open systems entered the marketplace, the pendulum began to swing. Departments fled the iron grip of the mainframe-centric data center and started doing things on their own, giving birth to the age of decentralized computing. And it hasn't stopped at the departmental level. According to Fred Moore of Horison Information Strategies, fifty percent of all digital data is kept on personal computers and not integrated into enterprise IT.
Now, however, a change is on the horizon as the pendulum begins to swing back to a re-centralization of computing. But why would this move ever be contemplated after the problems of the past? As in so many cases, the answer is money. As executives scrutinize and tighten the corporate computing budget, mainframe applications begin to make much more economic sense. And nowhere does it make better sense than in the area of data backup and recovery.
Effective disaster prevention is a new corporate mandate given the events of the past year. Yet, with the volume of data increasing at an exponential rate, running on a myriad of platforms located anywhere in the enterprise, how can a company ensure it is fully protected? According to Horison Information Strategies, data growth over the next few years will generate ten times more storage volume than can be managed on open systems. And what corporation can afford to continually add tape libraries as its open systems capacity is reached?
Many enterprises can limit these expenditures simply by utilizing the mainframe as the primary backup server. Often, the mainframe already has massive amounts of tape storage and, over the last 25 years, it has perfected the management and rotation of tape storage to and from the disaster recovery site. Using the mainframe's tape storage capacities and disaster recovery processes eliminates the need to invest in expensive tape libraries and tape storage that only would be used for server backup.
In addition to containing the costs of tape backup storage, using the mainframe as a backup server contains infrastructure and personnel costs as well, because it provides the following benefits:
- Automatically ensures sufficient scalability to handle future storage growth.
- Automatically provides 24x7 availability to backup storage.
- Automatically puts server backup into the hands of experienced storage specialists.
- Ensures server backup tapes are automatically sent to the disaster recovery site.
What's more, by using existing high-bandwidth mainframe technologies such as ESCON or a SAN, IT departments can achieve the fastest single-stream and aggregate LAN-free backup throughput, along with fast restore times.
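The tape rotation process described above, where nightly backup volumes move from the local library to the disaster recovery site and eventually back into the scratch pool, can be sketched in a few lines. This is a hypothetical illustration of the bookkeeping involved, not a real mainframe tape-management interface; the day thresholds and state names are assumptions for the example.

```python
from datetime import date, timedelta

# Illustrative tape-rotation bookkeeping: each nightly server backup is
# written to a tape volume, volumes are shipped to the disaster recovery
# (DR) site on a fixed cycle, and expired volumes return to the scratch
# pool for reuse. All thresholds below are example values.

OFFSITE_AFTER_DAYS = 1   # ship a volume offsite the day after it is written
RETAIN_DAYS = 14         # recall a volume for reuse after its retention window

def tape_location(written_on: date, today: date) -> str:
    """Return where a tape volume written on `written_on` should be today."""
    age = (today - written_on).days
    if age < OFFSITE_AFTER_DAYS:
        return "onsite"      # still in the local tape library
    if age < RETAIN_DAYS:
        return "offsite"     # held at the disaster recovery site
    return "scratch"         # retention expired; back in the scratch pool

# Example: check the status of three volumes of different ages.
today = date(2002, 6, 15)
for days_ago in (0, 3, 20):
    vol_date = today - timedelta(days=days_ago)
    print(vol_date.isoformat(), tape_location(vol_date, today))
```

The point of centralizing this logic on the mainframe is that one scheduler, one retention policy, and one DR shipping cycle cover every server, rather than each department maintaining its own ad hoc rotation.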