Fair Isaac Corp., best known as FICO, is the credit-scoring company that enables consumers to check their credit ratings and manage their financial health online. However, the San Rafael, Calif.-based company’s primary business is delivering business analytic modeling via software, service bureau, and ASP solutions. Almost all leading U.S. banks and credit card issuers, as well as a host of insurers, major retailers, telecommunications providers, healthcare organizations, and government agencies, rely on Fair Isaac solutions. Data storage and recovery are therefore absolutely business critical.
When FICO decided to migrate from mainframes to a distributed architecture in 1998, no one foresaw the storage sprawl and backup overload the company would be facing by 2002.
“We had 20 DLT 8000 tape drives handling 3 TB per day of full and incremental backups, with reliability running at 60% due to poor configuration and hardware failures, requiring extensive manual intervention,” explains Simon Wiltshire, Open Systems Director at FICO. He quickly realized that the company required a rebuild of its storage environment from the ground up.
Several guiding principles were established for the project. The new storage infrastructure not only needed to have in place the basics of scalability, redundancy, and reliability, it also had to be able to cope with continued future growth. To achieve this, FICO focused on operational simplicity as well as leveraging the capabilities of existing staff.
To meet these core requirements, FICO looked for an experienced consultancy with proven tape SAN expertise. It selected Minneapolis, Minn.-based CNT, which also happened to have local resources to service FICO’s data center in Plymouth, Minn. CNT provided a project manager and a technical expert who worked closely with FICO staff and StorageTek hardware engineers. Rather than bringing in a complete consultancy team or outsourcing the project, FICO saw keeping its internal team involved throughout the entire project lifecycle as vital to success.
After a thorough review of the options, the finalized architecture for FICO’s SAN included a single StorageTek 9310 PowderHorn Tape Library storage module, eight StorageTek 9940A tape drives, VERITAS NetBackup Data Center, Brocade SilkWorm 2800 fabric switches, and Sun V880 backup servers. This provided both the data storage capacity and the transfer horsepower required to meet FICO’s present needs, and allowed easy expansion of the SAN to meet future growth.
The following diagram illustrates the architecture of the FICO Tape SAN:
Phase 1
Phase 1 of the migration project comprised confirming the hardware selection, designing and implementing the hardware infrastructure, preparing migration plans, and beginning the actual data migration with the eight largest databases. Phase 2 involved migrating the remaining 220 servers, stabilizing and tuning the SAN, and completing the training and handover.
The project began by revisiting existing backup and archive requirements on the mainframe, then categorizing those requirements by business need with the aim of simplifying and consolidating the underlying business processes. Only when that step was fully complete did FICO feel it could consider a move to a new architecture.
“Getting right into our business processes to clarify business unit assumptions and validate the pre-existing SLAs was fundamental,” says Wiltshire. “We could then even out the current incremental and full backup practice we had ended up with over time.” He reports that this approach allowed the company to better schedule and automate backups to meet business demands. Further, it permitted FICO to centralize control of all backup jobs.
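The idea of "evening out" fulls and incrementals can be sketched in a few lines: spread each server's weekly full backup round-robin across designated days so no single night bears the whole full-backup load, with incrementals on the remaining nights. This is an illustrative model only; the server names and the choice of weekend full-backup days are assumptions, not FICO's actual NetBackup policy.

```python
from collections import defaultdict

def plan_backups(servers, full_days=("Sat", "Sun")):
    """Assign each server one full backup per week, round-robin across
    full_days to even out drive load; all other nights are incrementals.
    Hypothetical scheduling sketch, not FICO's actual policy."""
    week = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
    schedule = defaultdict(list)
    for i, server in enumerate(sorted(servers)):
        full_day = full_days[i % len(full_days)]  # round-robin the fulls
        for day in week:
            kind = "full" if day == full_day else "incremental"
            schedule[day].append((server, kind))
    return schedule

plan = plan_backups(["db01", "db02", "app01", "app02"])
# Each server gets exactly one full backup per week; fulls alternate
# between Saturday and Sunday instead of piling onto one night.
```

The same evening-out logic is what a centralized scheduler such as NetBackup's policy engine applies at scale across hundreds of clients.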
FICO completed Phase 1 during a three-month period and Phase 2 one month later. Wiltshire confirms that he completed the project on time and on budget, and believes it to be one of the few successful consulting projects FICO has undertaken. The main benchmark of success set at the project outset was backup reliability, which rose from 60% to 98% daily reliability, an especially impressive feat when you consider the backup volume of 25 TB per week against a system capability of 1 TB per hour.
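The article's figures imply the weekly backup window directly: 25 TB at 1 TB per hour means roughly 25 hours of aggregate tape time per week. A trivial sanity check of that arithmetic (the function name is mine, not from the article):

```python
def backup_window_hours(weekly_tb, throughput_tb_per_hr):
    """Hours of aggregate drive time needed per week at the stated
    system throughput. Simple arithmetic check of the article's figures."""
    return weekly_tb / throughput_tb_per_hr

# Figures from the article: 25 TB per week at 1 TB per hour.
hours = backup_window_hours(25, 1.0)  # -> 25.0 hours per week
```

In other words, the new SAN could absorb the full weekly load in about a day of cumulative drive time, leaving ample headroom for the growth the project's guiding principles anticipated.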
“To succeed, we learned that you have to have the right expertise available, especially if you are using a new technology,” says Wiltshire. “Have your team involved in the entire project and set clear expectations and guiding principles.”
In addition, he considers it essential to fully verify that assumptions and requirements are based in fact and not emotion in order to control the project scope. Last but not least, his advice is to always plan for the future.
“Planning for the future was the lesson of our earlier SAN efforts in ’98-’01, and served as hard-learned but invaluable experience for this project, which has been an undoubted success,” concludes Wiltshire.