Case Study: Taming the Storage Jungle


Managing storage in the old mainframe days was a simple matter. Everything was stored in one place, under the watchful eyes of the operators.

But client/server computing and low-cost disks changed all that. PCs now come with more storage than most mainframes used to have. Add to this all the capacity scattered across the enterprise's file and print servers, database servers, file storage systems, and web servers, and you end up with a vast jungle of capacity that is a nightmare to manage.

“Hard coupling of storage to the servers and applications results in low storage utilization, increases management complexity, and continually drives up the cost for operational support,” says Robert Smalley, senior project specialist in the Bank of Montreal’s Mid-Range Services Department. “We needed to centralize and simplify the storage management.”

Planning for the Future

The Bank of Montreal, part of the BMO Financial Group, is Canada’s oldest bank, with more than 33,000 employees and $247 billion in assets. The bank’s staff, together with customers accessing account information over the Internet or through ATMs, are supported by the company’s 7,500 IT workers. BMO has three main data centers — two in Toronto, where the bank is headquartered, and one in Chicago, where its U.S. unit, Harris Bank, is located.

As with most large enterprises, BMO uses several operating systems, with a different type of direct-attached disk for each. BMO’s AIX servers use IBM’s proprietary Serial Storage Architecture, which transfers data at up to 160 MBps; its Sun servers utilize LVS RAID arrays; and its Windows NT/2000 servers use a variety of disks from IBM and Compaq.

Combined, the data centers hold more than 43 terabytes of storage split among 300 servers. While this is adequate for current needs, the company anticipates capacity demand growing 20% to 30% per year. Providing for such expansion couldn’t be done in the same haphazard way as in the past, so rather than trying to expand the existing mishmash of storage, BMO decided to take a strategic approach to the problem — a five-year, $20 million plan to consolidate and simplify storage.


Dropping Direct Attached

That strategy consists of replacing the existing direct-attached storage with a series of Storage Area Networks (SANs). Each of the three major data centers receives its own SAN, with additional SAN islands deployed at the office towers and dedicated to particular operating systems and business needs.

“Our strategy is to deploy solutions that support cross-platform resource sharing and measurable service and management improvements which provide cost-effective, highly available, ‘on demand’ networked-storage solutions, which facilitate automation of business continuity operations,” Smalley explains.

Rolling out the new equipment began in September, when the bank installed a 125TB STK 9840 tape storage system from Storage Technology Corp. of Louisville, Colo. To improve management of the data, the bank also deployed IBM’s Tivoli Storage Manager on an IBM eServer p660, a rack-mounted midrange UNIX server.

Just as important as the storage equipment, though, is the connection between the user and the files. Any bottleneck along the line wastes even the highest-performance back end, so BMO wanted to put in a 64-port director-class switch. It evaluated switches from several manufacturers, including the StorageTek SN6000 and McDATA Corp.’s (Broomfield, Colo.) ED-6064 Director, but wound up selecting INRANGE Technologies Corp.’s FC/9000 Fibre Channel Director. BMO was already using other INRANGE hardware, and the FC/9000 supports IBM’s 17 MBps fiber-optic Enterprise Systems Connection (ESCON) channel.

“We already had favorable past experience and familiarity with this technology since the bank was utilizing an INRANGE CD/9000 switch with the zOS mainframe,” says Smalley. “This implementation allowed BMO to utilize existing cabling and ESCON infrastructure.”

The main data center in Toronto received two of the FC/9000s so there wouldn’t be a single point of failure. A third unit went to the other Toronto facility, which functions as a backup for the main data center. The backup site will also be adding a second FC/9000 so that both locations have full dual-fabric switching.

Once the tape drives and switches were in place, it was time to roll out the centralized storage equipment, starting with a 10TB IBM TotalStorage Enterprise Storage Server (ESS). The ESS scales up to 384 disks and offers a total capacity of 55.9TB. In addition to ESCON, it also connects via Fibre Channel, 2Gbit Fibre Channel/FICON, and SCSI. It shares storage among servers running IBM’s proprietary operating systems (OS/400, OS/390, and z/OS) as well as Unix, Windows NT/2000, and Linux.

The Missing Piece

BMO will continue rolling out its centralized storage through 2006. Besides the new hardware, the bank is also reducing the number of UNIX flavors it supports, cutting down to just Sun’s Solaris and IBM’s AIX. In addition, it is trimming the number of supported database environments to save on support and licensing costs.

The Fibre Channel infrastructure and disk subsystems should all be in place by the end of this year. Connecting and migrating UNIX storage is in progress and scheduled for completion in 2005. Next year, BMO will start connecting and migrating the mainframe, midrange, and Intel server storage.

Although project completion is still several years down the road, there are already noticeable results.

“We have increased connectivity to disk space, so we almost have ‘storage on demand,’” Smalley relates, “and we have seen some performance throughput gains.”

There is one piece, however, missing from the equation — storage management. Hardware prices are down to a few pennies per megabyte, but managing and maintaining that storage runs an estimated six to ten times the hardware cost. Smalley did use Tivoli Storage Manager for part of the project, but he has yet to find a product that can adequately manage the entire group of SANs.

“We need a ‘world class’ storage resource management tool for open systems disk storage,” he says, “but the software doesn’t yet have the necessary maturity to manage a structure such as ours.”

This story originally appeared on Datamation.


Drew Robb
Drew Robb is a contributing writer for Datamation, Enterprise Storage Forum, eSecurity Planet, Channel Insider, and eWeek. He has been reporting on all areas of IT for more than 25 years. He has a degree from the University of Strathclyde in the UK, and lives in the Tampa Bay area of Florida.
