When it came to solving its data storage problems, the Haas School of Business at the University of California’s Berkeley campus hit the books. What it learned was that the solution to more affordable storage was to implement a complementary iSCSI-based storage area network (SAN).
Founded in 1898, making it the oldest U.S. business school at a public university, the Haas School found itself running out of ports on its Fibre Channel (FC)-based storage network as the number of servers grew.
“We knew that we didn’t want to buy additional Fibre Channel technology for hosts that really didn’t need it,” says Chris Harwood, system administrator at the Haas School.
The solution he opted for was deployment of a tiered storage architecture that combined an IP SAN with the existing FC SAN. Harwood's rough estimate is that the school saved tens of thousands of dollars by going the IP SAN route, while also implementing a solution that was the right fit for the application.
The Haas School of Business is a mini campus of three connected buildings around a central courtyard. The school has 160 faculty members, 170 staff, and about 1,700 students in both undergraduate and graduate degree programs.
Two years ago, Harwood recognized that the school’s appetite for storage was growing. He was also aware that the cost of FC SAN equipment was on the rise. Up until that point, the school had FC-based storage for its servers and direct-attached internal disk on its servers, about 35 hosts in all. The core functionality for the storage was for e-mail and SQL server databases utilizing multiple paths to a redundant storage array architecture.
“I realized that it was becoming more expensive to purchase SAN and connect the host to storage than servers were costing,” says Harwood.
According to Harwood, an FC HBA card for a server typically costs $1,000-$1,500, and the school was running multiple paths, which meant multiple HBAs plus individual ports on a gigabit switch at another $1,000 each, plus additional software at another $1,000 — and on top of all that came the cost of the disks in the array.
After doing some research, Harwood learned that iSCSI technology using a standard GigE adapter, at a cost of about $300, would mean big savings. “We could get almost the same functionality for a fraction of the cost,” he says.
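The math behind that "fraction of the cost" claim can be sketched from the figures quoted in the article. This is an illustrative back-of-the-envelope comparison only; the dollar amounts are the circa-2004 estimates Harwood cites, and the two-path assumption reflects the school's clustered-application setup.

```python
# Rough per-host connectivity cost comparison, using the figures quoted
# in the article (illustrative only; circa-2004 estimates, disks excluded).

def fc_cost_per_host(paths=2):
    hba = 1250          # midpoint of the $1,000-$1,500 FC HBA range
    switch_port = 1000  # gigabit switch port, one per path
    software = 1000     # additional software Harwood mentions
    return paths * (hba + switch_port) + software

def iscsi_cost_per_host():
    gige_adapter = 300  # standard GigE adapter at about $300
    return gige_adapter

fc = fc_cost_per_host()
iscsi = iscsi_cost_per_host()
print(f"FC per host:    ${fc}")      # $5500 with two paths
print(f"iSCSI per host: ${iscsi}")   # $300
print(f"difference:     ${fc - iscsi}")
```

Multiplied across the roughly 35 hosts in the school's environment, per-host differences on this order are consistent with the "tens of thousands of dollars" in savings Harwood estimates.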
For clustered critical applications, such as e-mail and databases, the business school required multiple path capability. But for the research computing infrastructure — a high-performance computing cluster — and software installation services, iSCSI and one path would suffice. “If the host went down for reboot midday, it wouldn’t be a big problem,” says Harwood.
Before purchasing iSCSI storage, Harwood wrote down his criteria and read up on technology in magazines. One of the most important requirements he had was compatibility with the school’s older storage architecture, which included older Dell PV650s, and multiple operating systems. “I also wanted an ASIC-based [non-PC] device because they tend to be robust,” he says.
His search for iSCSI switches boiled down to two vendors, StoneFly Networks and SANRAD. When he dug a little deeper into the two vendors' solutions, SANRAD was the only one offering an ASIC architecture.
SANRAD put Harwood in touch with reseller RADirect Inc., which shipped a demo unit to the school, but not before he had several telephone conversations with the reseller as well as with other product users who had the PV650s.
“The University of Alaska was using the SANRAD V-Switch 3000 iSCSI switch in an environment similar to ours and they said it worked fine,” says Harwood.
The school received the demo unit in Spring 2004, a V-Switch 3000 with SCSI and FC ports. “We made sure the switch could see all three storage arrays, attaching some SCSI arrays to our research computing environment,” says Harwood. The V-3000 was also used for backup to disk for a backup server in limited production.
This initial configuration, with two servers, was in place for about six to eight months. "We went from adding 150Gig for backup to 4TB," he says, plus another four hosts.
The school purchased its first V-Switch 3000 in July 2004 and will be adding two more servers: one for video streaming and a second general-purpose file server. It has also purchased a second V-Switch 3000 and will cluster the two switches.
Harwood is also taking advantage of the product’s high-end virtualization feature. “As the need arises, anytime we bring up services we go through the matrix,” he adds.
The virtualization features of the V-Switch 3000 make allocating storage simpler, according to Harwood. “On a storage array, we allocate one large LUN and use the V-Switch to split it into smaller volumes for the host,” he says.
For a clustered SQL server, for example, he took 100GB and split it into a 10GB quorum disk, a 40GB data disk, and a 45GB backup and log disk.
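The allocation step Harwood describes can be sketched as carving named volumes out of one large LUN and tracking what remains. The function and volume names below are hypothetical — in practice this is done through the V-Switch's own management interface — but the arithmetic mirrors his SQL server example.

```python
# Hypothetical sketch of carving one large LUN into smaller host volumes,
# mirroring Harwood's 100GB SQL server example. Not a real V-Switch API.

def carve_lun(total_gb, volumes):
    """Check that named volumes fit in a single LUN; return the leftover GB."""
    used = sum(volumes.values())
    if used > total_gb:
        raise ValueError(f"requested {used}GB exceeds the {total_gb}GB LUN")
    return total_gb - used

# The split from the article: quorum, data, and backup/log volumes.
volumes = {"quorum": 10, "data": 40, "backup_and_log": 45}
leftover = carve_lun(100, volumes)
print(f"allocated {sum(volumes.values())}GB, {leftover}GB unallocated")
```

Note that the three volumes in the example total 95GB, leaving a small slice of the 100GB LUN unallocated for future growth.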
The Haas School of Business considers its recent purchase of iSCSI-based storage a smart choice. Not only did the school save money on the initial purchase, but it also expects to see additional savings as the network grows. Management is also easy, says Harwood.
He also notes that upgrades to the SANRAD management tool have made it more user-friendly, and he likes the new performance statistics tool.