Supercomputer Center Tabs IBM for Storage


IBM has snapped up a large contract with Ohio Supercomputer Center (OSC) in which Big Blue will provide the facility with hardware and virtualized storage software to manage and run OSC’s research applications.

The infrastructure will serve data from OSC applications such as global climate modeling, weather forecasting, and genetic sequencing to thousands of researchers across Ohio, while reducing storage infrastructure costs and application downtime.

In the multi-million dollar contract announced Friday, OSC is setting up IBM’s TotalStorage SAN File System to work in harmony with its TotalStorage FAStT storage servers and its TotalStorage SAN Volume Controller.

The deal is highlighted by IBM’s SAN File System, which uses software to virtually link remote server and storage hardware so that mounds of data can be stored and accessed from one access point. The file system, managed by the SAN Volume Controller and supported by IBM’s FAStT servers, is designed to cut back on the complexity and costs associated with using disparate storage products.

The storage system implementation is a win for IBM, which competes with the likes of EMC, HP, and VERITAS in the race to help customers store and manage millions of files. The news follows IBM’s fourth quarter earnings announcement Thursday, a quarter in which the TotalStorage system enjoyed a 14 percent revenue gain compared to the fourth quarter of 2003.

With the new system, the center will have over 600 terabytes of storage capacity, roughly the space needed to hold more than 500 million books. OSC said it believes the new system will offer a substantial performance boost over its previous systems.
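As a rough sanity check of that comparison, the implied size per book works out to about a megabyte, which is in the right range for a full-length text-only book. The sketch below assumes decimal terabytes (10^12 bytes), a convention storage vendors commonly use:

```python
# Back-of-envelope check: 600 TB spread across 500 million books.
TB = 10**12  # decimal terabyte; an assumption, not stated in the article
capacity_bytes = 600 * TB
books = 500 * 10**6

bytes_per_book = capacity_bytes // books
print(f"{bytes_per_book / 10**6:.1f} MB per book")  # prints "1.2 MB per book"
```

At roughly 1.2 MB per book, the figure is consistent with plain-text storage of book-length works, though an actual allocation would vary with format and compression.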

“The large capacity storage will facilitate the massive data stores created by computing activities of researchers across Ohio,” said Paul Buerger, OSC Leader of Systems and Operations, in a press statement. “The capacity and performance of this new storage environment will allow researchers to attack problems that may have been difficult or impossible to address previously.”

Formerly known as Storage Tank, SAN File System provides a single point from which to manage files and databases. It was built with autonomic and grid technologies from IBM Research, which had worked on it for some six years before it saw the light of day last October.

SAN File System is geared to support thousands of computers, petabytes of data, and billions of files, and it will eventually work with products from other vendors as part of IBM’s attempt to take market share from competitors.

Story courtesy of Internet News.


Clint Boulton
Clint Boulton is an Enterprise Storage Forum contributor and a senior writer covering IT leadership, the CIO role, and digital transformation.
