On Demand Storage Stressed at Comdex


A largely disappointing Comdex 2003 just wrapped up in Las Vegas. With attendance down to a third of the level achieved right after Sept. 11, 2001, the organizers are gamely trying to put a positive spin on things. But when you can walk through the Comdex exhibit hall in a couple of hours and meet everyone you intended to (without appointments) in a morning, it's hard to see much to smile about.

It’s obvious somebody somewhere got it in their head that 50,000 quality attendees would be more desirable than a quarter of a million people from all walks of life. Granted, several vendors did have some positive things to say about this shift.

“I noticed that lead quality was up compared to other shows,” said David Houde, a customer service engineer with Somix Technologies, a network management company from Sanford, Maine. “Attendees seemed to be focused on their specific needs and had particular vendors mapped out before the show.”

Most exhibitors, however, grumbled about the turnout, and the consensus was largely negative. As a result, the pizzazz, extravagance, and overhype that Comdex once embodied are no more. It felt like a meeting with an old boxing champ: you remember the strutting arrogance and unwavering self-belief he exuded in his prime, and can't quite reconcile that with the run-down man before you. Advice to Comdex: emulate Ali or Foreman and win the title again, or get out of the ring.

One worthwhile feature of the show, however, was the series of on demand computing sessions, in which experts from the vendors touting on demand in its various forms, along with industry analysts and end users, discussed the subject at length.

“We saved $300,000 a year through our on demand strategy by being able to cancel our frame relay contract,” said Kenneth McCardle, assistant vice president of information systems at Southern Farm Bureau Casualty Insurance Company of Ridgeland, Miss., one of the largest property/casualty providers in the nation. The company achieved this using several components of Computer Associates' Unicenter (including Business Process Views) as well as VMware to virtualize Windows servers. “We've used the technology to reduce the turnaround of insurance applications from two to three weeks down to 30 minutes or less.”

OK. But what is on demand all about? Or adaptive computing, as HP calls it? Or seamless computing (Microsoft), N1 (Sun), or utility computing (as the media seems to prefer)?

KISS — Keep it Simple, Storage

Richart Escott, director of storage management software at HP, explains the need for on demand storage by highlighting the storage explosion: 5 exabytes added last year, equivalent to 500,000 times the entire contents of the Library of Congress. In response, organizations have over-provisioned and are now suffering from low utilization.

He lays out four elements that are an integral part of HP’s adaptive enterprise initiative:

Simplification: Eliminating customization and reducing the number of elements to manage, so that automation can be implemented

Standardization: Adopting standard interfaces, enterprise architectures, and processes, including standards such as IP-based networking and the SNIA's Storage Management Initiative Specification (SMI-S); a schematic sketch of the common-interface idea follows this list

Modularity: Breaking down monolithic structures and deploying modular systems, along with virtualized servers and storage

Integration: Rapidly connecting applications and business processes, both inside and outside the enterprise, in order to manage the link between business and IT
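To make the standardization point concrete, here is a schematic sketch of the common-interface idea: one management abstraction, many vendor implementations. It is an analogy only; the real SMI-S is a CIM/WBEM-based specification, and every class and method name below is invented.

```python
from abc import ABC, abstractmethod

# Schematic analogy for standardization: one management interface,
# many vendor implementations. SMI-S itself is a CIM/WBEM-based
# specification; all names here are invented for illustration.

class StorageArray(ABC):
    """Hypothetical vendor-neutral management interface."""

    @abstractmethod
    def capacity_gb(self) -> int: ...

    @abstractmethod
    def provision_volume(self, size_gb: int) -> str: ...

class VendorAArray(StorageArray):
    def capacity_gb(self) -> int:
        return 4096                      # stand-in for a vendor API call

    def provision_volume(self, size_gb: int) -> str:
        return f"vendor-a-vol-{size_gb}gb"

class VendorBArray(StorageArray):
    def capacity_gb(self) -> int:
        return 8192

    def provision_volume(self, size_gb: int) -> str:
        return f"vendor-b-lun-{size_gb}"

# With a common interface, one management tool can drive
# heterogeneous hardware without vendor-specific plumbing.
for array in (VendorAArray(), VendorBArray()):
    print(array.capacity_gb(), array.provision_volume(100))
```

The point of the sketch is the shape, not the names: once every array answers the same calls, the automation HP describes becomes possible across vendors.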

Interestingly, all the big vendors outlined a similar vision for on demand computing and agreed on its primary element: providing computing resources and storage when and where you need them, without hassle. Instead of wrestling with administrative complexities, you simply request capacity and have it available. The resources can be provided within the company itself or made available by a third party.

“In a way it is ‘back to the future,’ as we appear to be evolving back to a model similar to the old mainframe days, where blade servers integrated in racks are being used to consolidate storage,” says Rich Napolitano, vice president of Sun’s storage systems group.

Napolitano's presentation highlighted the four components of storage: disks and arrays; access via Ethernet or Fibre Channel switching; data services; and applications such as storage management, data center management, and storage resource management. The old problems with vendor lock-in, he contends, were caused by data services being tied too closely to disks and arrays. Today, the value is moving from RAID and disks up the food chain toward data services.

Napolitano used the analogy of the electric motor. Once a product that everybody bought directly, the electric motor is no longer purchased by itself; motors come built into many other devices and products. The same is expected to happen to RAID and storage hardware: such components will be built into on demand solutions, but will not be the focus of the purchase. Instead, the innovation is occurring in areas such as volume management, striping, snapshots, and virtualization. And that is why standardization is so important: to move from an era of proprietary hardware to one in which storage can be automated within a heterogeneous environment.
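To make one of those data services concrete, here is a minimal sketch of round-robin block striping, the layout technique behind RAID 0-style volumes. It is illustrative only; the disk count and stripe depth are arbitrary, and real volume managers add parity, caching, and failure handling.

```python
# Minimal sketch of round-robin block striping. Parameters are
# arbitrary; real volume managers add parity, caching, and recovery.

def stripe_location(logical_block: int, num_disks: int, stripe_depth: int = 1):
    """Map a logical block to (disk index, block offset on that disk)."""
    stripe = logical_block // stripe_depth   # which stripe unit the block is in
    disk = stripe % num_disks                # stripe units rotate across disks
    row = stripe // num_disks                # full stripe rows already laid down
    offset = row * stripe_depth + logical_block % stripe_depth
    return disk, offset

# Example: 4 disks, stripe depth of 2 blocks.
for lb in range(8):
    disk, offset = stripe_location(lb, num_disks=4, stripe_depth=2)
    print(f"logical block {lb} -> disk {disk}, offset {offset}")
```

With four disks and a stripe depth of two, logical blocks 0 through 7 land in pairs on disks 0 through 3, which is exactly the parallelism striping buys.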

“Even EMC is endorsing SMI-S; it's a standard the whole industry seems to be getting behind in order to achieve simplification,” says Napolitano. “That will help to make storage usable by mere mortals instead of getting bogged down in the plumbing.”

The tremendous need for on demand storage was well illustrated by Jens Tiedeman, IBM's vice president of storage software. He made it clear that it is still difficult to build and manage a heterogeneous SAN, so maximizing the utilization of physical assets remains a serious challenge. With each component having a unique interface, installation and configuration remain a problem for end users.

He cited the example of the telephone industry in the 1930s, when a study projected that, based on the predicted increase in call volume, the country would need 100 million switchboard operators by 1980.

“We face a similar situation today with storage,” says Tiedeman. “In a decade we would need such a huge number of storage administrators that it couldn’t possibly happen.”
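To see the shape of that extrapolation, consider a toy projection. Every figure in it (the installed base, the growth rate, and the capacity one administrator can manage by hand) is an assumption made up for illustration, not a number from Tiedeman's talk.

```python
# Toy projection in the spirit of the switchboard-operator analogy.
# All three inputs are invented assumptions, not figures from IBM.

managed_tb = 100_000   # assumed installed base under management, in TB
growth_rate = 0.60     # assumed annual data growth
tb_per_admin = 10      # assumed capacity one admin can manage manually

for year in (0, 5, 10):
    capacity = managed_tb * (1 + growth_rate) ** year
    admins = capacity / tb_per_admin
    print(f"year {year:2d}: {capacity:,.0f} TB -> ~{admins:,.0f} administrators")
```

Compound growth does the work: whatever the starting numbers, manual administration scales linearly with the data while the data grows geometrically, which is the crux of the analogy.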

Autonomic Computing

Another IBM speaker laid out a further wave of technology innovation destined to impact the storage landscape: autonomic computing. IBM has essentially folded the term into its on demand strategy. Autonomic means self-governing or independent; the concept is borrowed from the autonomic nervous system, by which the body governs heartbeat, blood flow, glands, and so on without the person paying any attention to them.

“The amount it costs to manage the infrastructure is now more than the cost of the infrastructure itself,” contends Rick Telfer, director of autonomic computing at IBM. “We need autonomic systems to bring about simplicity.”

Essentially, autonomic computing heralds intelligent open systems that:

a) Manage complexity

b) Know themselves

c) Continuously tune themselves

d) Adapt to unpredictable conditions

e) Provide a safe environment by being self-configuring, self-healing (fixing problems as well as determining their causes), and self-optimizing (tuning systems to the workload at hand); a minimal sketch of the underlying control loop follows this list
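A self-optimizing system of the kind described here reduces, at its simplest, to a monitor/analyze/act loop. The sketch below is a minimal illustration of that loop; the metric, thresholds, and responses are all invented for the example.

```python
import random

# Minimal sketch of the monitor/analyze/act loop behind self-optimizing
# behavior. Metric, thresholds, and responses are invented.

HIGH_WATER = 0.85   # assumed utilization ceiling before acting
LOW_WATER = 0.30    # assumed floor below which capacity is reclaimed

def read_utilization() -> float:
    """Stand-in for a real monitoring probe."""
    return random.random()

def autonomic_step() -> None:
    usage = read_utilization()                                   # monitor
    if usage > HIGH_WATER:                                       # analyze
        print(f"{usage:.0%} used: provisioning more capacity")   # act
    elif usage < LOW_WATER:
        print(f"{usage:.0%} used: reclaiming idle capacity")
    else:
        print(f"{usage:.0%} used: within the tuned range")

for _ in range(5):
    autonomic_step()
```

Production autonomic systems layer root-cause analysis and policy on top, but the core idea is this loop: sense, decide, adjust.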

Telfer stresses that such technology would not eliminate the jobs of database administrators and storage administrators. “By automating the environment, you remove all the grunt work and manual entry,” he explains. “That helps these administrators do their real jobs and perform their proper functions.”

Other fears about on demand were addressed by Charlie Boyle, director of N1 architecture at Sun. “N1 does not mean that HAL takes over the organization tomorrow,” he said.

He preferred to address the concept in terms of today's realities: systems with utilization rates that average anywhere from two to 25 percent. The goal of N1 is to manage the entire storage landscape or data center from one system, i.e., virtualization that links storage to business processes, services, or specific servers, depending on need.
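The arithmetic behind that utilization argument is simple and worth making explicit. In the toy calculation below, the server count and target utilization are assumptions; only the two to 25 percent range comes from Boyle.

```python
import math

# Toy consolidation arithmetic. Server count and target utilization
# are assumptions; the 2-25 percent range is from Boyle's talk.

servers = 100              # assumed physical server count
avg_utilization = 0.10     # within the 2-25 percent range cited
target_utilization = 0.70  # assumed safe ceiling for a virtualized pool

work = servers * avg_utilization          # total load in server-equivalents
needed = math.ceil(work / target_utilization)
print(f"{servers} servers at {avg_utilization:.0%} load consolidate "
      f"onto ~{needed} machines at {target_utilization:.0%}")
```

At 10 percent average load, 100 servers carry only 10 servers' worth of work; even with generous headroom, a virtualized pool of roughly 15 machines could absorb it.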

“The reality, though, is that most companies are not ready for such an advanced architecture,” says Boyle. “Existing IT infrastructures are typically too complex and not mature enough to transition easily.”

Even so, he cited several organizations that have implemented the technology in rudimentary form. One expects to realize three-year savings of $10.2 million on an online transaction system, while another has cut service provisioning time from one week to a day.

“While the gains are real, there are many unknown processes that need to be addressed and sorted out inside any organization before you will see real business value,” concludes Boyle.

Lori Wigle, director of the enterprise platform group at Intel, concurs. “On demand is no grand solution,” she says. “If the basics aren't in place, you cannot achieve a virtual state. Yet many companies today don't know how many servers they have or where they are situated.”

She sees these various on demand initiatives as a means of eliminating the complexity of storage and I/O. Blades, she believes, are a great platform for autonomic capabilities.

“Self-healing and self-optimization are a good four years away,” predicts Wigle. “I believe it will take four years for true virtualization to become a reality.”

Feature courtesy of EnterpriseIT Planet.

Drew Robb is a contributing writer for Datamation, Enterprise Storage Forum, eSecurity Planet, Channel Insider, and eWeek. He has been reporting on all areas of IT for more than 25 years. He has a degree from the University of Strathclyde in the UK and lives in the Tampa Bay area of Florida.
