The Data Center of the Future

Companies have been moving toward storage virtualization for years. Now they are looking to do the same with the rest of their IT resources. Virtualization brings all the computing resources together under a common interface, where they can be viewed and managed as a single system.

This solves two major problems for the data center. First, it cuts down on the time needed to configure and assign resources: instead of the administrator setting up the services handled by each machine, the virtualization software dynamically assigns the traffic load to the best available server. Second, it cuts costs by reducing over-provisioning.

A typical scenario today is for each application to be assigned to its own server, with a second server acting as a backup or development server. Without virtualization, both servers need to be oversized so they can comfortably manage the greatest anticipated traffic load. With virtualization, however, this per-server over-provisioning is no longer necessary, since all the available servers are treated as a single pool of capacity.
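
To make the concept concrete, here is a minimal Python sketch of the kind of placement logic a virtualization layer might apply. The class names and the single-number load metric are illustrative assumptions, not any vendor's actual interface; the point is simply that work goes to the least-loaded machine in a shared pool rather than to a dedicated server sized for the worst case.

    from dataclasses import dataclass

    @dataclass
    class Server:
        name: str
        capacity: float       # normalized capacity (e.g., CPU units)
        load: float = 0.0     # load currently assigned to this server

        @property
        def headroom(self) -> float:
            return self.capacity - self.load

    class VirtualPool:
        """Toy scheduler that treats many servers as one pool of capacity."""

        def __init__(self, servers):
            self.servers = list(servers)

        def assign(self, workload: float) -> Server:
            # Place the workload on the server with the most spare capacity.
            best = max(self.servers, key=lambda s: s.headroom)
            if best.headroom < workload:
                raise RuntimeError("pool is out of capacity")
            best.load += workload
            return best

    # Three modestly sized servers absorb demand that would otherwise
    # force each dedicated server to be sized for its own peak load.
    pool = VirtualPool([Server("web1", 10), Server("web2", 10), Server("web3", 10)])
    for demand in [4, 6, 3, 5]:
        target = pool.assign(demand)
        print(f"placed {demand} on {target.name} (load now {target.load})")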

Evolving Standards

Finally, the data center of the future will be based on common standards in order to ensure greater interoperability and ease the management burden. Currently, there are two competing specifications.

One of these is the Data Center Markup Language (DCML), an XML-based specification that provides a structured model and encoding to describe, construct, replicate, and recover data center environments and elements. It's a new effort, started by EDS and Opsware in mid-October 2003. Six weeks later, the DCML Organization had a website up and running, about fifty members, and plans to issue its 1.0 specification for public comment by the end of the year. Eventually, the organization will submit the spec to a standards body such as the Distributed Management Task Force (DMTF) for approval.
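
As a rough illustration of what such an XML-based description might look like in practice, the short Python sketch below builds and serializes a DCML-style document. The element names are invented for the example; the actual DCML 1.0 schema is not spelled out here.

    import xml.etree.ElementTree as ET

    # Invented element names -- the real DCML schema may differ.
    env = ET.Element("environment", name="web-tier")
    server = ET.SubElement(env, "server", id="web01")
    ET.SubElement(server, "os").text = "Linux"
    ET.SubElement(server, "application").text = "apache"

    # A management tool could replicate or recover the environment by
    # replaying a description like this one.
    print(ET.tostring(env, encoding="unicode"))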

Microsoft, meanwhile, is offering its own XML specification called the System Definition Model (SDM). Last May, the company demonstrated SDM in conjunction with HP. SDM helps automatically configure Windows servers and applications.

The DCML Organization says its standard will accept SDM information in order to manage Windows servers as part of a heterogeneous environment. Microsoft, however, is not a member of the organization. Also missing are major hardware manufacturers such as Dell, IBM, Hitachi, and HP. Computer Associates, BEA, BMC, and other major management software vendors, on the other hand, are part of the DCML Organization.

These are some of the factors affecting the future development of data centers. But what will all this add up to from the viewpoint of a data center manager? For starters, the job will become more about provisioning services than about knowing the ins and outs of all the data center's specific components. Just as consumer hardware and applications are plug-and-play, look for enterprise applications and hardware to become self-configurable as well.

Barring a disaster like SCO winning its lawsuits, many if not all of your machines will be running Linux. Autonomic systems will correct most errors without human intervention. Open standards will make interoperability problems a thing of the past, and the hardware costs associated with stocking the new-age data center will be marginal. And while the data center won't run itself, it will be easier to manage than ever before.

Feature courtesy of Enterprise IT Planet.

» See All Articles by Drew Robb
