Server users aren’t the only ones who need to worry about power and cooling issues. Rising capacity demand means that storage now accounts for nearly as much of a data center’s power load as servers.
That was the message at Liebert’s AdaptiveXchange show this week in Columbus, Ohio, where more than 2,200 IT, facility and data center managers were on hand to find out how to get a handle on their power and cooling problems.
According to a survey of Liebert’s data center user group, this has grown into a serious issue: 33 percent of respondents said they will be out of power and cooling capacity by the end of 2007, and 96 percent expect to be out of capacity by 2011.
“For many years, availability has always been of primary importance in IT,” says Bob Bauer, president of Liebert, the Emerson division behind the show. “The survey indicates that users now find heat and power density to be far greater challenges than availability.”
Due to the dramatic rise in power demands, data centers are finding themselves with no margin for error in the event of a cooling outage. Almost three quarters of respondents admitted they are down to a 20-minute window after an AC shutdown. In other words, they have 20 minutes to fix the problem before their servers overheat and begin to shut down.
“Energy efficiency is everyone’s problem, not just the server guys,” says Roger Schmidt, a distinguished engineer at IBM. “Although storage is not the big power gorilla in the data center, the rise in capacity demand means that storage now accounts for a substantial amount of the power load.”
How much? During a keynote, Dell CTO Kevin Kettler shared an analysis of his company’s data center. Within the IT equipment portion of the power load, servers dominated at 40 percent, followed by storage at 37 percent and networking/telecom at 23 percent.
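Those percentages are easier to grasp as absolute loads. A minimal sketch, assuming a hypothetical 500 kW of total IT equipment load (the total is an illustration, not a figure from the keynote):

```python
# Apply the reported split (servers 40%, storage 37%, networking/telecom 23%)
# to a hypothetical 500 kW of IT equipment load. The 500 kW total is an
# assumption for illustration only.
split = {"servers": 0.40, "storage": 0.37, "networking/telecom": 0.23}
total_it_load_kw = 500  # hypothetical total IT load

for category, share in split.items():
    print(f"{category}: {share * total_it_load_kw:.0f} kW ({share:.0%})")
# servers: 200 kW (40%), storage: 185 kW (37%), networking/telecom: 115 kW (23%)
```

On those assumptions, storage draws 185 kW against the servers’ 200 kW, which is the “nearly as much as servers” point in a nutshell.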
“Power, cooling and thermal loads are now top of mind,” says Kettler. “To drive for maximum efficiency, we have to look at all areas, from the client to the silicon, software, storage, servers and the data center infrastructure.”
Looking a few years down the road, he believes storage interconnects such as InfiniBand, Fibre Channel and Ethernet will converge into one integrated fabric under the 10Gb Ethernet banner.
“We are moving towards a unified fabric using 10Gb Ethernet,” he says. “We are already beginning to see the standardization of storage connector slots. Rack design is changing to incorporate all types of connector.”
Hot Blades
A big reason for the acceleration in power demand, of course, is the popularity of the blade architecture. According to Bauer’s numbers, 46 percent of his customers are already implementing blades and another 24 percent are in the planning stages. IDC predicts that blades will represent 25 percent of all servers shipped by 2008.
While blade servers let you pack far more computing power into a smaller space, they are also far more heat dense than traditional rack-mount servers. In 2000, for example, a typical rack of servers consumed 2 kW. By 2002, the heat load had risen to 6 kW. Today, a rack of HP BladeSystem or IBM BladeCenter blade servers can consume 30 kW, and some analysts forecast that 50 kW racks could be on the market within a couple of years.
What does this mean for the world of storage? All those blades are missing a vital element — a hard drive. They need a place to store data. So blades are generally accompanied by large banks of disk arrays. And the hard drives within those arrays are getting packed in tighter than ever. The result is a cooling nightmare that has existing AC systems struggling to cope.
In most data centers, hot and cold aisle arrangements feed cold air under a raised floor and up through perforated tiles onto the front of the racks. Under normal loads, that cold air cools every server in the rack. Under today’s heavy loads, however, the air is already warm by the time it reaches the top servers; some are being fed air that has reached 80 degrees Fahrenheit. It’s no surprise, then, that two-thirds of failures occur in the top third of the rack.
“Hot aisle/cold aisle architectures are a challenge at heavy loads,” says Bauer. “At 5 kW or more, the upper part of the rack is hot and the lower part is cooler. Raised floor systems are no longer enough.”
That 5 kW number is one storage managers had better pay attention to. If they take a look at their own racks, they may well find they are already in that ballpark.
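The standard sensible-heat airflow formula shows why. A back-of-the-envelope sketch, assuming a 20-degree F rise from rack inlet to outlet and roughly 500 CFM delivered by each perforated tile (both figures are assumptions for illustration, not numbers cited at the show):

```python
# Airflow needed to carry away a rack's heat load, using the sensible-heat
# relation CFM = BTU/hr / (1.08 * delta_T_F). The 20 F air temperature rise
# and the ~500 CFM per perforated tile are illustrative assumptions.
BTU_PER_WATT_HR = 3.412
DELTA_T_F = 20      # assumed inlet-to-outlet air temperature rise, degrees F
TILE_CFM = 500      # assumed delivery of one raised-floor perforated tile

for rack_kw in (2, 5, 30, 50):
    cfm = rack_kw * 1000 * BTU_PER_WATT_HR / (1.08 * DELTA_T_F)
    print(f"{rack_kw:2d} kW rack: ~{cfm:,.0f} CFM, "
          f"about {cfm / TILE_CFM:.1f} tiles' worth of air")
```

At 5 kW a rack already wants more air than a single tile typically delivers, and at 30 kW it needs the output of nearly ten, which is the arithmetic behind Bauer’s point that raised floors alone are no longer enough.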
“Storage racks are consuming anywhere from 5 to 8 kW these days,” says Schmidt. “Tapes also get uncomfortable if you jerk around the temperature.”
Figures he received from the IBM tape center in Tucson, Arizona, revealed a threshold of no more than 5 degrees C of temperature change per hour for tapes. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) has released that figure as its recommended rate-of-change standard for data centers.
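What that limit means for monitoring is straightforward to check. A minimal sketch, assuming an hourly temperature log; the sample readings and the function name are hypothetical:

```python
# Flag hourly temperature readings (degrees C) that change faster than the
# 5 C-per-hour rate recommended for tape environments. The sample log and
# the function name are hypothetical.
MAX_DELTA_C_PER_HOUR = 5.0

def rate_of_change_violations(hourly_temps_c):
    """Return (hour index, delta) pairs where the hourly change exceeds the limit."""
    return [
        (i, round(curr - prev, 1))
        for i, (prev, curr) in enumerate(zip(hourly_temps_c, hourly_temps_c[1:]), start=1)
        if abs(curr - prev) > MAX_DELTA_C_PER_HOUR
    ]

# An AC restart that swings the room 8 degrees C in an hour gets flagged:
print(rate_of_change_violations([22.0, 23.5, 31.5, 27.0, 25.5]))  # [(2, 8.0)]
```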
To address power problems, Liebert announced a high-capacity version of its GXT UPS, a 10 kVA system that can protect up to 8,000 watts of equipment in 6U of rack space. Another version, due out by year end, will enable multiple units to work together to deliver 16,000 watts of protection for high-density racks.
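The gap between the 10 kVA rating and the 8,000-watt capacity reflects a 0.8 power factor, a common convention in UPS ratings. A quick sizing sketch, using hypothetical rack loads:

```python
# Sizing check: a 10 kVA UPS rated at 0.8 power factor carries at most
# 8,000 W of real load. The rack loads below are hypothetical examples.
ups_kva = 10
power_factor = 0.8
ups_watts = ups_kva * 1000 * power_factor  # 8,000 W

for rack_load_w in (5_000, 7_500, 9_000):
    verdict = "fits" if rack_load_w <= ups_watts else "needs the paired-unit option"
    print(f"{rack_load_w:,} W load: {verdict} ({rack_load_w / ups_watts:.0%} of capacity)")
```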
In addition, the company displayed its supplemental cooling products on the exhibit floor. Some sit beside the rack to cool nearby servers, while others position heat exchangers above the racks that blow chilled air down into the cold aisle. The Liebert XDO, for example, provides an additional 10 kW of cooling per rack.
“Cooling is moving closer to the source of the load,” says Bauer. “It is necessary to have supplemental cooling above or behind the rack.”
Storage Evaluation
Just as cooling is moving closer to the load, data centers are relocating to put themselves near cheap, abundant power. Google, for example, is establishing a data center beside a hydroelectric dam in Oregon in order to secure 10 MW. Microsoft and Yahoo have similar plans.
If everyone else is looking at power, it makes sense that storage administrators should also get in on the act. IBM’s Schmidt says that energy efficiency needs to be looked at from end to end — and that includes storage. He also recommends that storage managers investigate power demands more closely during the product evaluation phase.
“Cooling and power are destined to become much more of a factor when people are choosing between different disk arrays,” says Schmidt.