Keeping Storage Cool

Data centers have traditionally been cooled by chilled air blown out of air conditioning units. More recently, though, with heat densities rising, vendors have been taking the cooling closer to the source. Known as supplemental cooling, the idea is to bring the cold air source nearer to overly hot chips or spinning disks.

But what about the notion of putting liquid directly beside or even inside the equipment? For many running computer room air conditioning (CRAC) systems in blade center environments, this no longer seems such a crazy idea. In fact, it might not be long before we see the appearance of liquid-cooled disk arrays and tape units.

“The closer we get fluid to the racks, the better,” says Nick Aneshansley, vice president of technology at Sun Microsystems. “The plain fact is that liquid-based cooling is a lot more efficient than air cooling.”

Air conditioning, he says, consumes far too much power. In some server rooms, it draws almost as much power as the servers and storage arrays it cools. With the current trend towards packing servers and blades as tightly as possible, it may well be more efficient to bring the liquid into the data center rather than leaving it on the periphery.
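As a rough illustration (the figures below are hypothetical, not Sun's), that overhead can be expressed as a Power Usage Effectiveness (PUE) ratio: total facility power divided by the power that actually reaches the IT equipment.

```python
# Illustrative sketch only: hypothetical loads, not measurements from any vendor.

def pue(it_load_kw: float, cooling_kw: float, other_overhead_kw: float = 0.0) -> float:
    """Power Usage Effectiveness: total facility power / useful IT power."""
    return (it_load_kw + cooling_kw + other_overhead_kw) / it_load_kw

# A 200 kW server/storage load with CRAC units drawing almost as much power:
print(round(pue(it_load_kw=200, cooling_kw=180), 2))  # ~1.9 -- nearly half the energy goes to cooling
```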

Mainframe Roots

Such an idea, of course, is far from original. Ask any mainframer about it and they’ll tell you this was considered the norm many years ago in that field. But for a generation raised on Windows or Unix server-based environments, CRAC has been the norm. A chiller unit sits on the periphery or outside the data center. Large CRAC units blow cold air around the room and keep the equipment cool — or at least they used to.

Take the case of an HP EVA rack. There are now so many disks packed in so densely that normal cooling routines often fail to keep up.

“We had reached the point where an EVA was producing too much heat,” says Richard Brooke, an HP enterprise infrastructure specialist. “Cooling now has to be engineered into disk arrays as they are getting so hot.”

HP is not alone in this regard. To stay competitive, just about every array vendor has packed hard drives by the gross into recent products. Any array at or above the 5 kW power consumption mark may struggle to stay cool, particularly if it is surrounded by even denser racks of server blades.
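A back-of-the-envelope conversion (the figures are illustrative, not tied to any particular array) shows how much heat a 5 kW rack hands to the room-level air conditioning; essentially all of the power consumed by disks and controllers ends up as heat.

```python
# Illustrative conversion of rack power draw into cooling load.
BTU_PER_HR_PER_WATT = 3.412
WATTS_PER_TON_OF_COOLING = 3517  # 1 "ton" of refrigeration = 12,000 BTU/hr

array_power_w = 5000  # a 5 kW array, the threshold mentioned above
heat_btu_hr = array_power_w * BTU_PER_HR_PER_WATT
cooling_tons = array_power_w / WATTS_PER_TON_OF_COOLING

print(f"{heat_btu_hr:,.0f} BTU/hr, or about {cooling_tons:.1f} tons of cooling")
# -> 17,060 BTU/hr, roughly 1.4 tons of cooling for a single rack,
#    before counting any adjacent blade racks.
```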

As a result, supplemental cooling systems have been introduced to bring cooling closer to the source. These units sit beside or above a rack and introduce additional cold air where it is most needed. But even these systems are being stretched to the limit. Enter various innovations around liquid cooling.

HP, for one, is promoting a cooling system that accompanies its standard 10000 G2 cabinet, which can house EVA arrays or HP servers. This 42U box comes with a cooling unit attached to the side. Essentially, the unit is piped into an existing chiller so that chilled water is brought alongside the storage and used to blow cold air directly onto the equipment.

“It provides 20 gallons per minute of water chilled to 5 or 10 degrees C,” says Brooke. “It can handle a 30 kW power load.”

HP keeps the liquid in a unit separate from the server/storage rack. Chilled water works in combination with a fan and a heat exchanger to push cold air into the servers. The hot air is fed back into the heat exchanger and cooled once again.
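A rough heat-balance check (my arithmetic, not HP's spec sheet) shows how those figures fit together: the heat the water loop can carry equals the mass flow rate times the specific heat of water times its temperature rise.

```python
# Back-of-the-envelope check of the 20 GPM / 30 kW figures quoted above.
GPM_TO_KG_PER_S = 3.785 / 60   # ~0.063 kg/s per US gallon per minute of water
CP_WATER = 4186                # specific heat of water, J/(kg*K)

flow_kg_s = 20 * GPM_TO_KG_PER_S   # ~1.26 kg/s
heat_load_w = 30_000               # the 30 kW rack load HP cites

delta_t = heat_load_w / (flow_kg_s * CP_WATER)
print(f"Water temperature rise: {delta_t:.1f} K")  # ~5.7 K
# Water entering at 5-10 C leaves only about 6 C warmer while carrying 30 kW,
# which is why a modest chilled-water flow can handle a full rack.
```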

Refrigerated Storage

But chilled water isn’t necessarily the only liquid being considered. Refrigerants are also being used in data centers. Egenera, for instance, has improved the cooling capacity of the latest blades in its BladeFrame system by integrating the system with Liebert XD cooling technology.

Liebert XD is a waterless cooling solution that includes a pumping unit or chiller and an overhead piping system to connect cooling modules to the infrastructure. The pumping unit ensures the coolant exists only as a gas in the controlled environment to eliminate the potential for damage from leaks or condensation.

“Refrigerants can be a good substitute for water,” says Andreas Antonopoulos, an analyst at Nemertes Research. “As they evaporate at room temperature, they cannot cause flooding.”

The system has quick-connect couplings and flexible piping that allow coolant to be delivered to cooling modules mounted directly on the back of the BladeFrame. If you need to move equipment racks or cooling modules, you simply disconnect the pipe and reconnect it where needed. One pumping unit or chiller provides 160 kW of liquid cooling capacity for up to eight BladeFrame systems, and it adds $300 to $400 per blade to the BladeFrame price tag.
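A quick consistency check (simple division on the figures quoted above) shows what that capacity means per enclosure.

```python
# One 160 kW Liebert XD pumping unit shared across eight BladeFrame systems.
total_capacity_kw = 160
frames_served = 8
print(total_capacity_kw / frames_served)  # 20.0 kW of cooling available per BladeFrame
```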

“Liebert XD uses a liquid refrigerant to handle up to 20,000 watts per rack and is highly effective,” says Michael Bell, an analyst at Gartner Group.

He believes liquid cooling has to be considered as a technology to adopt in the mid-term. The full vision of wet aisles, where liquid is brought out to the racks via a network of pipes, is not quite there yet, and there are still plenty of kinks to be worked out. But Bell believes data centers had better start thinking ahead.

“While you perhaps don’t need to install pipes to every server, at least make sure you have the plumbing infrastructure in place to make liquid cooling easy to implement in the future,” says Bell.

One company very much on the leading edge of liquid cooling is Cooligy of Mountain View, Calif. It has developed technology that actually takes water to the individual server or storage components. Currently, it is focused on cooling the chip. But future applications could include taking some of the heat out of disk arrays.

It works using a micro-heat exchanger mounted on top of the chip. Water is fed into the heat exchanger to cool the chip, and the evaporated water is then cooled in a radiator and fed back into the system. Current systems have been designed to cool high-end workstations, keeping two 125-watt Xeon chips about 10 degrees F cooler than can be achieved via air cooling. However, it may be three or four years before such breakthroughs make a broad appearance in the data center.
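To see why only a trickle of water is needed at the chip, a minimal sketch (the temperature rise is an assumed value, not Cooligy's design data) applies the same heat-balance arithmetic to a pair of 125-watt processors.

```python
# Illustrative estimate of the coolant flow needed for on-chip water cooling.
CP_WATER = 4186            # specific heat of water, J/(kg*K)
chip_heat_w = 2 * 125      # two 125 W Xeon processors
delta_t = 5.0              # assumed water temperature rise across the micro-channels, K

flow_kg_s = chip_heat_w / (CP_WATER * delta_t)
flow_l_min = flow_kg_s * 60   # 1 kg of water is roughly 1 liter
print(f"Required flow: {flow_l_min:.2f} L/min")  # ~0.7 L/min -- a tiny pump suffices
```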

Fear of Water

Many, understandably, are worried about putting water in their data centers. Imagine the repercussions of a water leak inside an EMC Symmetrix.

“Some of those concerns are unjustified, as well-engineered systems would not really increase the risk of flooding,” says Antonopoulos. “The real difficulty is standardization.”

To achieve the end goal of moving the fluid close to the source of heat, standardized in-rack or on-chip fluid delivery systems are required.

The good news is that standards are now being proposed. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) is working with large system OEMs such as Dell, HP, Sun and IBM, and has issued a couple of books of guidelines and best practices for using liquid cooling with IT equipment.

But standards are one thing and market adoption quite another. Server and storage vendors are not going to add to their R&D and manufacturing costs unless the market demands the wholesale introduction of liquids.

“If 90 percent of racks have liquid already available, it makes sense for manufacturers to have chips and other components also be cooled using that infrastructure,” says Christian Belady, a chief technologist at HP. “Adoption rates will ultimately decide where the technology goes.”

Meanwhile, IBM has a project in the works known as Intelligent Bricks (previously referred to as IceCube). This is a three-dimensional storage array in which each brick contains multiple disks, a processor and high-bandwidth network communications hardware. The goal is to develop the concept to the point where one storage administrator can manage a petabyte of storage.

“I have trouble envisioning the concept of liquid cooled storage, but IBM IceCube is using liquid cooling to put a huge amount of storage in a small amount of space,” says Roger Schmidt, a distinguished engineer at IBM. “But whether we ever get there on our commercial products, I’m not so sure.”

Drew Robb is a contributing writer for Datamation, Enterprise Storage Forum, eSecurity Planet, Channel Insider, and eWeek. He has been reporting on all areas of IT for more than 25 years. He has a degree from the University of Strathclyde in the UK and lives in the Tampa Bay area of Florida.
