High-density, power-hungry systems run hot. Are fluid cooling systems the solution?

Water, water everywhere, and not a drop—in the datacenter! That was true for a while, after mainframes gave way to distributed minicomputers and standalone servers. But today, water and other liquids are returning to newly consolidated datacenters, because dense equipment configurations draw more power and produce more heat.

For example, Children’s Medical Center in Dayton, OH, pumps water to within 10 feet of the server and telco equipment in its two datacenters.

“Like all IS people, I was a little worried about bringing liquid into the datacenter,” said Chuck Rust, senior network analyst at Children’s Medical Center. “But after all the testing we did before ever powering anything up, I feel very confident that we have not taken on any risk.”

Vendors including APC, Liebert, HP, CA, Knürr and Egenera are getting in on the liquid cooling act, anticipating that many more datacenters will adopt the technology. Several have introduced systems that make it easy to feed water and other coolants right up beside servers and, in some cases, directly to the chips.

Gartner analyst Michael Bell says many companies have maxed out their datacenters’ power and cooling capacities by adding high-density computer equipment such as blade servers. “Water is still experimental,” he said, “but it will penetrate the market in a big way in the next two or three years.”

Back To The Future

The concept of bringing water into the datacenter is hardly original; mainframes have harnessed water cooling for decades. But in the Windows/Unix world, it’s a fairly recent idea, and one that is catching on fast.

The reason is simple: The rising densities of newer multi-core chips and blades are putting more heat into the rooms than older computer room air conditioning (CRAC) systems were designed to handle. CRAC systems essentially blow cold air from chiller systems around the room to keep equipment cool.

“What we are looking at nowadays is a full set of high-heat hotspots, even from the lower-end Intel systems which are packed closer and closer together in racks,” said Clive Longbottom, an analyst at Quocirca. “The old approach is no longer valid—just trying to keep the temperature in the room at a steady level, with specific cooling only for the high-value boxes.”

CRAC technology generally works in conjunction with hot and cold aisle arrangements. Cold air is fed under a raised floor and up through perforated tiles to the computing hardware. Under normal loads, the cold air cools all the gear in the racks. But power density has shifted markedly in the last few years.

In 2000, for example, a rack of servers consumed about 2 kW. By 2002, the heat load had risen to around 6 kW. Today, a rack of HP BladeSystem or IBM System x servers consumes something like 30 kW. And 50 kW racks may be with us within the next few years.
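
Some rough arithmetic shows why densities like these overwhelm room-level air handling. The short sketch below uses assumed values, not figures from the article: a 12 degrees C (roughly 20 degrees F) rise from rack inlet to exhaust and standard room-air properties.

```python
# Back-of-the-envelope sketch (our numbers, not the article's): the airflow
# a single 30 kW rack needs if the air warms 12 C from inlet to exhaust.
RACK_LOAD_W = 30_000      # assumed heat load of a fully populated blade rack
DELTA_T_C = 12.0          # assumed inlet-to-exhaust temperature rise
AIR_DENSITY = 1.2         # kg/m^3, typical room air
AIR_CP = 1005.0           # J/(kg*K), specific heat of air

flow_m3_s = RACK_LOAD_W / (AIR_DENSITY * AIR_CP * DELTA_T_C)
flow_cfm = flow_m3_s * 2118.88   # convert m^3/s to cubic feet per minute

print(f"Required airflow: {flow_m3_s:.2f} m^3/s (~{flow_cfm:,.0f} CFM)")
# -> roughly 2.1 m^3/s, on the order of 4,400 CFM for one rack -- far more
#    than a typical perforated floor tile delivers.
```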

In such environments, air that reaches the top servers is already hot—sometimes 80 degrees F. And that can lead to failures. Figure 1 shows the heat profile of an overloaded server room. The cold air from the CRAC, marked in blue, fails to reach the tops of the racks. As a result, the hot air, marked in red, is pulled down into the cold aisles. The CRAC must be supplemented, and liquid solutions are the obvious candidate.

“Water is 3,500 times more efficient than air at removing heat and will become the preferred method,” said Gartner’s Michael Bell.
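
Bell’s ratio squares with a simple property comparison: per unit volume and per degree of temperature rise, water absorbs vastly more heat than air. A minimal sketch, assuming typical room-condition property values (our estimates, not numbers from the article):

```python
# Quick check of the "3,500 times" figure: heat carried per cubic metre per
# degree of temperature rise, water versus air.
WATER_CP = 4186.0    # J/(kg*K)
WATER_RHO = 998.0    # kg/m^3
AIR_CP = 1005.0      # J/(kg*K)
AIR_RHO = 1.2        # kg/m^3

ratio = (WATER_CP * WATER_RHO) / (AIR_CP * AIR_RHO)
print(f"Water carries ~{ratio:,.0f} times more heat per unit volume than air")
# -> roughly 3,500 (about 3,460 with these values), in line with Bell's figure.
```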

Simple efficiency is backed up by economics. CRAC is a power hog. In some server rooms, it consumes almost as much power as the servers and storage arrays it cools. Take the case of Dell’s principal datacenter in Round Rock, TX. An analysis by Dell CTO Kevin Kettler found that only 41 percent of the total power required by the facility actually reached the IT equipment. Power distribution consumed 28 percent and cooling took up 31 percent.
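
Put another way, those percentages mean every watt of compute carries more than a watt of overhead. A small sketch of the arithmetic (the shares are the ones quoted for Round Rock; the ratios are ours):

```python
# What Kettler's breakdown implies about overhead.
it_share = 0.41            # fraction of facility power reaching IT equipment
distribution_share = 0.28  # lost in power distribution
cooling_share = 0.31       # consumed by cooling

overhead_per_it_watt = (distribution_share + cooling_share) / it_share
facility_per_it_watt = 1.0 / it_share

print(f"Overhead per IT watt: {overhead_per_it_watt:.2f} W")
print(f"Facility power per IT watt: {facility_per_it_watt:.2f} W")
# -> roughly 1.4 W of distribution and cooling overhead, and about 2.4 total
#    facility watts, for every watt that reaches the IT equipment.
```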

“Power and thermal demands are now top of mind,” says Kettler, “and power management clearly means more than just AMD versus Intel.” (Kettler is referring to recent efforts by both of these companies to improve the power performance of their chips. Also see BCR, December 2006.)

With power bills skyrocketing and AC accounting for around a third of the total, it’s no wonder liquid cooling is gaining a following. Major computing OEMs such as Sun, HP, Dell and IBM have partnered with cooling providers, developed their own liquid cooling solutions and are also working to address the power and heat issues of their own products.

AC Overload

Children’s Medical Center, mentioned above, is perhaps a classic case. It originally had two AC units, with only one running at a time and the other on standby. As the datacenter became overcrowded, the hospital added a third AC unit and began running two at a time.

“Within a year, all three were running and we were just hoping that we would have no major heat-related failures,” said Children’s senior network analyst Chuck Rust, whose datacenter was experiencing one or two minor heat-related failures every few months.

When the facility expanded recently, a second datacenter was created in space directly adjacent to the original one. Now, both datacenters have an APC InfraStruXure system using chilled water to cool a mixed environment of Unix, Windows and Novell servers. Each datacenter also includes an EMC SAN, and all equipment is connected via a Cisco backbone.

Rust reports there have been no heat-related failures, and the hot spots have definitely diminished. “Whereas the room used to feel hot beside certain servers,” he said, “we don’t have that problem anymore.”

How about the economics? Rust said he doesn’t have specific before-and-after cost metrics, but he believes chilled water is more efficient than the AC solution, for a couple of reasons.

“In the older datacenter,” he said, “we had an issue with cabling running under the raised floor blocking airflow, and over time it became very inefficient. In addition, in our new cooling system, hot air is contained in a small area. This makes the overall area much easier to cool.”

Fear Of Flooding

Not everyone is buying the idea of bringing water into the datacenter. Take the case of Pomona Valley Hospital Medical Center (PVHMC) in Pomona, CA. A wave of technological upgrades in its datacenter led to greatly increased heat loads. CIO Kent Hoyos investigated chilled water solutions but decided not to risk it; he called in an engineering firm, which recommended 20 Liebert XD high-density cooling units.

The units are mounted on top of the racks. Each delivers up to 500 watts of cooling per square foot, using a coolant that is pumped as a liquid to the cooling module and vaporizes there as it absorbs heat.
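
The appeal of a pumped, phase-change coolant is how much heat each kilogram absorbs as it boils. A rough sketch, using assumed values throughout: a generic refrigerant with roughly 200 kJ/kg latent heat and a hypothetical 20 kW per-module load, neither of which the article specifies.

```python
# Why phase-change cooling needs only modest flow rates. All values here are
# assumptions; the article gives neither the coolant's properties nor the
# per-module heat load.
MODULE_LOAD_W = 20_000            # assumed heat load on one cooling module
LATENT_HEAT_J_PER_KG = 200_000    # assumed heat absorbed per kg of boiling coolant
WATER_CP = 4186.0                 # J/(kg*K), for the comparison below

refrigerant_flow_kg_s = MODULE_LOAD_W / LATENT_HEAT_J_PER_KG
# Water carrying the same load as sensible heat, with an assumed 6 C rise:
water_flow_kg_s = MODULE_LOAD_W / (WATER_CP * 6.0)

print(f"Boiling refrigerant: {refrigerant_flow_kg_s:.2f} kg/s")
print(f"Chilled water (6 C rise): {water_flow_kg_s:.2f} kg/s")
# -> ~0.10 kg/s of refrigerant versus ~0.80 kg/s of water, because
#    vaporization absorbs far more heat per kilogram than warming a liquid.
```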

“I like the fact that there is no risk of liquid leaking into my datacenter,” said Hoyos.

With all 20 of the XD units running at full capacity, Hoyos said the temperature in the datacenter fell by more than 30 degrees. The system eliminated heat-related equipment failures and reduced help desk calls. Better still, PVHMC expects that it will be able to double its server capacity using the existing cooling equipment.

“Refrigerants can be a good substitute for water,” confirmed Andreas Antonopoulos, an analyst at Nemertes Research. “They evaporate at room temperature, so they can’t cause flooding.”

Many Alternatives

While Liebert and APC are the Big Two in power/cooling, they aren’t the only game in town. HP, for instance, offers a modular cooling system (MCS) that uses chilled water and, according to the company, can triple the standard cooling capacity of a single server rack. The MCS is essentially a box that mounts to the side of the rack and can cool up to 30 kW.

“We had reached the point where a full rack was producing too much heat,” said Richard Brooke, an HP enterprise infrastructure specialist. “Cooling now has to be engineered into the system because the servers are getting so hot.”

According to Brooke, MCS can feed about 20 gallons per minute of water chilled to less than 10 degrees C (50 degrees F). HP keeps the liquid in a box attached to the side of a standard HP 10000 G2 Series rack. MCS uses the customer’s existing chilled water supply and distributes cool air across the front of the rack, working in combination with a fan and a heat exchanger to push cold air into the servers. The hot air is fed back into the heat exchanger and cooled once again.
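
A quick check (our arithmetic, not an HP specification) suggests 20 gallons per minute is comfortably enough water to carry a 30 kW rack load:

```python
# How much 20 GPM of chilled water warms while absorbing a full 30 kW rack
# load. The conversion and water properties are standard; the load and flow
# rate are the figures quoted above.
GPM_TO_KG_S = 3.785 / 60.0   # US gallons/min of water -> kg/s (approx.)
WATER_CP = 4186.0            # J/(kg*K)
RACK_LOAD_W = 30_000

flow_kg_s = 20 * GPM_TO_KG_S
temp_rise_c = RACK_LOAD_W / (flow_kg_s * WATER_CP)

print(f"Flow: {flow_kg_s:.2f} kg/s, temperature rise: {temp_rise_c:.1f} C")
# -> about a 5-6 C rise: water entering below 10 C leaves well under 20 C
#    even while removing the entire 30 kW.
```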

MCS doesn’t come cheap, however—pricing starts at $30,500, according to the company.

Knürr has developed a concept similar to HP’s, known as the Cooltherm cabinet. Unlike HP’s box appended to the rack, however, Knürr builds the cooling into the enclosure itself. The cabinet draws on existing chilled water supplies and can handle loads above 30 kW, more than the HP MCS. Cooltherm is also cheaper, at around half the price of the HP MCS.

Not to be outdone, IBM offers Cool Blue (also known as the Rear Door Heat eXchanger), a water-cooled door that is placed on the back of racks in the server room, cooling the air the servers exhaust. It also uses the customer’s existing chilled water supply and is designed to fit IBM server racks, with prices starting at $4,299.

High-end blade manufacturer Egenera has improved the cooling of the latest blades in its BladeFrame system by integrating them with the Liebert XD. Quick-connect couplings and flexible piping deliver coolant to cooling units mounted directly on the back of the BladeFrame; if you need to move equipment racks or cooling modules, you disconnect the piping and reconnect it where needed. One pumping unit or chiller provides 160 kW of liquid cooling capacity for up to eight BladeFrame systems, and the option adds $300 to $400 per blade to the BladeFrame price tag.

On the leading edge, Cooligy has developed technology that carries water through tiny channels to individual server components. Currently it is focused on cooling the chip itself, using a micro heat exchanger mounted on top of the chip. Water fed into the heat exchanger evaporates as it cools the chip; the vapor is then cooled back to liquid in a small radiator and returned to the system. So far, systems have been designed for high-end workstations that keep two 125-watt Xeon chips about 10 degrees F cooler than air cooling can achieve.

Another new development is SprayCool, from ISR. The rack-level product sprays a cooling liquid developed by 3M directly onto the components inside the server. The liquid is nonconductive and noncorrosive, and evaporates instantly. SprayCool modules attach to each processor and hook up to a SprayCool system at the back of the rack; as in the Cooligy approach, CPU-generated heat is removed from the servers and carried to the bottom of the rack. ISR supports specific HP, Dell and IBM xSeries servers, and is also developing systems that cool processors directly by spraying a chemical agent onto the chip set.

For a low-tech alternative, CoolingWorks provides a cheap way to take water into the datacenter— sort of. Its CoolRad-22T unit is a radiator that is attached to the front of a server. The water is sealed inside the radiator pipes and cooled via a couple of fans. Obviously this won’t solve all your hot spots, but at a price of less than $50, it could help supplement other systems.

Michael May, datacenter lead associate for the Midwest Independent Transmission System Operator (Midwest ISO) in Carmel, IN, for example, has installed four of these CoolRad-22T units to address soaring server room heat density, along with six additional power distribution units (PDUs) and a new UPS. He manages three facilities encompassing around 10,000 square feet, running mainly Windows servers from HP and Sun, and he knows the CoolRad units are only a short-term solution.

“Additional servers are going into the existing spaces, and incoming blades are causing major power and cooling concerns about being able to continue supporting the needs of the business,” said May. “But eventually we will run out of the big three—space, power and cooling.”

Bring In The Plumber

Gartner’s Bell says that liquid has to be considered in the mid-term by anyone with heat density issues. Although there are still plenty of kinks to be worked out, he recommends datacenter managers start envisioning “wet aisles,” where liquid is brought out to the racks via a network of pipes.

“While you may not need to install pipes to every server, at least make sure you have the plumbing infrastructure in place to make liquid cooling easy to implement,” said Bell.

For example, anyone installing the Liebert XD system, as PVHMC did, has to install an overhead piping system to connect the cooling modules on top of the racks to a pumping unit or chiller. The pumping unit keeps the coolant in a state in which it turns to gas if it escapes into the room, eliminating the potential for damage from leaks or condensation.

Similarly, facilities staff at Children’s Medical Center added piping to bring water from the chilling system to a connection on top of the APC InfraStruXure units.

“We had to move one valve around because it did not line up correctly with the InfraStruXure gear, but that was an oversight on our side,” said Chuck Rust. “Since this was a new install, we really did not have to do a great deal of moving equipment around. All the existing chilling gear was still in our older datacenter next door.”

Conclusion

It might be a hard sell to convince some IT managers that it makes sense to bring water into the datacenter through pipes installed immediately above their equipment. Refrigerants, of course, alleviate some of those concerns: PVHMC was nervous about water but confident about the refrigerant.

But fears of the chaos potentially wrought by water damage may be unwarranted. At Children’s Medical Center, for example, both water and coolant are being used. And the water itself only comes within about 10 feet of the actual servers.

Some of those concerns are unjustified, according to Nemertes’ Antonopoulos. “Well engineered systems should not really increase the risk of flooding,” he said. “The real difficulty is standardization.”

Antonopoulos suggests that standardized in-rack or on-chip fluid delivery would greatly advance the goal of efficient, safe liquid cooling in the datacenter, because such standards would establish rules and best practices as well as common interfaces between racks and cooling gear. The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) is currently working with large system OEMs such as Dell, HP, Sun and IBM on this very subject, and so far has issued a couple of manuals containing guidelines for using liquid cooling with IT equipment.

Electrical power, cooling and floor space are sometimes deemed the three physical constraints to datacenter design. As business units increase their demand for storage, processing and applications, IT managers will have to keep trading off these related and dependent costs. And it seems likely that will mean more water and other refrigerant use in more datacenters.
