Data center managers don’t have to be told that equipment is growing denser and hotter as multi-core CPUs and graphics processing units proliferate. The average server density per rack more than tripled between 2011 and 2020, according to Uptime Institute. Some racks now draw as much as 16 kilowatts, and the high-performance computing environments that are increasingly common for artificial intelligence workloads may demand up to 50 kilowatts.

For more than 50 years, the standard approach to cooling data processing equipment has been air-conditioning. While that solution is appropriate when workloads are predictable, growth scenarios demand a different approach. As data center managers plan for the future, they need to account for denser racks as well as the toll air-conditioning takes on power usage, equipment reliability, and the environment.
Air cooling has a few advantages over other technologies – but just a few. The technology is well understood, and skills are plentiful. Systems have become more efficient over time. The use of hot and cold aisles better targets airflows to reduce costs and enhance efficiency. In some cases, air cooling can even be directed to individual racks based on their unique requirements.
However, air-conditioning is also inherently wasteful because the equipment is typically provisioned to cool an entire room or data hall rather than just the equipment inside it. Operators also often over-invest in air cooling to be on the safe side. As a result, over-provisioning “is probably a more common issue than under provisioning due to rising rack densities,” the Uptime Institute wrote.
Costs add up
Air cooling also comes with overhead. In most cases, it requires expensive raised flooring to manage airflows. Cabling must be carefully placed to avoid blockages; errors can disrupt airflows or even funnel hot exhaust air back into intake ducts, threatening the integrity of computing equipment.
As rack power requirements continue to grow with the widespread adoption of power-hungry machine learning workloads, more fans and pumps will be required to cool them. More equipment means a greater risk of failure and a greater need for backups and frequent maintenance.
Then there are the environmental costs to consider. Air-conditioning draws a lot of power. While Canada is fortunate to generate more than 60% of its electricity from hydropower, the United States still relies on fossil fuels for nearly 80% of its energy needs. Thermoelectric power plants are also the largest source of U.S. water withdrawals, requiring an average of nearly 12,000 gallons per megawatt-hour generated in 2020. In a world in which climate change and drought are increasingly pressing issues, heavy reliance on electrical power can weigh on an organization’s sustainability goals.
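To make that water figure concrete, here is a minimal back-of-envelope sketch in Python. The cooling load is a hypothetical input; only the roughly 12,000 gallons per megawatt-hour withdrawal rate comes from the figure cited above.

```python
# Back-of-envelope estimate of the indirect water footprint of a
# data center's cooling load, using the ~12,000 gal/MWh withdrawal
# figure cited above. The facility load is a hypothetical number.

WATER_GAL_PER_MWH = 12_000   # avg. U.S. thermoelectric withdrawal (2020)
HOURS_PER_YEAR = 8_760

cooling_load_kw = 500        # hypothetical continuous cooling draw
annual_mwh = cooling_load_kw / 1_000 * HOURS_PER_YEAR

annual_gallons = annual_mwh * WATER_GAL_PER_MWH
print(f"Annual cooling energy: {annual_mwh:,.0f} MWh")
print(f"Implied water withdrawal: {annual_gallons:,.0f} gallons/year")
```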
Data center managers who have already invested in air cooling have little incentive to switch if they expect workloads to remain stable, but how many can say that with certainty? To prepare for a future of nearly certain growth, most need to look beyond air cooling for a long-term solution.
Liquid advantage
Liquid cooling is an increasingly compelling option. The concept isn’t new – mainframes and supercomputers were water-cooled as far back as the 1960s – but new liquid types and system designs have given this alternative new relevance.
Liquid cooling has a host of advantages over air-conditioning. Liquids are up to 1,000 times better at conducting heat than air. A single-phase liquid cooling system has just three moving parts: a coolant pump, a water pump, and a cooling tower or dry cooling fan. That cuts maintenance costs and failure risk. Raised floors and containment aisles aren’t required, and immersion racks can be spaced close together because airflow isn’t an issue. From a cost and sustainability perspective, there is no comparison: liquid cooling consumes far less power than air-conditioning, cutting cooling-related energy consumption by up to 90%.
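To illustrate what a 90% cut in cooling energy can mean for power usage effectiveness (PUE), here is a hedged sketch. The baseline overhead percentages are assumptions chosen for illustration, not measured figures.

```python
# Illustrative effect of a 90% cut in cooling energy on PUE.
# The baseline split (cooling = 40% of IT load, other overhead = 10%)
# is an assumption for illustration, not a measured figure.

it_load_kw = 1_000                   # hypothetical IT load
cooling_air_kw = 0.40 * it_load_kw   # assumed air-cooling overhead
other_overhead_kw = 0.10 * it_load_kw

cooling_liquid_kw = cooling_air_kw * (1 - 0.90)  # 90% reduction cited above

pue_air = (it_load_kw + cooling_air_kw + other_overhead_kw) / it_load_kw
pue_liquid = (it_load_kw + cooling_liquid_kw + other_overhead_kw) / it_load_kw
print(f"PUE with air cooling:    {pue_air:.2f}")     # ~1.50
print(f"PUE with liquid cooling: {pue_liquid:.2f}")  # ~1.14
```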
The liquid cooling market is expected to top $3 billion by 2026, growing at a compound annual rate of better than 50%. There are three basic types of liquid cooling.

A rear-door heat exchanger uses doors with radiator-like fins mounted at the back of the rack. A supply hose delivers chilled, conditioned water to the heat exchanger and a return hose pumps warm water back to a chilling unit. This technology can save up to 80% of cooling costs for server racks, is space-efficient, and is relatively easy to install and maintain. The drawbacks are that rear-door heat exchangers take up more space than some alternatives and they require dedicated plumbing, which is both expensive and potentially catastrophic should a line break occur.
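The sizing arithmetic behind a rear-door unit follows the familiar sensible-heat relation Q = ṁ × c_p × ΔT. The sketch below estimates the chilled-water flow needed to absorb the heat of a 16 kW rack; the 10 °C supply-to-return temperature rise is an assumed value.

```python
# Sizing sketch for a rear-door heat exchanger: the chilled-water
# flow needed to absorb a rack's heat load, from Q = m_dot * c_p * dT.
# The 10 C temperature rise is an assumption; 16 kW is the rack
# figure cited earlier in the article.

Q_WATTS = 16_000    # rack heat load
CP_WATER = 4_186    # J/(kg*K), specific heat of water
DELTA_T = 10.0      # K, assumed supply-to-return temperature rise

mass_flow_kg_s = Q_WATTS / (CP_WATER * DELTA_T)
volume_flow_l_min = mass_flow_kg_s * 60  # water is ~1 kg per litre

print(f"Required flow: {mass_flow_kg_s:.2f} kg/s "
      f"(~{volume_flow_l_min:.0f} L/min)")
```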
Direct-to-chip cooling works inside the computer’s chassis. The cool liquid is pumped through plates that sit atop power-intensive components like CPUs and GPUs. Warm liquid is circulated back to a cooling device or heat exchanger. The advantage of this approach is that it is an efficient way to cool the most heat-intensive equipment. There are several disadvantages, however. Direct-to-chip cooling is less efficient than other liquid-based alternatives because it cools only a few components but not the entire chassis or rack. Air-conditioning is typically used to offset this limitation. There are also reliability issues because of the many small parts and fittings involved.
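The sketch below illustrates that limitation: cold plates capture only the heat of the components they cover, so the remainder must still be handled by room air. Both the server power and the 75% capture fraction are hypothetical values.

```python
# Why direct-to-chip systems still need some air cooling: cold plates
# capture only the heat of the components they sit on. The 75% capture
# fraction and the server power are illustrative assumptions.

server_power_w = 800           # hypothetical server draw
plate_capture_fraction = 0.75  # assumed share removed by cold plates

liquid_heat_w = server_power_w * plate_capture_fraction
residual_air_heat_w = server_power_w - liquid_heat_w

print(f"Heat removed by liquid loop: {liquid_heat_w:.0f} W")
print(f"Heat left for room air conditioning: {residual_air_heat_w:.0f} W")
```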
Immersion cooling on the rise
Immersion cooling is the newest liquid-based option and the one generating the most excitement. It involves physically submerging electronic components such as servers and storage devices in a non-conductive liquid coolant. There are two types of immersion cooling: single-phase and two-phase. Each has its advantages.
Two-phase immersion cooling is costlier and more complex but also more energy efficient: the coolant boils off the hot components and condenses elsewhere, moving heat passively. A single-phase system is less expensive to install and maintain but consumes more power because it relies on a pump to circulate the coolant.
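A rough comparison shows why the phase change matters: boiling absorbs latent heat, so a two-phase system needs to move far less coolant for the same load. The fluid properties below are generic dielectric-coolant values assumed for illustration.

```python
# Why two-phase immersion moves less coolant: boiling absorbs latent
# heat, while single-phase systems rely on sensible heat and a pump.
# Fluid properties are rough values for a generic dielectric coolant,
# used here only for illustration.

Q_WATTS = 50_000      # heat load (the HPC rack figure cited earlier)

# Single-phase: Q = m_dot * c_p * dT (sensible heat)
CP_DIELECTRIC = 1_100  # J/(kg*K), assumed heat capacity
DELTA_T = 10.0         # K, assumed coolant temperature rise
single_phase_flow = Q_WATTS / (CP_DIELECTRIC * DELTA_T)

# Two-phase: Q = m_dot * h_fg (latent heat of vaporization)
H_FG = 100_000         # J/kg, assumed latent heat
two_phase_flow = Q_WATTS / H_FG

print(f"Single-phase coolant flow: {single_phase_flow:.1f} kg/s")
print(f"Two-phase coolant flow:    {two_phase_flow:.1f} kg/s")
```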
These are among the reasons Dr. Moises Levy, senior principal analyst for data center power and cooling at technology research firm Omdia, calls immersion cooling “the most promising technology based on thermal performance.” He highlights five main advantages:
- Low cost – Savings in operational cooling expenses can yield payback periods of less than 12 months based on power savings alone (see the payback sketch after this list). There is also no need to retrofit data centers with raised floors or cold aisles.
- High energy efficiency – Immersion cooling systems transfer heat directly from the electronic components to the liquid coolant, which can then be easily and efficiently cooled. No energy is wasted cooling the room or equipment that doesn’t need it. This leads to significant energy savings.
- Increased server density – Air-cooling systems struggle to dissipate heat from high-density server racks, which can cause hotspots and equipment failure. Immersion cooling systems, by contrast, easily handle high-density racks, with a dissipation capacity of up to almost 100 kW in the space of two standard racks.
- High scalability – Data center expansion can require significant investments in cooling capacity and may even demand structural changes to buildings. Immersion cooling allows servers and storage devices to be added without major infrastructure change.
- Improved component reliability and longevity – The non-conductive liquid used in immersion cooling eliminates the risk of electrical short circuits while also protecting electronic components from airborne dust, vibration, and other environmental factors that are a byproduct of air cooling. Hardware lifespans are typically improved by 30%.
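As a sanity check on the sub-12-month payback claim above, here is a simple payback sketch. The capital cost, electricity price, and baseline cooling draw are all hypothetical inputs; only the 90% cooling-energy reduction comes from earlier in the article.

```python
# Simple payback sketch for the "payback in under 12 months" claim.
# Capital cost, energy price, and baseline cooling draw are all
# hypothetical inputs, not vendor figures.

capex_usd = 120_000        # assumed immersion retrofit cost
cooling_kw_before = 400    # assumed air-cooling power draw
cooling_kw_after = 40      # 90% reduction cited earlier
price_usd_per_kwh = 0.12   # assumed electricity price

saved_kw = cooling_kw_before - cooling_kw_after
annual_savings = saved_kw * 8_760 * price_usd_per_kwh

payback_months = capex_usd / annual_savings * 12
print(f"Annual energy savings: ${annual_savings:,.0f}")
print(f"Simple payback: {payback_months:.1f} months")
```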
“3% of the world’s power goes to data centres,” says Eliot Ahdoot, Hypertec’s Chief Commercial and Innovation Officer. “Immersion cooling systems allow you to take maximum performance with minimized power use. Coupled with heat recuperation, you can go down to a 70% reduction.”
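To see how heat recuperation compounds the savings, here is one more hedged sketch. The recovery fraction and load figures are assumptions chosen to show the mechanism, not Hypertec’s numbers.

```python
# Rough illustration of the heat-recuperation point: warm coolant
# from an immersion tank can offset heating loads elsewhere, so the
# recovered energy is credited against total consumption. All inputs
# are hypothetical assumptions.

it_load_kw = 1_000
cooling_kw = 40                # assumed immersion cooling overhead
heat_recovery_fraction = 0.70  # assumed share of IT heat reused

gross_kw = it_load_kw + cooling_kw
recovered_kw = it_load_kw * heat_recovery_fraction
net_kw = gross_kw - recovered_kw

print(f"Gross draw: {gross_kw} kW, recovered: {recovered_kw:.0f} kW")
print(f"Net draw after heat reuse: {net_kw:.0f} kW "
      f"({(1 - net_kw / gross_kw) * 100:.0f}% lower)")
```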