This article, written by Robin Underwood, examines the optimization of heat rejection in data centers, covering innovative cooling solutions, environmental considerations, and advanced strategies for enhancing efficiency and sustainability in IT infrastructure.

Data centers face ever-growing server demands, requiring innovative cooling solutions. Liquid-cooled racks, a rising trend, introduce new possibilities for heat rejection systems. Our data center clients prioritize cost reduction, faster construction timelines and environmental responsibility. To achieve these goals, a seamlessly integrated heat rejection system, tailored to the building's design and local conditions, is crucial.

While data hall requirements are established early in design, the chosen heat rejection system must adapt to local conditions, such as climate and water availability. For global data center operators, achieving consistency across facilities offers advantages like standardized, streamlined procurement (including DfMA modules) and condensed construction timelines. However, multiple options for heat rejection systems are crucial to ensure optimal efficiency based on local factors.

This balance between consistency and site-specific optimization is a key consideration in modern data center design for worldwide clients.

The table below is based on large deployments ranging from 5MW to 100MW per data center or campus, although some information may be relevant for smaller installations.

Energy/Water Efficient Heat Rejection Systems

An efficient heat rejection system design is limited by the following parameters:

  • Supply air/water temperature: the design temperature the cabinets/racks/CDUs require during normal conditions.
  • Peak summer and yearly external temperatures: the design temperature the external heat rejection equipment must be rated to; typically there is also an additional allowance for recirculation in multi-unit installations (usually validated by an external CFD study based on extreme temperatures and wind conditions).

In an ideal scenario, the peak summer design temperature would be lower than the required supply air/water temperature in the data hall. This allows the data center to rely solely on the heat rejection plant with no chillers, minimizing energy consumption and resulting in low PUE values. Whether the heat rejection plant operates with adiabatic cooling also affects the achievable PUE.

However, for most data centers, peak summer temperatures exceed the required supply temperature. This necessitates mechanical cooling to maintain design conditions. The larger the temperature difference, the harder the mechanical cooling system must work. Compared to free cooling, mechanical cooling requires substantial electrical input, which ultimately contributes to more heat being rejected. This can further increase water consumption if the heat rejection plant utilizes adiabatic cooling.
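To illustrate why mechanical cooling compounds the load at the external plant: the compressor's electrical input is itself rejected as heat on top of the IT load. A minimal sketch, where the chiller COP value is an assumption for illustration, not a figure from the article:

```python
def total_heat_rejected(it_load_kw: float, cop: float) -> float:
    """Heat rejected at the external plant when mechanical cooling runs.

    The compressor's electrical input (it_load_kw / cop) is itself
    rejected as heat in addition to the IT load.
    """
    compressor_kw = it_load_kw / cop
    return it_load_kw + compressor_kw

# Hypothetical example: a 10 MW IT load on chillers with a COP of 4
# rejects 10,000 + 2,500 = 12,500 kW externally.
print(total_heat_rejected(10_000, 4.0))  # 12500.0
```

The lower the COP (i.e. the harder the chillers must work against a large temperature difference), the more extra heat, and potentially adiabatic water, this adds.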

Reducing the Mechanical Cooling Demand

The obvious, although usually impractical, approach would be to locate the data center in a colder climate where the average temperature is lower, allowing free cooling to operate for longer and reducing the annual mechanical cooling hours. This won't be practical for most data center operators due to cost, land, power and connectivity availability.

A more practical way to reduce the mechanical cooling demand is to increase the air/water temperature supplied to the data hall. For example, migrating from the ASHRAE A1 Recommended temperature range to A1 Allowable during peak external ambient conditions could reduce OPEX considerably.
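As an illustration, the A1 dry bulb limits from the 2011 ASHRAE thermal guidelines (reference 1) can be encoded as a simple check; treat the exact figures below as assumptions to verify against the current edition:

```python
# Inlet dry bulb temperature ranges (degC) for class A1, per the 2011
# ASHRAE TC 9.9 thermal guidelines (reference 1). Assumed values;
# check against the edition in force on a given project.
A1_RECOMMENDED = (18.0, 27.0)
A1_ALLOWABLE = (15.0, 32.0)

def classify_supply_temp(t_c: float) -> str:
    """Classify a data hall supply air temperature against ASHRAE A1."""
    if A1_RECOMMENDED[0] <= t_c <= A1_RECOMMENDED[1]:
        return "A1 recommended"
    if A1_ALLOWABLE[0] <= t_c <= A1_ALLOWABLE[1]:
        return "A1 allowable"
    return "outside A1"

print(classify_supply_temp(30.0))  # A1 allowable
```

Operating at, say, 30 degC supply sits outside Recommended but within Allowable, widening the window in which free cooling can satisfy the load.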

ASHRAE Environmental Classes (Allowable) for Data Centers¹

Hot and cold aisle separation is standard in all new data centers; however, improved airflow management and reduced hot aisle containment (HAC) leakage can further improve air temperatures in the room.

Air Side Economizers

In data centers operating within the ASHRAE A1 Allowable temperature range and located in colder/drier climates, direct evaporative coolers (DEC) are a highly efficient solution, although they require a larger footprint on the site. These systems can potentially operate in free cooling mode year-round, leveraging both dry and adiabatic cooling depending on external conditions. This can translate to low reliance on mechanical cooling and potentially exceptional PUE values.

Using water for adiabatic cooling allows the unit to cool the air toward the wet bulb temperature of the ambient air rather than the dry bulb temperature (dry cooling). Because the wet bulb temperature is often many degrees below the coincident dry bulb temperature, this can mitigate the requirement for mechanical cooling even in high dry bulb conditions.
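The margin between dry bulb and wet bulb can be estimated from ambient conditions. A minimal sketch using Stull's empirical approximation (not from the article; reasonable roughly for 5-99% RH and -20 to 50 degC at sea level):

```python
import math

def wet_bulb_stull(t_db_c: float, rh_pct: float) -> float:
    """Approximate wet bulb temperature (degC) from dry bulb (degC) and RH (%).

    Stull (2011) empirical fit; valid roughly for RH 5-99% and dry bulb
    temperatures between about -20 and 50 degC at standard pressure.
    """
    return (
        t_db_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
        + math.atan(t_db_c + rh_pct)
        - math.atan(rh_pct - 1.676331)
        + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
        - 4.686035
    )

# Hot, dry ambient: at 35 degC dry bulb and 20% RH the wet bulb is many
# degrees lower, which adiabatic cooling can approach.
print(round(wet_bulb_stull(35.0, 20.0), 1))
```

In hot, humid climates the wet bulb depression shrinks, so the adiabatic benefit is far smaller; that is part of why the choice is climate-dependent.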

[Figure: water usage diagram for heat rejection in data centers]

Data center with DEC unit

Chilled Water System Flexibility

The most common systems installed in data centers use chilled water as the medium between the internal and external units. Chilled water systems offer significant flexibility:

  • Data hall configuration: Chilled water systems can accommodate various data hall layouts and are suitable for future integration of liquid cooling technologies.
  • Heat rejection plant location: The heat rejection plant can be positioned on the roof or even remotely from the data hall itself, providing valuable design freedom.
  • External heat rejection options: Chilled water systems are compatible with a range of external heat rejection options, including dry cooling, dry cooling with partial evaporative assist, and full adiabatic cooling. This allows for optimization based on climate and environmental considerations.

A key factor in maximizing the efficiency of a chilled water heat rejection system is minimizing the heat rejection unit's approach temperature, i.e. the difference between the external air temperature and the chilled water outlet temperature. Every degree of improvement can eliminate potentially hundreds of hours of mechanical cooling per year, leading to significant energy savings. In hot, dry climates, adiabatic cooling, which cools toward the wet bulb temperature (often much lower than the dry bulb temperature), can further reduce the hours of mechanical cooling required. The balance between energy and water consumption is investigated as part of each project design.
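The impact of approach temperature on annual mechanical cooling hours can be sketched against hourly ambient data. A minimal illustration, where both the synthetic weather profile and the 20 degC chilled water setpoint are hypothetical, not project data:

```python
import math

def mechanical_cooling_hours(hourly_ambient_c, chw_setpoint_c, approach_k):
    """Count hours where free cooling alone cannot meet the CHW setpoint.

    Free cooling is assumed available whenever ambient temperature plus
    the unit's approach temperature is at or below the required chilled
    water temperature; otherwise mechanical cooling must run.
    """
    return sum(1 for t in hourly_ambient_c if t + approach_k > chw_setpoint_c)

# Hypothetical sinusoidal year around a 12 degC mean (not real weather data).
ambient = [12 + 10 * math.sin(2 * math.pi * h / 8760) for h in range(8760)]

h8 = mechanical_cooling_hours(ambient, 20.0, 8.0)  # 8 K approach
h5 = mechanical_cooling_hours(ambient, 20.0, 5.0)  # 5 K approach
print(h8 - h5, "fewer mechanical cooling hours with the tighter approach")
```

On a real project this comparison would use local hourly weather files rather than a synthetic profile, but the shape of the result is the same: every kelvin off the approach temperature converts a band of the year from mechanical to free cooling.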

Water Usage

A growing challenge for data center providers is water availability. Data centers utilizing evaporative cooling systems need a consistent water supply for process cooling, and obtaining this is increasingly difficult in some regions. Water stress levels (as assessed by UN-Water) are low for most European regions but much higher in North Africa and Asia, which may prevent some data centers from obtaining water for process cooling.

Space Efficiency of Heat Rejection Plant

The external heat rejection unit size is typically based on the required heat rejection capacity and approach temperature (external air inlet temperature vs CHW/air outlet temperature). Generally, the heat rejection plant is located as close to the data halls as possible; however, where there is a drive towards higher-density data centers, the typical space for external heat rejection may be limited. To achieve an increased heat rejection capacity in limited space, the approach temperature of the unit may have to increase (leading to an increase in annual mechanical cooling hours).
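The trade-off described above reduces to a simple sizing relation between load and areal capacity. A minimal sketch; the density figure is hypothetical, not taken from the plotted project data:

```python
def plant_footprint_m2(heat_rejection_kw: float, density_kw_per_m2: float) -> float:
    """Unit-only footprint; excludes access, air intake and ancillaries."""
    return heat_rejection_kw / density_kw_per_m2

# Hypothetical: rejecting 10 MW with units rated at 40 kW/m2 of plan area.
print(plant_footprint_m2(10_000, 40.0))  # 250.0 m2
```

Where the available area is fixed, the required kW/m2 rises with the load, which in practice is bought with a larger approach temperature.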

Based on previous Bryden Wood data center projects, we have plotted heat rejection plant properties by heat rejection capacity (kW/m²) vs approach temperature (K).

Note: The footprint shown is for the unit only and excludes any access/air intake space. For plant that is part of a chilled water system, ancillary equipment such as pumps, heat exchangers and pressurization systems is not included in the footprint.

Some strong correlations can be observed:

  • Cooling towers offer a much lower approach temperature than dry air coolers at a similar kW/m². This is because cooling towers utilize evaporative cooling, where water and air mix, improving heat exchange compared to the limitations of a dry air cooler coil. It is worth noting that cooling towers range from 4 to 8m in height, compared to up to 3m for DAC and HDAC units, which helps improve the cooling tower's capacity.
  • DEC & IAC units: While achieving low approach temperatures, their heat rejection capacity per unit volume is typically lower. This is because they rely solely on air cooling, and air has a lower specific heat capacity than water.
  • HDAC units: These units present a trade-off between approach temperature and capacity per unit volume. The graph shows both operating modes of the same size HDAC unit providing two different heat rejection capacities.
  • Air-cooled chiller (free cooling): This option is included in the graph, but the chiller's size is at a disadvantage because it incorporates additional components beyond the heat rejection coils, such as evaporators, compressors, condensers and, in some cases, CHW pumps. However, some air-cooled chillers achieve results comparable to dry air coolers.

All adiabatic plants require water treatment equipment and water storage, which, depending on the installation and resilience required, can be considerable. However, these do not need to be sited adjacent to the equipment and can be located elsewhere on site.

Emergence of Liquid Cooling

Data centers can leverage liquid cooling strategies to optimize efficiency. One approach combines direct-to-chip liquid cooling for processors and GPUs with traditional air cooling for other components within the rack. Initially, both systems might be connected to the same chilled water infrastructure.

Single-phase immersion cooling enables higher rack densities than direct-to-chip; however, the IT equipment must be immersion-ready, which is not standard off-the-shelf equipment and, for some IT manufacturers, not yet available. For some passive immersion cooling products, an estimated 3-5% of the heat escapes from the immersion cooling unit into the room; therefore, for high-capacity rooms some air cooling will still be required.

With chip manufacturers ever increasing compute capacity, chilled water supply temperatures may need to remain low while cooling capacity increases (increasing the ΔT of the chilled water). This could see the emergence of two tiers of systems: high-compute, high-density systems at lower chilled water temperatures (with a higher ΔT), and lower-compute systems (still high compared to today's IT) at lower density and higher chilled water temperatures, enabling much more free cooling.
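The link between ΔT, flow rate and cooling capacity follows from the basic heat balance Q = ṁ·cp·ΔT. A minimal sketch with illustrative loads and temperatures:

```python
CP_WATER = 4.186  # kJ/(kg.K), specific heat capacity of water

def chw_flow_kg_s(load_kw: float, delta_t_k: float) -> float:
    """Chilled water mass flow (kg/s) needed to absorb a given load."""
    return load_kw / (CP_WATER * delta_t_k)

# Doubling the CHW delta-T halves the flow for the same 1 MW load:
print(round(chw_flow_kg_s(1000, 6), 1))   # 39.8 kg/s at 6 K
print(round(chw_flow_kg_s(1000, 12), 1))  # 19.9 kg/s at 12 K
```

A higher ΔT therefore carries more capacity through the same pipework, which is one reason high-density liquid-cooled systems may run wider temperature differences.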

Alternatively, a liquid-cooled rack with air heat rejection (on the back of the rack) can be installed to provide liquid cooling whilst maintaining the existing air-cooled system, but this is a less common approach.

Onward Look to Heat Rejection

In the realm of air-cooled data centers, the industry is increasingly gravitating toward ASHRAE A1 allowable temperatures for data halls. However, there is little indication of a shift toward A2 or higher temperature ranges. Heat rejection systems, including mechanical cooling, have started to reach a plateau, with manufacturers making incremental enhancements to accommodate higher chilled water temperatures within the ASHRAE A1 range.

The next significant leap in the industry lies in the adoption of liquid cooling. Many larger data center operators are now integrating liquid cooling solutions into their facilities. However, the lack of a standardized design topology for liquid cooling systems and their various configurations poses a challenge. We may witness the emergence of two distinct data center topologies: the energy-efficient approach versus the high-density model, or perhaps a hybrid combination of both. These design choices will significantly impact heat rejection systems and other critical aspects, such as IT infrastructure development.

While the industry strives to reduce PUE in data centers, sustainability efforts should also focus on minimizing process water consumption during adiabatic cooling. Cooling towers, which consume substantial amounts of water year-round, may eventually be phased out or replaced with hybrid systems that prioritize dry cooling methods or utilize HDAC units.

By embracing these trends, data centers can enhance efficiency, reduce environmental impact, and pave the way for a more sustainable future in the ever-evolving landscape of IT infrastructure.

References

  1. ASHRAE TC 9.9 (2011). Thermal Guidelines for Data Processing Environments. White paper.
  2. BAC catalogue: Series 3000 Cooling Tower.
  3. BAC catalogue: TrilliumSeries™ Adiabatic Cooler.
  4. EVAPCO: Dry Cooling 101.