This article, written by Robin Underwood, explores the optimisation of heat rejection in data centres, covering innovative cooling solutions, environmental considerations, and advanced strategies for enhancing efficiency and sustainability in IT infrastructure.

Data centres have ever-growing server demands, requiring innovative cooling solutions. Liquid-cooled racks, a rising trend, introduce new possibilities for heat rejection systems. Our data centre clients prioritise cost reduction, faster construction timelines and environmental responsibility. To achieve these goals, a seamlessly integrated heat rejection system, tailored to the building's design and local conditions, is crucial.

While data hall requirements are established early in design, the chosen heat rejection system must adapt to local conditions, such as climate and water availability. For global data centre operators, achieving consistency across facilities offers advantages like standardised, streamlined procurement (including DfMA modules) and condensed construction timelines. However, multiple options for heat rejection systems are crucial to ensure optimal efficiency based on local factors.

This balance between consistency and site-specific optimisation is a key consideration in modern data centre design for worldwide clients.

The table below is based on large data centre deployments ranging from 5MW to 100MW per data centre or campus, although some information may be relevant for smaller installations.

Note: It is assumed that hot and cold aisle separation is part of the baseline of the design. CAPEX and OPEX costs may vary from site to site based on local factors; the metric shown here is a baseline and shall be reviewed on a project basis. Where two options are shown for mechanical cooling, the option in bold has been used for the comparison below, as this is typically the most common.

Energy/water efficient heat rejection systems

An efficient heat rejection system design is limited by the following parameters:

  • Supply air/water temperature: the design temperature the cabs/racks/CDUs require during normal conditions.
  • Peak summer and yearly external temperatures: the design temperature the external heat rejection equipment must be rated to. Typically there will also be an additional allowance for recirculation in multi-unit installations (usually validated by an external CFD study based on extreme temperatures and wind conditions).

In an ideal scenario, the peak summer design temperature would be lower than the required supply air/water temperature in the data hall. This allows the data centre to rely solely on the heat rejection plant, with no chillers, minimising energy consumption and achieving low PUE values; whether this is possible may also depend on whether the heat rejection plant operates using adiabatic cooling.

However, for most data centres, peak summer temperatures exceed the required supply temperature, necessitating mechanical cooling to maintain design conditions. The larger the temperature difference, the harder the mechanical cooling system must work. Compared to free cooling, mechanical cooling requires substantial electrical input, which ultimately contributes to more heat being rejected. This can further increase water consumption if the heat rejection plant utilises adiabatic cooling.
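How much harder the mechanical plant must work can be illustrated with the ideal (Carnot) refrigeration COP, which falls as the lift between the chilled water and heat rejection temperatures grows. The temperatures and the assumed 10 K condensing offset below are illustrative, not design values:

```python
def carnot_cop(t_cold_c: float, t_hot_c: float) -> float:
    """Ideal refrigeration COP between an evaporating temperature
    t_cold_c and a condensing temperature t_hot_c (both in degC).
    Real chillers achieve only a fraction of this figure."""
    t_cold_k = t_cold_c + 273.15
    lift_k = t_hot_c - t_cold_c
    return t_cold_k / lift_k

# Chilled water at 20 degC; assume the chiller condenses ~10 K above ambient.
for ambient in (25, 30, 35, 40):
    cop = carnot_cop(20.0, ambient + 10.0)
    print(f"ambient {ambient} degC -> ideal COP {cop:.1f}")
```

The ideal COP roughly halves between a 25 °C and a 40 °C ambient, which is why every degree of avoidable lift matters.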

Reducing the mechanical cooling demand

The obvious option is to locate the data centre in a colder climate where the average temperature is lower, allowing free cooling to operate for longer and reducing the annual hours of mechanical cooling. A climate with a lower peak summer temperature will also reduce the demand on the critical equipment, potentially reducing the CAPEX of the system. However, this may not be practical for most data centre operators due to cost, land, power and connectivity availability.

A more practical way to reduce the mechanical cooling demand is by increasing the air/water temperature supplied to the data hall. For example, migrating to ASHRAE A1 Allowable temperature range from ASHRAE A1 Recommended during peak external ambient conditions could reduce OPEX considerably.
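As a rough sketch, the migration from the Recommended to the Allowable envelope can be expressed as a simple classification of supply air temperature. The dry bulb limits used (18–27 °C recommended, 15–32 °C allowable for class A1) follow the 2011 ASHRAE guidelines referenced at the end of this article; the script itself is illustrative:

```python
# ASHRAE 2011 class A1 dry bulb limits (degC): recommended 18-27,
# allowable 15-32, per the TC 9.9 thermal guidelines.
A1_RECOMMENDED = (18.0, 27.0)
A1_ALLOWABLE = (15.0, 32.0)

def classify_supply(t_c: float) -> str:
    """Classify a data hall supply air temperature against ASHRAE A1."""
    if A1_RECOMMENDED[0] <= t_c <= A1_RECOMMENDED[1]:
        return "within A1 recommended"
    if A1_ALLOWABLE[0] <= t_c <= A1_ALLOWABLE[1]:
        return "within A1 allowable only"
    return "outside A1"

for t in (24.0, 30.0, 34.0):
    print(f"{t} degC supply: {classify_supply(t)}")
```

Operating in the 27–32 °C band during peak ambient conditions is what allows the free cooling plant to cover hours that would otherwise need chillers.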

ASHRAE Environmental Classes (Allowable) for Data Centres¹

Hot and cold aisle separation is standard in all new data centres; however, improved air flow management and reduced hot aisle containment (HAC) leakage can further improve air temperatures in the room.

Air Side Economisers

In data centres operating within the ASHRAE A1 Allowable temperature range and located in colder/drier climates, direct evaporative coolers (DEC) are a highly efficient solution, although they occupy a larger footprint on the site. These systems can potentially operate in free cooling mode year-round, leveraging both dry and adiabatic cooling based on external conditions. This can translate to low reliance on mechanical cooling and potentially exceptional PUE values.

Using water for adiabatic cooling allows the unit to cool the air close to the wet bulb temperature of the ambient air, as opposed to the dry bulb temperature (dry cooling). Because the wet bulb temperature is often many degrees below the coincident dry bulb temperature, this can mitigate the requirement for mechanical cooling in high dry bulb conditions.
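The gap between wet bulb and dry bulb can be estimated from dry bulb temperature and relative humidity using Stull's well-known empirical fit; the 35 °C / 20 % RH condition below is an illustrative hot-dry afternoon, not a design point:

```python
import math

def wet_bulb_stull(t_dry_c: float, rh_percent: float) -> float:
    """Approximate wet bulb temperature (degC) from dry bulb (degC) and
    relative humidity (%) using Stull's 2011 empirical fit.
    Valid roughly for 5-99 % RH and -20 to 50 degC."""
    rh = rh_percent
    return (t_dry_c * math.atan(0.151977 * math.sqrt(rh + 8.313659))
            + math.atan(t_dry_c + rh)
            - math.atan(rh - 1.676331)
            + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
            - 4.686035)

# Hot, dry afternoon: 35 degC dry bulb at 20 % relative humidity.
t_wb = wet_bulb_stull(35.0, 20.0)
print(f"wet bulb ~ {t_wb:.1f} degC ({35.0 - t_wb:.1f} K below dry bulb)")
```

In dry climates the depression can exceed 15 K, which is the headroom an adiabatic cooler exploits.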

Data Centre with DEC unit (DEC gantry width over half data hall width)

Chilled water system flexibility

The most common systems installed in data centres use chilled water as the medium between the internal and external units. Chilled water systems offer significant flexibility:

  • Data hall configuration: Chilled water systems can accommodate various data hall layouts and are suitable for future integration of liquid cooling technologies.
  • Heat rejection plant location: The heat rejection plant can be positioned on the roof or even remotely from the data hall itself, providing valuable design freedom.
  • External heat rejection options: Chilled water systems are compatible with a range of external heat rejection options, including dry cooling, dry cooling with partial evaporative assist, and full adiabatic cooling. This allows for optimisation based on climate and environmental considerations.

A key factor in maximising the efficiency of a chilled water heat rejection system is minimising the heat rejection unit approach temperature, i.e. the difference between the external air temperature and the chilled water outlet temperature. Every degree closer can eliminate mechanical cooling for potentially hundreds of hours, leading to significant energy savings. In hot, dry climates, adiabatic cooling, which cools towards the wet bulb temperature (often much lower than the dry bulb temperature), can further reduce the number of hours of mechanical cooling required. The balance between reducing energy and water consumption is investigated as part of each project design.
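The "hundreds of hours per degree" effect can be sketched by sweeping the approach temperature against an hourly ambient profile. The synthetic sinusoidal weather and the 20 °C chilled water setpoint below are invented for illustration, not real site data:

```python
import math

# Synthetic hourly dry bulb profile for one year: a seasonal swing plus a
# diurnal swing around a 12 degC mean. Purely illustrative weather.
temps = [12 + 10 * math.sin(2 * math.pi * h / 8760)
         + 5 * math.sin(2 * math.pi * h / 24) for h in range(8760)]

chw_supply = 20.0  # degC chilled water outlet required by the data hall

def mech_hours(approach_k: float) -> int:
    """Hours/year where ambient is too warm for free cooling at this approach."""
    limit = chw_supply - approach_k
    return sum(1 for t in temps if t > limit)

for approach in (6, 5, 4, 3):
    print(f"approach {approach} K -> {mech_hours(approach)} mechanical h/yr")
```

Each 1 K reduction in approach shifts the free cooling changeover point by 1 K of ambient, and in a temperate profile that corresponds to hundreds of hours of avoided chiller operation.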

Water usage

A growing challenge for data centre providers is water availability. Data centres utilising evaporative cooling systems need a consistent water supply for process cooling, and obtaining this can be increasingly difficult in some regions. Water stress levels (calculated by UN-Water) are low for European regions but much higher in North Africa and Asia, which may prevent some data centres from obtaining water for process cooling.

Space efficiency of heat rejection plant

The external heat rejection unit size is typically based on the required heat rejection capacity and approach temperature (external air inlet temperature vs CHW/air outlet temperature). Generally, the heat rejection plant is located as close to the data halls as possible; however, where there is a drive towards higher density data centres, the typical space for external heat rejection may be limited. To achieve an increased heat rejection capacity in limited space, the approach temperature of the unit may have to increase (leading to an increase in annual mechanical cooling hours).
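As a rough sketch of this trade-off, the required plant area can be estimated from the load and a capacity density in kW/m². The densities below are invented for illustration only; real figures depend on the unit, the approach temperature and the climate:

```python
def plant_area_m2(load_kw: float, density_kw_per_m2: float) -> float:
    """Unit-only footprint needed for a given heat rejection load;
    access and air intake space are excluded, as in the plots below."""
    return load_kw / density_kw_per_m2

# Hypothetical capacity densities for a 10 MW heat rejection load:
for name, density in (("cooling tower", 200.0), ("dry air cooler", 50.0)):
    print(f"{name}: {plant_area_m2(10_000, density):.0f} m2")
```

The point of the comparison is that a technology with several times the kW/m² density frees a correspondingly large fraction of the external plant area, or allows a lower approach in the same area.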

Based on previous Bryden Wood data centre projects, we have plotted heat rejection plant properties by heat rejection capacity (kW/m²) vs approach temperature (K).

Note: The footprints shown are for the unit only and exclude any access/air intake space. For plant which forms part of a chilled water system, ancillary equipment such as pumps, heat exchangers and the pressurisation system is not included in the footprint.

Some strong correlations can be observed:

  • Cooling towers offer a much lower approach temperature compared to dry air coolers (at a similar kW/m²). This is because cooling towers utilise evaporative cooling, where water and air mix, improving heat exchange compared to the limitations of a dry air cooler coil. It is worth noting that cooling towers range in height from 4 to 8 m, compared to up to 3 m for DAC and HDAC units, which helps to improve the capacity of the cooling tower.
  • DEC & IAC units: While achieving low approach temperatures, their heat rejection capacity per unit volume is typically lower. This is because they rely solely on air cooling, and air has a lower specific heat capacity than water.
  • HDAC units: These units present a trade-off between approach temperature and capacity per unit volume. The graph shows both operating modes of the same size HDAC unit providing two different heat rejection capacities.
  • Air-cooled chiller (Free Cooling): This option is included in the graph, but the chiller size is at a disadvantage because it incorporates additional components beyond just the heat rejection unit (chiller) such as evaporators, compressors, condensers, and, in some cases, a CHW pump. However, some air-cooled chillers achieve results comparable to dry air coolers.

All adiabatic plant requires water treatment equipment and water storage, which, depending on the installation and resilience required, can be considerable. However, this equipment does not need to be situated adjacent to the heat rejection units and can be located elsewhere on site.

Emergence of Liquid Cooling

Data centres can leverage liquid cooling strategies to optimise efficiency. One approach combines direct-to-chip liquid cooling for processors and graphics cards with traditional air cooling for other components within the rack. Initially, both systems might be connected to the same chilled water infrastructure (these have not been reviewed as a single system in the table).

Single-phase immersion cooling enables higher rack densities than direct-to-chip; however, the IT equipment must be immersion-ready, which is not standard off-the-shelf equipment and, for some IT manufacturers, not yet available. For some passive immersion cooling products, an estimated 3–5% of the heat escapes from the immersion cooling unit into the room; therefore, for high-capacity rooms some air cooling will still be required.

With chip manufacturers ever increasing their compute capacity, supply temperatures of the chilled water system may need to remain low while the cooling capacity increases (increasing the ΔT of the chilled water). This could see the emergence of two tiers of systems: high-compute systems with high density at lower chilled water temperatures (and a higher ΔT), and lower-compute systems (still high compared to today's IT) with a lower density at higher chilled water temperatures, enabling much more free cooling.
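The effect of ΔT on the chilled water system can be quantified with the basic heat balance Q = ṁ·cp·ΔT. The 1 MW load and the two ΔT regimes below are illustrative assumptions:

```python
CP_WATER = 4.186  # kJ/(kg*K), specific heat capacity of water

def flow_rate_kg_s(load_kw: float, delta_t_k: float) -> float:
    """Mass flow of chilled water needed to carry load_kw at a given deltaT,
    from Q = m_dot * cp * deltaT."""
    return load_kw / (CP_WATER * delta_t_k)

# 1 MW of IT load served at two illustrative chilled water regimes:
for label, delta_t in (("conventional deltaT 6 K", 6.0),
                       ("high-density deltaT 12 K", 12.0)):
    print(f"{label}: {flow_rate_kg_s(1000.0, delta_t):.1f} kg/s")
```

Doubling the ΔT halves the flow rate for the same load, reducing pipe sizes and pump energy, which is part of the appeal of the higher-ΔT, high-density tier.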

Alternatively, a liquid cooled rack with air heat rejection (on the back of the rack) can be installed to provide liquid cooling whilst maintaining the existing air-cooled system, but this is a less common approach.

Onward look to heat rejection

In the realm of air-cooled data centres, the industry is increasingly gravitating toward ASHRAE A1 allowable temperatures for data halls. However, there is no indication of a shift toward A2 or higher temperature ranges. Heat rejection systems, including mechanical cooling, have started to reach a plateau, with manufacturers making incremental enhancements to accommodate higher chilled water temperatures within the ASHRAE A1 range.

The next significant leap in the industry lies in the adoption of liquid cooling. Many larger data centre operators are now integrating liquid cooling solutions into their facilities. However, the lack of a standardised design topology for liquid cooling systems and their various configurations poses a challenge. We may witness the emergence of two distinct data centre topologies: the energy-efficient approach versus the high-density model, or perhaps a hybrid combination of both. These design choices will significantly impact heat rejection systems and other critical aspects, such as IT infrastructure development.

While the industry strives to reduce PUE in data centres, sustainability efforts should also focus on minimising process water consumption during adiabatic cooling. Cooling towers, which consume substantial amounts of water year-round, may eventually be phased out or replaced with hybrid systems that prioritise dry cooling methods or utilise HDAC units.

By embracing these trends, data centres can enhance efficiency, reduce environmental impact, and pave the way for a more sustainable future in the ever-evolving landscape of IT infrastructure.

References

  1. ASHRAE TC 9.9 (2011), Thermal Guidelines for Data Processing Environments, white paper
  2. BAC catalogue, Series 3000 Cooling Tower
  3. BAC catalogue, TrilliumSeries™ Adiabatic Cooler
  4. EVAPCO, Dry Cooling 101