Following our ground-breaking Accelerate Data Centers roundtable in London, we summarize the key discussions on the future of data center development and operations. Leaders from Google, Equinix, VIRTUS Data Centres, Edged, Kevlinx Data Centers, Digital Realty, Tract, and QTS Data Centers joined us to explore critical challenges, including exponentially increasing power density, cooling technology evolution, power supply constraints, and regulatory hurdles. This unprecedented gathering brought together competitors to share problems and ideas, recognizing that the complex challenges facing the industry require collaborative thinking and new approaches.
Although data centers are a long-established sector, bringing together experts from multiple leading players was something of an experiment. In an exponentially growing and highly competitive industry, it is unusual for such a diverse group to come together and share problems and ideas.
But this is exactly the group that convened at Accelerate Data Centers, London, in January 2025.
The conversation started by discussing the trends in data center density.
Driven by the relentless development of CPU and GPU processing power, the need for ever greater power and cooling density at rack level continues to increase exponentially. The volume of data processed has roughly doubled every two years, from 2 zettabytes in 2010 to 149 zettabytes in 2024, and the trend continues. It was suggested that rising processor power will keep pushing rack density upward, with the 1 MW rack already being envisaged.
The change in processor and rack-level density is accelerating at a rate that brings challenges to construction project delivery. When projects can take four to five years to realize, there might be a tenfold increase in rack density within that lifetime, creating a real dilemma over what to build and how to pivot.
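As a rough illustration of what those rates imply, here is a back-of-envelope calculation (a sketch in Python, using only the figures quoted above):

```python
import math

# Figures quoted above: global data grew from 2 ZB (2010) to 149 ZB (2024).
doublings = math.log2(149 / 2)             # ~6.2 doublings over the period
doubling_time = (2024 - 2010) / doublings  # ~2.3 years per doubling
print(f"Implied doubling time for data volume: ~{doubling_time:.1f} years")

# A tenfold rack-density increase within a five-year project lifetime
# implies this compound annual growth rate:
annual_growth = 10 ** (1 / 5) - 1          # ~0.58, i.e. roughly 58% per year
print(f"Implied rack-density growth: ~{annual_growth:.0%} per year")
```

In other words, a facility specified at the start of a typical delivery program could face racks demanding well over half as much power again with each passing year.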
The move from air-cooled to liquid-cooled solutions is key. However, liquid cooling topology can be more complicated and expensive, so the trajectory for take-up of this technology is still evolving.
Another disparity in scale arises between rack density and cooling infrastructure. While density is increasing exponentially, cooling technology is not. You could achieve 10x as much density in a given data hall footprint, but the external mechanical and electrical plant would be 10x bigger. This could present an opportunity for manufacturers or suppliers to develop a new suite of products, especially given land costs and pressures. We know this is happening in the US right now, where a major manufacturer is developing new products for a specific DC owner.
Recent news on the release of open-source DeepSeek, and its impact on the share price of Nvidia, suggests that there may be some stratification in the market.
Some AI models, though not in the leading peloton, are close behind and more cost-effective. These applications may place lower demands on processing and cooling and hold a lower facility price point.
Everyone in the room agreed that standing still is not an option.
Some believe that we’re reaching the limits of technological development in cooling, certainly from current supply chains. There are other cooling technologies (e.g., some developed for the now-retired Space Shuttle program), but these are currently 100x the cost. This rise in density means DC developments are increasingly dominated by utilities, with processing space representing a very small percentage of each development.
With both a rapidly evolving technological marketplace and long project lead times, the same dilemma between customers and infrastructure builders about the timings of decision-making is amplified.
The developer needs decisions as early as possible to design, secure power, step through regulatory hurdles and source long-lead components, while the customers want to be able to wait as long as possible to read the market. Maintaining optionality, speed, and cost effectiveness is a problem requiring new solutions.
On top of these challenges there are further known unknowns in the picture: the impact of the move from training AI to the distribution of AI, and the prospect of quantum computing.
This situation is unlikely to change. DCs and the wider data infrastructure are increasingly being seen as critical national infrastructure, alongside energy, telecommunications, water, and waste systems.
The hunger for reliable power is driving a move towards on-site power generation, and is prompting power sellers and other companies to diversify into investing directly in distribution infrastructure. The hope is that they can deliver more quickly than state-run projects.
Available, reliable power is certainly a key driver in finding locations for DCs. But added to this are the issues of data sovereignty and cost. Countries and companies are sensitive about where their data is stored and about the implications of it being stolen or of data flows being disrupted. The UK and US are reducing regulations to stay competitive in the market, whereas the EU is maintaining a high regulatory stance and, as such, its market seems to be stagnating.
How Does All This Impact Project Development?
Ten to fifteen years ago, when average cab density was still well below 10 kW per cab, proximity to the fibre network was the key indicator of site viability. However, the expansion of fibre networks and the reduced latency criticality associated with AI have shifted the focus to power availability. Increased demand for non-water-based cooling, with its associated increase in power consumption, is further driving most site acquisition conversations to focus on power and the lead time to power availability.
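To see why power now dominates these conversations, a minimal sketch of the grid connection implied by rack count, density, and PUE (total facility energy divided by IT energy). The rack counts, densities, and PUE values below are illustrative assumptions, not figures from the roundtable:

```python
def facility_power_mw(racks: int, kw_per_rack: float, pue: float) -> float:
    """Facility power in MW: IT load (racks x kW/rack) scaled by PUE."""
    return racks * kw_per_rack / 1000 * pue

# Same 1,000-rack hall footprint, legacy vs high-density AI deployment.
legacy = facility_power_mw(1000, kw_per_rack=8, pue=1.6)      # ~12.8 MW
ai_hall = facility_power_mw(1000, kw_per_rack=120, pue=1.25)  # ~150 MW
print(f"Legacy hall: ~{legacy:.0f} MW; high-density AI hall: ~{ai_hall:.0f} MW")
```

On these illustrative numbers, the same footprint moves from a grid connection measured in the low tens of megawatts to one rivalling a small town, which is why lead time to power now frames site selection.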
The simplicity of the single-story campus also remains hard to achieve in Europe, and even in the US this development strategy is being challenged. Multi-story building geometries, with associated distribution complexities and more space for plant, are having to be envisioned, carrying with them potential impacts on CapEx and OpEx envelopes as well as embodied carbon and carbon in use. Alongside this, speed to market remains high on the agenda and, by default, pre-construction timescales are coming under increasing pressure.
The dream for design standardization remains alive but is being challenged by rapidly changing technology and service demands. The development of localization and local code compliance for standardized design is a key USP but a balance must be struck between standardization, localization, and the ability to pivot towards rapidly evolving technology and tenant demands.
What Are the Problems that Need to be Solved?
The program for delivering a new data center runs three to five years. Within this period, a great deal of the time is taken up by permitting processes and getting power to the site.
Different countries and regions have different regulations and bureaucracies. One of the reasons why there’s a big focus on projects in the US (other than technology development) is the relative lightness of the permitting and approval process. The UK is seen as relatively straightforward; however, the system is often slowed by multiple objections. In other countries (for instance, Spain) the permitting process is very long.
Getting power to site is another key factor.
Power in much of Europe is supplied by monopolies. Often state-run, these organizations suffer no commercial loss from being slow or late in delivering agreed power infrastructure, which typically takes 12 to 18 months.
To contend with some of these challenges, developers are pushing the boundaries of their operations and getting directly involved in the delivery of electrical distribution infrastructure by contracting private companies to carry out the work. This approach is growing significantly in Asia. Private companies such as Octopus in the UK are looking at expanding their remit and starting to invest in power distribution.
The regulatory hurdles are not lessening; in fact, most believe they are tightening. The energy efficiency of data centers (measured by PUE, power usage effectiveness) is increasingly being regulated, especially in the EU. In Germany, specific regulations about the use of recovered heat are coming into force. There is a great opportunity to use the recovered heat from DCs. However, there are significant problems in exploiting this, and there are country and regional variations in how schemes can work.
In northern Europe, there is a precedent in community heating schemes, whereas in the UK this has never been common. There are no established bodies or groups who can or will take the responsibility for installing and running these schemes. There are also operational and contractual issues to be overcome.
Clients of DCs want efficiency and reliability and are usually uninterested in wider operational considerations. Equally, any local heating project where homes rely on a DC for their domestic heating would be compromised if the DC is shut down for weeks in winter.
Making use of this valuable waste energy requires these conundrums to be resolved. This may become critical as the sustainability spotlight increasingly shines on these energy consumers.
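To give a rough sense of the scale of that waste energy, a simple estimate is sketched below; the facility IT load, recovery fraction, and per-home heat demand are illustrative assumptions rather than figures from the discussion:

```python
# Illustrative estimate of recoverable low-grade heat from a data center.
it_load_mw = 50          # assumed continuous IT load of the facility
recovery_fraction = 0.7  # assumed share of waste heat practically capturable
hours_per_year = 8760

recoverable_mwh = it_load_mw * recovery_fraction * hours_per_year  # ~306,600 MWh/yr
homes_served = recoverable_mwh / 10  # assuming ~10 MWh/yr heat demand per home
print(f"Recoverable heat: ~{recoverable_mwh:,.0f} MWh/year")
print(f"Roughly equivalent to the heat demand of ~{homes_served:,.0f} homes")
```

Numbers on that order only matter, of course, if a heat network, an offtaker, and a workable contractual model exist to absorb them.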
Some companies are exploring an industrial synergy with food production using low grade heat and heat pumps, but this is in early development.
The feeling in the room was that, due to the regulations, Europe is pricing itself out of the general market – although due to data sovereignty a market will be retained.
Generally, green energy is more expensive, which also distorts the market. But programs like the Berwick Bank 4 GW wind farm may ease the UK situation in the longer term.
Although self-generated power looks like a silver bullet, bought utility power is significantly cheaper than self-generation. Even so, these pressing supply needs are making self-generation, including nuclear, a more likely option.
Fusion as well as Fission is Being Discussed, but What is a Realistic Timescale?
The group felt that DCs need to be part of regional, city, utility, industrial and energy strategies. Joined up thinking may improve the situation and help to find a path forward.
Discussion in the room then moved for a while to nuclear self-generation and how far away viable small-scale nuclear self-generation really is. The modular size of small nuclear is critical: because of the need for power resilience, sites would need some arrangement of N+1 power units, and if each unit is too large the investment costs simply become too high. The modularity of SMRs and their suitability to support DC loads will be important.
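A minimal sketch of that sizing logic (the site load and module sizes below are assumptions for illustration only): under N+1, the smaller the module, the smaller the share of the investment that sits idle as the redundant unit.

```python
import math

def n_plus_one_modules(site_load_mw: float, module_mw: float) -> tuple[int, float]:
    """Units needed to serve the load plus one redundant unit (N+1),
    and the fraction of installed capacity represented by that spare."""
    n = math.ceil(site_load_mw / module_mw)
    return n + 1, 1 / (n + 1)

# Illustrative: a 300 MW campus served by small vs large reactor modules.
for module_mw in (50, 300):
    units, spare_share = n_plus_one_modules(300, module_mw)
    print(f"{module_mw} MW modules: {units} units, "
          f"{spare_share:.0%} of installed capacity is the redundant unit")
# 50 MW modules: 7 units, ~14% is the redundant unit
# 300 MW modules: 2 units, 50% is the redundant unit
```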
Small modular reactors are looking to address some of the licensing and regulation issues with repeatable modular design.
Based upon conversations and work with the small nuclear development ecosystem, there was a stated view that an industrialized modular nuclear offering is 20 years in the future. This view is in stark contrast to some who believe we will see offerings in three to five years. The difference may be in the maturity of the technology. Without proven repeatability, regulation will remain arduous. Development is in progress, but the market is still some years from a pilot.
Some of the group suggested that the market for this technology is currently dysfunctional: modular nuclear developers have an idea and are looking for niche markets (for instance, replacing coal-fired power stations with SMRs, as well as data center power). It was suggested that the DC industry could help focus and accelerate development through the creation of an Industry-Brief defining the market. This idea, of course, would need further collaboration across the DC ecosystem.
To bring the session to a close, there was a round-up of the issues and barriers that need to be solved in the short term. These were:
■ How to deal with irregular site design, different business drivers and limitations of template designs
■ How to do the above and meet acceptable price points
■ How to find sites with reliable power
■ How to shorten overall program durations
■ How to develop the expertise of, and ease the constraints in, the supply chain, including integrators, to get them “on the curve”
■ How to find outlets for waste heat
■ How to deal with fragmentation across Europe in regulations/standards
■ How to deliver value across the board within service level agreements.
Plenty for the group to discuss and action when it meets again later this year.
Learn more about our approach to data centre design here.