The rising power density of IT infrastructure is transforming data centre design at every level, from the layout of white space to the arrangement of entire campuses. As CPUs and GPUs become more power-hungry, and AI and machine learning workloads push us into new thermal and electrical territory, the question is no longer whether densification is coming, but how fast, how far, and how to respond. MEP Director Michael Hope-Jones explores a shift in thinking: from attempting to future-proof buildings to creating 'present-enabling' infrastructure that can adapt as density requirements crystallise.
In this period of flux, we face a dilemma: how do we commit to design and construction when we don't yet know what the future of IT load will require?
At Bryden Wood, we’ve been exploring this challenge not only through our project work on high-density, next-generation facilities, but also through the conversations sparked at the Accelerate Data Centres summit. This event, hosted in London in early 2025, brought together specialists from across the data centre ecosystem: developers, operators, engineers, architects and integrators. The aim was to share insights, debate dilemmas and confront the realities of delivering infrastructure in an industry moving faster than ever.
What follows is a perspective shaped by that discussion, by practical delivery experience and by a belief that our approach to data centre design needs to change — not to chase the future, but to enable the present.
It wasn’t long ago that 3 to 5 kW per rack was standard. Today, hyperscalers are routinely designing for 30 to 60 kW, and in AI-focused clusters, densities of 80 to 150 kW per rack are becoming the norm. Some forward-looking programmes are already contemplating 400 kW or more.
These aren’t abstract numbers. They’re driving fundamental changes to the scale, shape and logic of data centre buildings.
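To make the figures concrete, the sketch below is an illustrative calculation, not a design figure: the 200-rack hall size is an assumption, and the density tiers simply span the ranges quoted above.

```python
# Illustrative only: what the density figures above mean at the scale of a
# single data hall. The 200-rack hall size is an assumption, not a benchmark.

RACKS_PER_HALL = 200
DENSITIES_KW_PER_RACK = [5, 30, 60, 150, 400]

for kw in DENSITIES_KW_PER_RACK:
    it_load_mw = RACKS_PER_HALL * kw / 1000.0
    # Essentially all IT power becomes heat that the cooling plant must reject.
    print(f"{kw:>4} kW/rack -> {it_load_mw:6.1f} MW of IT load and heat rejection")
```

The same floorplate moves from roughly one megawatt to tens of megawatts of load and heat rejection, which is why the systems supporting the racks, rather than the racks themselves, start to dominate the design.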
At the same time, it’s increasingly difficult to predict where the market is heading. Will AI training continue to dominate, or will AI inference shift workloads back to distributed, lower-density environments? How quickly will liquid cooling be adopted across the sector? What will the average rack look like in five years?
We don’t know — and that’s the problem.
Data centres often take three to five years to deliver, constrained by power availability, planning delays, regulatory hurdles and long-lead infrastructure. In that time, IT requirements can change dramatically. A facility designed for 30 kW racks might need to support double that, or more, by the time it goes live. Yet developers are being asked to commit early: to secure power, navigate planning and procure long-lead plant.
This creates a tension. Clients want to wait to see where density trends go. Developers want to move. Designers are caught in the middle.
We often talk about future-proofing in this industry, but right now, that concept feels increasingly hollow. No one can predict the future of compute, but that doesn't mean we can't design for it.
This is where we shift the design mindset: present-enabling, not future-proofing.
Instead of waiting for perfect clarity, we define a stable foundation. We design the primary infrastructure to enable progress today, while keeping the IT spaces flexible enough to adapt as density and load types become clearer.
The infrastructure layout itself remains broadly consistent across scenarios. What changes later are the final elements: distribution topology, cooling loops, and the configuration of data halls to suit specific IT demands.
In this model, the heavy lifting happens up front. Generators, primary electrical switchgear, UPS systems, chilled water systems, structural cores, risers and critical ancillary plant are all fixed and built early. The data halls follow, shaped around this infrastructure rather than the other way around.

At low densities, data centres have traditionally been white space led. The architectural layout of IT racks drove the floorplate. MEP systems were arranged around it: on rooftops, in back-of-house plantrooms, in yards or on gantries.
But as rack densities rise, and especially as liquid cooling becomes prevalent, that relationship flips.
Power and cooling infrastructure has become the spatial driver. CDU rooms, PDU zones, pipework distribution routes, and vertical risers all require large, highly coordinated spaces. These systems can no longer be arranged around the IT layout — they define it.
At high densities, the building is no longer designed from the white space outward. It is designed from the infrastructure inward.
This shift changes everything. Floorplates may need to break into smaller, service-proximate modules. IT rooms might cluster around risers rather than stretch out in long rows. In multi-storey designs, upper levels may give way to plant decks instead of additional data halls, as the weight and supporting infrastructure of high-density, liquid-cooled racks can make upper-floor deployment progressively less viable.
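To illustrate the structural point only, a crude uniformly distributed load check shows how quickly heavier, liquid-cooled racks approach the capacity of an assumed upper-floor slab. Every figure below is an assumption made for the sketch, not a design value.

```python
# Crude floor-loading sketch. All figures are illustrative assumptions.

G = 9.81                        # gravitational acceleration, m/s^2
RACK_FOOTPRINT_M2 = 0.6 * 1.2   # assumed 600 mm x 1200 mm rack footprint
AREA_FACTOR = 2.5               # assumed floor area per rack incl. aisles and access
SLAB_DESIGN_KN_M2 = 12.0        # assumed design loading for an upper data hall floor

for rack_mass_kg in (800, 1500, 2500):  # assumed masses: air-cooled up to dense liquid-cooled
    area_per_rack_m2 = RACK_FOOTPRINT_M2 * AREA_FACTOR
    udl_kn_m2 = rack_mass_kg * G / 1000.0 / area_per_rack_m2
    verdict = "within" if udl_kn_m2 <= SLAB_DESIGN_KN_M2 else "exceeds"
    print(f"{rack_mass_kg:>4} kg rack -> {udl_kn_m2:4.1f} kN/m2 "
          f"({verdict} assumed {SLAB_DESIGN_KN_M2:.0f} kN/m2 slab capacity)")
```

On these assumptions the heaviest rack already exceeds the slab, and that is before coolant distribution pipework, CDUs and point loads are counted, which is why upper floors can end up as plant decks rather than data halls.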

This leads to an idea we’re exploring at Bryden Wood: the creation of a plant-first chassis, a repeatable building infrastructure that can support a wide range of eventual IT densities.
The concept: commit to the primary plant, structure and distribution infrastructure early, as a repeatable chassis, and fit out the data halls later around it. This approach enables early procurement of long-lead items, keeps the delivery programme moving, avoids locking in density assumptions too early and allows time for the IT requirements to clarify.
It is, in effect, a two-speed delivery model. Build the plant early; flex the compute space later.
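One way to picture this two-speed model is as a fixed chassis of committed capacities against which later fit-out options are tested. The sketch below is hypothetical throughout: the class names, capacities and candidate fit-outs are illustrative, not project data.

```python
from dataclasses import dataclass

# Hypothetical model of a plant-first chassis: primary plant is committed
# early; hall fit-outs are chosen later and checked against that chassis.

@dataclass(frozen=True)
class PlantChassis:
    it_power_mw: float        # usable IT power from the fixed electrical plant
    heat_rejection_mw: float  # capacity of the fixed cooling plant
    liquid_ready: bool        # risers and pipework provisioned for liquid cooling

@dataclass(frozen=True)
class HallFitOut:
    name: str
    racks: int
    kw_per_rack: float
    liquid_cooled: bool

    @property
    def it_load_mw(self) -> float:
        return self.racks * self.kw_per_rack / 1000.0

def fits(chassis: PlantChassis, hall: HallFitOut) -> bool:
    """Does a candidate fit-out live within the chassis built up front?"""
    if hall.liquid_cooled and not chassis.liquid_ready:
        return False
    return (hall.it_load_mw <= chassis.it_power_mw
            and hall.it_load_mw <= chassis.heat_rejection_mw)

# Chassis fixed early, before the eventual IT demand is known.
chassis = PlantChassis(it_power_mw=20.0, heat_rejection_mw=22.0, liquid_ready=True)

# Fit-out decisions deferred until density requirements crystallise.
options = [
    HallFitOut("air-cooled colocation", racks=400, kw_per_rack=30, liquid_cooled=False),
    HallFitOut("mixed AI / enterprise", racks=250, kw_per_rack=80, liquid_cooled=True),
    HallFitOut("dense AI training",     racks=200, kw_per_rack=150, liquid_cooled=True),
]
for option in options:
    result = "fits" if fits(chassis, option) else "exceeds the chassis"
    print(f"{option.name}: {option.it_load_mw:.1f} MW -> {result}")
```

The decision that matters early is the size and provisioning of the chassis; the fit-out question can wait.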
Here, we turn the lens around. If IT hardware is becoming more compact and dense, should the same not be true for the plant?
Right now, plant is the constraint. It scales disproportionately. The more power and cooling you need, the more space and complexity the MEP systems require. That often cancels out the gains in IT density. But that might not always be the case.
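A back-of-envelope comparison shows the effect. The area budgets below are assumptions chosen purely for illustration, not benchmarks: as racks densify, the white space needed per megawatt collapses, while the plant area per megawatt stays broadly fixed and comes to dominate the building.

```python
# Illustrative area budgets only; the numbers are assumptions, not benchmarks.

PLANT_M2_PER_MW = 450           # assumed plant, electrical and cooling area per MW of IT load
WHITE_SPACE_M2_PER_RACK = 2.5   # assumed white space per rack including aisles
IT_LOAD_MW = 10.0               # fixed facility IT capacity for the comparison

for kw_per_rack in (10, 50, 150):
    racks = IT_LOAD_MW * 1000 / kw_per_rack
    white_space_m2 = racks * WHITE_SPACE_M2_PER_RACK
    plant_m2 = IT_LOAD_MW * PLANT_M2_PER_MW
    plant_share = plant_m2 / (plant_m2 + white_space_m2)
    print(f"{kw_per_rack:>3} kW/rack: {white_space_m2:5.0f} m2 white space, "
          f"{plant_m2:4.0f} m2 plant ({plant_share:.0%} of technical area)")
```

Under these assumptions the white space shrinks by an order of magnitude while the plant does not move at all, which is the sense in which plant cancels out the density gains and becomes the constraint on the building.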
Emerging technologies offer hints at a different future: small modular reactors (SMRs) and fuel cells for generation, direct current power distribution, and advanced cooling techniques all promise to shrink or simplify the plant that currently dominates the footprint.
These technologies are progressing at different speeds. But collectively, they point to a future where the plant may once again give way to other drivers of form.
Right now, however, the physics hasn’t caught up with the ambition. Power must still be generated and transformed, heat must still be rejected, and maintainability must still be guaranteed. These realities keep plant large and force it to the centre of design.
So, in the near term, we must design accordingly. The data centre is no longer a building with infrastructure around it. It is infrastructure, with space for compute inside it.
In the geometry of the high-density data centre, the plant leads and the architecture follows.
That may seem like a shift in priorities. But it’s also an opportunity.
What’s perhaps most interesting about this shift is that, as rack densities rise, the centre of gravity in data centre design quietly moves. Where once the form of the building followed the logic of the IT layout, it now increasingly follows the infrastructure: cooling loops, electrical topologies, service access and resilience pathways. In effect, the services are shaping the space.
This doesn’t diminish the role of architecture. It reframes it. As plant systems become more central, the challenge is one of integration. The goal is to create buildings where spatial organisation, utility logic and future adaptability are considered as one.
At Bryden Wood, with our roots in architecture and a long-standing commitment to integrated design, this shift is less a disruption and more a continuation of how we’ve always approached complex challenges. We work across disciplines, ask better questions and evolve our thinking as the context demands.

Of course, the story doesn’t end here. As technologies like SMRs, fuel cells, direct current power and advanced cooling continue to mature, the infrastructure itself may begin to shrink, or at least simplify. That might enable new typologies in which plant and white space coexist more seamlessly, and perhaps a renewed architectural expression of the form.
But for now, we work with what we know. We accept that compute is changing faster than the systems that support it. We build infrastructure that gives us optionality. We delay what we can. We fix what we must.
We design not to predict, but to be ready and adaptable, enabling what’s needed now — and whatever comes next.

Michael Hope-Jones is MEP Director at Bryden Wood and a Chartered Electrical Engineer with 18 years' experience in data centre design and mission critical infrastructure. He has led multidisciplinary teams on projects for colocation, enterprise and hyperscale clients across the UK, Europe and South Africa, guided by the belief that the most resilient systems are those that are simplest to understand, implement and operate.
Interested in discussing high-density infrastructure design? Contact our data centre team: datacentres@brydenwood.com