The *Edge* data centre redefined | Bryden Wood podcast
Dries Hagen and Emmanuel Becker, Mediterra

Fifteen years ago, four megawatts was considered an impossibly large data centre. Today it barely registers as small. Emmanuel Becker has lived that entire arc – from early co-location at IsNet, through campus builds at Data4 in Italy, to the Telecity integration at Equinix – and he now leads Mediterra, a company deliberately positioned at the frontier of what comes next.


In this episode of the Bryden Wood Podcast, Data Centre Key Account Lead Dries Hagen sits down with Emmanuel to trace the evolution of the data centre market through three distinct eras, and to examine the structural shifts that are redefining what edge infrastructure means – and who builds it, and how.

Three eras, and a fourth beginning

Emmanuel's framing of market history is precise and useful. The first era was co-location: on-premise customers moving into shared facilities for efficiency, security, and connectivity. The second was cloud: hyperscale providers building regional infrastructure to serve those co-location customers and beyond. The third is AI: large language model training demanding ever-greater concentrations of compute, clustered in tier one cities alongside their customer base.

But the third era is already revealing its own limits. A country's GDP is not built in one city. The customers that need AI-enabled services – chemical industries, pharmaceutical companies, automotive manufacturers, financial services – are distributed across regions, not concentrated in capital cities. The next wave, Emmanuel argues, is vertical AI specialists deploying closer to their specific customer bases, in tier two cities and regions that have historically been underserved by data centre infrastructure. The edge, having evolved from retail parks through cloud through AI, is coming full circle – and the market for it is, in Dries's phrase, a sleeping giant.

The flexibility problem

The conversation's most substantive thread concerns the near-impossible design challenge that AI infrastructure has created. Nvidia is releasing new GPU generations every six to ten months. Software evolves in parallel. A data centre designed twelve months ago and delivered twenty-four months later will have lived through two to four GPU generations by the time it opens – and needs to last twenty to thirty years.
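The mismatch described above is simple arithmetic. A minimal sketch (illustrative figures only, taken from the cadence and build-window numbers quoted in the episode):

```python
# Illustrative only: how many GPU generations elapse while a data
# centre is being built. Figures are the ones quoted in the episode,
# not a model of any specific project.
def generations_elapsed(window_months: int, cadence_months: int) -> int:
    """Whole GPU generations released during a build window."""
    return window_months // cadence_months

# A facility delivered 24 months after design, against a release
# cadence of every 6 to 10 months:
window = 24
print(generations_elapsed(window, 10), "to",
      generations_elapsed(window, 6), "generations")  # prints "2 to 4 generations"
```

Against a twenty- to thirty-year operating life, two to four obsolete generations at handover is the design problem in a nutshell.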

The result, Emmanuel explains, is that reference design as traditionally understood – a fixed blueprint handed to construction teams and replicated – has a shelf life of about twelve months. No more. The market still wants the certainty that reference design provides: predictable cost, predictable lead times, committed ready-for-service dates. But it also needs continuous adaptability. Those two things are in direct tension, and resolving that tension is where Dries and Emmanuel spend much of the episode.

Emmanuel's diagnosis is that the four parties who need to solve this problem together – GPU and CPU providers, customers, software developers, and engineering and design companies – are not talking to each other. Mediterra's position, as a data centre provider integrating all of these inputs, puts them at the intersection. The dialogue that needs to happen, he argues, is not happening at scale. And without it, the market will continue to produce data centres that are delivered on time and on budget, but cannot accommodate the technology that exists by the time they open.

Permanent retrofit as the new normal

One consequence of this pace of change is a shift in how the industry thinks about retrofit. Historically, retrofitting a data centre was an exceptional intervention – upgrading infrastructure that had aged over a decade. Emmanuel argues that retrofit is now a permanent feature of the operating model, not an occasional event. Data centres are being retrofitted for new GPU generations before they have even been handed over. A facility designed for one technology standard will need to accommodate its successor before commissioning is complete.

Dries draws the implication out: if permanent retrofit is the reality, the industry's historic preference for stick-built fitout – fixed, constructed-in-place infrastructure – is no longer fit for purpose. Modularity and plug-and-play systems are not a nice-to-have; they are the structural response to a market that changes faster than construction cycles. Emmanuel agrees. The ability to reconfigure a data hall without rebuilding it – to bring in a provider's own liquid-cooled rack solution, or to absorb a jump from one HPC generation to the next within the same footprint – requires flexibility to be designed in from the outset, not retrofitted itself.

The power conundrum

Dries closes the episode with what he calls the biggest unsolved problem in the market: power. Five to ten years ago, fibre connectivity was the critical question in data centre due diligence. Today, it barely features. Power is everything – and the constraints on it are severe.

For hyperscale operators in tier one cities, connecting to high-voltage grid infrastructure can take ten to fifteen years and hundreds of millions in network development costs. Energy is being reserved at ten to fifteen times actual need across the European market, as operators bank capacity ahead of competitors. The result is a structural bottleneck that is holding back the market's growth.

Mediterra's response to this is deliberate and clearly articulated: they have chosen to operate exclusively at mid-voltage, in tier two cities, for three specific reasons. Mid-voltage connections are faster, cheaper, and easier to obtain than high-voltage. Tier two cities sit near the sources of renewable energy – wind, solar, and hydroelectric – that transmission system operators are struggling to route efficiently to tier one consumption centres. By co-locating consumption with local production, Mediterra sidesteps both the grid bottleneck and the competition for tier one capacity. It is, Emmanuel argues, not just a market strategy but a structural advantage, and one that positions Mediterra's facilities alongside local green energy sources rather than dependent on overloaded national grids.

Dries also raises the question of horizontal collaboration: whether the industry's largest players, each solving the same power problem independently, should be working together to aggregate demand and accelerate solutions. Emmanuel is candid about why it is not happening. Energy access is a competitive advantage. The larger your energy need, the less willing you are to share it. Until that dynamic changes, the bottleneck will persist – and the market for intelligently positioned, mid-voltage, tier two infrastructure will continue to grow.

Dries Hagen is Data Centre Key Account Lead and Senior Project Manager at Bryden Wood. Emmanuel Becker is Chief Executive of Mediterra.

The Bryden Wood podcast

On the Bryden Wood Podcast, we explore the ideas, challenges, and innovations shaping the future of the built environment – with our own directors and engineers, and guests from across the industry. From industrialised construction and energy infrastructure to pharmaceuticals and data centres, the conversations are substantive, direct, and grounded in real delivery experience.

Watch on YouTube, or listen wherever you get your podcasts.
