AI is rewriting the rules of data center design, but most data centers weren't built with AI in mind.
Across the industry, operators are facing the same challenge: existing facilities designed for 5-15 kW per rack are now expected to support AI workloads pushing 50 kW, 100 kW, and beyond. The instinctive response is often drastic: major mechanical rebuilds, new data halls, or entirely new sites. But there is a way to plan a retrofit that isn't so disruptive to everyday operations.
High-density AI capacity does not require rebuilding your data center from scratch. It requires rethinking how cooling is delivered.
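The scale of the problem shows up in back-of-the-envelope heat-transfer arithmetic. The sketch below uses hypothetical values (a 100 kW rack, typical coolant properties, assumed temperature rises; none of these are Nautilus figures) to compare the volumetric flow of air versus water needed to carry away the same heat:

```python
def coolant_flow_m3s(power_w, rho_kg_m3, cp_j_kgk, delta_t_k):
    """Volumetric flow needed to absorb power_w at a given coolant temperature rise.

    Q = P / (rho * cp * dT) -- illustrative only; ignores fans, pumps, and losses.
    """
    return power_w / (rho_kg_m3 * cp_j_kgk * delta_t_k)

RACK_W = 100_000  # hypothetical 100 kW AI rack

# Air: ~1.2 kg/m^3, ~1005 J/(kg*K), assumed 15 K rise across the rack
air = coolant_flow_m3s(RACK_W, 1.2, 1005, 15)

# Water: ~997 kg/m^3, ~4186 J/(kg*K), assumed 10 K rise
water = coolant_flow_m3s(RACK_W, 997, 4186, 10)

print(f"Air:   {air:.2f} m^3/s (~{air * 2118.88:,.0f} CFM)")
print(f"Water: {water * 1000:.2f} L/s")
print(f"Volumetric ratio: {air / water:,.0f}x")
```

Under these assumptions, water carries the same heat in roughly one two-thousandth the volume of air, which is why densities beyond a few tens of kilowatts per rack push operators toward liquid.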
Will Most Data Center Growth Be Greenfield?
Despite the attention on greenfield hyperscale campuses, the majority of near-term AI capacity will come from retrofitting existing infrastructure:
- Colocation facilities upgrading halls for AI tenants
- Hyperscalers densifying live campuses faster than utilities can expand
- Enterprises repurposing legacy data halls for AI and HPC
These sites share common constraints:
- Limited white space
- Fixed floor layouts
- Existing power and mechanical systems
- Tight deployment timelines
Tearing everything out isn't just expensive; it's often impossible.
Why Traditional Cooling Retrofits Fall Short
When operators attempt to retrofit for AI using conventional approaches, they quickly hit limits.
In-row and In-Rack CDUs
Originally designed to extend air cooling, these solutions:
- Consume valuable white space
- Scale unevenly as loads increase
- Add operational complexity inside the data hall
- Become difficult to rebalance as AI workloads shift
Nautilus offers an in-row CDU, the RCD, because some designs and tenants still require row-level cooling. But where efficiency and scale are the priorities, facility-wide cooling deserves serious consideration.
The Retrofit Breakthrough: Facility-Scale Liquid Cooling
The most effective AI retrofits don't start inside the rack; they start at the facility level.
Facility-scale Cooling Distribution Units (CDUs) enable operators to introduce high-density liquid cooling without reworking the data hall itself.
Key characteristics that make this possible:
- Skid-mounted deployment outside the white space
- Multi-megawatt capacity per unit
- Designed for simple parallel scalability
- Support for hybrid environments (air + liquid)
- Compatibility with multiple cooling methods, including direct-to-chip, rear-door, immersion, and hybrid approaches
Instead of threading complexity through the data hall, facility-scale CDUs centralize it, where it belongs.
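Parallel scalability can be illustrated with a simple sizing sketch. The function below is hypothetical (the per-unit capacity and N+1 spare policy are assumptions for illustration, not product specifications) and ignores real-world factors such as flow balancing, approach temperatures, and load diversity:

```python
import math

def cdu_units_needed(facility_load_mw, unit_capacity_mw, redundancy_spares=1):
    """Parallel CDU units to carry the load, plus N+1-style spares.

    Illustrative sizing only; a real design would also account for
    flow balancing, approach temperatures, and diversity factors.
    """
    base = math.ceil(facility_load_mw / unit_capacity_mw)
    return base + redundancy_spares

# Hypothetical example: 12 MW of AI load, 2.5 MW per CDU, N+1 redundancy
print(cdu_units_needed(12, 2.5))  # ceil(12 / 2.5) = 5 duty units, + 1 spare = 6
```

Because capacity is added one skid at a time, an operator can start with the units the first AI phase needs and add more in parallel as demand grows.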
Retrofitting Without Disruption
One of the biggest fears in retrofit projects is operational disruption. Facility-scale liquid cooling directly addresses this concern.
Because cooling capacity is added outside the white space while serving the entire data hall:
- Live environments remain operational
- Construction inside the data hall is minimized
- AI capacity can be phased in alongside existing workloads
This allows operators to selectively convert portions of a facility to high-density AI while maintaining legacy systems elsewhere, a reality for most retrofit environments.
Speed Delivers Operational Agility
AI demand is moving faster than traditional infrastructure timelines.
Retrofit-friendly cooling solutions must be:
- Prefabricated, not custom-built
- Deployable in weeks or months, not years
- Designed to adapt as workloads evolve
Facility-scale CDUs succeed here because they behave like infrastructure products, not bespoke mechanical projects.
What This Looks Like in Practice
In real-world retrofit deployments, operators are increasingly turning to facility-scale CDUs, like Nautilus' EcoCore FCD, as a way to add AI-ready liquid cooling capacity without rebuilding their data halls.
EcoCore FCD was designed specifically for high-density workloads and is an efficient solution for retrofit-constrained environments:
- It delivers megawatt-scale cooling per unit
- It deploys off-floor, preserving white space
- It integrates with existing cooling sources and legacy infrastructure
- It scales incrementally as AI demand grows
More importantly, its design is informed by hundreds of thousands of hours of real-world liquid cooling operation, not just theoretical models or lab testing. That operational experience shows up most clearly in retrofit scenarios, where pressure management, reliability, and ease of integration matter more than perfect schematics.
Designing to Future-Proof Your Workload
A successful retrofit isn't just about meeting today's rack density; it's about avoiding tomorrow's redesign and planning for scale beyond what we know today.
Facility-scale cooling enables:
- Higher density ceilings without re-architecting
- Cleaner upgrades as GPU generations evolve
- Long-term flexibility in how cooling capacity is allocated
The result is an AI-ready facility that can evolve without repeated mechanical interventions.
Retrofitting for AI Successfully
AI doesn't require operators to abandon their existing data centers; it requires them to stop forcing legacy cooling models to do jobs they were never designed for.
Facility-scale liquid cooling provides a pragmatic retrofit path:
- High-density capability
- Faster deployment
- Minimal disruption
- Infrastructure-level scalability
Retrofitting for AI isn't about rebuilding everything. It's about putting the right cooling infrastructure in the right place, and letting the data hall do what it does best.
Nautilus pioneered liquid cooling more than five years ago. Our liquid cooling technologies are operator-built and proven with more than 500,000 unit hours of run time globally. We are the water and liquid cooling experts, and we can advise from start to finish on efficient, scalable retrofit cooling design. Connect with one of our experts so we can support the success of your next project.