Data centers in 2026 face an unprecedented cooling challenge. AI workloads routinely demand 30–100 kilowatts per rack, up to ten times the density of traditional computing, and colocation providers are racing to deploy liquid cooling infrastructure to meet this demand. But providers aren’t the ones setting the pace: AI companies are the demand signal driving liquid cooling adoption across the industry. The liquid cooling adoption rate among AI servers is projected to reach 57% in 2026, up from just 23% a year earlier. That acceleration is no coincidence; it’s driven by AI tenants making cooling infrastructure a non-negotiable requirement. This shift has created what we call the “pull-through effect”: AI companies’ cooling demands pull entire colocation facilities toward upgraded infrastructure that benefits all tenants.
The Demand Signal AI Companies Are Sending
AI infrastructure requirements have fundamentally reshaped data center economics. When a large language model requires 500+ GPUs operating simultaneously, the thermal footprint becomes massive. A single GPU can consume 700 watts. Stack them densely in a colocation rack, and you’re pushing 10–15 kilowatts per server. Air cooling alone cannot handle this density efficiently.
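To see how those figures compound, here is a back-of-the-envelope thermal load calculation. The 700-watt GPU figure comes from the text above; the per-server GPU count, non-GPU overhead factor, and servers-per-rack count are illustrative assumptions, not vendor specifications.

```python
# Back-of-the-envelope rack thermal load.
# 700 W per GPU is from the article; everything else is an assumption.

GPU_WATTS = 700
GPUS_PER_SERVER = 8       # assumed dense GPU server configuration
OVERHEAD_FACTOR = 1.3     # assumed ~30% extra for CPUs, memory, fans, PSU loss
SERVERS_PER_RACK = 4      # assumed dense colocation rack

server_kw = GPU_WATTS * GPUS_PER_SERVER * OVERHEAD_FACTOR / 1000
rack_kw = server_kw * SERVERS_PER_RACK

print(f"Per-server load: {server_kw:.2f} kW")  # ~7.28 kW
print(f"Per-rack load:   {rack_kw:.2f} kW")    # ~29.12 kW
```

Even with conservative assumptions, a single rack lands near 30 kW, which is where air cooling stops being practical and the 30–100 kW range cited earlier begins.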
Colocation providers understand the stakes. The data center liquid cooling market sits at approximately $6.6 billion in 2026, growing at a 28.7% compound annual growth rate. That growth is directly tied to AI workload expansion. Facilities without liquid cooling capabilities are losing market share to competitors who can deliver the thermal management AI companies require. It’s a straightforward competitive advantage.
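The cited figures imply rapid compounding. This sketch simply projects the ~$6.6 billion 2026 market forward at the stated 28.7% CAGR; it is an illustrative extrapolation, not a formal forecast.

```python
# Compound-growth projection from the figures cited above:
# ~$6.6B market in 2026, 28.7% CAGR (illustrative extrapolation only).

base_year, base_value_b = 2026, 6.6
cagr = 0.287

for year in range(base_year, base_year + 5):
    value = base_value_b * (1 + cagr) ** (year - base_year)
    print(f"{year}: ${value:.1f}B")
```

At that rate the market nearly triples within four years, which is why providers treat liquid cooling capability as a market-share question rather than an optional upgrade.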
Why Colocation Providers Are Responding
The economics are compelling. A single AI tenant signing a multi-year contract to deploy hundreds of GPUs represents significant recurring revenue. But that tenant won’t sign without reliable cooling. Colocation providers have two choices: upgrade their infrastructure or lose the contract to a facility that will.
This dynamic has accelerated infrastructure investment across the sector. Many colocation facilities are now deploying hybrid cooling solutions, combining traditional air cooling for legacy workloads with direct liquid cooling, immersion systems, or in-row systems for AI clusters. The investment is substantial, but the alternative is market irrelevance. Providers like Nautilus Data Technologies are leading this transition by building purpose-built liquid cooling infrastructure specifically designed for high-density AI data centers. The competitive advantage goes to facilities that can promise consistent thermal performance for demanding workloads.
The Pull-Through Effect in Action
The pull-through effect describes a specific dynamic: when one anchor AI tenant’s cooling requirements drive infrastructure upgrades that benefit all tenants in a facility. Here’s how it works in practice.
An AI company signs a contract requiring direct-to-chip liquid cooling for a 100-rack deployment. The colocation provider upgrades its facility’s cooling distribution, power delivery, and monitoring systems to support that deployment. Once that infrastructure is in place, every tenant in the facility gains access to improved thermal management. Other companies can now run their GPU workloads more efficiently, with lower operating temperatures and reduced energy consumption. The anchor tenant’s requirements pull the entire facility forward.
This dynamic has accelerated facility-wide liquid cooling adoption. Providers report that once they deploy liquid cooling for their first AI tenant, they extend the infrastructure to serve additional customers. The initial capex investment is spread across multiple revenue streams, improving ROI for the provider while delivering better infrastructure for all tenants.
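The ROI logic above can be made concrete with a simple payback calculation: a fixed facility-wide cooling upgrade recouped faster as more tenants share it. All dollar figures here are hypothetical placeholders, not industry data.

```python
# Sketch of the capex-amortization logic described above: the same
# upgrade cost spread across more tenants shortens payback.
# All numbers are hypothetical.

def payback_years(capex_m: float, tenants: int,
                  annual_revenue_per_tenant_m: float) -> float:
    """Years to recoup a shared cooling upgrade from tenant revenue."""
    return capex_m / (tenants * annual_revenue_per_tenant_m)

capex = 12.0  # hypothetical facility-wide upgrade cost, $M
rev = 2.0     # hypothetical annual revenue per AI tenant, $M

for n in (1, 3, 6):
    print(f"{n} tenant(s): payback in {payback_years(capex, n, rev):.1f} years")
```

The anchor tenant alone may leave a multi-year payback, but each additional tenant served by the same infrastructure shortens it, which is the provider's incentive to extend liquid cooling facility-wide.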
How to Use Your Leverage
AI companies have more negotiating power than they often realize. Your cooling requirements are not a constraint you must accept from your colocation provider. They are a leverage point you can use to secure infrastructure investments that serve your operational needs.
When evaluating colocation facilities, make liquid cooling capability a core evaluation criterion. Ask providers directly about their liquid cooling roadmap. Request commitments on thermal management performance, not just power delivery. Negotiate service level agreements that specify coolant temperature, humidity, and monitoring frequency. Consider facilities that can demonstrate existing liquid cooling deployments for similar workloads. Your requirements will drive their investment decisions. Providers who move quickly to meet those requirements deserve your business. Those who delay will fall behind as more AI companies make cooling infrastructure central to their site selection process.
The pull-through effect amplifies your impact. Your cooling requirements don’t just improve your own operational efficiency. They upgrade infrastructure for every tenant in the facility. That’s leverage worth using.
The Bottom Line
AI workloads are pulling data center infrastructure toward liquid cooling at an accelerating pace. Colocation providers are responding because the economics are clear: AI tenants require thermal management that air cooling cannot reliably provide. The facilities that deploy liquid cooling first will capture the lion’s share of AI workload migration. For AI companies, this shift represents an opportunity. Your infrastructure requirements are not constraints you must accept. They are demand signals that drive facility-wide upgrades. Use that leverage strategically. Evaluate colocation providers based on their liquid cooling capabilities and their willingness to invest in thermal infrastructure for your workloads. The pull-through effect means your decision benefits not just your own operations, but the entire AI community operating in that facility.
Nautilus Data Technologies builds liquid cooling infrastructure purpose-designed for AI workloads at scale. If you’re evaluating colocation options for demanding GPU deployments, we invite you to explore how our thermal management solutions can support your mission.