There is a version of this conversation that goes well. An AI company, mid-growth, comes to a colo negotiation having done the homework: they know their power envelope, they know their cooling requirements, and they have specific language ready for the clauses that matter. The colo signs, the deployment goes smoothly, and when the next GPU generation arrives, the infrastructure is already positioned to handle it.
That version is not the common one.
The more common version involves a signed agreement, a hardware upgrade cycle, and a conversation nobody wanted to have: the one where the colo explains that supporting the new density requires overhauling nearly every component in the data hall, the tenant explains that they assumed this was covered, and both sides leave the room unsatisfied.
The gap between those two outcomes is almost entirely determined by what happens during contract negotiations, before anyone signs anything.
This piece is about closing that gap. The insights below come from years of deploying and operating liquid cooling infrastructure across colo and on-prem AI environments. These are the conversations we see go wrong, the clauses we wish were in every agreement, and the questions AI buyers need to ask before they commit.
The Upgrade Conversation Nobody Is Ready For
Before getting into specific contract language, it helps to understand what the upgrade conversation actually looks like when it goes badly, because understanding the failure mode is what makes the right provisions feel urgent rather than bureaucratic.
When an AI tenant upgrades to higher-density hardware in a facility that wasn’t designed for it, the first conversation with the colo is defined by shock on both sides, and neither leaves satisfied. That’s not a dramatic characterization; it’s a consistent pattern.
The core issue is that most AI companies don’t fully understand the scope of what a density upgrade requires from a critical infrastructure standpoint. A data hall built to support 2MW total that now needs to support 12MW is not a minor retrofit. It requires changes to nearly every component: power distribution, cooling infrastructure, structural systems, physical access. Even with zero complications, this is costly. In a live facility with operating tenants, it is enormously complex and expensive.
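A back-of-envelope sketch makes the scale of that jump concrete. The rack densities below are illustrative assumptions, not facility specs, but the shape of the arithmetic holds for any 2MW-to-12MW transition:

```python
# Back-of-envelope: what a 2 MW -> 12 MW data hall upgrade implies.
# Rack density figures are illustrative assumptions, not facility specs.

hall_power_before_mw = 2.0
hall_power_after_mw = 12.0

rack_density_before_kw = 10    # assumed typical air-cooled rack
rack_density_after_kw = 120    # assumed liquid-cooled GPU rack

racks_before = hall_power_before_mw * 1000 / rack_density_before_kw
racks_after = hall_power_after_mw * 1000 / rack_density_after_kw

print(f"Before: {racks_before:.0f} racks at {rack_density_before_kw} kW each")
print(f"After:  {racks_after:.0f} racks at {rack_density_after_kw} kW each")

# Essentially all of the power drawn becomes heat the cooling
# plant must reject, so the cooling burden scales with total draw.
heat_rejection_factor = hall_power_after_mw / hall_power_before_mw
print(f"Cooling plant must reject ~{heat_rejection_factor:.0f}x the heat")
```

Fewer, hotter racks, six times the heat rejection: that is why the retrofit touches power distribution, cooling, and structure all at once.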
The first conversation, if both sides are honest with each other, does not end with everyone excited about what comes next. That’s not a deal-stopper; it’s a reality check. But it’s a reality check that could have been a structured plan if the right provisions had been in place from the beginning.
The agreements that handle this transition well are the ones that anticipated it.
The Biggest Miss: SLA Coverage for Water
Ask most AI infrastructure teams what their colo SLA covers and they can answer immediately: power uptime, network availability, physical security, maybe ambient temperature. Ask them what their SLA says about water flow rate, water temperature, and water pressure, the three variables that determine whether liquid-cooled hardware runs at spec, and the answer is usually silence.
This is the single most common and consequential miss in AI tenant colo agreements today.
The reason it gets missed is not purely negligence. It reflects a genuine structural tension in how liquid cooling infrastructure is currently being negotiated. Many AI companies deploying to colo assume that the CDU is their infrastructure to specify and control. That assumption is understandable: they’re paying for it, they’re operating the hardware it serves, and they want the performance guarantees that go with it. The problem is that no colo provider is going to allow a tenant to select and control access to a critical infrastructure item and then separately demand an SLA around its performance. The logic doesn’t hold from the operator’s side.
The path to getting water SLAs into an agreement is not to demand them unilaterally. It’s to let go of the CDU and allow it to be treated as genuine critical infrastructure, with the same ownership framework, access protocols, and operational standards that govern power. When that happens, the SLA framework for water follows naturally, because it’s the same framework that already exists for every other critical system in the building.
The practical implication: AI companies negotiating colo agreements should be pushing to have liquid cooling infrastructure explicitly classified as critical infrastructure in the contract, with ownership clearly established on the colo side and SLA terms attached accordingly. This is a negotiating item, not a given, but it is a negotiating item that can be won.
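To make the ask concrete, here is one illustrative shape such SLA terms could take, mirroring the structure already used for power. Every value is a placeholder for negotiation, not a recommendation:

```python
# An illustrative shape for water SLA terms in a colo agreement.
# All values are placeholders for negotiation, not recommendations.

water_sla = {
    # Classification drives ownership: colo-owned, like power.
    "classification": "critical infrastructure",
    # Hard minimums on the variables that carry operational risk.
    "flow_rate": {"min_lpm": 250, "measured_at": "point of connection"},
    "pressure": {"min_bar": 2.5, "max_bar": 4.0},
    # Temperature as a band rather than a tight setpoint.
    "temperature": {"supply_max_c": 35},
    "measurement": "continuous telemetry, monthly report",
    "remedy": "service credits per breach, as with power uptime",
}

print(f"Guaranteed variables: {sorted(water_sla)}")
```

The structure is the point: flow and pressure carry hard numbers and remedies, temperature is a band, and the whole thing hangs off the critical infrastructure classification.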
The Critical Infrastructure Definition Problem
Related to the SLA issue is a broader definitional gap that appears in nearly every colo agreement: critical infrastructure is generally assumed to be a known quantity rather than explicitly defined. That assumption is costing AI tenants real money and real leverage.
In most agreements, “critical infrastructure” is used as a term without a formal definition. Everyone assumes power distribution, cooling systems, and physical access controls are covered. But when a dispute arises, or when an upgrade conversation forces the question of who owns what, the absence of a clear definition becomes a significant problem.
A well-structured AI tenant colo agreement should include an explicit definition of what constitutes critical infrastructure for that deployment, with liquid cooling systems, CDU infrastructure, and the water delivery systems that serve them specifically called out. Ownership should be established clearly. And that definition should be tied directly to SLA terms, so that the performance guarantees are anchored to a specific, documented scope of responsibility.
This provision protects both parties. The tenant knows exactly what the colo is guaranteeing and what falls outside that guarantee. The colo knows exactly what they’re responsible for maintaining and what requires tenant coordination. In upgrade conversations, having this definition in place means the negotiation starts from a shared understanding rather than competing assumptions.
The One Variable Most AI Buyers Over-Negotiate
Here is a counterintuitive point that comes directly from operational experience and is worth sitting with: water temperature is not as important as flow rate and pressure, and AI companies that treat temperature as a hard requirement may be leaving money on the table.
This matters because temperature tends to be the variable AI buyers focus on: it’s visible, it’s easy to understand, and it’s tied to GPU thermal management in ways that feel intuitive. As a result, AI tenants often negotiate hard for tight water temperature specifications and accept whatever the colo offers on flow and pressure.
The operational reality is different. Flow rate and pressure are what determine whether your cooling system can actually respond to the dynamic thermal loads that AI workloads generate. Temperature has more flexibility than most buyers assume: operating at higher water temperatures is possible with the right CDU architecture, and it may actually reduce the monthly recurring cost and overhead of both greenfield and brownfield installations if AI companies are willing to broaden their acceptance criteria.
The practical implication: before your next colo negotiation, pressure-test your water temperature requirements against your actual thermal envelope. You may find there is flexibility that can be traded for harder commitments on flow and pressure, where the real operational risk lives.
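The physics behind that trade can be sketched with the basic heat-transport relation for a water loop, Q = ṁ · c_p · ΔT. The rack load and temperature-rise figures below are illustrative assumptions, not vendor specs:

```python
# Heat carried by a water loop: Q = m_dot * c_p * (T_return - T_supply).
# Illustrative sketch; the 100 kW rack load and delta-T values are
# assumptions, not vendor specs.

C_P = 4186  # J/(kg*K), specific heat of water


def required_flow_lpm(heat_kw: float, delta_t_c: float) -> float:
    """Flow (L/min) needed to carry heat_kw at a temperature rise delta_t_c."""
    m_dot = heat_kw * 1000 / (C_P * delta_t_c)  # mass flow in kg/s
    return m_dot * 60  # ~1 kg of water per liter


# For a 100 kW liquid-cooled rack, the flow requirement is set entirely
# by the heat load and the allowed rise across the cold plates. It is
# the same whether the loop supplies 20 C or 35 C water.
for dt in (5, 10, 15):
    print(f"delta_T {dt:2d} C -> {required_flow_lpm(100, dt):6.1f} L/min")
```

Absolute supply temperature only matters relative to the hardware's thermal limit; what caps heat removal on any given day is whether the facility can actually deliver the flow (at sufficient pressure) that the equation demands.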
Verifying “We Support Liquid Cooling” Before You Sign
Every colo sales team will tell an AI prospect that their facility supports liquid cooling. This claim ranges from fully true and operationally ready to technically accurate but years away from being deployable. The difference is enormous, and a standard due diligence process will not surface it.
When a colo says they support liquid cooling, the questions that actually matter are about the how and the when, not the whether. Use this checklist before you commit:
What does “support liquid cooling” mean physically? In some facilities it means a tap on primary piping outside the data hall that a tenant can access. In others it means fully deployed in-rack or in-row infrastructure ready for connection. These are not the same thing. Asking for a specific description of the existing physical infrastructure, what is installed, where it terminates, what the flow rate and pressure are at the point of connection, will immediately clarify what “support” actually means.
Is the piping installed inside the data hall? Primary piping outside the hall is a starting point, not a solution. The question is whether the distribution infrastructure that serves individual racks or rows is in place, or whether that is a future capital project that will need to be designed, permitted, and built after you sign.
Is the power and structural capacity in place to support the increased density that liquid cooling enables? This is the question most AI tenants forget to ask. Liquid cooling unlocks higher rack densities, but higher rack densities require more power per cabinet and place greater structural loads on raised floors. A facility that can physically connect a CDU may not have the power distribution capacity or structural ratings to support the deployment you’re actually planning. Asking for the facility’s per-cabinet power capacity and raised floor weight ratings before you sign will surface this constraint while you still have negotiating leverage.
What is the timeline from signed agreement to operational liquid cooling? If the honest answer is six to nine months because the infrastructure needs to be built, that is critical information for your hardware deployment schedule. If it is weeks because everything is already in place, that is a meaningful differentiator worth paying for.
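One way to keep those answers honest is to record them in a structured form and flag the gaps mechanically. The field names and thresholds below are illustrative, not a standard; the point is that every checklist item becomes a written, comparable data point:

```python
# A lightweight way to record a facility's liquid cooling due diligence
# answers in writing. Field names and thresholds are illustrative.

from dataclasses import dataclass


@dataclass
class LiquidCoolingReadiness:
    piping_inside_hall: bool          # distribution reaches racks, not just the hall wall
    flow_lpm_at_connection: float     # measured at the point of connection
    pressure_bar_at_connection: float
    per_cabinet_power_kw: float
    floor_rating_kg_per_m2: float
    weeks_to_operational: int


def readiness_gaps(f: LiquidCoolingReadiness, needed_kw_per_cabinet: float) -> list:
    """Return the items to push back on before signing."""
    gaps = []
    if not f.piping_inside_hall:
        gaps.append("in-hall distribution piping is a future capital project")
    if f.per_cabinet_power_kw < needed_kw_per_cabinet:
        gaps.append("per-cabinet power is below the planned rack density")
    if f.weeks_to_operational > 12:
        gaps.append("liquid cooling is months away from deployable")
    return gaps


facility = LiquidCoolingReadiness(
    piping_inside_hall=False, flow_lpm_at_connection=300.0,
    pressure_bar_at_connection=3.0, per_cabinet_power_kw=40.0,
    floor_rating_kg_per_m2=1500.0, weeks_to_operational=30,
)
for gap in readiness_gaps(facility, needed_kw_per_cabinet=120.0):
    print("-", gap)
```

A facility that answers these questions clearly will have no trouble filling in real numbers; one that hedges on them is telling you something.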
The colo operators who are getting this right are the ones who have already made this infrastructure investment, who can answer the questions above clearly, quickly, and in writing. That clarity is worth paying a premium for. It is significantly less expensive than the alternative: a hardware upgrade, a difficult conversation, and an infrastructure overhaul that nobody budgeted for.
The Agreement That Holds Up Through the Next Upgrade
The GPU upgrade cycle is not slowing down. The interval between hardware generations that require meaningfully different infrastructure is shorter than most AI companies’ colo agreement terms. The agreement you sign today will be stress-tested by hardware that doesn’t exist yet.
The provisions that protect you through that cycle are not complicated to negotiate, but they do require knowing what to ask for before you’re sitting across the table from a colo sales team. Water SLAs tied to critical infrastructure classification. Explicit definitions of ownership. Harder commitments on flow and pressure than on temperature. Honest due diligence on what “liquid cooling support” actually means in that specific facility.
The best time to have the liquid cooling conversation with your colo provider was before you signed. The second best time is now, while you still have leverage.
Discuss Your Cooling Requirements
Talk to our team about a facility readiness assessment and what you need to request to support your cooling needs before your next hardware cycle.