What Happens to Your Colo Contracts When a Tenant Upgrades to High-Density GPU Workloads

Most colocation contracts were not written with AI in mind. They were written for servers that drew 5 to 10 kilowatts per rack, cooled by raised floors and computer room air handlers that have been more or less standard since the 1990s. When a tenant calls to tell you they are moving from general compute to GPU clusters for AI training or inference, you are not looking at a simple hardware swap. You are looking at a fundamental mismatch between what your contract assumes and what that tenant now needs.

We have been through this. Nautilus has operated fully AI-native data centers for five years, which means we spent a long time on the other side of this conversation before we started helping colocation operators navigate it themselves. Here is what actually happens, and how to get ahead of it.

Why AI Workloads Exceed Standard Colocation Power Contracts

Standard colo agreements price around power draw and cabinet count. A tenant takes X cabinets at Y kilowatts per cabinet, and the rate reflects that. The facility engineering assumptions behind those rates are typically 8 to 12 kW per cabinet with air cooling as the default heat removal method.

GPU workloads for AI break those assumptions immediately. A single DGX H100 system can pull over 10 kW on its own. Rack densities of 40, 60, even 100+ kW are now common in serious AI deployments. Your air-cooled infrastructure was not designed for that, and more importantly, your contract almost certainly does not account for the gap between what the tenant is drawing and what your facility can actually support at that density.
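To make the mismatch concrete, here is a rough back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption rather than a figure from any particular contract or facility.

```python
# Rough sizing sketch: how far a planned GPU deployment overshoots a legacy colo envelope.
# All values are illustrative assumptions, not figures from any real agreement.

CONTRACTED_KW_PER_CABINET = 10   # typical air-cooled colo assumption (8-12 kW range)
GPU_RACK_KW = 60                 # tenant's target density for a liquid-cooled GPU rack
GPU_RACK_COUNT = 8               # size of the tenant's planned cluster

total_it_load_kw = GPU_RACK_KW * GPU_RACK_COUNT
# Nearly all IT power ends up as heat, so the cooling load tracks the power draw.
equivalent_legacy_cabinets = total_it_load_kw / CONTRACTED_KW_PER_CABINET

print(f"Planned GPU cluster IT load: {total_it_load_kw} kW")
print(f"Equivalent legacy cabinets at {CONTRACTED_KW_PER_CABINET} kW each: "
      f"{equivalent_legacy_cabinets:.0f}")
```

Under those assumptions, eight GPU racks draw what 48 legacy cabinets were provisioned for, which is why neither the contract's power caps nor the cooling plant can absorb the change quietly.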

The first thing to look at when a tenant announces this kind of upgrade is whether your agreement has a power cap per cabinet or per cage. If it does, the tenant is about to breach it or is already planning to. If it does not, you have a different problem: you may have contractually committed to power delivery that your mechanical and electrical systems cannot safely provide.

The Cooling Question Is the One That Surprises Operators Most

Power overages are obvious. Cooling is where colo operators get caught off guard.

Air-cooled facilities are designed around a heat load that matches the power draw within a certain envelope. When you are moving 40 or 60 kW through a rack, you cannot remove that heat with a raised floor and CRAC units. The physics do not work. What changes with high-density GPU workloads is not just how much power is consumed but where the heat goes and how fast you need to remove it.
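To put a number on that, here is a quick airflow sanity check using the standard sensible-heat rule of thumb for air cooling. The rack density and temperature rise are illustrative assumptions.

```python
# Airflow sanity check: air volume a single high-density rack would need.
# Uses the rule of thumb CFM ≈ 3.16 × watts / ΔT(°F), valid for air at
# roughly sea-level density. Inputs are illustrative assumptions.

rack_load_w = 60_000   # 60 kW GPU rack
delta_t_f = 20         # assumed air temperature rise across the rack, in °F

required_cfm = 3.16 * rack_load_w / delta_t_f
print(f"Airflow needed to carry {rack_load_w / 1000:.0f} kW at a {delta_t_f}°F rise: "
      f"{required_cfm:,.0f} CFM")
```

That works out to roughly 9,500 CFM through a single rack, several times what a typical enclosure and perforated-tile layout can actually deliver, which is why the conversation turns to liquid well before densities reach these levels.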

Liquid cooling, whether direct liquid cooling to the chip or rear-door heat exchangers, requires facility-level infrastructure changes: supply and return lines, secondary coolant loops, leak detection, and in many cases, changes to how your existing mechanical systems are zoned. None of that is in a standard colo contract, and almost none of it is something a tenant can install without your direct involvement.

This is not a tenant problem you can hand back. The moment a tenant wants to deploy liquid-cooled GPU infrastructure in your facility, you are a co-participant in the engineering solution whether you want to be or not.

Critical Data Center Contract Requirements for AI Infrastructure

Most agreements that were not specifically drafted for AI workloads will be missing language in several areas.

Cooling method approval. The contract needs to specify that any cooling infrastructure changes, including liquid cooling deployments, require operator review and written approval before installation. Without this, a tenant can argue they are simply managing their own equipment.

Power density limits per cabinet. Not just total cage power, but per-cabinet limits that reflect what your infrastructure can actually cool and what your busway and PDU ratings support.

Structural and load provisions. High-density GPU racks are heavy. A fully loaded DGX rack and associated CDU can exceed 2,000 pounds. Standard raised floor panels and rack mounting points are not rated for that, and your lease almost certainly says nothing about it.

Fluid containment and liability. If a tenant deploys direct liquid cooling and a line fails, who owns the damage? Liquid in a data center is a serious incident. This needs to be addressed in the contract before any deployment, not during the insurance claim.

Upgrade notification windows. You need lead time to assess what a density upgrade means for your facility. A 60 to 90 day written notice requirement for material workload changes gives your engineering team time to determine what infrastructure modifications are required before the tenant has already ordered the hardware.

How to Support AI Upgrades While Retaining Colocation Tenants

This is where operators tend to make one of two mistakes. The first is treating the tenant’s upgrade as a contract problem to be managed defensively, which creates adversarial dynamics with a customer who is probably your highest-value account. The second is accommodating the request without a structured process, which leads to facility risk that shows up later as a near-miss or an outage.

The right approach is to treat it as a joint engineering and commercial conversation from the start.

When a tenant signals they are moving toward GPU density, bring your facilities team into the discussion within the first week. Do a power and cooling capacity assessment for their specific cage or suite before the commercial negotiation even starts. You need to know what your facility can actually support at their target density before you can have an honest conversation about what an amendment to their agreement needs to cover.
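A minimal version of that assessment can be sketched in a few lines. The figures below are hypothetical placeholders; substitute the measured cooling and electrical capacity for the cage in question.

```python
# Minimal cage-level capacity check ahead of the commercial conversation.
# All inputs are hypothetical; substitute your facility's measured figures.

cage_cooling_capacity_kw = 200   # usable heat rejection available to this cage today
cage_power_capacity_kw = 250     # breaker/busway limit for the cage
target_rack_density_kw = 60      # tenant's stated GPU rack density
requested_racks = 8

requested_load_kw = target_rack_density_kw * requested_racks

supportable_by_cooling = int(cage_cooling_capacity_kw // target_rack_density_kw)
supportable_by_power = int(cage_power_capacity_kw // target_rack_density_kw)
supportable_racks = min(supportable_by_cooling, supportable_by_power)

shortfall_kw = max(0, requested_load_kw - min(cage_cooling_capacity_kw,
                                              cage_power_capacity_kw))

print(f"Requested: {requested_load_kw} kW across {requested_racks} racks")
print(f"Supportable today: {supportable_racks} racks "
      f"(cooling limits {supportable_by_cooling}, power limits {supportable_by_power})")
print(f"Shortfall to close through the amendment and buildout: {shortfall_kw} kW")
```

The point of running the numbers first is that the shortfall figure, not the tenant's wish list, becomes the anchor for the amendment.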

From there, the amendment structure is usually straightforward: revised power caps, a cooling infrastructure rider that addresses who designs and pays for any new fluid infrastructure, updated weight and structural provisions, and in most cases, a density surcharge that reflects the actual cost of serving that workload. Operators who have done this well frame the surcharge not as a penalty but as an infrastructure investment that enables the tenant to deploy the compute they need. That framing matters.

When the Facility Cannot Support It

Sometimes the honest answer is that the facility cannot support what the tenant needs, at least not in the current configuration. This is not a conversation operators look forward to, but it is better to have it at the start of a planned upgrade than after the tenant has placed hardware orders.

The instinct most operators have at this point is to treat it as a binary: either invest heavily in a full cooling retrofit, or tell the tenant the facility is not the right fit. Neither option is great. A full retrofit is capital-intensive and, in practice, only works cleanly in facilities that were designed with it in mind from the start. Losing the tenant likely means losing your highest-revenue account at the exact moment the market is moving toward more of what they need, not less.

There is a third path, and it requires a different way of thinking about what cooling actually is.

Cooling Is Infrastructure, Not Equipment

The mental model most colo operators carry is that cooling is a facility system, something built in during construction and maintained like any other mechanical system. That model made sense when workload density was predictable and air cooling was universal. It does not hold up when a single tenant can double the heat load of a cage in a single hardware cycle.

The operators who are getting ahead of this are starting to treat cooling the way they treat power: as a deliverable, a utility, something that can be provisioned, expanded, and contracted for independently of the building itself. This shift in framing changes what is possible commercially and operationally.

When cooling is infrastructure, you can phase it. You can expand it in response to tenant demand. You can build it into your product offering rather than treating every high-density request as a one-off engineering problem.

What makes this practical rather than theoretical is that the equipment now exists to support this model. Facility-scale cooling deployments can be designed to fit within an existing mechanical gallery, eliminating the need to touch the data hall itself during installation. A single unit can deliver up to 4 megawatts of cooling capacity, and because the architecture is modular, capacity can be brought online in phases as tenant demand grows rather than requiring a full capital commitment upfront.

That last point matters a great deal for how colo operators can approach tenant conversations. You do not need to have the full cooling infrastructure in place before the tenant signs an amended agreement. You need to have a credible, phased infrastructure plan that ties capacity expansion to the tenant’s own deployment timeline. That is a very different commercial conversation than telling a tenant the facility needs a full retrofit before it can support them.
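As a sketch of what that phased plan can look like, the snippet below maps a hypothetical tenant ramp onto modular cooling units. The 4-megawatt unit size follows the figure above; the deployment schedule is an illustrative planning assumption.

```python
# Sketch of a phased cooling buildout tied to a tenant's deployment timeline.
# The ramp schedule is an illustrative assumption; unit size reflects a 4 MW module.
import math

UNIT_CAPACITY_MW = 4.0   # capacity of one modular cooling unit

# Hypothetical cumulative tenant heat load at each deployment milestone (MW).
tenant_ramp_mw = {
    "Phase 1 (initial GPU cluster)": 2.5,
    "Phase 2 (training expansion)": 6.0,
    "Phase 3 (full buildout)": 9.5,
}

units_online = 0
for phase, cumulative_load_mw in tenant_ramp_mw.items():
    units_needed = math.ceil(cumulative_load_mw / UNIT_CAPACITY_MW)
    units_added = units_needed - units_online
    units_online = units_needed
    print(f"{phase}: {cumulative_load_mw} MW load -> "
          f"{units_needed} unit(s) online, {units_added} added this phase")
```

Each phase commits only the capital that the tenant's own expansion justifies, which is what makes the roadmap credible to both sides.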

What This Means for the Tenant Conversation

When you frame cooling as infrastructure rather than a facility constraint, the amendment negotiation changes shape. Instead of negotiating around what the facility can currently support, you are negotiating around a buildout roadmap. Phase one of the cooling infrastructure supports the tenant’s initial GPU deployment. Phase two, tied to a density expansion commitment from the tenant, funds the next capacity increment. Both parties have skin in the plan.

This is how power infrastructure works in colo already. Tenants commit to power draw, operators build to that commitment, and the capital investment is underwritten by the contract. Cooling can work the same way. The reason it has not is partly because the equipment architecture to support it has not been widely available, and partly because neither operators nor tenants have been thinking about it in those terms.

AI companies, for their part, need to understand that cooling capacity is now as important a facility specification as power and connectivity. An operator who can commit to 4 to 10+ megawatts of liquid cooling capacity in a phased buildout is offering something materially different from one who is still trying to manage GPU workloads with supplemental air cooling. Procurement teams evaluating colocation partners for AI infrastructure should be asking about the cooling roadmap the same way they ask about power redundancy. Most are not doing that yet.

Leading Operators Treat AI Cooling and Density as a Scalable Product, Not a Contract Exception

The colo operators we work with who are ahead of this transition have stopped thinking about GPU density upgrades as contract exceptions to be managed case by case. They have built it into their product line, with a defined process for density upgrade requests, a standard amendment framework, and a cooling infrastructure model that can scale with tenant demand.

The ones who are furthest ahead have gone one step further: they have started positioning cooling capacity as a product feature in its own right. Not just that they can accommodate liquid cooling, but that they can deliver a defined number of megawatts of cooling capacity across a deployment, in phases, on a timeline tied to the tenant’s hardware roadmap. That is a fundamentally different conversation with a prospective AI tenant than anything that was possible three years ago.

The education piece on both sides is still happening. Most colo operators are still working through what phased liquid cooling infrastructure actually means for their facilities and their capital plans. Most AI companies are still learning to ask the right questions about cooling when they evaluate colocation partners. The window to be the operator who is already thinking this way, and already has the answers, is open right now. It will not stay open.

If you’d like to discuss phased cooling strategies and how to plan for future scale, contact Nautilus’s liquid cooling experts. They can help with what you need now while also planning for efficient cooling at future scale.
