Groundbreaking Data Center Technology: The Case for Water-Cooled Data Center Design

Everyone knows that data centers produce heat, lots of it, and without some technology to remove it, data centers can’t function. Imagine, for a moment, that all the conventional data center cooling technologies are reaching a point where they’re not smart, not suitable, and not sustainable. How would you cool your data centers?
 
The founders of Nautilus Data Technologies went on a journey to prove that there’s an answer to that question. This answer has incredible implications for data center density, longevity, and economics, as well as substantial social and environmental advantages. Over the past few years, we took our idea from the drawing board to a prototype. Last year, we launched our first data center, a 7 MW facility in Stockton, California, that supports a range of production workloads for multiple customers.
 
Before we take you through our journey, let’s set the stage.
 

The Problem with Traditional IT Cooling in Data Centers

Everyone knows that data centers have evolved from their beginnings as computer rooms and server closets into massive multi-megawatt facilities with thousands of servers and hundreds of racks. Yet most of today’s data centers still depend on air conditioning, whether vapor-compression refrigeration (like your window A/C unit) or evaporative cooling (like an old-fashioned swamp cooler), to keep equipment running smoothly.
 
Here’s the thing, though. These technologies come with two fatal flaws: water consumption and electricity consumption.
 
First, data center cooling consumes a ridiculous amount of water. It’s been reported that, in 2019, a single cloud provider requested over 2.3 BILLION gallons of water from municipal water supplies. All the major hyperscale cloud providers are working on initiatives to reduce their water consumption. However, it’s still a considerable problem that puts pressure on potable water supplies and costs data center providers hundreds of millions of dollars each year.
 
To make matters worse, data center cooling also requires massive amounts of power. Cooling systems can consume half of a data center’s energy intake. That means many data centers are purchasing 5, 10, or even 20 megawatts of power just to keep their servers cool. That’s both an intolerable expense and an intolerable pressure on electricity grids that already need intensive improvements just to cope with inclement weather, as we’ve seen in both California and Texas over the past year.
 

Exploring the Alternatives

So conventional air conditioning uses too much water and too much electricity, or in other words, it’s not sustainable. What’s a data center provider to use instead?
 
Of course, there have always been alternatives to air conditioning. Liquid cooling is superior to air cooling — we use it in thousands of industrial processes, we use it in cars, we use it in power plants, and we’ve used it to cool computing since the mainframe era.
 
But the conventional wisdom is that liquid cooling is a niche technology: well suited to high-performance computing, but not essential for the typical virtualized or cloud data center, where vapor-compression refrigeration or evaporative cooling does the job.
 
The problem is, those technologies aren’t really doing the job, and they certainly won’t do it going forward as water and electricity grow more expensive and scarce.
 
The leading data center providers see a crisis point, where innovations in server design are putting additional pressure on data center cooling. With the latest multi-core CPUs and GPUs, today’s emerging servers consume more power than ever before.
 
It’s likely that some customers will see power consumption as high as 100 kW per rack, which chilled air simply cannot cool. To quote Christian Belady, distinguished engineer and VP of Microsoft’s data center advanced development group, “Air cooling is not enough.” Those technologies cannot perform at the level today’s servers will demand.
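To see why, here’s a quick back-of-the-envelope sketch. The heat a moving fluid carries away is Q = ṁ · c_p · ΔT, so the airflow needed to cool a 100 kW rack follows directly. The numbers below are illustrative assumptions of our own, not measurements from any particular facility:

```python
# Back-of-the-envelope: mass flow needed to remove heat at a given
# temperature rise. Q = m_dot * c_p * dT  =>  m_dot = Q / (c_p * dT).
# All values are illustrative assumptions, not real design parameters.

Q = 100_000.0       # heat load per rack, watts (100 kW)
c_p_air = 1005.0    # specific heat of air, J/(kg*K)
rho_air = 1.2       # density of air, kg/m^3
dT = 15.0           # supply-to-exhaust temperature rise, K

m_dot = Q / (c_p_air * dT)   # ~6.6 kg/s of air
v_dot = m_dot / rho_air      # ~5.5 m^3/s

print(f"Air mass flow:   {m_dot:.1f} kg/s")
print(f"Air volume flow: {v_dot:.1f} m^3/s (~{v_dot * 2119:.0f} CFM)")
```

Pushing roughly 5.5 cubic meters of air per second through a single rack is far beyond what practical fan and containment designs can deliver, which is exactly Belady’s point.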
 
Once we saw the sustainability and performance problems, we realized that the industry is at an inflection point where old methods won’t work.
 
So what did we do?
 

A Proven Technology, A New Approach

To move everyone past that inflection point, we took a proven technology and reinvented it. To put it simply, we took water cooling technologies from power plants and other industrial applications and designed them to work within a data center. Our technology uses pumps and water flow to cool data centers without the need for any mechanical chilling at all.
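For intuition, the same heat balance shows why plain water flow can replace mechanical chilling. Here’s a hedged sketch with illustrative numbers of our own choosing, not our actual design parameters:

```python
# Back-of-the-envelope: heat a water loop can carry, Q = m_dot * c_p * dT.
# All values are illustrative assumptions, not real design parameters.

Q = 7_000_000.0     # facility heat load, watts (7 MW)
c_p_water = 4186.0  # specific heat of water, J/(kg*K)
dT = 10.0           # loop temperature rise, K

m_dot = Q / (c_p_water * dT)   # ~167 kg/s, i.e. ~167 L/s of flow
print(f"Required water flow: {m_dot:.0f} kg/s (~{m_dot * 15.85:.0f} GPM)")

# Per unit volume, water holds vastly more heat than air:
rho_cp_water = 1000.0 * 4186.0   # J/(m^3*K)
rho_cp_air = 1.2 * 1005.0        # J/(m^3*K)
print(f"Water vs. air volumetric heat capacity: ~{rho_cp_water / rho_cp_air:.0f}x")
```

Because water carries on the order of 3,500 times more heat per unit volume than air, a modest pumped flow can do the work of enormous fan and refrigeration plants.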
 
Sounds good, right? But of course, lab concepts often fail in real-world applications, and we wouldn’t be much of a data center provider if we didn’t build a data center.
 
So we built a floating 7 MW data center and placed it at a brownfield site on a former military base in California. Today, it’s running production applications for a variety of customers.
 
Here’s the genius behind our water-cooled technology:
1. Instead of using treated municipal water, we cool with a closed loop sited on or near any body of water: saltwater or fresh, lake, river, or ocean. We consume zero gallons of water. That’s right. Zero.
2. We don’t use any evaporative or vapor-compression refrigeration. We cut electricity consumption from cooling by 70%, achieving a 1.15 PUE every day of the year (see the sketch after this list for what that means in megawatts).
3. We can cool more data center equipment per rack than any air-cooled data center. Our data center in Stockton offers 5x higher power density per rack.
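PUE (Power Usage Effectiveness) is the ratio of total facility power to IT equipment power, so a 1.15 PUE means just 0.15 watts of overhead for every watt of IT load. Here’s a minimal sketch of the arithmetic, assuming a hypothetical conventional baseline of PUE 1.5 for comparison (a baseline consistent with the 70% figure above; the IT load is illustrative):

```python
# PUE = total facility power / IT equipment power.
# The loads below are hypothetical, chosen only to illustrate the arithmetic.

def total_power_mw(it_load_mw: float, pue: float) -> float:
    """Total facility draw implied by an IT load and a PUE."""
    return it_load_mw * pue

IT_LOAD_MW = 7.0  # illustrative IT load

for pue in (1.5, 1.15):  # hypothetical conventional baseline vs. 1.15
    total = total_power_mw(IT_LOAD_MW, pue)
    overhead = total - IT_LOAD_MW
    print(f"PUE {pue:.2f}: {total:.2f} MW total, {overhead:.2f} MW of overhead")
```

At a 7 MW IT load, overhead falls from 3.5 MW at PUE 1.5 to about 1.05 MW at PUE 1.15, the roughly 70% reduction in cooling electricity cited above.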
 

Discovering the Advantages through the Colocation Data Center

In doing all these things in Stockton, we not only proved that we could solve data center sustainability and performance problems, we also uncovered a host of compelling benefits you might not expect.
 

Proving our technology can scale.

We built a 7-megawatt facility, but in doing so, proved that our technology is suitable for any data center, from 200 kilowatts to 200 megawatts. That means we can help organizations add ten racks of capacity closer to the customer for edge requirements, build the next generation of hyperscale data centers, and serve everything in between.
 

Real estate efficiency.

We originally planned for a 6 MW deployment and discovered we could support 7 MW in the same footprint. Not only are we able to cool more capacity more efficiently, but we’re also able to offer greater capacity in a smaller footprint, with significant implications for data center placement and productivity.
 

Repurposing old sites and old equipment.

We built our data center at a commercial port on a brownfield site, leveraging old infrastructure and underutilized capacity that we put to productive use without affecting port operations. We refurbished an existing vessel to serve as our platform. We’ve given the Port of Stockton a new way to modernize what they do and a new revenue stream. On our side, the economics of putting an otherwise unwanted site to use were very favorable.
 

Putting data center capacity in unprecedented locations.

In many parts of the world, limited water and power supplies make establishing data centers simply impossible. We can now put a multi-megawatt data center on a ship, move it to where it’s needed, place it at a port near fiber connectivity, and begin serving customers without putting undue pressure on power grids or drinkable water supplies. Our technology opens up an entirely unprecedented world of data center placement flexibility. Countries that want to scale data center services now have a new tool to do so.
 

Optimizing data center construction.

We worked through the process of creating a large-scale, modular method of putting our facility together, using shipbuilding techniques and tools to produce a massive warehouse-sized data center superstructure. Now we can prefabricate both floating and land-based facilities, with easier construction and accelerated assembly. You’ll see new data centers from us coming online very soon.
 

Exceeding regulatory expectations.

Our water-cooling system offers exciting energy and water conservation features, but it also causes no harm to wildlife, prevents the build-up of invasive species, and reduces noise pollution by 30x. California’s regulatory requirements are some of the most stringent in the world, and we easily met or exceeded every expectation.
 
All of these advantages sound great, but are you wondering what the customer experience is like?
 
Simply put, we deliver the same or better technical performance. Because our wholesale customers pay for power consumption, they benefit from one of the most power-efficient data centers in the world. Unlike other data centers, which see weather-driven spikes in power consumption, our cooling systems draw predictable power 24x7x365, without variability. Our customers are paying for a more sustainable system.
 
We met all the appropriate standards and certifications for resilience, data protection, and security. The data center sector has a very disciplined process of certification and commissioning, and we went above and beyond that process with compliance that includes FedRAMP (to support sensitive government workloads) and HIPAA (for healthcare information).
 
In other words, Stockton is a data center. It’s just more sustainable and higher performing.
 

Wrapping Up

To sum it up: in taking a concept into production, we discovered that we can deliver superior colocation data center capacity at market terms, and that we can rethink where data centers can go, the impact they have, and the advantages they bring to local communities, countries, and the world.
 
We think our approach offers such compelling advantages that it could be a competitive disruptor for any large organization that adopts it. Being able to rapidly develop and deploy data center capacity that minimizes electricity consumption and consumes no water allows cloud providers, colocation providers, enterprises, and governments to build more sustainable data centers with better performance and lower cost, wherever they want, whenever they want.
 
We have a data center. But under the hood, it’s a game-changing cooling design, like moving from a conventional car to a Tesla™. It’s simply in another category.
