Sustainability goes way beyond a buzzword in the data center industry. In fact, and to no one’s surprise, it’s one of the industry’s most pressing issues. Why? Data centers and data transmission networks consume enormous amounts of energy, guzzling about 2% of global electricity use, according to industry research. Another scary statistic? According to the U.S. Department of Energy, data centers are one of the most energy-intensive building types, consuming 10 to 50 times the energy per square foot of floor space of a typical commercial office building.
The picture becomes even more acute when we look at water. Though water is one of our world’s most abundant resources, the questions of conservation and usage around it are pressing. Combating the huge amounts of heat generated by high-performance computing infrastructure has been a puzzle from the beginning, and liquid cooling has long been a salve for these operational complexities; in fact, different forms of liquid cooling have been around since the 1800s. However, as larger facilities, higher standards of operational excellence, and the incredible densities demanded by new applications have ballooned demand for cooling solutions, finding ways to deliver sustainable, efficient cooling with water has become paramount.
Today, the industry finds itself at the intersection of competing priorities: expanding computing capacity, reducing energy consumption, and finding greater operational efficiency. So, where does water fit into the equation — and how can we make water’s place in the data center more effective than ever?
The Ebbs and Flows of Liquid in the Data Center
For computing in particular, adding water to the technological environment goes back decades. In the 1960s, IBM stood as a pioneer for enterprise-grade computers when it introduced the System/360, which used both air and water cooling. Eventually, liquid cooling expanded from mainframe use into home applications, with custom PC builders adopting the strategy to ensure high-performance results. It wasn’t until the 2000s that liquid cooling entered the data center sphere, where it continues to find its footing at scale, with adoption driven by AI, HPC, machine learning, and more.
This method of cooling has admittedly gone in and out of popularity. After all, air-cooled systems have gone through periods of innovation whose efficiency and cost-reduction benefits won them widespread favor. However, the industry always seems to find its way back to fluidity, and for good reason.
To start, liquid easily wins out as a choice for cooling simply because of its nature: water and other liquids are much more efficient at transferring heat than air, by some measures up to 1,000 times more efficient. As densities increase dramatically, having science inherently on your side is nothing to sneeze at. Liquid cooling also offers efficiency benefits, reducing energy consumption and (ironically) often using less water than air-cooling systems do, which means big OPEX benefits. Not to mention, it’s quieter and more space-efficient than alternative cooling methods (a key advantage as density and space constraints continue to rise in data centers) and can even help prolong the life of IT assets.
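To see where a figure like that comes from, consider volumetric heat capacity, i.e., how much heat each fluid can carry per unit volume. The back-of-envelope sketch below is a minimal illustration using standard textbook values, not figures from any particular vendor or study.

```python
# Back-of-envelope comparison of volumetric heat capacity: how much heat one
# cubic meter of each fluid absorbs per degree of temperature rise.
# Standard textbook values near room temperature; illustrative only.

AIR_DENSITY = 1.2            # kg/m^3
AIR_SPECIFIC_HEAT = 1005     # J/(kg*K)
WATER_DENSITY = 998          # kg/m^3
WATER_SPECIFIC_HEAT = 4186   # J/(kg*K)

air_vol_capacity = AIR_DENSITY * AIR_SPECIFIC_HEAT        # ~1,206 J/(m^3*K)
water_vol_capacity = WATER_DENSITY * WATER_SPECIFIC_HEAT  # ~4,177,628 J/(m^3*K)

print(f"Water carries ~{water_vol_capacity / air_vol_capacity:,.0f}x more heat per unit volume")
# -> ~3,464x in raw thermal terms; the practical system-level advantage is
#    lower once pumps, heat exchangers, and flow limits are accounted for,
#    which is why more conservative figures like "up to 1,000x" are quoted.
```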
The popularity of liquid cooling has given rise to multiple subtypes, including direct-to-chip, immersion, and rear-door heat exchange methods. Direct-to-chip strategies bring the cooling power right into the server chassis, circulating coolant through cold plates so that CPUs, GPUs, and other key components stay properly chilled at the source. Rear-door heat exchangers apply the same philosophy at the rack level, running coolant through an exchanger mounted on the rack door to accomplish temperature regulation. Immersion cooling is the newest tool in town, fully immersing (hence the name) server components in a non-conductive dielectric fluid. All of these strategies create great efficiencies and powerful effects at the micro level of the data center environment, right at the rack. As we zoom out, however, those efficiencies start to fall flat: the data center as a whole is often not truly, holistically optimized to make the most of these innovative cooling technologies.
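For a sense of the scale a direct-to-chip loop operates at, the hypothetical sizing sketch below applies the basic heat-transfer relation Q = ṁ × c_p × ΔT; the 100 kW rack load and 10 K temperature rise are illustrative assumptions, not specifications from this article.

```python
# Hypothetical direct-to-chip sizing sketch: the coolant flow needed to carry
# away a rack's heat load follows Q = m_dot * c_p * delta_T.

WATER_SPECIFIC_HEAT = 4186  # J/(kg*K)
WATER_DENSITY = 998         # kg/m^3

def required_flow_lpm(rack_power_w: float, delta_t_k: float) -> float:
    """Liters per minute of water needed to absorb rack_power_w watts
    with a delta_t_k rise between supply and return coolant."""
    mass_flow_kg_s = rack_power_w / (WATER_SPECIFIC_HEAT * delta_t_k)
    return mass_flow_kg_s / WATER_DENSITY * 1000 * 60

# Example: a 100 kW AI rack with a 10 K coolant temperature rise
print(f"{required_flow_lpm(100_000, 10):.0f} L/min")  # ~144 L/min (~38 gpm)
```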
Eliminating Liquid Lapses: Building Efficiency at All Levels
Data centers leveraging liquid cooling at the rack often combine all of that heated liquid from their systems in a central location, but the traditional systems waiting there aren’t up to new sustainability goals. In fact, common methods of transferring the heat from the liquid into the air, using a heat pump, a refrigerant loop, and more, ultimately make data center efficiency strategies, well, a load of hot air. Others employ evaporative cooling, pumping heat into potable water and then evaporating it, but the result is the same. Phase changes add extra power requirements and depend on operational designs that inherently consume enormous resources just to accomplish heat exchange.
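To put the water cost of evaporative heat rejection in perspective, the rough estimate below counts only the latent heat of vaporization of water; the continuous 100 MW heat load is an assumption chosen to match the facility scale discussed later, and real cooling towers consume even more once blowdown and drift losses are included.

```python
# Rough estimate of evaporative cooling's water draw (a sketch, not vendor data):
# evaporating water rejects its latent heat of vaporization, roughly 2.4 MJ/kg
# at typical cooling-tower temperatures.

LATENT_HEAT_J_PER_KG = 2.4e6
SECONDS_PER_YEAR = 3600 * 24 * 365
LITERS_PER_GALLON = 3.785

def gallons_evaporated_per_year(heat_rejected_w: float) -> float:
    """Water evaporated annually to reject a continuous heat load
    (1 kg of water is about 1 liter)."""
    kg_per_second = heat_rejected_w / LATENT_HEAT_J_PER_KG
    return kg_per_second * SECONDS_PER_YEAR / LITERS_PER_GALLON

# Example: a facility rejecting a continuous 100 MW evaporatively
print(f"{gallons_evaporated_per_year(100e6) / 1e6:.0f} million gallons/year")
# -> ~347 million gallons/year from evaporation alone, which is the scale of
#    savings that zero-water designs target.
```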
These are stopgaps, not truly sustainable or innovative ways of translating liquid cooling’s hard work into positive results; they undo micro-level wins at the macro level. So, the industry finds itself in need of an operational strategy that completely rethinks how liquid cooling fits into the bigger data center picture, allowing efficiencies to be compounded, not impeded. Furthermore, as more data center operators and customers strive to implement liquid cooling strategies at the micro level, granting them the flexibility and accessibility to deploy this key efficiency tool within a range of operational settings is paramount.
So, where do we go from here? Is there a way to increase the value of liquid cooling at the micro level even further while better supporting it on a macro scale?
Seeing the Big Picture, Minding the Details
Achieving advanced sustainability and efficiency in the face of AI and other dense demands requires us to align liquid cooling at the rack with larger data center operational strategies. This work must start with the foundations of data center operation, re-envisioning data hall cooling strategies from the ground up.
We’re no longer in an era where fan walls, computer room air conditioners (CRACs), and other air-based solutions can do the job as well as liquid can in the face of AI and other advanced applications. More and more organizations are seeking out the most advanced forms of liquid cooling at the rack level and looking to procure data centers that are holistically friendly to these methods. Further still, liquid cooling support on a macro scale within data centers hasn’t been at an optimal level until recently.
Starting on the micro scale, if data centers can find a way to make symbiotic liquid cooling a reality through external, natural liquid resources, the battle for sustainability in the IT environment takes a quantum leap forward, allowing the data center to thrive as a part of its natural ecosystem, not despite it. That starts with being able to use water in its most unadulterated forms: freshwater, greywater, and saltwater. Pushing the boundaries further still, a significant win comes when we consider that the best liquid cooling capabilities won’t use any water at all.
Building a liquid cooling method like this isn’t just possible; it’s already a reality thanks to cutting-edge zero-water-consumption systems. Still, every data center deployment or individual customer wants the flexibility to choose how they cool at the micro level, down to the rack. Delivering that flexibility requires an advanced operational model with accessibility built in. This is where EcoCore comes in.
Working at the macro level, EcoCore leverages a water-based heat-sink cooling process that eliminates the need for chillers, CRACs, and computer room air handlers (CRAHs), reducing waste from the get-go. Better yet, as an operational platform, it takes liquid cooling into account within every part of the equation, setting it apart from the vast majority of solutions on the market. Plus, it still gives customers the flexibility to use any cooling they like at the micro level, whether that’s traditional hot-aisle containment, rear-door cooling, direct-to-chip, or beyond. For those who want to push their eco-consciousness even further with their cooling method, leveraging Nautilus’ zero-water cooling technology within this operational framework can add significant benefits to boot, including unmatched heat rejection (8,000 watts per square meter), industry-leading power usage effectiveness (PUE) of 1.15 or less (a 50% improvement over traditional energy usage), and massive water savings. We’re talking 380 million gallons of water preserved annually at a 100MW data center. That’s success on the macro scale and the micro scale in action, all parts working together for maximized results.
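For readers less familiar with the metric, PUE is simply total facility power divided by IT equipment power, so everything above 1.0 is overhead. The quick sketch below shows what a PUE of 1.15 implies at the 100MW scale cited above; the legacy PUE of 1.6 used for comparison is an assumed industry-average figure, not a number from this article.

```python
# PUE (power usage effectiveness) = total facility power / IT equipment power.
# Illustrative comparison of the non-IT overhead implied by two PUE values.

def overhead_mw(it_load_mw: float, pue: float) -> float:
    """Non-IT power (cooling, power distribution losses, etc.) implied by a PUE."""
    return it_load_mw * (pue - 1.0)

IT_LOAD_MW = 100.0                      # matches the 100MW facility discussed above
LEGACY_PUE, EFFICIENT_PUE = 1.6, 1.15   # legacy value is an assumption

print(f"Legacy overhead:    {overhead_mw(IT_LOAD_MW, LEGACY_PUE):.0f} MW")     # 60 MW
print(f"Efficient overhead: {overhead_mw(IT_LOAD_MW, EFFICIENT_PUE):.0f} MW")  # 15 MW
```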
Ultimately, taking the next step forward in efficiency, resource conservation, and sustainability (and the one after that, and the one after that) comes down to strategically tearing down our traditional data center perspective and seeing the new, even bigger picture — and yes, that’s pretty cool indeed.