What is a high-density data center?

Do you know those inflection points where IT has to make a dramatic shift to cope with emerging demands?

One of those is on the way.

Scenario: High-Density Data Center Dilemma

Imagine, for a few minutes, that you’re an experienced data center manager responsible for making sure your data centers support the services your organization needs to succeed.

Fortunately for you, where power and cooling are concerned, your organization thought ahead. Five years ago, you needed to support 6 kilowatts (kW) per rack. Two years ago, 7 kW per rack. Three years from now, you expect to need 10 kW per rack. Because your organization planned for that growth, you still have more than enough power for every rack, and your data centers, built with hot and cold aisles and mature, proven data center HVAC, keep all your systems comfortably cool.

One day, you learn that your organization has a long-term plan to add artificial intelligence (AI) capabilities to a key product. The word comes down from on high that, to do the job, you’ll need to put twenty new servers in a data center. So you start checking your options. You have a single empty rack at that location, which is great; it’s just what you need. You also see that your racks support up to 12 kW each. Sounds good.

Then you do the math. These servers, packed with GPUs, consume 1.5 kW each at average load, so twenty of them need at least 30 kW of power and cooling. Your rack offers only 12 kW. You can’t power them, and you can’t cool them.
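If you want to sanity-check that math, here is a minimal sketch of it in Python; the per-server and per-rack figures are the ones from the scenario above, and nothing else is assumed:

```python
# Quick check: can twenty GPU servers fit in the one empty rack?
servers = 20                # new AI servers requested
kw_per_server = 1.5         # average draw per GPU-dense server (from the scenario)
rack_capacity_kw = 12       # power and cooling available in the empty rack

required_kw = servers * kw_per_server            # 30 kW
shortfall_kw = required_kw - rack_capacity_kw    # 18 kW you can neither power nor cool

print(f"Need {required_kw:.0f} kW, have {rack_capacity_kw} kW, short {shortfall_kw:.0f} kW")
```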

To make matters worse, you’re space-constrained, so you can’t spread the servers across multiple racks to balance the load. That’s a sinking feeling: you have a high-density requirement, and you can’t deliver.

So you contact a local colocation provider. They’re happy to help, but they can only support 10 kW per rack. Spread across five racks, the load fits, with headroom if you need to add hardware later. But, of course, they’ll charge you for all five racks and require a long-term contract for them…and that’s not going to work. So now what do you do?

This might seem like an extreme example, but research shows it will happen sooner than you think. Demands for greater power and cooling density are accelerating. According to the Uptime Institute, in 2020, the mean density of a rack was 8.4 kW, and 29% of survey respondents reported that the most common density in their data centers was more than 10 kW per rack, with one out of ten reporting densities of over 20 kW per rack.[1] Overall, organizations reported an increase of 1 kW per rack just in the past year. If that rate of growth continues, we’ll be at a mean density of more than 12 kW per rack by 2025.

Why is high density happening?

Why is this happening? Three factors:

  1. Server manufacturers are building servers that consume more power. For example, a leading server manufacturer just released a flagship 2U server with dual 2,400 W power supplies.[2]
  2. As more organizations adopt leading-edge analytics and AI-powered workloads, the accelerators behind them, including GPUs and DPUs, demand more power.[3]
  3. Real estate costs are skyrocketing and colocation can be prohibitively expensive, so the old “overbuild and oversubscribe” paradigm is going away. Organizations simply don’t want to pay for empty space.

If you talk to experienced data center leaders, many will privately (and not so privately) tell you that they don’t have much headroom in the data center to cope with change. They’ll also tell you that conventional power and cooling technologies will impede or block innovation sooner or later. For many, these problems already exist. AFCOM, for example, sees situations where 1 kW per rack unit is becoming the new normal, and today’s data centers cannot keep up because conventional power distribution and air cooling can’t effectively handle more than 20 kW per rack.[4]

So we’re at an inflection point. What should we do?

Begin by understanding that high density is different for everyone. Your organization may decide that 15 kW per rack is high density; another organization, 40 kW per rack. We also see situations where a workload needs 60 kW per rack. You’ll have to decide what high density means to you.

The second step is to recognize that conventional technologies won’t get us where we need to go. A simple back-of-the-envelope calculation tells us that 40 kW of servers produces roughly 136,500 BTU of heat per hour and would need chilled airflow of around 6,400 CFM to remove it. In a 24U space, that equates to airflow of almost 18 mph!
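Here is a minimal Python sketch of that back-of-the-envelope calculation; the 20 °F supply-to-return temperature rise and the roughly 4 square feet of open rack face are illustrative assumptions, not figures from the article:

```python
# Back-of-the-envelope airflow estimate for an air-cooled 40 kW rack.
rack_load_kw = 40.0
btu_per_hr = rack_load_kw * 3412            # 1 kW of IT load ~ 3,412 BTU/hr of heat

delta_t_f = 20.0                            # assumed supply-to-return rise, deg F
cfm = btu_per_hr / (1.08 * delta_t_f)       # standard sensible-heat airflow formula

face_area_sqft = 4.0                        # assumed open face area of a 24U section
velocity_fpm = cfm / face_area_sqft
velocity_mph = velocity_fpm * 60 / 5280     # feet per minute -> miles per hour

print(f"{btu_per_hr:,.0f} BTU/hr, {cfm:,.0f} CFM, ~{velocity_mph:.0f} mph at the rack face")
```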

To add to that point, not only are conventional technologies failing to keep up with data center infrastructure innovation, they’re also under intense scrutiny from regulators and customers. Bitcoin alone consumes more electricity than Argentina,[5] and last year Google used over 5 billion gallons of water to cool its data centers.[6] From an optics perspective, as well as a cost perspective, those facts make organizations uncomfortable.

Finally, you already know about a fundamental problem: power and cooling choices last for decades because retrofits are disruptive and expensive. So if you’re designing data centers that need to last that long, you’d better pick technologies that fit long-term, high-density requirements.

High-density data centers demand new thinking

The first consideration is power. Conventional 208 V distribution is being supplanted, at least in the thinking of end users, by 415/240 V distribution. Delivering the same power at a higher voltage means less current, smaller conductors, and fewer circuits per rack, so data center designs built on this approach can cut hundreds of thousands of dollars from initial costs and ongoing operational costs.
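To see why the voltage matters, here is a rough, illustrative comparison of the current needed to deliver the same rack load at each distribution voltage; the 30 kW load and unity power factor are assumptions for the sake of the example:

```python
import math

# Current required to deliver the same three-phase load at 208 V vs 415 V.
rack_load_w = 30_000          # illustrative 30 kW rack
power_factor = 1.0            # assumed unity power factor

for line_to_line_v in (208, 415):
    # Three-phase power: P = sqrt(3) * V_LL * I * PF, solved for I
    amps = rack_load_w / (math.sqrt(3) * line_to_line_v * power_factor)
    print(f"{line_to_line_v} V: about {amps:.0f} A per phase")
```

Roughly half the current for the same load means smaller conductors, breakers, and busways, which is where much of the capital and operational savings comes from.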

Second, we see liquid cooling as the right approach to the challenges of high density. There’s a reason many server manufacturers are adding direct-to-chip liquid cooling to their designs: the old fear of water in the data center has proven groundless. Can you run a 30, 50, 60, or 80-kilowatt rack on chilled water? Of course you can.

Beyond supporting greater density, chilled water offers three advantages:

  • Flexibility. Organizations can choose from a range of approaches, including rear-door heat exchangers, chilled-water fan walls, and direct-to-chip cooling. For high-density requirements, mixing these approaches offers additional room for optimization.
  • Reduced use of refrigerants. As you know, refrigerants are under constant regulatory scrutiny, and successive generations have been phased out or banned. That pattern has forced organizations into costly, complicated chiller rip-and-replace projects just to keep up with changing rules. Liquid cooling offers greater efficiency and reduces refrigerant use, and technologies like ours don’t require refrigerants at all to keep data centers cool.
  • Reduced costs. Though liquid cooling can cost more up front, its greater efficiency generally results in lower OpEx. And because liquid cooling offers more cooling headroom per rack, it delivers superior future-proofing, reducing the need for new equipment over time.

For these reasons, Nautilus decided to accelerate mainstream adoption of water-cooled data centers. Leveraging proven industrial cooling technologies, Nautilus brings a patented approach to water-cooled data centers that supports high-density infrastructure without refrigerants or the consumption of drinking water.

A data center cooled by Nautilus supports high density sustainably

With a Nautilus-cooled data center, organizations can:

  1. Use closed-loop cooling that, like a power plant, rejects a nominal, environmentally approved amount of waste heat into a nearby body of water, whether saltwater or fresh: a lake, a river, or the ocean. Our approach consumes zero gallons of water. That’s right. Zero.
  2. Avoid the high costs and real estate requirements of evaporative or vapor-compression refrigeration while achieving a power usage effectiveness (PUE) of 1.15 or less every day of the year (see the short sketch after this list).
  3. Cool more data center equipment per rack than in any air-cooled data center; our data center in Stockton offers 5x higher power density per rack.
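For context on that PUE figure, power usage effectiveness is simply total facility power divided by IT power; the loads below are illustrative assumptions, not Nautilus measurements:

```python
# PUE = total facility power / IT equipment power.
# A PUE of 1.15 means only 15% extra power is spent on cooling,
# power conversion, and other overhead beyond the IT load itself.
it_load_kw = 1000.0       # illustrative IT load
overhead_kw = 150.0       # illustrative cooling and electrical overhead

pue = (it_load_kw + overhead_kw) / it_load_kw
print(f"PUE = {pue:.2f}")  # 1.15
```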

If you have a high-density data center requirement, what we do might be the answer. To learn more about Nautilus Data Technologies or our production data center in Stockton, California, visit us at…

[1] https://drift-lp-66680075.drift.click/UptimeInstituteGlobalDataCenterSurvey2020

[2] https://www.delltechnologies.com/resources/en-us/asset/data-sheets/products/servers/dell-emc-poweredge-r750xa-spec-sheet.pdf

[3] https://www.business-standard.com/article/technology/it-takes-a-lot-of-energy-for-machines-to-learn-why-ai-is-so-power-hungry-120121600301_1.html#:~:text=AI%20is%20more%20computationally%20intensive,neurons%20in%20the%20human%20brain.

[4] https://datacenterfrontier.com/rack-density-keeps-rising-at-enterprise-data-centers/#:~:text=The%202020%20State%20of%20the,and%207.2%20kW%20in%202018.

[5] https://www.bbc.com/news/technology-56012952

[6] https://www.gstatic.com/gumdrop/sustainability/google-2020-environmental-report.pdf

