What is a high-density data center?

Do you know those inflection points where IT has to make a dramatic shift to cope with emerging demands? One of them is on the way.

Scenario: High-Density Data Center Dilemma

Imagine, for a few minutes, that you're an experienced data center manager responsible for making sure your data centers support the services your organization needs to succeed.

Fortunately for you, where power and cooling are concerned, your organization thought ahead. Five years ago, you needed to support 6 kilowatts (kW) per rack. Two years ago, 7 kW. Three years from now, you expect to need 10 kW. As a result, you still have more than enough power supply for every rack, and your data centers, built with hot and cold aisles using mature, proven data center HVAC, keep all your systems comfortably cool.

One day, you learn that your organization has a long-term plan to add artificial intelligence (AI) capabilities to a key product. The word comes down from on high that, to do the job, you'll need to put twenty new servers in a data center. So you start checking into options to accommodate the need. You see that you have a single empty rack at that location, which is great: it's just what you need. You also see that the rack supports up to 12 kW. Sounds good.

Then you do the math. Each of these GPU-packed servers consumes 1.5 kW at average load, so twenty of them need at least 30 kW of power and cooling. You can only offer 12 kW. You can't power them, and you can't cool them.
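
The back-of-the-envelope math looks like this (the per-server load, server count, and rack limit come straight from the scenario; the variable names are just for illustration):

```python
# Quick capacity check for the scenario above. The 1.5 kW per server,
# 20 servers, and 12 kW rack limit come from the scenario itself.
servers = 20
avg_load_kw = 1.5
rack_capacity_kw = 12

required_kw = servers * avg_load_kw                  # 30 kW total
racks_needed = -(-required_kw // rack_capacity_kw)   # ceiling division -> 3 racks

print(f"Required: {required_kw:.0f} kW; available in the one empty rack: {rack_capacity_kw} kW")
print(f"Racks needed at {rack_capacity_kw} kW each: {int(racks_needed)}")
```

Three racks at 12 kW each would do it, but you only have one.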

To make matters worse, you're space-constrained, so you can't spread the servers out to balance the load. That's a sinking feeling. You have a high-density requirement, and you can't deliver.

So you contact a local colocation provider. They're happy to help, but they can only support 10 kW per rack. Five racks would let you spread out the load while leaving headroom for additional hardware. But, of course, they'd charge you for all five racks and require a long-term contract for them…that's not going to work. So now what do you do?

This might seem like an extreme example, but research shows it will happen sooner than you think. Demands for greater power and cooling density are accelerating. According to the Uptime Institute, in 2020 the mean density of a rack was 8.4 kW, and 29% of survey respondents reported that the most common density in their data centers was more than 10 kW per rack, with one in ten reporting densities of over 20 kW per rack.[1] Overall, organizations reported an increase of 1 kW per rack in the past year alone. If that rate of growth continues, we'll be at a mean density of more than 12 kW per rack by 2025.
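
As a sanity check on that projection, here's the extrapolation spelled out (the 2020 baseline and 1 kW-per-year growth come from the survey figures above; treating the trend as linear is my simplification):

```python
# Linear extrapolation of mean rack density from the Uptime Institute figures:
# 8.4 kW mean in 2020, growing roughly 1 kW per year (assumed to stay linear).
base_year, base_density_kw = 2020, 8.4
growth_kw_per_year = 1.0

for year in range(2021, 2026):
    density_kw = base_density_kw + growth_kw_per_year * (year - base_year)
    print(f"{year}: ~{density_kw:.1f} kW per rack")
# By 2024 the mean crosses 12 kW; by 2025 it sits around 13 kW.
```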

Why is high density happening?

Three factors are driving it:

  1. Server manufacturers are building servers that consume more power. For example, a leading server manufacturer just released a flagship 2U server with dual 2,400 W power supplies.[2]
  2. As more organizations demand leading-edge analytics and AI-powered workloads, new technologies, including GPUs and DPUs, demand more power.[3]
  3. Real estate costs are skyrocketing, and colocation costs can be prohibitive, meaning the old "overbuild and oversubscribe" paradigm is going away. Organizations simply don't want to spend money on empty space.

If you talk to experienced data center leaders, many of them will privately (and not so privately) tell you that they don't have much headroom in the data center to cope with change. They'll also tell you that conventional power and cooling technologies will impede or block innovation sooner or later. For many, these problems already exist. For example, AFCOM sees situations where 1 kW per rack unit is becoming the new normal. Today's data centers cannot keep up because conventional power distribution and air cooling systems can't effectively handle more than 20 kW per rack.[4]

So we’re at an inflection point. What should we do?

Begin by understanding that high density is different for everyone. Your organization may decide that 15 kW per rack is high density; another organization, 40 kW. We also see situations where 60 kW per rack is what's needed to power a workload. You'll have to decide what high density means to you.

The second step is recognizing that conventional technologies won't get us where we need to go. A simple back-of-the-envelope calculation tells us that 40 kW of servers produces about 136,500 BTU/hour and would need chilled airflow of around 6,400 CFM. In a 24U space, that equates to airflow of almost 18 mph!
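
Spelled out (the conversion constants are standard; the 20°F supply-to-return temperature rise is an assumed, though typical, planning figure):

```python
# Heat load and airflow for a 40 kW rack.
# 1 kW dissipates ~3,412 BTU/hr; air-side sizing uses CFM = BTU/hr / (1.08 * dT).
# The 20 F supply-to-return temperature rise is an assumed, typical figure.
it_load_kw = 40
btu_per_hr = it_load_kw * 3412              # ~136,500 BTU/hr
delta_t_f = 20                              # assumed temperature rise, deg F
cfm = btu_per_hr / (1.08 * delta_t_f)       # ~6,300 CFM

print(f"{btu_per_hr:,} BTU/hr -> {cfm:,.0f} CFM at a {delta_t_f} F delta-T")
```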

To add to that point, not only are conventional technologies failing to keep up with data center infrastructure innovation, they're under intense scrutiny from regulators and customers. Bitcoin mining alone consumes more power than Argentina,[5] and last year Google used over 5 billion gallons of water to cool its data centers.[6] From an optics perspective, as well as a cost perspective, those facts make organizations uncomfortable.

Finally, you already know about a fundamental problem — that power and cooling choices last for decades because retrofits are disruptive and expensive. So if you’re going to be designing data centers that need to last for decades, you’d better pick technologies that fit long-term, high-density requirements.

High-density data centers demand new thinking

The first consideration is power. Conventional 208 V power is being supplanted, at least in the thinking of end users, by 415/240 V power. Data center designs built around this approach can cut hundreds of thousands of dollars from initial costs and ongoing operational costs.
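
A quick sketch of why the higher voltage helps (the 30 A branch circuit and 80% continuous-load derating are my assumptions, typical North American values, not figures from this post):

```python
# Usable power per three-phase branch circuit at two distribution voltages.
# P = sqrt(3) * V(line-to-line) * I * derate. The 30 A circuit and 0.8
# continuous-load derating are assumed, typical North American values.
import math

def circuit_kw(v_line_to_line, amps=30, derate=0.8):
    return math.sqrt(3) * v_line_to_line * amps * derate / 1000

print(f"208 V circuit: {circuit_kw(208):.1f} kW")   # ~8.6 kW
print(f"415 V circuit: {circuit_kw(415):.1f} kW")   # ~17.3 kW
```

Same conductors and breakers, roughly double the power per circuit, which is where much of the capital and operating savings comes from.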

Second, we see liquid cooling as the right approach to the challenges of high density. There's a reason why many server manufacturers are adding direct-to-chip liquid cooling to their designs. The old fear of water in the data center has proven groundless. Can you run a 30, 50, 60, or 80-kilowatt rack on chilled water? Of course you can.

Beyond supporting greater density, chilled water offers three advantages:

  • Flexibility. Organizations can choose from a range of approaches, including rear-door heat exchangers, chilled-water fan walls, and direct-to-chip cooling. For high-density requirements, mixing these approaches offers additional room for optimization.
  • Reduced use of refrigerants. As you know, refrigerants are under constant regulatory scrutiny and have been banned again and again, forcing organizations into costly, complicated chiller rip-and-replace projects to keep up with changing rules. Liquid cooling offers greater efficiency, reducing refrigerant use, and technologies like ours don't require refrigerants at all to keep data centers cool.
  • Reduced costs. Though liquid cooling can cost more initially, greater efficiency generally results in lower OpEx. And since liquid cooling offers more cooling headroom per rack, it delivers superior future-proofing, reducing the need for new equipment over time.

For these reasons, Nautilus decided to accelerate mainstream adoption of water-cooled data centers. Leveraging proven industrial cooling technologies, Nautilus brings a patented new approach to water-cooled data centers that can support demands for high-density infrastructure without the need for refrigerants or consumption of drinking water.

A data center cooled by Nautilus delivers high density sustainably

With a Nautilus-cooled data center, organizations can:

  1. Use closed-loop cooling that, like a power plant, returns a nominal, environmentally approved amount of waste heat to a nearby body of water, saltwater or fresh: lake, river, or ocean. Our approach consumes zero gallons of water. That's right. Zero.
  2. Avoid the high costs and real estate requirements of evaporative or vapor-compression refrigeration while achieving a PUE of 1.15 or less every day of the year (see the sketch after this list).
  3. Cool more data center equipment per rack than any air-cooled facility; our data center in Stockton offers 5x higher power density per rack.
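
For context, PUE (power usage effectiveness) is total facility power divided by IT power, so a PUE of 1.15 means only 15% overhead for cooling, power conversion, and everything else. A minimal sketch of that arithmetic (the 1 MW IT load is an arbitrary example figure):

```python
# PUE = total facility power / IT equipment power.
# The 1 MW (1,000 kW) IT load is an arbitrary example figure.
it_load_kw = 1000
pue = 1.15

total_facility_kw = it_load_kw * pue
overhead_kw = total_facility_kw - it_load_kw    # cooling, distribution, etc.

print(f"Total facility load: {total_facility_kw:,.0f} kW")
print(f"Non-IT overhead: {overhead_kw:,.0f} kW ({overhead_kw / it_load_kw:.0%} of IT load)")
```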

If you have a high-density data center requirement, exploring what we do might be the answer. To learn more about Nautilus Data Technologies or our production data center in Stockton, California, visit us at…

[1] https://drift-lp-66680075.drift.click/UptimeInstituteGlobalDataCenterSurvey2020

[2] https://www.delltechnologies.com/resources/en-us/asset/data-sheets/products/servers/dell-emc-poweredge-r750xa-spec-sheet.pdf

[3] https://www.business-standard.com/article/technology/it-takes-a-lot-of-energy-for-machines-to-learn-why-ai-is-so-power-hungry-120121600301_1.html#:~:text=AI%20is%20more%20computationally%20intensive,neurons%20in%20the%20human%20brain.

[4] https://datacenterfrontier.com/rack-density-keeps-rising-at-enterprise-data-centers/#:~:text=The%202020%20State%20of%20the,and%207.2%20kW%20in%202018.

[5] https://www.bbc.com/news/technology-56012952

[6] https://www.gstatic.com/gumdrop/sustainability/google-2020-environmental-report.pdf
