Behind the Scenes: Full Q&A with Patrick Quirk

As part of our most recent e-Book, 5 Things to Expect from Data Centers in the Next 5 Years, we leaned on the insights of our leaders. Not everything could make the final cut, but the full Q&A with our Chief Technology Officer, Patrick Quirk, was too good not to share. Patrick gives his take on the biggest opportunities, trends, and challenges facing data centers today:

Q: What do you think the biggest challenge is in the data center industry today?

A: There are probably several, but in the very near term, the single biggest issue is supply chain volatility and price escalation. It's becoming more and more difficult to commit to a timeline that people are used to at a price point that people are used to. This is industry-wide. In general, this is an industry that's gotten used to continually compressing the cost point and the time to delivery, and the last two, two and a half years have completely upended that. So, quite honestly, the industry is struggling to build out the capacity that is being demanded.

So that’s probably the single biggest challenge today. But, you know, everybody expects that will clear itself up, whether that’s over 12 months, 18 months, 24 months, you know, everybody has a slightly different opinion around that. I tend to think that it’s probably more in the 24 to the 36-month range before we start seeing things return to normal. And I do think that we are kind of out of the deflationary era of data center build-out so that it will now be inflationary, which is going to change some of the business models that people have used in the space for the better part of two decades, really. And then as far as the data centers themselves, the biggest issue there is power availability.

You’re starting to see a lot of areas that now have constrained power capacity, and the ability of any individual data center to guarantee that they have the utility quality they originally signed up for is getting tougher and tougher. This is a global problem: it’s now seen in pockets where there’s been significant buildout. In Ashburn, Virginia, for example, there’s now a significant power constraint that’s come in there. I’m not sure how people are responding to it, but given that it is the single largest data center market in the world, and now there are power constraints being applied, that’s definitely something that’s catching people’s attention.

You've got Singapore, Ireland, the Netherlands, and multiple other countries that have put pseudo-moratoria on new data center buildouts, both for power availability and for other environmental concerns. And those things are becoming more and more frequent. Silicon Valley is another one: it's basically out of power capacity in the core of Silicon Valley for at least three to five years. And so, those power capacity and power distribution buildouts are lagging significantly behind demand. So, I would say those are the two biggest challenges, one for the industry as a whole and the other for the operator side of it.

Q: That being said, what companies do you think are most at risk over the next decade?

A: I think it is a combination of the equipment suppliers and the integrators, because there's been such heavy reliance on both of them in the buildout path. The inflationary pressures, the lack of regularity around delivery timing, and what's happening on the inventory and supply chain side could really put some of the product and integration companies at risk. And if that happens, it's only going to exacerbate some of the supply chain issues that we're already seeing.

So, the product companies are rightly going to start sizing their operations to what they can actually deliver. That may further exacerbate the supply chain crisis into the data center that we already have, and cause downstream effects on the integrators; it's even possible that some of the product providers will falter. As for the actual data center operators, there are always going to be one or two that run into issues. But for the most part, the demand is still there. Pricing hasn't gone down to a point where it's affecting overall business models or viability of cash flow or anything like that. It really is more on the supplier side.

Q: Do you think there are specific types of companies with data centers, which is the vast majority of them at this point, that are going to be more at risk? We've heard things about the healthcare industry being more impacted by changes in the data center industry, anything like that?

A: For the healthcare industry, there are two aspects of their data center buildout. One is the patient record side, which is protected by HIPAA and other rules. As long as their suppliers are meeting those requirements, there shouldn't be that much impact on their outsourced aspects. Now, where they've insourced and built their own data centers, most of that buildout happened starting about 15 years ago through about four or five years ago. So their capital equipment is getting a little longer in the tooth than everything else. If they start having issues with the reliability of power lineups or cooling equipment, that is a sector that could be exposed.

So really, it's any sector that built out between 2000 and 2012 to 2014; their equipment is now reaching that 10-to-20-year age range. If they haven't done continuous refurbishments, or moved to more of a hybrid strategy, whether that's private and public cloud or a mix of colo versus insourced, then it's really anyone that owns their own data centers and whose buildouts were more than eight to ten years ago. They're the ones that will be most at risk from a data center operations perspective.

Q: This sort of plays into some of the things you’ve already talked about, but what technology is going to impact data centers the most over the next decade? The good and the bad.

A: So one of the people that we work with, Gabe, he jokes that the day of increasing density in racks is always next year. And he said it’s been next year for the last 20 years, right? And you really haven’t seen a significant shift in the mean rack power density. But we’re now reaching the point from a semiconductor perspective that if they truly want to be able to get the power and performance out of the servers that they want, there’s going to have to be a shift to higher density, which then is going to force people to start moving into liquid cooling. Because it’s rapidly reaching the point where the Intels, the AMDs, the NVIDIAs of the world, they can’t keep relying on designs that are only air cooled.

If they were to stop there, they'd basically stop being able to innovate, and we know that's not going to happen. So effectively, you can say that Moore's Law is going to force the server OEMs, or even anyone building a custom server, to start moving to much higher densities. We will actually start seeing that average rack density increase fairly rapidly over the next five to seven years. And that is going to create a knock-on effect: a lot of the data centers that were built out over the last 15 to 20 years just can't be retrofitted to support the kind of power densities and the use of liquid cooling that is definitely coming down the pipe. The root of it is Moore's Law, but the end result is a shift from air-based to water-based cooling, and the industry in general is not ready for that.
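As a rough back-of-the-envelope sketch of why density forces that cooling shift; note that all the kW figures and the per-rack air-cooling ceiling below are illustrative assumptions, not vendor or measured data:

```python
# Illustrative only: rack heat load vs. an assumed practical air-cooling ceiling.
AIR_COOLING_CEILING_KW = 20  # assumed per-rack limit for air cooling, for illustration

def rack_heat_kw(servers_per_rack: int, watts_per_server: float) -> float:
    """Total heat a rack must reject, in kW (power in equals heat out)."""
    return servers_per_rack * watts_per_server / 1000

# A legacy 1U fleet vs. a dense accelerator rack (hypothetical numbers):
legacy = rack_heat_kw(servers_per_rack=40, watts_per_server=350)   # 14 kW
dense = rack_heat_kw(servers_per_rack=8, watts_per_server=5000)    # 40 kW

for name, kw in [("legacy rack", legacy), ("dense rack", dense)]:
    method = "air is plausible" if kw <= AIR_COOLING_CEILING_KW else "liquid cooling needed"
    print(f"{name}: {kw:.0f} kW -> {method}")
```

The point of the sketch is simply that every watt drawn becomes heat that must be rejected, so once per-rack power passes whatever ceiling air handling can realistically serve, the cooling medium has to change.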

Q: So what do you think will impact data center evolution most aside from just the technology side?

A: Government regulation. Yep, it’s coming. And we’ve been saying for a while now that if the industry doesn’t get out ahead of this, then you will start seeing government regulations start coming in. And, when that happens, there is typically a stifling of innovation around solving problems because politicians are politicians, right? And they’re gonna go in and they’re gonna build regulations that, you know, box everything in one particular way. And that will have a crippling effect on this industry if we’re not careful.

Q: It’s already happening, for sure. I mean, there’s already regulations that you’ve talked about.

A: So a great example of that would be Ireland. If you look at what's happened in Ireland in less than two years: they put a de facto pause on approving new data center buildouts, then came out with a policy from the regulator, and now the power generation, distribution, and connection entities are executing on that. The net result is a dramatic change in what data centers have to do in order to operate in Ireland. Ireland is sticking to its 2030 carbon-content goals for power generation, and it's not going to bend on them. It's created a lot of angst for people who had data centers planned in Ireland: now what are they gonna do with that capacity? Dublin is very similar to Ashburn; it is the central hub for data centers in Europe. Now everybody's having to go somewhere else or change how they approach building a data center if they're gonna stay in Ireland. And it's going to get replicated in more and more places.

Q: I'm giving a caveat to this question, because obviously we can't predict climate change and what it will do to the planet. We have a general understanding, but how do you think that issue is going to impact data centers over the next 10 years? We've talked about temperatures and water levels rising. Anything else you might want to add?

A: Yeah, and I like the way that you put the caveat on that question. Because what everybody needs to do is just step back and say: okay, when you don't know what's gonna happen, what's the first thing that you do? You go back to first principles, right? So in this case, anything that we can do to improve our power efficiency is going to naturally have a knock-on effect on carbon emissions and everything else. It really comes back down to how you can be more efficient in your usage of whatever it is. Whether that's fuel for your generators, or the amount of power it takes to cool your data center, or the means by which you're cooling your data center, or the physical materials you're using in the buildout of your data center. What is its physical footprint? Are you using concrete that can do carbon capture?

So if everybody just steps away from the ledge for a minute and says: okay, whether the seas are going to rise or the oceans are going to boil, or whatever it is, we know that there is demand for the services that are provided through data centers. So what are the things that we, as data center designers, builders, owners, and operators, can do to adhere to those first principles and be the most efficient we can possibly be with the resources we're allocated? I think that if everybody just took that approach, as opposed to the "oh, here's a shiny 1.2 trillion coming from whatever government entity," and went back and said: you know, the data center footprints that we've been building are wrong, right? Trying to use air to cool something that is generating that much heat is wrong, right?

Just go back to those first principles and ask: how can we be more efficient with the resources we're allocated? That will start solving downstream problems a whole lot faster than going in and buying carbon credits, or doing PPAs with wind farms, or putting solar in everywhere. The return on those investment dollars isn't nearly as high as what you'd get if you just shrank your footprint.

Q: If you had to name the two most important things that companies need to be paying attention to when it comes to their data centers, what would they be?

A: So, at the Schneider sustainability innovation conference last week out in Vegas, I heard another data center owner put something out there that I've always intuitively thought about but never heard anybody articulate. His comment was: of the 40,000 photos that you have stored, how many of them really need to be in a resilient data center? Do they all need to be sitting in a storage facility with five-nines availability? Probably not. The knock-on to that is that the first thing everybody needs to start thinking about is: what is the appropriate place to be putting certain types of compute or storage?

Do you really need 100% uptime or N+1 resiliency? If there's maintenance on some part of it and it goes away for a little while, it's not that big of a deal. So there's this whole idea of tiered storage or tiered compute, where the criticality of the data or processing is aligned with the true end need. A great example would be a doctor performing telemedicine surgery on somebody. The data going across is hugely critical and should be placed in the most resilient, highest-performing data center, and the information should transit the highest-priority, most resilient communication path. But the photos of Fluffy when it was a puppy eight years ago are not at the same level of data criticality. So we shouldn't be treating all data and compute the same. To me, that is by far number one. If we go back to first principles and ways that we can improve what we're doing: storage of data and processing of information need to be aligned to the criticality of that data, and today they are not.
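The "five nines" framing above can be made concrete with a quick availability calculation. The mapping of workloads to availability targets below is an illustrative assumption, not a prescription:

```python
# Annual allowable downtime implied by an availability target.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # 525,960 minutes

def downtime_minutes_per_year(availability: float) -> float:
    """Minutes per year a service at this availability is allowed to be down."""
    return (1 - availability) * MINUTES_PER_YEAR

# Hypothetical tiers: match the resiliency you pay for to the data's criticality.
tiers = {
    "telemedicine session": 0.99999,  # "five nines": about 5.3 min/year
    "business apps": 0.999,           # about 8.8 hours/year
    "old pet photos": 0.99,           # about 3.7 days/year
}

for workload, a in tiers.items():
    print(f"{workload}: {a} availability -> "
          f"{downtime_minutes_per_year(a):.1f} min/year allowed downtime")
```

The gap between tiers is the point: each extra nine is roughly a tenfold cut in allowable downtime, and paying for that across all 40,000 photos is what the tiering argument pushes back on.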

Number two is, when you're making a decision about where to place a particular application: if we were to go back 15 to 20 years ago, the most important person in the data center industry was the real estate guy. Because you had to go buy land, you had to get all the permitting, you had to build a facility, you had to make all these choices. So it really all rolled back to the real estate person being the most important, then the facilities people, et cetera.

Now it's the application owner. Now I, as the application owner, get to decide: am I going to build my own data center? Am I going to go to colocation? Am I going to do hybrid cloud? Am I going to go to public cloud? There's a whole scale of options across there. As people look at that, making the choice that best aligns with the utilization of the resources is, to me, the second most important thing. So: what are you doing with your data and compute, and then what is the right platform to put your application or compute need onto? Each one of those slices, whether it's hyperscale cloud, colocation, public or private cloud, your own enterprise data center, whatever it happens to be: when you're making those choices, that has a huge impact on the overall sustainability of your choices.

Q: What do you think the future of the data center looks like?

A: A place that people don’t go.

Q: Fully automated, right?

A: So, whether it ends up being robots moving around and putting new computer cartridges in, or whatever, data centers were built for people to be in. All the standards and everything that we still have today assume that there are going to be people inside of them. And the chips and the storage and the servers and everything else really don't have the same environmental requirements that we do. We're a heck of a lot more fragile than they are. And it's almost a pendulum thing, because if you go and look at the original mainframes, and even the mainframes that IBM still builds today, they are self-contained. People don't go in them except when they shut them down to work on them.

So data centers, to me, are going to evolve away from these big, huge corridors with half a million square feet that people walk up and down with roller carts and whatever. They're gonna move into relatively compact, small spaces with close-coupled cooling, more than likely liquid-based, because that's the only way you're gonna be able to get that density. And they'll be serviceable by a robot or whatever it happens to be, or you leave the components in there and just tag them until they die; then once failures reach a certain percentage, you go and pick up that whole module and take it out. That is what data centers will become. And I think that future is not as far away as we think it might be.

Q: And finally, what are you personally most excited about when it comes to data center evolution and the changes that kind of get you excited about the future?

A: So, in a lot of ways it ties together everything I've been saying here. Today you have chip manufacturers building the silicon behind a server or a storage array or whatever it happens to be, and they're trying to hit a really broad swath of potential applications. You're starting to see people go in and say: no, no, no, for this type of application, this is the best silicon to build. Then you've got the system-level guys putting it together, whether it's a server or a server array or a compute array, whatever it is. They're having to take that silicon, package it up, and build it up, but they still think in terms of one box. How do I sell one box?

And then 42 of those boxes are put together into a rack that goes into a data center. By the time you reach the data center as a system, so many trade-offs have been made, because the IT guy needs to be able to sell into a lot of applications. The OEM that's building the server has to think about all the applications they're gonna sell it into, and not every single one is gonna be a dense form factor. So to me, this next evolution is going to be the data center as a system, where the silicon, the software, the physical hardware it's built into, and the housing, that entire ecosystem, is built with a single system-level design approach. Because there are trade-offs you would make doing that, that you wouldn't make when each one of those layers is trying to hit a broader swath of the market. And that's where it's heading, because the compute requirements in large-scale data centers now are much different than what's needed, say, at the edge. The amount of computing in our phones now blows away what servers were less than 10 years ago. And we're still building servers the way we were building them 20 years ago. So that has to change, and we need to start thinking about the data center as a system, and do the design from the silicon and software all the way up.


Patrick Quirk

Patrick Quirk is a business and technology executive who specializes in operations management, strategic partnerships, and technology leadership in data center, telecommunications, software, and semiconductor markets. Prior to joining Nautilus, he spent the past year working with small businesses and non-profits on survival and growth strategies in addition to PE advisory roles for critical infrastructure acquisitions. Quirk was the President of Avocent Corp, a subsidiary of Vertiv, the Vice President and General Manager for the IT Systems business, and the VP/GM of Converged Systems at Emerson Network Power, providing data center management infrastructure for data center IT, power, and thermal management products. He has held numerous global leadership roles in startups and large multinational companies including LSI and Motorola in the networking and semiconductor markets.
