As part of our most recent e-Book, 5 Things to Expect from Data Centers in the Next 5 Years, we leaned on the incredible insights from our leaders. While not everything could make the final cut, the full Q&A with our Chief Technology Officer Patrick was too good not to share. Patrick gives his take on the biggest opportunities, trends, and challenges facing data centers today:
Q: What do you think the biggest challenge is in the data center industry today?
A: There are probably several, but in the very near term, the single biggest issue is supply chain volatility and price escalation. It’s becoming more and more difficult to commit to a timeline that people are used to at a price point that people are used to. This is industry-wide. In general, this is an industry that’s gotten used to continually compressing the cost point and the time to delivery, and the last two, two and a half years have completely upended that. So, quite honestly, the industry is struggling to build out the capacity that is being demanded.
So that’s probably the single biggest challenge today. But everybody expects that it will clear itself up, whether that’s over 12 months, 18 months, or 24 months; everybody has a slightly different opinion on the timing. I tend to think it’s probably more in the 24-to-36-month range before we start seeing things return to normal. And I do think that we are out of the deflationary era of data center build-out, so it will now be inflationary, which is going to change some of the business models that people have used in the space for the better part of two decades. And then as far as the data centers themselves, the biggest issue there is power availability.
You’re starting to see a lot of areas that now have constrained power capacity, and it’s getting tougher and tougher for any individual data center to guarantee the utility quality they originally signed up for. This is a global problem, and it shows up in pockets where there’s been significant buildout. In Ashburn, Virginia, for example, a significant power constraint has now come in. I’m not sure how people are responding to it, but given that it is the single largest data center market in the world, and now power constraints are being applied, that’s definitely something that’s catching people’s attention.
Singapore, Ireland, the Netherlands, and multiple other countries have put pseudo-moratoria on new data center buildouts, both over power availability and over other environmental concerns. And those things are becoming more and more frequent. Silicon Valley is another one: the core of Silicon Valley is basically out of power capacity for at least three to five years. Those power capacity and power distribution buildouts are lagging significantly behind demand. So, I would say those are the two biggest challenges, one for the industry as a whole and the other for the operator side of it.
Q: That being said, what companies do you think are most at risk over the next decade?
A: I think it is a combination of the equipment suppliers and the integrators, because there’s been such heavy reliance on both of them in the buildout path. The inflationary pressures, the lack of regularity around delivery timing, and what’s happening on their inventory and supply chain side could really put some of the product and integration companies at risk. And if that happens, it’s only going to exacerbate some of the supply chain issues that we’re already seeing.
So the product companies are rightly going to start sizing their operations to what they can actually deliver. That may further exacerbate the supply chain crisis into the data center that we already have, and either cause downstream effects on the integrators, or it’s even possible that some of the product providers will falter. As for the actual data center operators, there are always going to be one or two that run into issues. But for the most part, the demand is still there. The pricing hasn’t gone down to a point where it’s affecting overall business models or viability of cash flow or anything like that. It really is more on the supplier side.
Q: Do you think there are specific types of companies with their own data centers, which is the vast majority of them at some point, that are going to be more at risk? We’ve heard things about the healthcare industry being more impacted by changes in the data center industry, anything like that?
A: For the healthcare industry, there are two aspects to their data center buildout. One is the patient record side, which is protected by HIPAA and other rules. So as long as their suppliers are meeting those requirements, there shouldn’t be that much impact on their outsourced aspects. Now, where they’ve insourced and built their own data centers, most of that buildout happened starting about 15 years ago and running through about four or five years ago. So their capital is actually getting a little longer in the tooth than everything else, and if they start having issues with the reliability of their power lineup or cooling equipment, that is a sector at risk.
So really it’s any sector that built out between 2000 and 2012-2014; their equipment is now reaching that 10-to-20-year age range. If they haven’t done continuous refurbishments or gone to more of a hybrid strategy, whether that’s private and public cloud or a mix of colo versus insourced, then anyone that owns their own data centers and whose buildouts were more than eight to ten years ago is the most at risk from a data center operations perspective.
Q: This sort of plays into some of the things you’ve already talked about, but what technology is going to impact data centers the most over the next decade? The good and the bad.
A: One of the people we work with, Gabe, jokes that the day of increasing density in racks is always next year. And he says it’s been next year for the last 20 years, right? And you really haven’t seen a significant shift in the mean rack power density. But we’re now reaching the point, from a semiconductor perspective, where if they truly want to get the power and performance out of the servers that they want, there’s going to have to be a shift to higher density, which is then going to force people to start moving into liquid cooling. Because it’s rapidly reaching the point where the Intels, the AMDs, the NVIDIAs of the world can’t keep relying on designs that are only air cooled.
If they were to stop there, they’d basically stop being able to innovate, and we know that’s not going to happen. So effectively you can say that Moore’s Law is going to force the server OEMs, or even anyone building a custom server, to start moving to much higher densities. We will actually start seeing that average rack density increase fairly rapidly over the next five to seven years. And that is going to create a knock-on effect: a lot of the data centers that were built out over the last 15 or 20 years just can’t be retrofitted to support the kind of power densities and liquid cooling that is definitely coming down the pipe. The root of it is Moore’s Law, but the end result is a shift from air-based to water-based cooling, and the industry in general is not ready for that.
Q: So what do you think will impact data center evolution most aside from just the technology side?
A: Government regulation. Yep, it’s coming. We’ve been saying for a while now that if the industry doesn’t get out ahead of this, then you will start seeing government regulations come in. And when that happens, there is typically a stifling of innovation around solving problems, because politicians are politicians, right? They’re gonna go in and build regulations that box everything in one particular way. And that will have a crippling effect on this industry if we’re not careful.
Q: It’s already happening, for sure. I mean, there are already regulations that you’ve talked about.
A: So a great example of that would be Ireland. If you look at what’s happened in Ireland in less than two years: they put a de facto pause on approving new data center buildouts, then the regulator came out with a policy, and now the power generation, power distribution, and power connection entities are executing on it. The net result is a dramatic change in what data centers have to do in order to operate in Ireland. Ireland is sticking to its 2030 carbon content goals for power generation, and it’s not going to bend on them. That’s created a lot of angst for people who had data centers planned in Ireland, and now what are they gonna do with that capacity? Dublin is very similar to Ashburn; it’s the central data center hub for Europe. Now everybody’s having to go somewhere else or change how they approach building a data center if they’re gonna stay in Ireland. And it’s going to get replicated in more and more places.
Q: I’m giving a caveat to this question, because obviously we can’t predict climate change and what it will do to the planet. We have a general understanding, but how do you see that issue impacting data centers and their evolution over the next 10 years? We’ve talked about temperatures and water levels rising. Anything else you might wanna add there?
A: Yeah, and I like the way that you put the caveat on that question, right? Because what everybody needs to do is just step back and say, okay, when you don’t know what’s gonna happen, what’s the first thing that you do? You go back to first principles, right? So in this case, anything that we can do to improve our power efficiency is naturally going to have a knock-on effect on carbon emissions and everything else. It really just comes back down to how you can be more efficient in your usage of whatever it is. Whether that’s fuel for your generators, or the amount of power it takes to cool your data center, or the means by which you’re cooling your data center, or the physical materials you’re using in the buildout of your data center. What is its physical footprint? Are you using concrete that can do carbon capture?
So if everybody just steps away from the ledge for a minute and says, okay, whether the seas are going to rise or the oceans are going to boil, or whatever it is: we know that there is demand for the services that are provided through data centers. So what are the things that we, as data center designers, builders, owners, and operators, can do to adhere to those first principles and be the most efficient we can possibly be with the resources we’re allocated? I think everybody should take that approach, as opposed to chasing the shiny $1.2 trillion coming from whatever government entity, and just go back and say: the data center footprints that we’ve been building are wrong, right? Trying to use air to cool something that is generating that much heat is wrong, right?
Just go back to those first principles and ask, how can we be more efficient with the resources we’re allocated? That will start solving downstream problems a whole lot faster than buying carbon credits, doing PPAs with wind farms, or putting solar in everywhere. The return on those investment dollars is not nearly as much as you would get if you just shrank your footprint.
Q: If you had to name the two most important things that companies need to be paying attention to when it comes to their data centers, what would they be?
A: At the Schneider sustainability innovation conference last week out in Vegas, I heard another data center owner put something out there that I’ve always intuitively thought about but never heard anybody articulate. His comment was: of the 40,000 photos that you have stored, how many of them really need to be in a resilient data center? Do they all need to be sitting in a storage facility with five-nines availability? Probably not. The knock-on to that is that the first thing everybody needs to start thinking about is: what is the appropriate place to be putting certain types of compute or storage?
Do you really need 100% uptime or N+1 resiliency? If there’s maintenance on some part of it and it goes away for a little while, it’s not that big of a deal. So that whole idea of tiered storage or tiered compute, where the criticality of the data or processing is aligned with the true end need. A great example would be a doctor doing telemedicine surgery on somebody. The data that’s going across is hugely critical and should be placed in the highest-resiliency, highest-performing data center, and the information should be carried over the highest-priority, most resilient communication path. But the photos of Fluffy when it was a puppy eight years ago are not of the same level of criticality. So we shouldn’t be treating all data and compute the same. To me, that is by far number one. If we go back to first principles and ways that we can improve what we’re doing: the storage of data and the processing of information need to be aligned to the criticality of that data, and today they are not.
Number two is where you place a particular application. If we were to go back 15 to 20 years, the most important person in the data center industry was the real estate guy. You had to go buy land, you had to get all the permitting, you had to build a facility, you had to make all these choices. So it really all rolled back to the real estate person being the most important, and then the facilities people, et cetera, et cetera.
Now it’s the application owner. Now I, as the application owner, get to decide: am I going to build my own data center? Am I going to go to co-location? Am I going to do hybrid cloud? Am I going to go to public cloud? There’s a whole scale of options across there. As people are looking at that, making the choice that best aligns to the utilization of the resources is, to me, the second most important thing. So what are you doing with your data and compute, and then what is the right platform to put your application or compute need onto? Each one of those slices, whether it’s hyperscale cloud, co-location, public or private cloud, your own enterprise data center, whatever it happens to be, when you’re making those choices, that has a huge impact on the overall sustainability of your choices.
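As a quick illustration of that criticality-to-placement idea, here is a minimal, hypothetical sketch in Python. The tier names, example workloads, redundancy levels, and availability targets are illustrative assumptions made up for this example, not a standard and not anything Patrick’s team uses.

```python
# A minimal, hypothetical sketch of matching workload criticality to placement.
# All tiers, workloads, and targets below are illustrative assumptions.

from dataclasses import dataclass
from enum import Enum


class Criticality(Enum):
    LIFE_SAFETY = 3   # e.g., a live telemedicine surgery stream
    BUSINESS = 2      # e.g., transactional business systems
    CONVENIENCE = 1   # e.g., eight-year-old photos of Fluffy


@dataclass
class Placement:
    facility: str       # class of data center / platform
    redundancy: str     # power and cooling resiliency target
    availability: str   # rough uptime objective


# Illustrative mapping: the most critical workloads get the most resilient
# (and most expensive) footprint; low-criticality data can sit on cheaper,
# less redundant infrastructure without hurting the true end need.
PLACEMENT_BY_CRITICALITY = {
    Criticality.LIFE_SAFETY: Placement("high-resiliency colo or owned DC", "2N", "99.999%"),
    Criticality.BUSINESS:    Placement("standard colo or public cloud", "N+1", "99.9%"),
    Criticality.CONVENIENCE: Placement("low-cost cold storage", "N", "best effort"),
}


def place(workload: str, criticality: Criticality) -> Placement:
    """Return and print the illustrative placement for a workload."""
    target = PLACEMENT_BY_CRITICALITY[criticality]
    print(f"{workload}: {target.facility} ({target.redundancy}, {target.availability})")
    return target


if __name__ == "__main__":
    place("telemedicine surgery stream", Criticality.LIFE_SAFETY)
    place("photos of Fluffy", Criticality.CONVENIENCE)
```

The point of the sketch is simply that the placement decision is driven by the criticality of the data or processing, rather than treating everything as if it needs five-nines resiliency.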
Q: What do you think the future of the data center looks like?
A: A place that people don’t go.
Q: Fully automated, right?
A: So, whether it ends up being robots moving around and putting new computer cartridges in, or whatever, data centers were built for people to be in. All the standards and everything that we still have today assume that there are going to be people inside of them. And the chips, the storage, the servers, and everything else really don’t have the same environmental requirements that we do. We’re a heck of a lot more fragile than they are. It’s almost a pendulum thing, because if you go and look at the original mainframes, and even the mainframes that IBM still builds today, they are self-contained. People don’t go in them except when they shut them down to work on them.
So data centers, to me, are going to evolve away from these big, half-a-million-square-foot corridors that people walk up and down with roller carts and whatever. They’re gonna move into relatively compact, small spaces with close-coupled cooling, more than likely liquid-based, because that’s the only way you’re gonna get that density. And they’ll be serviceable by a robot or whatever it happens to be, or you leave them in there and just tag them until they die. Then, once it reaches a certain percentage, you go and pick up that whole module and take it out, right? That is what data centers will become. And I think that future is not as far away as we think it might be.
Q: And finally, what are you personally most excited about when it comes to data center evolution? What changes get you excited about the future?
A: In a lot of ways, it ties together everything I’ve been saying here. Today you have chip manufacturers building the silicon behind a server or a storage array or whatever it happens to be, and they’re trying to hit a really broad swath of potential applications, but you’re starting to see people go in and say, no, no, no, for this type of application, this is the best silicon to build. Then you’ve got the system-level guys who have to take that silicon and package it up and build it up into a server or a server array or a compute array, whatever it is, but they still think in terms of one box. How do I sell one box?
Then 42 of those boxes are put together into a rack that goes into a data center. And by the time you reach that data center as a system, so many trade-offs have been made, because the IT guy needs to be able to sell into a lot of applications, and the OEM that’s building the server has to think about all the applications they’re gonna sell it into. Not every single one is gonna be a dense form factor. So to me, the next evolution is the data center as a system: the silicon, the software, the physical hardware it’s built into, and the housing, that entire ecosystem built with a single system-level design approach. There are trade-offs you would make doing that that you wouldn’t make if each one of those layers had to hit a broader swath of the market. And that’s what it’s becoming, because the compute requirements in large-scale data centers are now much different from what’s needed at the edge. The amount of computing in our phones now blows away what servers were less than 10 years ago. And we’re still building servers the way we were building them 20 years ago. So that has to change, and we need to start thinking about data centers as a system and do the design from the silicon and software all the way up.