George Rockett: Jim, let’s kick it off. What’s happening in the market from your perspective in relation to Moore’s Law?
We're in a very exciting time with the end of Moore's Law, as opposed to a troubling time, because there is a global innovation race going on for a whole new category of chips and servers. It's very exciting. The first big steps of that are what you see coming out of NVIDIA and Google and Apple and others as they create silicon configurations, but there's a lot more behind that that we're not seeing yet, because it's all proprietary. This is as exciting as perhaps things were 35 years ago. So I think the original pioneers are now mentoring people like my young son on a whole new breed of computing, which comes with a whole new breed of performance. And as long as electrons are there to flip bits, that performance will require energy. It will also generate heat, which means that I and my fellow panelists today sit astride the ability to unleash the future of this new generation of computing, which, as we get into this discussion, is going to open the door for probably the single greatest leap forward in sustainability that any sector can deliver in addition to its core function.
George Rockett: Wonderful. So you're saying, with the 35 years, that this is like a big new door. And I'm interested that the end of the road for Moore's Law doesn't seem daunting to you. Can you give a quick explanation of how that fits in with this?
Yeah, so Moore's Law, specifically defined, simply relates to how many transistors you can squeeze onto a single chip. We're not at the physical end of that in terms of our ability to do it, but the demand for compute is going up, right? We have saturated human-to-human computing requirements with what we're doing right now, but Applied Materials recently released their own study projecting 50x demand growth in data processing and storage between now and 2030, all driven by machine-to-machine interaction, driven by the manifold sensoring of the world. Now, with Moore's Law stalled, 50x demand growth means 50 times more of today's servers, which is not economically, technically, and certainly not environmentally sustainable.
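As a rough illustration of what "50x by 2030" means as a growth rate: the exact time window of the projection isn't stated in the conversation, so the nine-year span below is an assumption for the arithmetic only.

```python
# Back-of-envelope: what annual growth rate does "50x by 2030" imply?
# Assumes the projection spans roughly 2021-2030 (9 years) -- the exact
# window isn't stated, so treat the result as illustrative.

def implied_cagr(multiple: float, years: int) -> float:
    """Compound annual growth rate implied by an overall growth multiple."""
    return multiple ** (1 / years) - 1

growth = implied_cagr(50, 9)
print(f"Implied annual growth: {growth:.0%}")  # roughly 54% per year
```

Even if the window is a few years longer or shorter, the implied rate stays far above what transistor scaling alone has delivered, which is the speaker's point about needing new approaches.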
So we have that market driver: 50x growth in the demand for compute, which is wildly socially and environmentally valuable. We're in it right now. You can't see it yet, really, but we're in this next stage of innovation and invention on the hardware side, driven by dramatic innovations on the software side. I used to be with C3.ai, and they and others are bringing software packages to bear on these new integrations. They require a massive amount of compute, but they deliver, as my Irish forebears would say, ginormous benefits in terms of economics and human welfare. So we're in it right now. We don't really know that we're in it, but we are. And, you know, look at Ray Kurzweil: The Singularity Is Near. I'm feeling like the singularity is here.
George Rockett: Peter Poulin, can you give us your broader context? What's already broken, what's going to break, where are we at, and what are your clients saying to you about this point that we're in?
Well, I think Jim hit on a number of these items, right? When we think about Moore's Law coming to the end of the road, you could argue from a transistor perspective that's true, right? This notion of compute power doubling every 18 months. But if you look at the new applications that are coming out, driving more and more powerful systems, artificial intelligence, virtual reality, augmented reality, a lot of GPU-based computing for cloud gaming, et cetera, what you're starting to see in those application areas is almost a replication of Moore's Law, but it's coming in the form of density per rack. And we're seeing the level of density in those racks escalate very, very quickly. We've learned that conventional air cooling is going to struggle with that, because, as Jim said, those electrons mean energy, which means heat, and harvesting those BTUs becomes very, very difficult with conventional forms of cooling.

So from our perspective, that's certainly what we are seeing as it relates to Moore's Law. Now, there are a number of other variables that have made these alternatives non-conventional. I would argue they're not so non-conventional; maybe they were when I started around here. But we are flipping the conventional on its head, right? To what should be conventional moving forward. I think this is one of the key elements driving things. Jim mentioned it in our pre-call, George: is this really unconventional, or is it really taking conventional techniques and moving them into an industry that can now benefit from them? I'll let Jim explain; I don't want to take credit for Jim's thinking on this, but I thought that was a very interesting way to pose it.
George Rockett: Yeah, I agreed with you on the day we prepped, when Jim said it. But I think that part of today is to challenge that whole notion. Jim, what was it you said precisely on the day? It was very well put.
Well, we have one of the most advanced sectors in the world using some of the most outdated techniques for dealing with the rejection of heat. And what I said is: if you take what is conventional in every other sector of the world for how they handle the rejection of heat, and you apply it to the data center sector, you're actually applying conventional approaches conventionally into the data center world. To unpack that a little bit: my background is in industrial sustainability, so I have had the great privilege of seeing everything in the world made, at the ground level. Every other sector of the world that generates heat, thermal power plants, industrial processing facilities, petrochemical facilities, every ship that plies the seas carrying every product that moves around the world, they all use liquid, typically water, as the mechanism for rejecting heat. It is actually unique to the data center sector that this is not the widely deployed methodology.
And so, where I sit, what Nautilus is doing by using water-to-water heat exchange is terrifically conventional. We just have to open the eyes of the practitioners of the data center sector to understand that and be comfortable with it. Think about it: a thermal power plant, a nuclear power plant, is creating 300 megawatts to a gigawatt of heat, and that gets cooled. What we're doing in the data center sector is actually small ball compared to that. So we can confidently apply both the techniques Nautilus is using to reject the heat and the techniques Pete and JD and the other folks are using inside the racks and on the servers to get heat off the machines. We can confidently employ these conventional methodologies in very engineered ways. That's the issue. We can do it with electrical or mechanical engineering. We're just innovating in this space with methodologies and principles that are already well crafted.
George Rockett: Okay, well, we're going to go into that in a little bit of time. I'll just bring JD into the conversation. You've got a lot of contact with the chip manufacturers and people like that, and it was alluded to at the beginning of the conversation that there are a lot of things being worked on. What is your perspective on the chip manufacturers and where they're at at the moment?
Yeah, certainly. First of all, I'm so pleased and honored to be here with Jim and Pete. Both are great innovators in this space. Peter, you came in early on; you're one of the OGs, as they call it. My son calls it the OG, right? Both you and Jim, so I'm honored to be here. I agree completely: what we're doing with respect to cooling is done in every other industry, including the biopharmaceutical industry and the pharma industry. It is very conventional stuff. Bioreactors, for example, are tanks that use fluids, water and the like, in cooling jackets. They don't use air; that would be ridiculous. So what we're doing is just applying this to the industry that desperately needs it. Now, with respect to where we are in it, here's my perspective.
I agree completely that we're in it. That ship left the port years ago, probably when Pete started jumping into the business and Jim was thinking about Nautilus; that ship was fueling up and heading out. And I'd say in the last three to four years it's out to sea. The reason I say that absolutely confidently is because, like I'm sure both Peter and Jim do, I have the opportunity to sit across the table, side by side, as recently as yesterday, with people like the folks at Dell OEM and Intel and all of the folks who are innovating in this space. I talk about the three legs of the stool, right? How important it is that the OEM, on the chip
and the architecture, continues to innovate on densities: getting PCB boards that are hybrids of GPU, CPU, and storage all combined together, and getting 500-watt chips, which were unheard of. Then you've got the data center innovation and the adoption of these types of technologies. And then folks like Peter and Jim and ourselves here at TMG Core, who are innovating on that third leg of the stool, which is making sure we have heat dissipation technologies that make sense. Everybody calls it cooling, but it's all about heat dissipation in high-density applications.
It needs to be immersion, either single-phase or two-phase, and that's where their development is going. And that's just one example of dozens of examples of the leaders in the space who are innovating, companies like AMD and NVIDIA and others. So we see what's happening.
Either you're going to innovate with them, or that innovation is going to stop. So we're coming together at an inflection point, where the technologies like what GRC is doing with Pete, what Jim is doing with water cooling at Nautilus, and what TMG Core is doing with two-phase are coming together. We're starting to match the pace of those companies, like Intel, like Dell, and others, who are developing the chipsets and the PCB boards and all of those things, and it's starting to mesh. As that happens, you're going to get this massive wave of innovation and adoption over the next three years, where we're going to be able to do things that nobody ever thought we could do.
That means having a completely decentralized, data-center-less type of application at the edge. And that ship left a while ago; now it's accelerating. Peter, you brought it up: it's like another iteration of Moore's Law, and it's really the acceleration of densities at the rack level, right? We're flying away from the chip and into the architecture and into the densities. So Moore's Law has a lot of life left as it relates to densities and the enablement of technologies like the ones Peter and Jim have developed.
George Rockett: So can each of you just give me your three-minute explainer, and then we can move the conversation forward? Jim, can we start with you at Nautilus? I know you've got the technology and you've got data centers. Can you explain to us what you're doing?
That's terrific, thanks so much. We're simple to understand: almost every data center in the world is cooled with some form of mechanical air-chilling system, and we cool the data center with naturally cold water. That really gets to the core of what we're about. How do we employ these methodologies, from once-through open-loop to closed-loop heat exchange? How do we employ them to meet the 100% uptime reliability of the data center, and do that in the most environmentally sustainable way possible? So literally, we place the data center at the shoreline. It can either be on a vessel on top of the water, or it can be in a building on land beside the water, just like every other industrial entity that uses naturally cold water for cooling. And we can do that quite resiliently. There are four advantages to this approach.
One is high performance, and we need to start there because we're all in business to make money and deliver products, and the future is in high performance. We can all recognize that in five years, high-performance computing is just going to be computing, okay? The only question is how much of it the public will have access to. Today, access to high-performance computing is limited to national laboratories and elite players. But how do we create democracy in access to high-performance computing? Well, that's what we enable. And we do that because we take advantage of Mother Nature. Mother Nature has provided us with an abundance of naturally cold water, which can be flowed through the data center facility to collect the heat coming off the machines, through the innovations of JD and Pete and others, who are finding creative ways to make those machines a lot more powerful but get the heat out of the rack, and then you give it to us.
And we're able to address that at scale. So high performance is important, and to put that in context, this methodology can cool rack loads greater than a hundred kilowatts, today, without any special activity. The system is also simpler; it's like a Tesla compared to an internal combustion car. It's less complex and more resilient. The second feature is ultra-efficiency. We're talking about the ultimate reduction in energy use for cooling and operations. Our first facility, which we commissioned in California, has a PUE of 1.15, but even that is not a fair number, because we actually eliminate the server fan load. As everyone knows, the server fan load is attributed to compute, so if you credited us for the server fan load, our PUE would in reality be even lower. So we're ultra-efficient on the cooling, and also ultra-efficient when it comes to sustainability.
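The server-fan argument here is an accounting point, and it's easy to see with a quick sketch. All the numbers below are made up for illustration; they are not Nautilus's actual figures. PUE is total facility energy divided by IT energy, and server fans count as IT load, so a design that eliminates them shrinks the denominator too.

```python
# Illustrative PUE arithmetic (all figures are assumptions, not
# the facility's actual numbers).
# PUE = total facility energy / IT equipment energy.

useful_it_kw = 900.0   # compute that does real work
fan_kw = 100.0         # fan load an equivalent air-cooled deployment would carry
cooling_kw = 135.0     # liquid-cooled facility overhead (chosen to give PUE 1.15)

# Reported PUE of the liquid-cooled facility:
pue = (useful_it_kw + cooling_kw) / useful_it_kw
print(f"Reported PUE: {pue:.2f}")            # 1.15

# Credit the eliminated fan load back into the IT baseline, the way an
# air-cooled comparison would count it:
credited = (useful_it_kw + cooling_kw) / (useful_it_kw + fan_kw)
print(f"Fan-credited PUE: {credited:.2f}")   # ~1.03
```

The point of the comparison: against an air-cooled baseline that still carries the fan load, the same facility looks meaningfully better than its headline PUE suggests.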
We consume no water, because we do no evaporation. We don't have to plug into the drinking water systems, we don't have to treat that water, and we don't have to manage the evaporation of that water. We don't need to resort to the chemical refrigerants that are among the most potent greenhouse gases and ozone-depleting substances in the world. We therefore don't produce any wastewater, and we don't have any other chemicals on site. So you have a 70-plus percent reduction in energy for cooling, and you have a hundred percent elimination of water consumption, chemical consumption, and wastewater production. And to top it all off, it makes no noise; it's quiet both inside and out. That leads you to the final feature beyond this ultimate sustainability, which is flexibility: we can deliver these mechanical-electrical systems in 2,500-kilowatt packages.
We can deliver it on a vessel, which is cool: you can pre-build the whole thing and literally ship it to a location and plug it in. Or you can bring it in as container-sized systems, trucked in and assembled like Legos right at the shoreline, to create a hyperscale data center. There's the added advantage of locational flexibility too: every population center on Earth has brownfields where all the industry currently sits, and those are just the best places to be setting up the data centers of the future. That works for large data centers, but also, as it turns out, these brownfields sit at the edge of every major population center, so you can get double duty from these locations. That's what we're about. And the beauty of it is that it's scalable, it's available, and it can be deployed anywhere in the world regardless of season or weather. As a result, just like with JD's systems, you can actually have a cool afternoon in the middle of the summer in our data center without any mechanical chilling in sight.
George Rockett: Wonderful. But seriously, Limerick clearly likes it, San Francisco likes it, and you're looking in Maine. I think it's a wonderful idea. Now, I'm sat here in the center of London, and that could be floated on a barge in central London, couldn't it, where there's no more space? People are working on that. Jim, thanks a lot for that. We're going to move swiftly on and ask Peter: can you give us a quick elevator pitch of what you're doing too?
Yeah, sure. It's pretty simple. What we do essentially, as this picture might reflect, is immerse industry-standard servers into what we call racks; some people may call them tanks or baths. We immerse them in a non-conductive, non-corrosive fluid, which is very effective at capturing the heat. Unlike Jim, who is helping reject the heat, our focus is on capturing the heat with fluids that are much more efficient than air. That warm fluid is then pumped through a heat exchanger, where the heat is moved out to the environment, whether that be a more conventional approach with a cooling tower of some kind, or a quote-unquote unconventional approach like the one Jim is deploying. Through that heat rejection, the fluid comes back colder, and the cycle repeats itself. So it's pretty simple how it operates.
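The loop described above is a standard sensible-heat balance, Q = ṁ·cp·ΔT. A minimal sketch, using generic fluid properties as assumptions (typical single-phase dielectric coolants sit around 1.5–2.3 kJ/kg·K; these are not GRC's actual coolant specs):

```python
# Rough sizing of a single-phase immersion loop: fluid absorbs heat in
# the tank, rejects it in a heat exchanger, and returns colder.
# Fluid properties are generic assumptions for illustration.

def mass_flow_kg_s(heat_kw: float, cp_kj_kg_k: float, delta_t_k: float) -> float:
    """Coolant mass flow needed to carry `heat_kw` with a `delta_t_k` temperature rise."""
    return heat_kw / (cp_kj_kg_k * delta_t_k)

rack_heat_kw = 100.0  # high-density rack load
cp_fluid = 2.0        # kJ/(kg*K), typical of dielectric coolants (assumed)
cp_air = 1.0          # kJ/(kg*K), for comparison
rise_k = 10.0         # allowed coolant temperature rise

print(f"Dielectric fluid: {mass_flow_kg_s(rack_heat_kw, cp_fluid, rise_k):.1f} kg/s")
print(f"Air (same rise):  {mass_flow_kg_s(rack_heat_kw, cp_air, rise_k):.1f} kg/s")
```

The mass flows look comparable, but liquids are roughly a thousand times denser than air, so the volumetric flow, and hence the pumping effort compared with fan effort, is vastly smaller. That density difference is what "fluids much more efficient than air" cashes out to.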
It's been pretty well vetted. As you mentioned earlier, we've been doing this for 12 years and have deployed it in 17 countries. The picture you see here is a modular data center that the Air Force uses. So a lot of it has been worked out. The problem we're typically solving for customers is that they are out of power in their facility, and they're out of power because a large portion of their power envelope is being used by their cooling technologies. By converting to immersion, they can change the ratio of how much power goes to the compute that does the real work versus the cooling technology. To give you a sense of this, the total power required to run an immersion system is less than the power you save by removing the fans from the servers.
The other problem we're typically solving is in high-density applications where the customer is out of space, because air cooling just can't handle some of the heat loads. I'm sure many in the audience have walked into data centers with racks that are only one-quarter or one-third full, simply because if you filled them, you couldn't cool them. So being out of space is another problem, particularly for clients in financial services and other sectors where the cost of real estate is really high. And finally, the problem we keep seeing, and I know JD is seeing this as well, is what we call unconventional locations at the edge, which are going to require the ability to handle high densities but also need a level of environmental resilience that is more difficult to get where they are.
Your most expensive assets are immersed in a fluid, which essentially protects them from airborne particulates: salt, mist, humidity, oxidation. And because you've removed the fans from the servers, which are typically the single highest incidence of failure, you're increasing the reliability of those systems. You've also got to think about how expensive it is to roll trucks out to remote locations for service calls. So I think we're all seeing it; it's no big revelation that we're seeing this explosion at the edge. Liquid cooling technologies like the ones JD and we provide are uniquely suited to create value there. And finally, our approach has always been to go to market with strategic partners, including folks like Vertiv, Dell, and Intel. This is really just to ensure we have a completely integrated solution that can be delivered and supported on a worldwide basis. I hope that saved you a couple of minutes there.
George Rockett: Let's roll forward. JD, quickly, you spoke before about this. Just the elevator pitch, and then we'll get into some of the background we wanted to cover in this discussion.
Yeah, certainly. I'll just reiterate: what we do is very similar to what Peter does. The only difference is we use different types of fluids: a two-phase fluid versus a single-phase fluid. Each has its benefits, with all of the same related thermodynamic benefits. It enables edge computing; we go to places like the Permian, right where oil and gas companies are collecting the data, in a hardened fashion, and it's clean. So it's very similar: it's immersion cooling in dielectric fluids. We just go through a phase change: we go from a liquid phase to what we call a vapor phase at a very specific temperature, and then that vapor condenses. And, to Peter's point, all our stuff is solid-state.
There are no fans to fail and no moving parts, so the environment is very clean. It's ideal for electronics; it protects the electronics from a reliability perspective. The heat goes out through the vapor phase, and then we just run warm water, so we don't need chilled water at all. When we partner up with companies like Nautilus and Jim's team, we can get extraordinary results. There are people even talking about negative PUEs, which is hard to fathom, but the idea is that you're pulling in water on a barge, or at a location near a body of water, and that's doing the final heat rejection through our cooling coil. It doesn't even have to be cool; it can be warm, it could be 90 degrees. For us, that collapses the vapor, and it just goes on and on in a closed-loop system. So for us, it's the same story: we can go where we need to go and support customers in locations they normally couldn't go. We don't need raised flooring, and we don't need conditioning in the space. It's another technology that adds to the capability and the innovation of what we call the data center world.
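The two-phase energy balance JD describes can be sketched with one number: the fluid's latent heat of vaporization. The value used below is a typical figure for engineered fluorocarbon coolants and is an assumption for illustration, not TMG Core's actual fluid spec.

```python
# Sketch of a two-phase immersion energy balance: the dielectric fluid
# boils at a fixed saturation temperature, and the vapor is collapsed on
# a condenser coil fed with warm (not chilled) water.

h_fg_kj_kg = 90.0       # latent heat of vaporization, kJ/kg (assumed, typical
                        # of engineered fluorocarbon coolants)
server_heat_kw = 100.0  # heat load in the tank

# Steady state: all server heat goes into boiling fluid.
boil_off_kg_s = server_heat_kw / h_fg_kj_kg
print(f"Vapor generated: {boil_off_kg_s:.2f} kg/s")
```

Because all of that heat moves at one saturation temperature, the condenser only needs water colder than the fluid's boiling point, which is why roughly 90 °F (about 32 °C) water can still do the final heat rejection.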
George Rockett: Brilliant, thanks JD. So we've gone through the broad context, we've covered a lot of ground, and you've each presented the technologies and innovations that you've worked on for many years, which we've now called conventional technologies that need to be applied to the data center industry. Now I want to have a little look at this. This is from an old friend of ours, Peter Gross, and it's a comment of his that I use and abuse; I add years onto it. But I thought we'd give him credit for it, with his face: in the data center industry, it counts as innovation as long as it's 10 years old. And he usually applies it to power and critical power. Now, you've got these conventional ideas, which you've all said are conventional and are used by people with very mission-critical applications.
What are your short stories of acceptance? What are people saying, and how is that changing? And tell us a little bit about the struggles you've had, which I can clearly see you've overcome, but I'd just like to get that angle from you all. Jim, can you kick us off? You remember we met in San Francisco and you gave your idea, and you must have given it to a lot of people; it's taken years to get there, right? What have been the hurdles?
Yeah. Any innovation takes time to craft, even if it's built upon conventional concepts, right? Think of the evolution of the light bulb, and where we are today with the LED, which is now cutting the world's lighting electricity bill by 70%. That was a long journey, and yet those 70% electricity savings took place over the course of only 10 to 15 years. What just occurred with the shift in lighting is radical, and it occurred basically overnight, in a sector that had been very conservative and had not shifted much over time. So in our case, we set about with this idea: how do we apply these methodologies, once-through cooling, to the data center environment? These are capital-intensive efforts, and you have to make 15,000 decisions to settle on the 300 things we ultimately did to make this work and meet the 100% uptime requirements of the sector, and to design and craft it in a way where it can handle any rack, any computing configuration, and any methodology of dissipating heat from the servers.
So we took our time to do it right, to make it resilient, make it scalable, and make it beyond price-competitive, so we can deliver our outcome at the same or better market prices, and with a dramatic reduction in the environmental and community burden of accomplishing that. Like any innovation, you've got to stick with it. For our first project, we had to raise a lot of money, and we had to get the planning. This was a new thing for regulators; they'd never seen it before. Now that they've seen it, they embrace it; they actually now prefer what we're doing in California to the alternative, and that creates its own change as well. And we've got community excitement, because we're bringing advanced technology and nameplate tech firms to the city of Stockton, for example, which has not seen that kind of investment or opportunity, and yet here it is, in Silicon Valley's backyard. We're able to open up that door, the S part of ESG. So the sector is as conservative as it should be, given the mission-critical nature of what it does; I'll be generous there. But if you are mission-critical, you also have to be as aggressive as possible in leaping forward to the next methodology of meeting and exceeding your mission-critical requirements. With the combination of what Nautilus does and what JD's and Peter's companies are doing, there's no question, when you run it side by side, element for element, that everything we are doing is the same or better in every respect than what's occurring conventionally.
George Rockett: Great. And it's spreading a little bit, isn't it? Over in Singapore there's talk of companies investing in these types of ideas too. So you're not isolated either, which is great to see. JD, can you tackle this one? I mean, how long have you been working on it at TMG?
Sure. If I could just say one thing before that: the idea of bringing water into the data center used to be so taboo. Over the last four or five years, and even this year, things like direct-to-chip cold plate technologies that go onto the chips are bringing lots of water into the data center. That's a great technology, and it's leading the way toward acceptance of things like what Nautilus, what Peter, and what we at TMG Core are doing. With that said, we've been working non-stop for the last four years, investing a lot of time and capital with really smart people. At first, we looked at companies like Peter's at GRC and said, these guys are really leading the charge.
Kudos: they're out there, cutting their teeth in the industry. And we came in and said, hey, we're going to do something slightly different, but in the same vein of innovation, to allow that enablement to happen. What we saw was that initially we didn't get the resistance that maybe Peter and Jim saw in the beginning, because they had already driven that acceptance, and these technologies had already started pushing through. In fact, when we first unveiled our OTTO technology and our EdgeBox technology as a supercomputer back in 2019 in Colorado, it was welcomed with open arms. Yes, it was the innovators, the first movers, the leaders in the industry, the visionaries, quite frankly, who could completely see the line of sight. And the reason they're considered visionaries isn't necessarily because they have a crystal ball; it's because they're in it.
They're developing technology that absolutely requires technologies like TMG Core's, Peter's, and Jim's. It isn't a nice-to-have; it's necessary for the evolution of the entire industry. So they appear to be real visionaries, and they are to a certain extent, but primarily because they're developing the tech that needs these systems. So we didn't see that type of resistance. That's not to say it isn't slower than we would like; we would like to see faster adoption. We have to prove ourselves just like everybody else, to show that the reliability, the sustainability, all of those things are there, which we consistently do. But yes, I see less of a headwind now; there's more of a tide taking us out. There's more adoption happening now than ever before, and it's starting to accelerate.
George Rockett: Yeah, I think we can all see that in the discussions. Now I'm going to accelerate this conversation a little bit as well, because we could be here for two hours if I wanted to get everything out of you guys. I want to jump into sustainability right now. You see a green brush being rolled over the industry, and I'm alluding there to some greenwashing and, often, a lack of understanding. I want to talk about what you guys are doing, how you're seeing the industry from a sustainability perspective, and what you're doing: metrics, measurements, what's different, what needs to be done. Because we're right in it as well, from the perspective of needing to find solutions, and we're right in it needing to reprogram minds and teach people things. Jim, what can you tell us about this? This is your history as well, advising presidents. So can you advise us?
Well, I think it's very important to know where this sector sits from the perspective of its biggest challenge: the single largest source of growth in electricity consumption, and therefore in air pollution and greenhouse gases, is the growth of the data center sector, followed only by people around the world gaining access to air conditioning that they currently don't have. So the two biggest growth drivers of electricity consumption are data centers and air conditioning, ironically enough. Everything else has flattened or declined. So as we reduce the electricity consumed by lighting, it's being replaced by data centers gobbling up the electricity. And that's just not sustainable. Secondly, data centers are gobbling up massive amounts of water, almost always drinking water, such that you now see moratoriums being put in place for the first time in places like Cape Town and Dublin.
Even in Singapore, even in Silicon Valley, regulators and communities are now asking why these data centers are taking our drinking water in the middle of droughts, droughts that are only getting worse, and putting a big, heavy footprint onto public infrastructure that’s more important for other uses. So that’s the second biggest issue for data centers. The other is just their size and scale. They’re just not tolerable anymore in commercial districts or near residential districts; that’s just not palatable anymore. And that’s a natural evolution. I did mention at the beginning, though, that this is not a problem to solve; this is actually an opportunity to be seized. Okay. Very important to understand: it is the computing that will drive the insight that will enable the world to become dramatically more sustainable, not just a little bit more sustainable, but dramatically more sustainable.
And so as a sector, we want high-performance computing as quickly as possible, for as many people as possible, so that computing can deliver sustainability. And what’s important is to have that packaged in an infrastructure that itself is setting the mark for the lowest possible footprint. At Nautilus, we’re introducing into the conversation this idea of TRUE instead of PUE: Total Resource Usage Effectiveness. And what we have demonstrated is not an incremental 1-to-5-percent improvement in resource use; we’re talking 70, 80, 100 percent improvement. And then, to JD’s point, our site in Maine will not only have a negative PUE but, ironically, will be producing negawatts and negacarbon, because we’re using a hydro facility and we’re going to be feeding it water by gravity.
And we’ll be producing warm water that can be used in other applications to offset the energy load of those applications, in this case greenhouses, district heating, and some others. And so I’m terribly excited, because we’re not talking about incremental change. We’re talking about probably the single greatest leap forward in carbon abatement, air pollution abatement, and water conservation that is available to the world today. That is how we are talking about this opportunity. This is the biggest thing I will have worked on in my career, in terms of the scale of the sustainability benefits we can deliver simply by going from air cooling to liquid cooling.
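Jim’s TRUE-versus-PUE comparison can be made concrete. PUE is the industry-standard ratio of total facility energy to IT equipment energy, so 1.0 is the theoretical floor; the exact TRUE formula is not spelled out in this discussion, so the sketch below uses standard PUE plus the companion water metric WUE, with purely illustrative numbers rather than Nautilus figures:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT energy.
    Always >= 1.0 in practice; lower is better."""
    return total_facility_kwh / it_equipment_kwh

def wue(annual_water_liters: float, it_equipment_kwh: float) -> float:
    """Water Usage Effectiveness: liters of water consumed per IT kWh."""
    return annual_water_liters / it_equipment_kwh

# Illustrative comparison: a typical air-cooled facility versus the kind
# of water-cooled facility the panel describes (numbers are hypothetical).
air_cooled = pue(total_facility_kwh=1_600_000, it_equipment_kwh=1_000_000)
liquid_cooled = pue(total_facility_kwh=1_150_000, it_equipment_kwh=1_000_000)

print(f"air-cooled PUE:    {air_cooled:.2f}")    # 1.60
print(f"liquid-cooled PUE: {liquid_cooled:.2f}")  # 1.15
```

The “total resource” framing simply extends this pattern: instead of measuring only energy overhead, you also track water, refrigerants, and other inputs per unit of useful compute.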
George Rockett: Brilliant. That is a wonderful statement, really interesting, and it really concentrates the mind, and should concentrate everybody listening to this. We’ll follow up on that; we need to help make that a reality. Now, as we close out, I’m interested in your perspective on that leap: how far it will go, and what you think data center infrastructure will look like globally by 2030. Peter, can I get you to try that one first?
Yeah, sure, George. I mean, I think we’ve already talked about how the unconventional will become conventional, as we’re fond of saying right now. I think JD hit upon this: it’s not something that’s too far in the future. We often do a little play on words for the Game of Thrones fans out there: it’s not winter that’s coming, it’s liquid that is coming to a data center near you. And I think one of the big accelerators that will cause this explosion in adoption, when you think about what the obstacles have been historically, is that it’s not just about the technology and the innovation in the technology. Delivering a data center solution requires the cooperation and collaboration of an entire ecosystem: the compute and chip manufacturers, the supporting power, cooling, and heat-rejection infrastructure, how you do the installation and commissioning, how you service and support the product on a worldwide basis, and best practices around new operating models for these environments.
These are the things that take time, and we’ve been through much of that time at this point. You know, I know JD is seeing, and I know Jim is seeing, what I’m seeing: the level of collaboration we are now getting from the most trusted data center providers in the industry is what is really driving the adoption, and it will continue to drive it because it’s creating value for those customers. We talked about this before in terms of the conservative nature of data center operators: the consequences of failure are significant. So we like to say the currency of the data center is trust, and having the trust of an ecosystem that is already trusted is what’s driving the adoption. So when we look out to 2030, the projections we are getting from the Intels of the world and the Dells of the world, about what they expect in terms of adoption, can sometimes scare the bejesus out of you.
Pardon my language, but we’ve got to get really focused on scalability as much as anything, because we are on the verge of this thing exploding. And when you look out to 2030, this is going to be commonplace, and there will be multiple technologies: GRC single-phase systems, CoolIT cold-plate solutions, TMGcore two-phase solutions, in hybrid data centers where customers will have a choice of the liquid technology that delivers the best TCO for their particular use case. So that’s some sense of what we can say.
George Rockett: Great. JD, give us your one-minute view. Are we gonna need a bigger boat?
That’s a Jaws thing, right? It’s actually “You’re gonna need a bigger boat,” but in this case, we’re going to need a whole group of bigger boats. You know, again, to Peter’s comment, it’s scale: can we support the scale in production and make sure that it’s streamlined? Just touching back on the environmental aspect real quick: we know that our ecosystem is significantly more sustainable, by factors of multiples, so that sustainability story is a reality, and from a supply chain perspective there are no concerns there. So it’s just scalability. And then if you take that away and say, okay, what does it look like in 2030, effectively nine or ten years from today?
For me, it’s a decentralized, distributed point-of-presence network, where companies like Walmart, for example, who have a huge push into the e-commerce business and have 4,400 brick-and-mortar stores across the United States, can now take technologies like single-phase and two-phase cooling, which give you really high-performance computing in portable types of applications, into areas that don’t meet the traditional data center requirements: raised flooring, air conditioning, and all that. Now they can create a compute node in a warehouse in a brick-and-mortar store that supports both their global e-commerce and their local brick-and-mortar business. And they’re creating what I call the digital tollbooth, or digital tollway. They can partner up with companies like NetSuite, like Netflix, like Microsoft and the Azure cloud, and basically commoditize compute through hyperconverged infrastructure software platforms: 5G networking, zero-latency applications, high-performance computing for AI and ML for universities, researchers, and K through 12, and then sell that unused compute to the communities in tier-three, tier-four data center locations.
So the two coexist: there will always be these massive data centers for storage and other types of applications, and it will expand out to geographic locations where Walmart and other types of retailers and financial institutions can participate, so that the world becomes a decentralized, distributed nodal network where we get zero latency on the edge, or what they define as the edge. We don’t define the edge, because we don’t see edges. There is no edge, in our opinion.
George Rockett: Jim, just 2030, what do you see?
So we have a fork in the road. Either you proactively move to liquid cooling, or you’ll be frightened into moving to liquid cooling, but either way, over the course of the next ten years, if you have not risk-managed your way to the future, you’re in trouble. Regulators are not going to let you use refrigerants anymore. They’re not going to let you gobble up drinking water and use the wastewater system anymore. And your customers want higher performance and lower cost. So we think the drivers of the decision we’re talking about are already in place. Every technology we just talked about can deliver greater performance, better price, and dramatically better sustainability, and therefore reduce your regulatory and locational risk. It really is that simple. You’re either building an old dog of a data center and you’re stuck with it for the next 20 to 30 years, or you’re building a new liquid-cooled facility and you’ve got a future. It really is that simple, and the market is about to divide based on which path people choose.
George Rockett: Okay, those are sobering words, and important for where we’re going. I love it. I love everything that you’ve said, and I hope the audience loves it too. Give us a quick mark on the scoreboard up here so that we know to program more about this. I’d like to thank Nautilus Data Technologies for being a sponsor of this debate, for helping us get everybody involved in it, and for showcasing the unconventional. We’re going to move in one moment onto the round table.