Are AI Infrastructure Investments Sustainable?

Episode ID S4E08
August 14, 2025

The hyperscalers are transitioning to a new artificial intelligence compute platform, signaling that a surge in computing power deployment is just ahead. In this episode of All Day Digital, Alan Bezoza, managing director at DigitalBridge, explains how this AI-driven investment cycle differs from previous money-losing cycles.

Transcript

Alan Bezoza: The point about these companies is very different. We saw this, Jeff, in the 2000s when the bubble burst and telecom spending went from a lot to a little, and it was very painful for a lot of companies in the ecosystem. That was because these telecoms weren’t generating any revenue. There was vendor financing, and it was just a mess, a lot of speculation, a lot of hyperbole. The interesting thing about this environment is that these companies are spending boatloads of capex, but at the same time, they’re free cash flow generative. They pay dividends, and they buy back stock on top of that. These companies are very, very profitable. It allows them to spend this kind of money.

Jeff Johnston: That was Alan Bezoza, managing director at DigitalBridge, explaining how this AI-driven investment cycle is different from previous cycles where a lot of money was lost.

Hi, I’m Jeff Johnston and welcome to the All Day Digital podcast where we talk to industry executives and thought leaders to get their perspective on a wide range of factors shaping the digital infrastructure market. This podcast is brought to you by CoBank’s Knowledge Exchange group.

Given the enormous amount of capital being spent on AI infrastructure, and the fact that it kind of came out of nowhere, determining the sustainability of these investments and figuring out where we are in the AI business lifecycle is top of mind for industry participants. Alan has been investing in the technology sector for 25 years. And his company, DigitalBridge, invests across the entire telecom and AI ecosystem. Alan is in the know and has seen it all, which is why I was thrilled to have him on the podcast.

So, without any further ado, pitter patter, let’s hear what Alan has to say.

Johnston: Alan Bezoza, welcome to the podcast. It’s great to see you again. How have you been?

Bezoza: I’ve been well, Jeff. Thanks for having me on again. It’s been a while, maybe years, since we last had this sit-down. I really enjoyed our last discussion, and I’m looking forward to this one as well.

Johnston: No, I appreciate it. It’s been too long. I love having you on and hearing what you have to say, and I think our listeners benefit a lot from it.

Hey, before we get into some of the specifics on what I want to talk about today, Alan, I think it might help listeners get a better appreciation of the lens you look through, and where you and your company, DigitalBridge, sit in the overall digital infrastructure ecosystem. Maybe you could just level-set that for us to start us off.

Bezoza: Yeah, thanks. I invest in public companies. I run a fund here in sunny Denver, and we invest in public companies up and down the big food chain of both telecom and cloud. That includes everyone from AT&T, Verizon, Vodafone, and Deutsche Telekom to the cloud companies themselves, like Google, Amazon, Facebook, et cetera.

Then all the way down the entire ecosystem: tower companies, data center companies, the stuff that goes in the data center and the things that go on the towers, all the flashing lights, some industrial companies, some software companies, some hardware companies, all the way down to semiconductors and everything in the middle, plus some labor and construction, the Quantas of the world. We really get involved anywhere and everywhere up and down this big food chain.

What’s unique about what we’re doing is that we have our parent company, DigitalBridge, which is a private equity firm. First and foremost, they manage roughly $95 billion in private equity, specializing in a very specific area of digital infrastructure: mostly towers, data centers, and what you could call wholesale telecoms. We’re one of the largest players in each one of those markets.

From the data center perspective, we have three main data center brands or platforms. One is Vantage, also based in Denver. One is Switch, which we took private a few years ago. The third is DataBank, based in Texas. We have a good lens on what’s happening from the ground up. If you think of digital infrastructure as the center of the universe, of our universe, I should say, we’re investing in its customers, its competitors, and the things that go into those data centers and towers. It’s a unique opportunity for us to invest up and down the food chain, where one man’s revenue is another man’s capex. Clearly, cloud spending and the whole AI infrastructure build-out has been near and dear to our hearts over the last couple of years.

Johnston: Fantastic. I appreciate that. You guys are definitely sitting at the epicenter of this whole AI movement across numerous verticals, which is another reason I was excited to have you on today, beyond the fact that I just enjoy talking to you.

Bezoza: We’ve known each other for a long time.

Johnston: Yes. Absolutely. Hey, Alan, where are we? If you had to look at the industry right now, where do you see the AI data center infrastructure business? Where are we in that business cycle or lifecycle right now? You can use a baseball analogy if you want, or however you want to describe it.

Bezoza: People love innings in the U.S. When you go outside the U.S., they have no idea what you’re talking about. I will tell you that it’s hard to say what inning we’re in, because it really does depend on how well the monetization efforts play out through these applications. For example, if this is a huge build-out that’s a pig in the python and they can’t monetize it, then we’re midway through it. If this is the very beginning of a huge wave of monetization of AI workloads, through either managing third-party workloads or running their own workloads, then we’re early in this whole process.

The key to understanding where we are, I would say, is understanding where we are in the application layer. There could always be a delay, or a hole to fill, if you will, between the initial build-out of Hopper and the more recent Blackwells that are just now being rolled out. If we look to mid-‘26, you could see a hole where the application layer isn’t filling up all the capacity that’s being added. Just to step back and put some things in perspective, 2025 is actually a pretty interesting year because you’ve had a huge delay in the transition from Hopper to Blackwell.

Hopper to Blackwell at NVIDIA, the compute platform, happens to be a very big change. It’s a big change in the availability of compute and the ability to compute at lower power requirements. That is something these hyperscalers are willing to wait for. The delay, six to nine months from last fall until now, when Blackwell is finally being deployed at scale, has caused a lot of anxiety around the whole ecosystem. The ordering of new data center capacity to be built two years out has been held at bay to some extent, and the deployment of compute, networking, storage, everything that goes into data centers, has been somewhat on pause.

You really can’t see it because the growth rates are so fast, but growth probably would have been even faster if we didn’t have this delay from the Hopper platform to the Blackwell platform at NVIDIA. The second thing is we’ve had a big transition even at Google with Broadcom’s TPU. Version 5 to version 6 has had a huge impact on the build-out of infrastructure, meaning the second half of the year is when you’ll start seeing version 6 of the TPU that Broadcom builds for Google. I could say the same thing about Trainium at Amazon’s AWS.

Every one of these compute platforms has been either delayed or going through a product transition, so I do think in the second half of this year, ‘25, and into ‘26, you’ll see computing deployed at levels that are just mind-numbing.

The question is, and I don’t want to use the baseball analogy because it really depends: once this big build-out happens from all the hyperscalers, how much of this capacity is then being used at the application layer? That’s an important element of how this plays out. The last thing I’ll throw in there, too, is that we’re seeing a lot of demand from enterprises right now to the hyperscalers for this capacity.

The hyperscalers will all say publicly that they are seeing lots of capacity constraints right now from their customers, meaning people want utilization of their GPUs for third-party workloads. The reason is that every enterprise on the planet right now is creating applications itself, doing test and dev for AI workloads. There’s just a lot of demand from enterprises trying to figure out: how is AI going to help me? How is this excess compute that’s being deployed going to help me? That’s creating a lot of demand. I just don’t know what that looks like in terms of capacity utilization on the networks or on the systems.

Johnston: A couple of things jumped out at me, Alan.

Bezoza: Yes. That was a lot of stuff. I apologize.

Johnston: It’s great. There’s a lot of good stuff there, a lot of great insight. A couple of things jumped out at me. One is, my goodness, if we are in fact, as you say, in a bit of an air pocket as we go from Hopper to Blackwell, I just can’t imagine what we would have seen otherwise, because the capex guidance and the results so far, quarter over quarter, have been pretty strong. I guess they could have been even stronger, and maybe they will be, as you suggest, in the back half of the year.

But more specifically, Alan, the compute, the money that’s going to be spent as we come out of this air pocket with these chipsets, is that going to be primarily for continuing to train these large language models that, if I understand it correctly, are effectively the foundation on which applications will be built?

Bezoza: It’s kind of both; I think it depends on who you ask. Some companies, like OpenAI, are still building lots and lots of compute for training. Other companies are building for usage. For example, if you’re Amazon, if you’re GCP at Google, or perhaps even Microsoft, you’re building a lot of this AI infrastructure right now. Yes, it’s going to be used for internal workloads by Amazon, Google, et cetera, but at the same time, there are a lot of enterprises out there.

Like we mentioned earlier, a lot of enterprises right now are asking how to take this newfound compute power and information and leverage it into workloads. One of the best things about AI, or one of its first use cases, is actually writing more code. By definition, if it’s easier to code and easier to create software, then enterprises that typically went to third-party software vendors to create applications for them are now doing a lot of that work internally.

Johnston: Those are some really good points. I’m especially tracking with you on the investments the hyperscalers are making in AI for their own internal purposes. We may not be seeing that return on invested capital yet because they’re using it internally as opposed to selling it.

Bezoza: To be fair, and to your point, both Google and Facebook have used AI significantly over the last couple of years, even before the word AI was used so frequently. They were using it because of the cookie issue that Apple imposed on them, the inability to use cookies to figure out who we all are. Facebook, and actually Google, spent a lot of money, time, and effort on AI workloads in order to figure out who we all are.

Johnston: Alan, look, we’ve seen some enormous capex numbers, and as you suggest, they might even get bigger, which is crazy. But it’s not as if these hyperscalers are flying blind. They’ve got a pretty extensive history here in cloud, even pre-AI cloud. Maybe you could talk a little bit about that side of the business and how it’s probably giving investors and these companies comfort in believing that the monetization story of generative AI and these investments is going to play out.

Bezoza: As we said earlier, Jeff, this monetization aspect of the application layer is the most important thing to get right, because if they’re not monetizing it and generating revenue, whether through third-party hosting of applications or internal applications, this will all not end well. To your point, capex has gone up materially. We’re talking about capex numbers in the $350 billion range for just a handful of companies, up around 50%. When I look at the revenue base of that same handful of companies, you’re talking about $70 billion to $75 billion in revenue per year, growing about 20% to 25%. $70 billion of revenue is already being generated.

The point about these companies is very different. We saw this, Jeff, in the 2000s when the bubble burst and telecom spending went from a lot to a little, and it was very painful for a lot of companies in the ecosystem. That was because these telecoms weren’t generating any revenue. There was vendor financing, and it was just a mess, a lot of speculation, a lot of hyperbole. The interesting thing about this environment is that these companies are spending boatloads of capex, but at the same time, they’re free cash flow generative. They pay dividends, and they buy back stock on top of that.

These companies are very, very profitable, and it allows them to spend this kind of money. The question, again, is whether they can generate increasing revenue. They’re generating roughly $78 billion in cloud revenue today, and again, this is mostly non-AI workloads. If that can accelerate and start to show some real benefit from third-party hosting or internal applications, that’s when we’re going to see the light turn green, and I think you’ll see capex stay at these elevated levels. If their cloud revenue doesn’t pick up, that’s when we should be concerned.

That’s driven, again, either by third-party workloads, these enterprises building out their own applications, or by internal applications: can Google serve better ads, can Facebook serve better ads? More targeted ads and more ROI for their advertisers based on AI is also part of the equation.
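
Bezoza’s arithmetic here is simple enough to sketch. Below is a minimal back-of-the-envelope illustration in Python using the round figures he cites (roughly $75 billion of annual cloud revenue against a capex run-rate in the $350 billion range); the constant growth rates are illustrative assumptions, not his forecast:

```python
import math

# Rough figures from the episode (in $B): ~$75B of annual cloud revenue
# against a hyperscaler capex run-rate in the ~$350B range.
BASE_REVENUE = 75.0
CAPEX_RUN_RATE = 350.0

# Years until revenue compounding at rate g matches the capex run-rate:
# solve base * (1+g)^t = capex  =>  t = ln(capex/base) / ln(1+g).
for g in (0.20, 0.25):  # assumed constant growth rates, for illustration only
    t = math.log(CAPEX_RUN_RATE / BASE_REVENUE) / math.log(1 + g)
    print(f"At {g:.0%} growth, cloud revenue reaches the capex run-rate in ~{t:.1f} years")
```

At the growth rates Bezoza mentions, annual cloud revenue would take roughly seven to eight and a half years to match the current capex run-rate, which is why he keeps returning to whether that growth can accelerate.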

Johnston: When I think about risks and value creation and which side of the fence companies end up on, how should we think about the hyperscalers and the investments they’re making in large language models? Do those investments and LLMs get commoditized, so that it’s ultimately the software developers, the application developers, who generate the most shareholder value? Or, if you have enough scale as a hyperscaler and you’re serving up or supporting these applications, is that still a very promising business model? You see where I’m going with this? I’m just trying to figure out commoditization versus value creation and who’s on the outside looking in.

Bezoza: It’s a very valid question. If you look at every one of the hyperscalers, each has a different view, based on their actions. If you’re Microsoft, you’ve had this relationship with OpenAI historically, and you made this investment in OpenAI, so you do believe there’s value in LLM creation and that its utilization is important.

However, this severance between OpenAI and Microsoft tells you that maybe they don’t feel the LLM is as important, and that the application layer might be more important to them in terms of how they view their business. Amazon, similarly, doesn’t have a single foundational model they work with; they make several models available on their platform for third-party AI workloads.

Everybody has their own view, and they’re saying it through their actions. Apple, again, has no foundational model. They certainly have enough cash and capability to spend more money creating these models, and certainly the know-how, but their view is that they’re going to monetize AI through the handsets and through the applications on those handsets. Everybody has a different view. Personally, I do sit here and ask, how many models do we need?

I think there are going to be very large language models. Then you’re going to have a lot of smaller models: companies like Salesforce, which have all this customer data on their platform, can use it, whether it’s Slack or CRM data, to create small language models, in other words, small models that are then used to create applications and workloads that make everyone’s life more efficient.

Again, I think it depends on where you sit in the ecosystem. The question is, can OpenAI or Google or others monetize their model better than somebody else’s model? I don’t think we really know the answer to that, but it’s certainly table stakes, not to mention xAI and Grok and what Elon Musk’s companies are doing as well. The amount of money being spent by these companies creating large language models is unheard of.

The question for them is, is there commoditization? Take the internet itself as maybe the analogy: was that something that was commoditized, or valuable? If there were three different internets and you picked one to use, would one be better than the others? That might be the analogy for large language models. But it even goes back to our telecom roots, Jeff. If you think about AT&T and Verizon and T-Mobile, historically, they weren’t all the same.

AT&T or T-Mobile really didn’t have great coverage in rural markets. They did well in cities. Now it’s all the same, and people don’t really think about one carrier’s coverage versus another anymore. The same thing could happen with large language models: there might be some differences in the near term, but over the next five years, perhaps they get commoditized and the value shifts to the application layer.

Johnston: Fascinating stuff. It’s going to be fun to watch. This business model is still in flux, wouldn’t you say?

Bezoza: I think that’s why it’s funded this way, too, sporadically and in big step-function changes. A year ago we were talking a lot about, really, just the build-out. We weren’t talking about the application layer, and now I think we’re getting closer to seeing whether this is real or not. It goes back to what I said earlier: the spending cycle is only going to continue if, and it’s not just monetization, it’s also the capacity utilization on these platforms. If there are a lot of applications being deployed, even very small ones, but each one is compute-intensive, you can see the build-out continue, if this virtuous cycle holds.

Now, if it doesn’t hold and we build out all this infrastructure, and the applications aren’t that compute-intensive and/or the volumes aren’t there, then we could sit here with two years of excess capacity, perhaps, that we have to burn through. I don’t know if there’s an answer to that yet, but I do think that’s the risk: that there’s just not as much capacity utilization of these platforms that are building out compute infrastructure.

It doesn’t mean it goes away; that’s the key. It can still be used, it just may take more time to burn through the excess capacity being built. I don’t have an answer for you, but right now we’re definitely in a supply-demand imbalance where there’s not enough supply of compute.
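
The burn-through scenario Bezoza describes is easy to put rough numbers on. Here is a hypothetical sketch; the excess-capacity and demand-growth figures are illustrative assumptions, since the episode gives no specifics. If deployed capacity starts some fraction ahead of demand and demand grows at a constant rate, demand catches up after ln(1 + excess) / ln(1 + growth) years:

```python
import math

def years_to_absorb(excess: float, growth: float) -> float:
    """Years for demand growing at `growth` per year to catch up with
    capacity that starts `excess` ahead of it (both as fractions)."""
    return math.log(1 + excess) / math.log(1 + growth)

# Illustrative assumptions only; the episode gives no specific figures.
for excess in (0.25, 0.50):        # capacity 25% or 50% ahead of demand
    for growth in (0.20, 0.40):    # annual demand growth
        print(f"{excess:.0%} excess, {growth:.0%} demand growth: "
              f"~{years_to_absorb(excess, growth):.1f} years to absorb")
```

Under these assumptions, 50% excess capacity with 20% annual demand growth takes about 2.2 years to absorb, roughly the “two years of excess capacity” scenario Bezoza raises; faster demand growth shortens the overhang considerably.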

Johnston: No, listen, I think you make a really good point. It’s not as if the stuff being built today isn’t going to be used. It may take a little longer to absorb into the system, but I think it’s a pretty safe bet, as we sit here today, that it’s going to get used. That’s an important point.

Hey, Alan, look, we covered a lot here today. This was incredible. You didn’t disappoint. I knew you wouldn’t. I just want to give you a chance to wrap it up, Alan. If there’s anything I didn’t ask or a thought you wanted to share before we say goodbye, the stage is yours.

Bezoza: If I look back on our careers, and we’ve been doing this for a long time, I’ve been looking at this sector for almost 25 years, and we’ve seen so many ups and downs, so many build-outs, which always lead to inventory builds that you then have to burn through. We’ve seen that time and time again, whether it’s the telecom build-out or the CLECs way, way back long ago.

I sit here and think about whether that same pattern plays out this time. These are really, really smart people at these organizations spending a lot of capital. When I look at the application layer, there are definitely people who can take this excess compute and use it in a way that I think is actually going to change a lot of our lives. I don’t say that because it’s the buzzword or because that’s what people are talking about in Silicon Valley. I do think this is a game changer, and it really is a function of how much compute is now available. What can we do with this excess compute?

Being a technology investor, it has been just an amazing 25-year run of seeing innovation in technology, and we just don’t know what we don’t know.

We didn’t know that Uber was going to come out and change the world of taxis, or that Waymo would now do the same thing to Uber, perhaps. It’s just going to be interesting to see this excess compute, or increased compute, I should say, not excess, and what it’s going to do to the world.

Just as the iPhone changed so much of our daily lives that we can’t live without it, I do think there are going to be new applications built with this increase in compute intensity, and we’re going to be sitting here in five years saying, “Wow, this really was a game changer.”

Johnston: Yes, it’s been an amazing 25-plus years, hasn’t it? It’s been fascinating, and your words, “we don’t know what we don’t know,” boy, I think there’s a lot of wisdom in that right now. We’ve got to keep that top of mind as we move through the next couple of years, because things are changing at an insanely fast pace and just continue to accelerate. It’s going to be super fun to see how it all plays out. Hey, Alan, thanks, man. I really appreciate you taking time out of your busy schedule to spend some of it with me talking tech and AI and finance and all that good stuff.

Bezoza: Hey, Jeff, always a pleasure.

Johnston: A special thanks goes out to Alan for being on the podcast today. Alan’s comments regarding the application layer, and how its adoption is key to the future of AI, make a lot of sense. And while there are no guarantees about how all of this will end, I take comfort in the fact that the companies investing in AI have better insight into its potential than just about any other company out there. They have seen firsthand how AI improved their own businesses, which is why they believe enterprises all over the planet will follow suit.

Hey, thanks for joining me today, and a special thanks to my CoBank associates Christina Pope and Tyler Herron, because without them there wouldn’t be an All Day Digital podcast. Watch out for our next episode.


Disclaimer: The information provided in this podcast is not intended to be investment, tax, or legal advice and should not be relied upon by listeners for such purposes. The information contained in this podcast has been compiled from what CoBank regards as reliable sources. However, CoBank does not make any representation or warranty regarding the content, and disclaims any responsibility for the information, materials, third-party opinions, and data included in this podcast. In no event will CoBank be liable for any decision made or actions taken by any person or persons relying on the information contained in this podcast.
