Goldman Sachs published a report called "Tracking Trillions" that projected a $7.6 trillion AI buildout by 2031. That number is now the ambient hum beneath every earnings call, every infrastructure deal, every press release announcing another gigawatt of planned capacity. It is treated as gravity -- something that simply exists, that you work around rather than question.
Ed Zitron, host of the Better Offline podcast, decided to question it. He spent a week trying to find the real customers -- the businesses and organizations whose actual, paying demand would justify that $7.6 trillion projection. What he found was troubling enough that it deserves a careful read.
The Customer List Is Shorter Than Anyone Admits
Start with the companies consuming the most compute. When Zitron worked through the actual list of significant GPU customers, he arrived at something short enough to fit in a paragraph: OpenAI, Anthropic, Jane Street (a proprietary trading firm with a stake in CoreWeave), Microsoft buying on behalf of OpenAI, Google buying on behalf of both OpenAI and Anthropic, Amazon buying on behalf of Anthropic, and Meta.
That is not a market. That is a carpool.
The revenue figures that hyperscalers cite as proof of AI demand reinforce this. Microsoft reports $37 billion in annualized AI revenue -- but 70 to 80 percent of that is OpenAI paying for Azure. The remainder comes from Microsoft 365 Copilot and GitHub Copilot, both of which are showing sluggish enterprise adoption. Amazon's $15 billion annualized AI revenue figure looks impressive until you note that more than 80 percent of it is Anthropic paying for AWS. These are not organic markets. They are circular flows: hyperscalers invest in AI labs, AI labs spend that investment on hyperscaler cloud services, hyperscalers report the resulting revenue as evidence of booming AI demand.
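The circularity above is easy to quantify. A minimal back-of-envelope sketch, using only the figures cited in this article (the 75 percent used for Microsoft is an assumed midpoint of the stated 70-to-80-percent range):

```python
# Back-of-envelope: how much hyperscaler "AI revenue" is NOT circular --
# i.e., not a funded AI lab recycling investor capital into cloud spend.
# Dollar figures (annualized, USD billions) and shares come from the
# article; the 0.75 for Microsoft is an assumption within its 70-80% range.

def non_circular(total_revenue_bn: float, circular_share: float) -> float:
    """Revenue left after subtracting the single anchor customer's share."""
    return total_revenue_bn * (1 - circular_share)

msft_residual = non_circular(37, 0.75)  # OpenAI ~70-80% of Azure AI revenue
aws_residual = non_circular(15, 0.80)   # Anthropic >80% of AWS AI revenue

print(f"Microsoft AI revenue excluding OpenAI:  ~${msft_residual:.2f}B")
print(f"Amazon AI revenue excluding Anthropic:  <${aws_residual:.2f}B")
```

Under those assumptions, the organic AI revenue behind a projected multi-trillion-dollar buildout is on the order of ten billion dollars at Microsoft and a few billion at Amazon.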
"OpenAI and Anthropic were treated like rich kids. Nobody ever challenged them to be efficient or profitable. Microsoft built all of OpenAI's infrastructure. Amazon and Google built it out for Anthropic." -- Ed Zitron, Better Offline
The Distorted Infrastructure Baseline
Approximately $300 billion has been invested in servers for OpenAI and Anthropic combined. That is an extraordinary concentration of capital in two companies that have never turned a meaningful profit. But the broader consequence is subtler and more damaging: the entire AI industry looked at that level of infrastructure spend and concluded that everyone would need the same. Nvidia shipped 6 million GPUs into a market built on that assumption. The actual demand to absorb 6 million GPUs does not exist. Even Perplexity -- an AI-native search company -- runs on a few hundred to a couple thousand GPUs at most.
The Goldman Sachs report that the industry uses to justify the buildout also contains a finding that gets far less attention: AI is costing the majority of businesses more than it saves them. The business case, at scale, has not materialized. And yet the construction continues.
The Data Centers That Are Not Being Built
Zitron issued a challenge: find a single data center announced in 2023 that has actually been completed. Microsoft has claimed to bring a gigawatt of capacity online, so Zitron investigated every publicly announced Microsoft data center project he could locate. The results were striking. None were complete. Some had not broken ground. Some had only resumed construction in March 2026, despite being announced in 2023 or 2024.
The actual pace of the buildout, according to Zitron, is in the range of hundreds of megawatts per year -- not the gigawatts that the announced ambitions imply. The gap between what is being announced and what is being built is where the narrative lives, and it is doing significant work.
The xAI situation illustrates the supply crunch in a different way. Elon Musk's AI company built its own data center -- Colossus 1, a 300-megawatt facility -- and then rented it to Anthropic in its entirety. The reason: new capacity from Amazon, Google, and Microsoft was not coming online fast enough for Anthropic's needs. A company renting its own data center to a competitor because the hyperscalers' announced buildout is running behind is not a sign of a functioning infrastructure market.
Who Is Holding the Debt
SoftBank offers another data point. The company dropped its planned AI infrastructure loan from $15 billion to $6 billion -- a 60 percent reduction with no dramatic public explanation. The Financial Times has reported that banks are "choking on AI debt" and selling AI-related paper at a discount. Private credit funds, including vehicles managed by Apollo through its Athene subsidiary and by Blackstone, are stepping in where bank financing has become constrained. The downstream effect is that pension funds, 401(k) accounts, and retirement savings are being routed into AI data center debt through these vehicles.
"The driving factor behind all of this is insecurity, not rational investment." -- James Carvell, Goldman Sachs, as cited by Ed Zitron
Goldman's James Carvell made a comment that Zitron found worth highlighting: the force sustaining AI capital expenditure at this scale is not a coherent return-on-investment thesis. It is the fear of being the company that did not invest when everyone else did. That is a different kind of market dynamic, and a more fragile one.
DeepSeek Was the Warning
In January 2025, DeepSeek demonstrated that frontier-level model performance could be achieved with a fraction of the compute that OpenAI and Anthropic were using. The market treated this as a temporary shock. Zitron's reading is that it should have been read as a structural signal: nobody needs that many GPUs. If a Chinese lab could produce competitive results with radically less infrastructure, the assumption that the entire industry needed OpenAI-scale compute was wrong from the start.
The Nvidia Blackwell GPU situation sharpens this. Blackwell units are shipping now. The next generation -- Vera Rubin -- is coming. If Blackwell GPUs are sitting uninstalled when Vera Rubin arrives, the economics of installing them collapse: why deploy an older, less efficient chip when the next generation is available? The result could be significant write-downs on hardware that was purchased, delivered, and never put to work.
What Breaks the Narrative
Four hyperscalers -- Meta, Google, Amazon, and Microsoft -- have collectively committed to spending $800 to $900 billion on AI infrastructure in 2026, with projections approaching $1 trillion in 2027. Meta's commitment to this buildout has taken on a quality that Zitron describes bluntly: Mark Zuckerberg appears to be building an AI CEO chatbot for himself. The strategic rationale has become difficult to distinguish from executive status competition.
The last thread holding the investment narrative together is a simple claim: when the data centers are built, the customers will come. That is a faith-based argument, not an economic one. Private credit funds are writing loans against it. Pension managers are allocating to it. The entire structure depends on demand materializing at a scale that no one has yet been able to demonstrate exists.
Zitron's view of the timing is cautious: the market may be waiting for something impossible to ignore before it reprices. That catalyst could arrive quickly or it could be months away. His best candidate is the moment when it becomes visibly undeniable that the announced data centers are not getting built -- that the gigawatt promises were always projections, not pipelines. When that narrative breaks, the circular revenues, the private credit exposure, and the GPU inventory overhang all become problems at once.
He spent a week looking for AI's real customers. The list he came back with is short. The money committed against that list is not.