
The Real Cost Of AGI—According To OpenAI | In The Loop Episode 30

Published by: Jack Houghton, Anna Kocsis
Published on: September 10, 2025
Read time: 5 min read
Category: Podcast

This week, OpenAI’s leaked projections show they’re set to spend $115 billion between now and 2029—that’s $80 billion more than they had forecast earlier this year. In today’s episode, we’ll follow the money trail to understand the real cost of OpenAI reaching AGI and where all that money is going. This is In The Loop with Jack Houghton. I hope you enjoy the show.

How much is Artificial General Intelligence (AGI) worth for OpenAI?

Let’s start with the spending figures. OpenAI is now projecting it will burn through $115 billion between now and 2029, roughly $80 billion more than it had initially forecast. This year alone, they expect to spend over $8 billion, rising to $17 billion next year. Losses are set to peak at around $45 billion in 2028 before turning profitable in 2030, a year later than first planned.

You might be wondering: what’s driving these costs? The answer is compute and training. OpenAI is pouring money into data centers and chips—and potentially into training data to support its business. One of the biggest outlays is Project Stargate, its partnership with Oracle. Stargate is an enormous initiative to build a network of mega AI data centers. In July, OpenAI and Oracle announced a deal to add 4.5 gigawatts of new cloud capacity across the U.S.—the equivalent of two Hoover Dams, enough to power four million homes. This expansion is part of Stargate’s broader $500 billion plan to build 10 gigawatts of AI infrastructure by 2029.

These data centers are critical to making AI work. The goal is to run about two million new AI chips across Stargate sites, unlocking that 10-gigawatt capacity. At $30,000 to $40,000 per high-end chip, that’s $60 to $80 billion in hardware purchases—before accounting for replacements, upgrades, and expansion. No cloud project has ever attempted to assemble this much compute under one umbrella. It’s why NVIDIA’s data center revenue has exploded from a few billion dollars a year to well over $100 billion.
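
If you want to sanity-check that figure, here’s a quick back-of-the-envelope sketch using the chip count and per-unit prices quoted above (rough assumptions, not actual procurement pricing):

```python
# Back-of-the-envelope estimate of Stargate hardware spend,
# using the rough figures quoted above (assumptions, not real pricing data).

chips = 2_000_000                        # ~2 million new AI chips across Stargate sites
price_low, price_high = 30_000, 40_000   # assumed cost per high-end chip (USD)

low = chips * price_low
high = chips * price_high

print(f"Estimated hardware outlay: ${low / 1e9:.0f}B to ${high / 1e9:.0f}B")
# -> Estimated hardware outlay: $60B to $80B (before replacements and upgrades)
```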

If you break down the economics of an AI service, the picture gets clearer. Out of every $1 spent, 25 to 40 cents might go toward GPU hardware—straight to NVIDIA. Another 10 to 20 cents could go to the cloud provider running the data center to cover power, cooling, and rent. The remaining 40 to 65 cents funds the AI company’s costs (research, data, and operations), with hopefully some margin left. It’s an expensive model.
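
To make that split concrete, here’s a small illustrative sketch of where each dollar of revenue might go, using the rough ranges above; the exact shares are assumptions for illustration, not reported financials:

```python
# Illustrative split of $1.00 of AI service revenue, using the rough
# ranges quoted above (assumptions, not reported financials).

def revenue_split(gpu_share: float, cloud_share: float) -> dict:
    """Divide one dollar of revenue between the GPU vendor, the cloud
    provider, and whatever is left for the AI company itself."""
    remainder = 1.0 - gpu_share - cloud_share
    return {
        "GPU hardware (e.g. NVIDIA)": gpu_share,
        "Cloud provider (power, cooling, rent)": cloud_share,
        "AI company (research, data, operations, margin)": remainder,
    }

for label, split in [("low-cost case", revenue_split(0.25, 0.10)),
                     ("high-cost case", revenue_split(0.40, 0.20))]:
    print(label)
    for item, share in split.items():
        print(f"  {item}: ${share:.2f}")
```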

That’s why OpenAI is now investing in its own data centers and chips. Alongside Broadcom, they plan to produce their first in-house chip by 2026. If successful, shifting to custom chips hosted in self-run or partner-run data centers would reduce the share of revenue flowing to NVIDIA, Microsoft, Amazon, and others. In theory, more of that $1 would land in OpenAI’s pocket.

This financial update shows OpenAI’s strategy clearly: outspend rivals, build its own infrastructure, and capture more of the long-term revenue. To fund this, they’re projecting a jump in revenue—about $13 billion this year, $300 million above earlier forecasts. By 2030, they expect $200 billion, 15% higher than previously estimated, most of it still coming from ChatGPT. If they can reduce dependency on NVIDIA’s chips and costly third-party data centers, margins improve significantly.

But the plan is risky. They’re reportedly spending $10 billion with Broadcom to develop these custom chips. If the chips underperform, arrive late, or simply fail, they’ll still have to buy NVIDIA’s—and be worse off for the sunk cost. On top of that, rivals like Google and Microsoft are developing their own chips, so by the time OpenAI’s arrive, the advantage could be short-lived. Yet, they may have no choice—competitors are spending heavily, and falling behind on chips could be game over.

The cost of power for AGI

This model also highlights the sheer scale of resources required. In 2023, U.S. data centers consumed about 4.4% of the country’s electricity, a figure expected to triple by 2028. Connecting new projects to the grid takes five to seven years, creating long delays. That’s why we’re now seeing companies like Microsoft and Meta partnering directly with nuclear plants and natural gas providers to power data centers.
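
For a sense of scale, here’s a quick extrapolation of that electricity share, using only the figures quoted above (a rough projection, not an official forecast):

```python
# Rough extrapolation of US data centers' share of national electricity use,
# based only on the figures quoted above (not an official forecast).

share_2023 = 0.044              # ~4.4% of US electricity consumed by data centers in 2023
projected_2028 = share_2023 * 3  # "expected to triple by 2028"

print(f"Implied 2028 share: {projected_2028:.1%}")  # -> ~13.2%
```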

Another concern is the environmental impact. Data centers consume vast amounts of power and water. Ideally, efficiency gains will continue year after year to offset this impact. Still, the net benefit of widely available intelligence may outweigh the environmental cost—but only if those efficiency improvements materialize.

Only a handful of hyperscalers—Google, Microsoft, Amazon—can operate at this scale. Oracle is a good example: it’s spending about $20 billion on data centers this year, much of it to support OpenAI’s needs. For cloud providers, the model carries limited risk. They front the cash and time to build capacity, but if OpenAI fails, they can lease those servers to someone else.

This dynamic also fuels the current “AI bubble” debate. Data centers and chips are so expensive that only a small set of companies captures the revenue. If even a few stumble, the whole market feels it. As I discussed in a previous episode, I don’t think we’re in a full-blown bubble—but I do expect some kind of correction ahead.

The cost of training data for AGI

It’s worth touching on something that happened just last week, which could trigger a wave of unexpected investment from companies like OpenAI simply to train their models.

Until now, most AI firms treated data as free—scraping text from books and websites without paying. That era is ending. Last week, Anthropic agreed to a $1.5 billion settlement covering half a million books—pricing each book at roughly $3,000. Lawyers are calling it the largest copyright recovery in history.
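
The per-book figure follows straight from the settlement arithmetic; here’s a quick sketch using the rounded numbers reported above:

```python
# Rough per-work price implied by the reported settlement figures.

settlement_usd = 1_500_000_000   # ~$1.5 billion settlement
works = 500_000                  # ~half a million books covered

print(f"Implied price per book: ${settlement_usd / works:,.0f}")
# -> Implied price per book: $3,000
```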

On top of that, Anthropic must delete all the pirated book data it previously stored. The settlement doesn’t even grant blanket rights to generate text based on those books, meaning Claude could still face copyright claims if its outputs infringe. You can imagine the ripple effects this could have across the industry, including for OpenAI and other big players racing toward AGI.

Historical parallels

If you zoom out, there are strong historical parallels to today’s infrastructure buildout and the mix of fear and excitement around AI. Take Britain’s “railway mania” of the 1840s, when railway investments peaked at around 7% of GDP per year. During the dot-com era, tech and telecom infrastructure spending hit about 1.5–2% of U.S. GDP. By comparison, AI-related data center spending in 2025 is expected to reach about 2% of U.S. GDP—already matching the dot-com peak.

As with railway mania and the dot-com bubble, many companies collapsed, leaving behind redundant infrastructure. So the real question is: will we end up with more GPU capacity than we actually need, or is demand truly that massive? I could be wrong, but while I expect a correction, I don’t think it will be as severe as the dot-com crash.

Conclusion: Will we actually get AGI?

In conclusion, we’re witnessing one of the largest infrastructure spends in history, with a single company projecting a cumulative burn of $115 billion through 2029. History suggests this level of spending won’t be sustainable forever, and a correction may be inevitable. What it does show is that the path to AGI—however you define it—is going to cost staggering amounts of money. The biggest companies in the world are betting everything on being first to get there.
