Sam Altman's compute bet is paying off, but the bill is coming

Sam Altman and Dario Amodei actively not holding hands.
  • Sam Altman once said, "compute is destiny," and the AI market this year may be proving him right.
  • Rival Anthropic has been signing its own big AI compute deals recently.
  • How these deals will be paid for is still unclear, and coming shifts in AI technology make the economics even harder to predict.

In early December, Anthropic's CEO, Dario Amodei, suggested that some rivals were overextending themselves in AI compute.

"There are some players who are YOLOing," Amodei said, hinting that rival OpenAI had signed too many AI compute deals and might struggle to afford them.

But as demand surges and systems strain, the balance is shifting. OpenAI's aggressive push to lock in capacity is starting to look more pragmatic, while Anthropic faces outages and growing pains of its own, a reminder that in the AI race, having enough compute may matter just as much as building better models.

"Anthropic, in particular, is bad right now, and it's a mix of genuine downtime and really degraded service," said Lawrence Jones, founding engineer and AI lead at Incident.io, which helps companies like Netflix and Etsy manage outages.

Altman once said, "compute is destiny," and the current situation suggests his aggressive push to secure massive capacity was prescient. Anthropic has since signed its own large compute deals, at least six months behind OpenAI.

"OpenAI has been clearly ahead of the curve on compute," said Peter Gostev, AI capability lead at Arena.ai.

Given this lead, it's been strange to read reports from The Information and other outlets that Altman and his CFO, Sarah Friar, have been at odds over whether OpenAI signed too many compute deals. The Wall Street Journal followed up with its own story late Monday.

OpenAI dismissed the reported rift as "ridiculous," and Altman and Friar issued a statement saying they are aligned on acquiring as much compute as possible.

Still, financial pressure is real. The Journal also reported OpenAI missed a revenue target and has yet to hit a goal of 1 billion weekly ChatGPT users. Without stronger revenue growth, funding expensive compute deals becomes more challenging.

Even with surging revenue growth, as Anthropic has seen lately, it remains unclear how to profit from cutting-edge AI. Both companies are currently losing significant amounts of money.

That leaves the leading AI labs in a difficult position. They need ever-more powerful models and the infrastructure to serve them globally with speed and reliability. That requires enormous compute resources — more than they can comfortably afford today.

Optimization

One path forward is optimization: improving how models are built and run through better software, new techniques, and redesigned hardware.

In a recent interview with tech blogger Ben Thompson, Altman pointed to OpenAI's GPT-5.5 model. While it costs more per token, it uses far fewer tokens to deliver results (tokens are the basic units of data processing in AI systems).

Instead of being a "token factory," Altman said OpenAI is now an "intelligence factory."

"We just want as many units of intelligence for the lowest price," he added, noting customers don't care how that efficiency is achieved.

Jones expects these optimization efforts to start paying off over the next year and beyond. "Then that changes everything around how you model the cost," he said. "Over the next five years, the economics of training and serving these models is going to change through both software and hardware."

Such improvements could make the current business models more sustainable.

Up the stack

Another strategy is moving "up the stack," expanding beyond selling raw model access.

Anthropic is doing this by building specialized tools for industries like finance, law, design, security, and especially software development.

OpenAI is also diversifying, launching enterprise products through a partnership with Amazon's cloud business, expanding its Codex coding tool, experimenting with advertising in ChatGPT, and developing consumer hardware.

"It wouldn't be a stretch to think they see at least part of their future in these pivots, and are aiming to capture customer relationships and product surface area, rather than just selling tokens." Jones said.

Better models, bigger questions

Meanwhile, a new technological shift is on the horizon. Massive clusters of Nvidia's Blackwell GPUs are currently being used to train next-generation AI models expected in about six months. Even more powerful Vera Rubin GPUs are slated for late 2026.

AI experts, including Jones, believe these advances will produce models that are significantly more capable and cheaper to run. That could unlock new applications and more sustainable revenue streams.

It also explains why OpenAI and Anthropic are racing to secure compute capacity now: they don't want to be caught unprepared when these more powerful models arrive.

But a fundamental question remains: will there be enough demand for vastly more capable AI?

"It might be that most corporations don't actually need much more intelligent models," Jones said. "Instead, they can go to cheaper ones, at which point you have to start thinking, what are the more intelligent models for?"

More advanced models could solve harder problems and enable entirely new products, but that would likely mean serving a different market.

"If you're a CFO, I don't think it's clear that if you create a much more intelligent model, the same people who are buying from you now are going to want the more powerful model," he added. "That would make anyone nervous when they're in charge of planning these big investments."

For now, Altman's big bet on compute looks increasingly justified. But the bigger challenge isn't just building powerful AI; it's figuring out who will pay for it, and why.
