
The true cost of AI: who is really paying the difference?
OpenAI loses billions, Microsoft subsidizes your tokens, and the bill is coming
You pay $20 a month for ChatGPT Plus. $20 for Claude Pro. $10 for GitHub Copilot. These prices seem reasonable, even attractive. But one question demands an answer: is that actually what it costs to provide these services?
The answer is no. And the available figures reveal an economic reality that AI companies prefer not to advertise.
GitHub Copilot: the most documented case
In October 2023, the Wall Street Journal reported that Microsoft was losing an average of $20 per month on every GitHub Copilot subscription sold at $10. For the heaviest users, that cost reached $80 per month (source: Tom’s Hardware, Oct. 10, 2023, citing the Wall Street Journal).
Microsoft was sometimes losing twice, even eight times, what it was charging. This is not a leaked secret: it is documented and confirmed by multiple publications. It is simply a fact that most users do not know.
Sam Altman admits it: even at $200 a month, we lose money
On January 7, 2025, Sam Altman, CEO of OpenAI, posted on X: “insane thing: we are currently losing money on openai pro subscriptions! people use it much more than we expected.”
He had set the ChatGPT Pro price at $200 a month himself, expecting it to be profitable. It was not. Pro plan users are engaging with the service at a frequency and intensity that even OpenAI’s internal team had not anticipated (source: Fortune, Jan. 7, 2025).
This raises a simple question: if even a $200-per-month subscription runs at a loss, what does that say about the $20 plans?
OpenAI: a company that spends more than it earns
The figures published by the New York Times and confirmed by CNBC in September 2024 are unambiguous: OpenAI generated $3.7 billion in revenue in 2024, against a net loss of approximately $5 billion (source: CNBC, Sept. 27, 2024). Taken together, those figures imply roughly $8.7 billion in costs: about $2.35 spent for every dollar the company made.
The cost breakdown is telling. According to an analysis of Microsoft’s fiscal disclosures published by Ed Zitron on wheresyoured.at, OpenAI’s inference costs on Azure alone reached $3.767 billion in 2024, broken down as follows:
- Q1 2024: $546.8 million
- Q2 2024: $748.3 million
- Q3 2024: $1.005 billion
- Q4 2024: $1.467 billion
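The quarterly figures above can be tallied as a quick arithmetic check (numbers as reported by wheresyoured.at); the quarter-over-quarter growth also shows costs accelerating through the year:

```python
# OpenAI's reported 2024 inference costs on Azure, in millions of USD
quarterly_costs = {
    "Q1 2024": 546.8,
    "Q2 2024": 748.3,
    "Q3 2024": 1005.0,
    "Q4 2024": 1467.0,
}

total = sum(quarterly_costs.values())  # in millions
print(f"2024 inference total: ${total / 1000:.3f} billion")  # → $3.767 billion

# Quarter-over-quarter growth rate, rounded to the nearest percent
values = list(quarterly_costs.values())
for prev, curr in zip(values, values[1:]):
    print(f"growth: {100 * (curr - prev) / prev:.0f}%")
```

Costs grew by roughly a third or more every single quarter, which is why the 2025 figures cited next dwarf the 2024 total.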
Through the first three quarters of 2025, those inference costs had already reached $8.67 billion, more than double the entirety of the previous year’s revenue (source: wheresyoured.at, “Here’s How Much OpenAI Spends On Inference”).
Infrastructure: why every query is expensive
These losses are not accidental. They reflect the real cost of a service that requires enormous physical resources.
A single NVIDIA H100 GPU, the workhorse chip for large language model inference, costs between $25,000 and $40,000 to purchase. An eight-GPU server therefore represents an investment of $200,000 to $320,000. Each H100 is rated at up to 700 watts on its own, and once server overhead (CPUs, networking, cooling) is included, continuous operation draws roughly 1,000 to 1,400 watts per GPU (source: SemiAnalysis, “H100 vs GB200 NVL72 Training Benchmarks”).
On the energy side, each GPT-4o query consumes approximately 0.3 watt-hours, according to an estimate published by Epoch AI in February 2025. That may seem modest, but multiplied across hundreds of millions of daily queries, the total becomes staggering.
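That multiplication is worth making concrete. The sketch below uses Epoch AI's 0.3 Wh/query estimate from the text; the one-billion-queries-per-day volume is an illustrative assumption, not a sourced figure:

```python
# Back-of-envelope energy math for chat inference.
# wh_per_query comes from Epoch AI's GPT-4o estimate (Feb. 2025);
# daily_queries is an assumed, illustrative volume.
wh_per_query = 0.3
daily_queries = 1_000_000_000

daily_mwh = wh_per_query * daily_queries / 1_000_000   # → 300 MWh per day
annual_gwh = daily_mwh * 365 / 1_000                   # → 109.5 GWh per year

print(f"{daily_mwh:.0f} MWh/day, {annual_gwh:.1f} GWh/year")
```

At roughly 10 to 11 MWh of annual consumption for a typical US household, that works out to the electricity use of around ten thousand homes, for chat queries alone, before counting training runs.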
Goldman Sachs projects that data center energy demand will increase by 165% by 2030 compared to 2023, requiring $720 billion in infrastructure investment over the same period (source: Goldman Sachs Research, “AI to drive 165% increase in data center power demand by 2030”).
Anthropic: the same reality, at a different scale
OpenAI is not alone. Anthropic, the company behind Claude, recorded a net loss of $5.3 to $5.6 billion in 2024, despite revenue growing from $1 billion ARR in December 2024 to $4 billion in July 2025 (source: TechCrunch, Nov. 4, 2025, citing The Information).
The Information also reported that Anthropic had to lower its gross margin projections, with inference costs on Google and Amazon clouds running 23% higher than anticipated (source: The Information, “Anthropic Lowers Gross Margin Projection as Revenue Skyrockets”).
Who is funding all of this?
These massive losses are covered by investors betting on future profitability. Microsoft has invested more than $13 billion in OpenAI. Amazon and Google have each contributed several billion to Anthropic. Venture capital funds complete the picture.
Deutsche Bank analyzed OpenAI’s financial trajectory and concluded: “No startup in history has operated with losses on anything approaching this scale. We are firmly in uncharted territory.” Their estimates project $143 billion in cumulative losses between 2024 and 2029 before profitability (source: eMarketer citing Deutsche Bank). Spending projections have since grown as well: The Decoder reports that OpenAI added $111 billion to its cash-burn forecast, bringing projected spending to $665 billion through 2030 (source: The Decoder, “OpenAI adds $111 billion to its cash burn forecast”).
For comparison: Uber lost $18 billion over six years before reaching profitability. Amazon needed just $1 billion over five years.
What is going to change: the bill is arriving
OpenAI’s internal documents cited by the New York Times in September 2024 clearly outline the pricing trajectory the company has planned:
- ChatGPT Plus: from $20/month to $22/month by end of 2025
- ChatGPT Plus: $44/month by 2029
(source: TechCrunch, Sept. 27, 2024, citing the New York Times)
OpenAI’s head of ChatGPT stated plainly: “There’s no world in which pricing doesn’t significantly evolve when the technology is changing this quickly.” The same executive also raised the possibility that “unlimited” subscription plans could eventually disappear, replaced by usage-based or tiered pricing.
This is not speculation. It is written into internal financial documents and flows from arithmetic necessity.
A temporary subsidy, not a sustainable price
It is worth understanding why these services are offered at a loss today. The logic is classic in the technology industry: acquire users rapidly, build habits, create dependencies, then adjust prices once the grip is established.
This model worked for Uber with subsidized rides, for Netflix with cut-rate subscriptions during its expansion phase, for Amazon with AWS offered at marginal cost. In every case, the subsidy period ended once the dominant position was sufficiently secured.
With AI, the scale is different. Infrastructure costs cannot be compressed significantly in the short term. Demand is exploding. And investors will eventually demand a return on their enormous bets.
What this means in practice
For individual users, price increases are predictable and documented. A ChatGPT subscription at $44 by 2029 is more than double the current rate. For companies that have integrated AI into their workflows at artificially low prices, the repricing of APIs and enterprise licenses could substantially alter their financial calculations.
For product teams building features on top of these APIs, inference costs that represent a manageable budget line today could become a significant expense in two or three years.
The current price of AI is not its real price. It is an introductory rate funded by capital that expects a return. The real question is not whether prices will rise, but when and by how much.
What might happen next
The normalization of AI pricing will not be a simple price adjustment. It is likely to trigger a deep rebalancing of how people work, hire, and build products.
A gradual corporate retreat
Since 2023, many companies have integrated AI tools into their processes without measuring their real cost, precisely because that cost was artificially low. When OpenAI, Anthropic, or Google APIs cost two to three times more, finance teams will start asking the right questions: what is the actual return on investment for this integration? Is this automated summary or report generator really worth $0.15 per query instead of $0.04 today?
A portion of current use cases will not survive this arithmetic. AI features bolted onto products to “check a box” will disappear first. Companies will keep only the uses that are economically justified at their real cost, which many have never truly evaluated.
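The per-query repricing above can be made concrete with a sketch. The $0.04 and $0.15 figures come from the text; the monthly query volume is an assumed, illustrative number for a mid-size product feature:

```python
# Illustrative impact of API repricing on a single product feature.
# Per-query prices are from the article; monthly_queries is an assumption.
cost_per_query_today = 0.04
cost_per_query_future = 0.15
monthly_queries = 500_000

today = cost_per_query_today * monthly_queries    # → $20,000/month
future = cost_per_query_future * monthly_queries  # → $75,000/month

print(f"today:  ${today:,.0f}/month")
print(f"future: ${future:,.0f}/month")
print(f"multiplier: {future / today:.2f}x")       # → 3.75x
```

A line item that was comfortably absorbed at $20,000 a month becomes a board-level question at $75,000, which is exactly the arithmetic that will kill the "check a box" features first.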
Open source as a pressure valve
Faced with rising prices for proprietary models, open source models represent a credible and growing alternative. Meta with LLaMA, Mistral AI from Paris, and DeepSeek from China have demonstrated that it is possible to produce high-quality models at a fraction of the cost of the American giants.
A company capable of self-hosting a capable open source model on its own infrastructure insulates itself from pricing increases on the major platforms. This scenario was marginal in 2023, when open source models were clearly inferior. It is becoming increasingly realistic as the performance gap narrows.
The direct consequence: companies that build genuine expertise in local inference will be in a much stronger position than those that outsourced everything to APIs without understanding what they were consuming.
A pullback in individual adoption
For independent developers and creatives, the price increase will create a clear segmentation. Those who use AI intensively and productively will find the cost-benefit ratio acceptable even at $44 or $50 a month. Light or experimental use cases, on the other hand, may not survive a two- or threefold price increase.
The Gartner Hype Cycle describes a well-known phenomenon in technology: after the peak of inflated expectations comes the trough of disillusionment, before the genuinely useful applications settle in permanently. Consumer AI may be beginning this turn. The period of relative cheapness has inflated adoption figures without necessarily revealing which uses are truly anchored.
The return of the developer?
This is the most counterintuitive scenario, but not the least plausible. Since 2023, the tech job market has experienced a wave of mass layoffs, partly justified by the promise that AI would replace a significant fraction of development work. Some companies froze hiring, betting on AI productivity gains that did not always materialize.
If AI tools see their prices double or triple, the equation changes. A junior developer at $50,000 to $70,000 per year can produce contextualized code, maintain complex codebases, negotiate with business stakeholders, and exercise judgment that AI cannot. If GitHub Copilot rises to $50 or $80 per month and Claude Code exceeds $200, some teams may recalculate whether a human is more cost-effective for certain tasks.
This reasoning has its limits: an AI tool at $100 per month is still far cheaper than a salary, even a doubled one. But it illustrates that the current balance between human cost and machine cost rests on prices that do not reflect economic reality. When prices adjust, some hiring decisions frozen over the past two years may be reconsidered.
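The gap between the two cost bases is worth quantifying. Every figure below is an illustrative assumption (the salary midpoint is taken from the range in the text; the tooling price and overhead multiplier are hypothetical):

```python
# Hypothetical cost comparison: AI tooling vs. a junior hire.
# All numbers are illustrative assumptions, not sourced data.
tool_cost_per_month = 100        # e.g. Copilot plus a premium assistant, post-increase
junior_salary = 60_000           # midpoint of the $50k–$70k range cited in the text
overhead_multiplier = 1.3        # benefits, equipment, office space (assumed)

annual_tool_cost = tool_cost_per_month * 12            # → $1,200/year
annual_human_cost = junior_salary * overhead_multiplier  # → $78,000/year

print(f"tooling:     ${annual_tool_cost:,}/year")
print(f"junior hire: ${annual_human_cost:,.0f}/year")
print(f"ratio: {annual_human_cost / annual_tool_cost:.0f}x")  # → 65x
```

Even with tripled tool prices the ratio stays enormous, so the real hiring question is not tooling cost versus salary, but whether the tooling actually substitutes for the marginal hire.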
The profiles most likely to benefit from this potential return are those whose value AI cannot yet reliably replicate: developers capable of understanding a system as a whole, diagnosing intermittent issues, and negotiating technical trade-offs with product teams. Precisely the profiles that many companies have started to undervalue.
The end of unlimited subscriptions
Another likely transformation is the disappearance of the unlimited-use subscription model, a possibility OpenAI has already raised. Platforms are migrating toward usage-based or tiered billing, which makes the real cost visible to each user for the first time.
This pricing structure change will alter behavior. When every query has a stated price, users naturally become more selective. The reflexive, unconsidered use of AI, asking because it is fast and free, will give way to more deliberate engagement. That is not necessarily a bad thing.
A two-speed market
The most probable scenario by 2028 to 2030 is neither an AI collapse nor uniform universal adoption. It is a two-speed market: large companies capable of negotiating volume agreements and investing in internal infrastructure, and smaller players facing rising costs with no negotiating leverage.
This fracture could recreate the competitive inequalities that AI initially promised to erase. The democratization of access to capabilities once reserved for large technology companies will have been real, but temporary: just long enough for investors to recoup their outlay.
Primary sources: Wall Street Journal (Oct. 2023) via Tom’s Hardware, Sam Altman on X (Jan. 7, 2025), CNBC (Sept. 27, 2024), New York Times (Sept. 2024) via TechCrunch, Fortune (Sept. 28, 2024), wheresyoured.at (analysis of Microsoft fiscal disclosures), Deutsche Bank via eMarketer, The Decoder, The Information, Epoch AI (Feb. 2025), Goldman Sachs Research, SemiAnalysis.