Generative AI’s Impending Death By A Thousand Rake-Smacks
Give it enough rope...
Amongst those tired of generative AI — those fatigued from hearing idiot managers claim that it’s “the future,” and those despondent from watching the proliferation of slop across every corner of the Internet — there are usually two questions on their lips: how does this end, and when will it end?
Ed Zitron’s analysis of the underlying economics of generative AI makes for some sobering reading. Nobody is making money from this, save for Nvidia (and those adjacent to Nvidia, like Dell, Supermicro, Samsung, and SK Hynix). For OpenAI to survive and to deliver on its obligations to companies like Oracle and CoreWeave, it needs more money than currently exists in VC and private equity, and then some. And it’s not just that generative AI isn’t profitable, but that its revenues are actually minuscule.
The supernatural force that distorts reality for those who buy into the AI hype is, essentially, hope. So strong is the expectation that generative AI will power entire chunks of the economy that investors are prepared to give OpenAI more money than any other startup in history, for an indeterminate amount of time, while it loses more money than any other startup in history.
Over the past few months, we’ve seen that faith begin to fray, as whispers of “are we in an AI bubble?” turn into shouts. Yesterday, the Bank of England said that the chance of a “sharp market correction” — a euphemism for a sudden decline in the share prices of companies exposed to AI — has increased.
“On a number of measures, equity market valuations appear stretched, particularly for technology companies focused on Artificial Intelligence (AI). This, when combined with increasing concentration within market indices, leaves equity markets particularly exposed should expectations around the impact of AI become less optimistic,” the minutes for the latest meeting of the Financial Policy Committee read.
The outlook for AI, it said, remains “mixed,” although it only listed the potential downsides that could lead to mass disillusionment with AI — and, in turn, to that pesky “sharp market correction” I mentioned earlier.
“The Committee noted the future outlook for valuations was uncertain, with both downside and upside risks. Downside factors included disappointing AI capability/adoption progress or increased competition, which could drive a re-evaluation of currently high expected future earnings. Material bottlenecks to AI progress – from power, data, or commodity supply chains – as well as conceptual breakthroughs which change the anticipated AI infrastructure requirements for the development and utilisation of powerful AI models could also harm valuations, including for companies whose revenue expectations are derived from high levels of anticipated AI infrastructure investment.”
These are all valid points. But there are two things we need to acknowledge, even beyond the fact that generative AI costs more to run than it brings in, and that if model operators were forced to charge prices that reflect their actual costs, nobody would be able to afford to use generative AI:
Generative AI companies cannot survive on consumer subscriptions alone.
The enterprise case for generative AI is only as strong as the faith in the technology itself.
As you probably know, OpenAI recently signed a five-year commitment to spend $300bn on compute with Oracle. Now, while those costs (at least, in theory) won’t be spread evenly across the 60 months, let’s pretend they are. OpenAI would have to make $5bn in revenue each month to cover that commitment.
That number doesn’t include its other spending commitments with CoreWeave, Microsoft, and Nvidia. Or those with Broadcom. Or Google. Or AMD.
Last year, OpenAI’s API business (where third-party developers integrate the company’s models into their own code) accounted for around 30% of revenue. Let’s assume that’s still the case, meaning that 70% of its revenue comes from sales of subscriptions. So, to cover the cost of the Oracle deal, it would need to make $3.5bn in subscription revenue each month.
That’s an insane figure. In practical terms, OpenAI would have to make the same amount of subscriber revenue as Netflix each month, and then tack on an extra $1.5bn in API revenue, just to meet its commitments to one compute provider.
Again, I’m not including OpenAI’s other spending commitments. We’re just talking about its $300bn deal with Oracle.
And, again, it’s likely that this deal is structured so that many of the compute costs are back-loaded, in part because it takes a long time to build the amount of compute Oracle plans to deploy, and in part because both companies are likely anticipating massive growth in the short term.
So, it’s entirely conceivable that OpenAI will, in the later months of the deal, end up having to pay many multiples of what Netflix brings in each month.
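To make that concrete, here’s a toy model in Python. The linear ramp is purely my assumption (the actual contract shape isn’t public), but it still sums to $300bn over 60 months:

```python
# Toy model of a back-loaded Oracle bill: payments grow linearly,
# month by month, but still total $300bn over the five years.
# The ramp shape is an assumption, not the actual contract terms.
total = 300e9
months = 60
weights = [m + 1 for m in range(months)]   # month 1 weighs 1, month 60 weighs 60
scale = total / sum(weights)
payments = [w * scale for w in weights]

print(f"Month 1:  ${payments[0] / 1e9:.2f}bn")    # ~$0.16bn
print(f"Month 60: ${payments[-1] / 1e9:.2f}bn")   # ~$9.84bn
```

Even that gentle ramp ends with monthly bills roughly double the even-split figure of $5bn; a steeper ramp would push the final months higher still.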
For the sake of argument, let’s stick with the $5bn-a-month figure. Assuming the ratio of subscription-to-API income stays the same, we’re left with the question of how many people OpenAI needs to sign up for this deal to become even remotely viable.
We don’t really know how OpenAI’s subscribers break down, but I find it highly unlikely that most people are paying $200-a-month for ChatGPT. Even those with enterprise subscriptions pay a discounted rate, depending on how many seats they buy.
In August, OpenAI reported it had 5 million paying business subscribers — which sounds impressive, but becomes less so when you consider that around 20% of that figure were likely seats bought by the University of California system at a cost of $2.50 apiece (a million seats at $2.50 works out to a mere $2.5m).
For the sake of argument, let’s assume that of the $3.5bn in monthly subscription revenue it needs for the Oracle deal to work, around $1bn comes from business customers and those paying for the most expensive subscriptions. And let’s assume that the prices of its packages remain the same — although they almost certainly won’t, in part because inflation is a thing, but also because as its financial pressures grow, it’ll likely try to squeeze customers for more.
So, we’ve got $2.5bn, all of it coming from subscribers to the $20 ChatGPT package. Do you know how many subscribers you’d need to make that?
125 million.
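If you want to check the working, here’s the entire back-of-envelope chain in a few lines of Python. Every input is one of the assumptions above, not a real OpenAI figure:

```python
# Back-of-envelope math for the Oracle deal. All inputs are the
# article's assumptions, not OpenAI's actual numbers.
oracle_deal = 300e9                    # five-year Oracle commitment
monthly_cost = oracle_deal / 60        # pretend it's spread evenly: $5bn

subscription_share = 0.70              # assumed 70/30 subs-to-API split
subs_target = monthly_cost * subscription_share   # $3.5bn a month

business_revenue = 1e9                 # assumed business/premium take
plus_price = 20                        # the $20-a-month ChatGPT package

subscribers = (subs_target - business_revenue) / plus_price
print(f"{subscribers / 1e6:.0f} million $20 subscribers needed")
# -> 125 million $20 subscribers needed
```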
Now, admittedly, Netflix has (as of January) over 300 million subscribers. But here’s the thing: Netflix costs less than ChatGPT (the cheapest package in the UK costs £6 with adverts), and Netflix’s pricing reflects the actual cost of living in each subscriber’s country.
Or, to put it another way: the basic Netflix plan in Pakistan costs around 20% of what the same package costs in the US.
There is no way the math makes sense if we’re just leaning on consumers. It just doesn’t.
Admittedly, the above rests on a bunch of assumptions, and it’s entirely possible that my numbers won’t fully reflect the actual conditions when OpenAI starts receiving invoices from Oracle. But even if I’m off slightly, the basic point remains absolutely true: OpenAI will need to massively increase its subscriber numbers.
The problem is, I’m not really sure that generative AI has the same mass-market consumer appeal that, say, Netflix or Spotify does. And even if there are mass-market consumer use-cases, are they compelling enough to get potentially hundreds of millions of people to pay $20 each month?
The point I’m inching towards is that, given the costs of the commitments it’s made, let alone those inevitable costs of operating, OpenAI can’t be a primarily consumer-focused company. It just doesn’t work.
For generative AI to become even remotely viable, we need to see massive, unprecedented enterprise buy-in — and this is especially true for companies like OpenAI, which, unlike its rival Anthropic, makes the vast majority of its revenue from individual subscriptions sold to non-business customers.
This is where we get to the Achilles’ heel of generative AI — it just isn’t that good.
A Matter of Faith
Right now, the enterprise enthusiasm for generative AI isn’t being driven by any objective evaluation of the technology, but rather by the same hype that’s permeating the technology press, and chundering down from genAI hypemen like Satya Nadella and Marc Benioff.
It’s not so much enthusiasm as it is a kind of faith — a belief that genAI can do more than it actually can, and that it will keep getting progressively better.
The thing about faith is that, by design, it isn’t entirely rational, and thus you can’t rationalize it away. What usually breaks faith isn’t an outsider, but the very thing the person has faith in.
To give you an example, in 2011, a preacher called Harold Camping predicted that the end of the world would happen in May of that year. While most people laughed, Camping did have a significant number of believers who collectively spent millions on a splashy nationwide advertising campaign warning that the end was nigh. In Vietnam, 5,000 people gathered to await the rapture.
Obviously, we’re still here. Although Camping — who, incidentally, had incorrectly predicted the end of the world twice before — offered a revised date for the end times, pushing it back to October 2011, the high-profile cock-up absolutely destroyed his reputation.
Camping was once the head of Family Radio, a Christian broadcaster with over 200 stations that was, at its peak, the 19th-largest broadcasting company in the US. He died in obscurity two years after his failed prediction, his radio empire a shell of its former self.
The point I’m trying to make is that what destroyed Camping wasn’t a sensible, rational person explaining that numerology isn’t the best basis for eschatological predictions. It was Camping himself.
The same thing will happen with generative AI.
This week, we learned that Deloitte was forced to issue a partial refund to the Australian government after it used generative AI to produce an A$440,000 report that contained multiple errors and hallucinated references.
It was an embarrassment for Deloitte, sure, with the story covered in top-tier publications across the world. But I’d argue it was equally damaging for generative AI as a whole, in part because of what Deloitte is.
Deloitte is one of the “big four” accounting firms. It’s the company that — at least, in theory — keeps other companies in line. If Deloitte fucked up this badly, what does that say about generative AI?
This isn’t the first time something like this has happened. There are plenty of stories about lawyers who were admonished after using genAI to produce legal filings, and they’re deeply funny, but they usually involve small firms and inexperienced, not particularly tech-savvy people.
This is Deloitte.
The funny thing is that it won’t be the last time a major corporation — one that enjoys a position of trust — screws up because it trusted ChatGPT or Claude a bit too much.
It’s only a matter of time until something really bad happens — like a major security breach, or a personal data leak that ensnares millions of people — because a developer decided to entrust an LLM with their job.
That too will be a major news story when it inevitably happens.
Or maybe someone uses Copilot in Excel and, because of a hallucinated formula or whatever, their company goes bankrupt, or massively overspends on a project or something. I’m just spitballing.
The point is, generative AI is an inherently unreliable technology, with OpenAI itself now saying that AI hallucinations are inevitable and unsolvable. Using it in enterprise scenarios where reliability and accuracy matter only invites disaster.
It’s the technological equivalent of Sideshow Bob walking through a parking lot littered with rakes — each rake a high-profile foul-up involving a company or person that should have known better.
And with each rake-smack, that faith I described will evaporate.
Neither OpenAI, nor the wider generative AI industry, can afford for that to happen. OpenAI in particular needs to keep the enterprise customers it has, and massively, massively expand that base — essentially growing this segment faster than its individual and consumer business.
Although enterprise and business customers can fall victim to the same hype that ordinary people do, they’re also constrained by regulatory and legal commitments, as well as fiduciary ones.
The threat OpenAI faces is that, among this cohort, the perception of generative AI will shift from promising new technology to expensive liability.

