Big Tech Always Escapes Justice
How Google, OpenAI, and others get away with (literal and figurative) murder

Note from Matt: This is the first of a two-parter that I’m publishing this week, with the follow-up coming (hopefully) tomorrow. This week’s premium newsletter will go live this weekend.
I’m angry! And I feel like, at least structurally, I need two separate newsletters to convey everything I want to say and also have it make sense.
Also, this is a slightly shorter newsletter than usual, by which I mean it’s less than 4,000 words (though not by much).
Also, UPDATE: you can read the follow-up post here.
I’ve been in an especially foul mood the past week or so, for reasons that are both related and unrelated to tech. As a result, I’ve found it especially hard to take the swirling thoughts in my head and put them into semi-coherent words on a screen. I have several half-finished newsletters floating around on my hard drive, and I imagine they’ll remain half-finished until I exit this funk I find myself in.
Perhaps it’s just me — and perhaps it’s not a good thing — but I’ve always found that anger can drag me out of these funks, at least long enough to write something.
And so, it was perhaps fortuitous that, earlier this week, Judge Amit P. Mehta handed down his remedies ruling in the long-running Google search antitrust case. I’m being entirely factual here while simultaneously underselling it.
As a recap: last year, Judge Mehta found that Google held an unlawful monopoly on the search market. It achieved this dominance through a few questionable tactics, including paying companies like Apple and Samsung billions of dollars each year to make Google the default search engine on their devices and in their browsers — most notably Safari.
This matters because no rival can outspend Google here. In 2021, Apple received $18 billion — more than Bing made during Microsoft’s entire 2021 fiscal year (which, confusingly, ends on June 30). And, as the trial found, people seldom change the default search settings on their devices.
In essence, rather than letting the market decide which provider has the best search engine, Google was buying users in the knowledge that those users were unlikely to switch to something else, even if that “something else” is objectively better than Google.
And so, you can see how, if Microsoft can’t outspend Google while remaining profitable in search, any new competitor has no chance.
When Judge Mehta found last year that Google had broken the Sherman Act, there was a palpable sense of schadenfreude online — which I think came from both a feeling that Google is a meaningfully worse product than it once was and a desire to see the company punished for all its excesses. Though we would have to wait more than a year for Judge Mehta to issue his sanctions on the company, people were happy to wait, provided said sanctions were meaningful and reflected the inherent underhandedness of Google’s behavior.
So, what did we get?
Google can continue paying Apple and Samsung (and others, like Mozilla) billions of dollars each year to remain the default search engine — although it will no longer be able to insist on exclusivity.
Google won’t have to sell Chrome — which, naturally, has Google as the default search engine.
It won’t have to allow the Department of Justice to monitor its management of the Android platform to ensure that Google isn’t unfairly disadvantaging its competitors.
Google will have to share certain information with its competitors — though the extent of that information-sharing is far less than what federal prosecutors sought.
To describe this as a “slap on the wrist” would be a grotesque overstatement. As the New York Times summed up the ruling: “The Message for Big Tech in the Google Ruling: Play Nice, but Play On.”
Honestly, what’s the point?
To be clear, I didn’t think the punishment meted out to Google would be anything like what people hoped for. While many hoped for a 1911-style ruling — the kind that broke up Standard Oil — the most likely outcome was always going to be something far more modest.
And that’s because, as we’ve seen in previous tech antitrust cases, Google would inevitably exhaust its avenues for appeal, and likely make a settlement offer to the Department of Justice that concedes on some points, but doesn’t radically change the game.
This is what happened in 1998, when the DoJ sued Microsoft over its monopoly in the PC operating system market and its bundling of Internet Explorer with Windows. Microsoft lost, and the judge ordered the break-up of the company. It appealed, and in 2001 it settled, agreeing to share its APIs with third-party companies and to submit to DoJ monitoring.
If Judge Mehta had brought the hammer down on Google — I mean, really brought the pain — it would have appealed, and as the case dragged on, it would have offered a settlement in which it made certain concessions. Those concessions would, no doubt, have been more painful than what Judge Mehta ordered here.
Hilariously, we can blame generative AI for this depressingly tepid ruling, with Judge Mehta noting that the AI summaries attached to search results now mean that Google is a fundamentally different product than it was when he first issued his ruling — and thus its competition isn’t just companies like Bing and DuckDuckGo, but also the likes of OpenAI, Anthropic, and Perplexity.
Quoting Judge Mehta’s ruling, which you can read here:
“Much has changed since the end of the liability trial, though some things have not. Google is still the dominant firm in the relevant product markets. No existing rival has wrested market share from Google. And no new competitor has entered the market. But artificial intelligence technologies, particularly generative AI (“GenAI”), may yet prove to be game changers.
Today, tens of millions of people use GenAI chatbots, like ChatGPT, Perplexity, and Claude, to gather information that they previously sought through internet search. These GenAI chatbots are not yet close to replacing GSEs, but the industry expects that developers will continue to add features to GenAI products to perform more like GSEs [note: general search engines].
The emergence of GenAI changed the course of this case. No witness at the liability trial testified that GenAI products posed a near-term threat to GSEs. The very first witness at the remedies hearing, by contrast, placed GenAI front and center as a nascent competitive threat. These remedies proceedings thus have been as much about promoting competition among GSEs as ensuring that Google’s dominance in search does not carry over into the GenAI space. Many of Plaintiffs’ proposed remedies are crafted with that latter objective in mind.”
The problem with this argument is fourfold:
There’s no good information on how many people are using generative AI as a replacement for search — and even if that information exists, it doesn’t address whether people are turning to generative AI because Google, through its lack of competition, has become so rotten.
It presumes that there’s a long-term financial future in generative AI, especially as a mass-market consumer product, when there absolutely isn’t.
It lumps OpenAI, Perplexity, and Anthropic into the same group, when they’re in fact very different companies:
Anthropic makes the majority of its revenue not from subscriptions but from its API, with most of that coming from vibe-coding companies like Cursor and Replit.
Perplexity is an absolute minnow of a company. Its most recent ARR is $150m — which sounds impressive, until you realize that ARR is simply one month’s revenue multiplied by twelve.
Put another way, Perplexity makes $12.5m a month — and it loses much, much more than that.
OpenAI, in fairness, makes the majority of its revenue from subscriptions, with APIs being a small part of its income — but again, nothing suggests that those customers are using ChatGPT as an alternative to Google.
People really fucking hate AI Overviews.
Essentially, Google managed to escape serious harm by deploying a technology that isn’t popular, isn’t profitable, and whose utility — and thus any competition it might face — is not obvious. There’s only one high-profile company that specializes in generative AI-based search, and that company’s annual revenues are $100 million less than what Meta offered one 24-year-old AI researcher in compensation.
Sidenote: And that assumes that ARR is a particularly useful or accurate metric, which it isn’t! Have you ever wondered why the only companies that use ARR are pre-IPO software firms?
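For the curious, here’s a minimal sketch of the arithmetic behind that sidenote — the figures are the Perplexity numbers cited above, and the multiply-by-twelve formula is the generic way ARR is typically computed, not anything specific to how Perplexity itself reports:

```python
# ARR ("annualized recurring revenue") as commonly computed by pre-IPO
# software firms: take the most recent month's recurring revenue and
# multiply it by twelve. It assumes that month repeats indefinitely,
# and says nothing about costs, churn, or one-off revenue.

def arr_from_monthly(monthly_revenue: float) -> float:
    """Annualize a single month's recurring revenue."""
    return monthly_revenue * 12

# Working backwards from the figure cited above:
reported_arr = 150_000_000            # Perplexity's reported ARR
implied_monthly = reported_arr / 12   # -> $12.5m a month

print(f"Implied monthly revenue: ${implied_monthly / 1e6:.1f}m")
print(f"Re-annualized ARR: ${arr_from_monthly(implied_monthly) / 1e6:.0f}m")
```

Which is exactly why the metric flatters any company whose most recent month happens to be its best one.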
I feel the need to repeat myself: I did not expect to see Google broken up, though it would cause me no pain if that happened. However, I expected something more than… whatever the hell this is. And I imagine that, had this case gone through the usual appeal process, Google would likely have offered a settlement deal more punitive than Mehta’s own ruling.
You need to understand how this affects you, and why it matters. This isn’t just a case of one company’s malfeasance going unchallenged, or of malfeasance that affects only other companies in the search space.
For billions of people around the world — and, I imagine, you too — Google is the first port of call when trying to find something on the Internet. It’s an empire built not on providing an objectively better service than its competitors, but on outspending them, and on building an expansive software ecosystem across desktop and mobile that actively deters consumers from switching to an alternative.
Every time you search for something and you can’t find it, you have to ask yourself: “Is this because Google has a captive audience and feels no compulsion to offer a better search product?”
I believe the outcome of this trial is, in part, a consequence of the fact that serving in the judiciary requires no particular tech-savviness. Hell, there are people regulating tech who don’t understand the first thing about it — like Japan’s former cybersecurity minister, who didn’t know the difference between a CD and a USB drive and had never used a computer. Or Britain’s technology minister, who said that people who want to repeal a law that has resulted in vast swaths of the Internet being age-gated are “on the side of the predators.”
I also believe that, deep down, the Department of Justice didn’t have the stomach for a long, expensive, drawn-out fight with Google — and while this ruling doesn’t quite sate the bloodlust that many feel, I imagine plenty of people are breathing a sigh of relief that this saga is over, assuming Google doesn’t appeal.
At the same time, I also believe that none of those things matter.
What matters is that big tech has shown, once again, that it enjoys a sense of impunity that ordinary people — and the non-tech sectors of the economy — do not enjoy.
Stealing is fine, actually
In 2011, I was living and working in Switzerland. And so, you can only imagine my surprise when I showed up to the office one day, only to see my hometown of Liverpool on CNN.
Not because it’s objectively better than any other UK city — sorry, it’s true — but because it, like many other English cities, had become embroiled in riots that started in Tottenham earlier that week.
One of my enduring memories of the 2011 England riots is how the government showed absolutely no mercy to those who had participated in them. One college student, who had no prior criminal record, was handed a six-month sentence for looting a case of water from a Lidl supermarket worth just £3.50 (just shy of $5 at today’s exchange rate). The student didn’t even consume any of the water — he ditched it on the walk home after being confronted by police.
Separately, on today’s drive to the coffee shop that I’ve turned into my office, I was listening to the latest episode of Darknet Diaries — one of my favorite podcasts — and Aaron Swartz was mentioned.
Swartz was a veritable genius, having co-founded Reddit, co-authored the RSS specification, and co-created Markdown, the formatting language I’m using to write this newsletter. In 2011, he smuggled a laptop into MIT, where he set about downloading academic literature from the JSTOR archive using a guest account he had been provided with.
While this was technically illegal, there’s a principled excuse for his actions. JSTOR is a paywalled gatekeeper to academic research. Academics receive no payment when someone accesses their work through JSTOR — and, moreover, much of the literature under its control is publicly funded, and thus should be freely available to anyone who wishes to access it.
JSTOR reached a civil settlement with Swartz and declined to pursue the matter further. That didn’t stop the criminal case, however: Swartz was arrested by the MIT campus police and a Secret Service agent, and a grand jury later indicted him for breaking and entering with intent, grand larceny, and unauthorized access to a computer network.
The following year, Swartz was hit with further federal charges that carried a combined maximum of 50 years in prison. Prosecutors offered him a plea deal under which he would serve six months in a minimum-security facility. Instead, he killed himself.
What happened to Swartz was appalling — and the only silver lining in this harrowing ordeal is that the prosecutor responsible for dragging Swartz through hell for the “crime” of downloading publicly-funded academic research has, for the most part, seen her career suffer as a consequence.
I write this to say that, on both sides of the Atlantic, governments have taken a firm line on stealing — or, in the case of Swartz, “stealing,” said with sarcastic undertones and massive, massive air quotes.
Hell, earlier this year, a guy from North Yorkshire was sentenced to three years in prison for his role in running an unauthorized online streaming service.
We all agree that stealing — or, in the case of digital content, “stealing” — is wrong, or at the very least, unlawful.
Except when the tech industry does it!
I’m desperate to know what, legally or morally, separates Swartz’s actions from those of Meta, which, according to The Atlantic, trained its generative AI models on a massive online database of pirated eBooks and research papers. Why the fuck aren’t any Meta employees staring down 50 years in the slammer?
How is it that Jammie Thomas-Rasset was ordered to pay nearly $250,000 for inadvertently sharing 24 songs on Kazaa, whereas the generative AI industry is able to scrape one news organization’s content ten times a second, consuming resources and repurposing that publisher’s content without compensating them?
These are just a handful of the ongoing generative AI copyright cases I found with a cursory Google search:
Several Indian newspapers are suing OpenAI for scraping their content.
The BBC has threatened to sue Perplexity — although it’s not clear whether a suit has been filed.
The New York Times is suing OpenAI and Microsoft — which OpenAI is not happy about!
While these are all civil cases — and they’re all ongoing — I can’t help but point out the disparity between what OpenAI (and Perplexity, and Anthropic, and Microsoft, and Meta) have done and what Aaron Swartz did, and how differently the authorities responded.
While you could argue that Swartz accessed paywalled content, whereas this stuff is largely (though not entirely) publicly accessible, I’d counter by saying that no, it fucking isn’t — as evidenced by the fact that many LLMs have been trained on copyrighted works by musicians like Ed Sheeran and the Beatles.
Maybe I’m stupid. Maybe I don’t get it. If you can, explain the difference to me. What separates Swartz from Altman, other than the fact that one was a decent person who created things, and the other is a serial liar who hasn’t created a single thing in his life, other than an ever-expanding dictionary definition for the term “oxygen thief”?
Hell, forget Swartz. What’s the difference between OpenAI and the guy from North Yorkshire mentioned earlier, who sold access to copyrighted content? Both involve using material they do not own for their own commercial purposes — although the guy from North Yorkshire actually made a decent amount of money from his scheme, whereas OpenAI is a cash incinerator the likes of which we haven’t seen since the K Foundation.
It’s not even that we’ve got a double standard for what kinds of theft the authorities are willing to prosecute. It’s that we’re actively trying to redefine theft to permit the activities of these companies.
Earlier this year, the UK government tried to update copyright law to allow generative AI companies to train on materials created and owned by other people, unless the copyright holder explicitly opts out.
This wasn’t just idiotic, bad law, and a middle finger to the creative industries that contributed $125bn to UK economic activity in 2024. It didn’t just put the onus on people to say, explicitly, that they don’t want their work stolen for the benefit of the Patagonia-wearing dipshits I so frequently rail against.
It was an attempt to redefine theft for the benefit of those doing the stealing, and I’m relieved that the government has backed down — or, at least, appears to have done so.
This whole point has dragged on, but I feel the need to hammer home the fact that there’s a double standard here — one that benefits the tech industry and disadvantages ordinary people.
The founders of The Pirate Bay went to jail. People have gone to jail for selling Fire TV sticks pre-loaded with access to illegal streaming services. Ordinary users are being threatened with jail time for using these modified Fire TV sticks — although the likelihood of them actually seeing the inside of a cell is, I’d argue, nonexistent, and the rhetoric is simply a scare tactic.
Aaron Swartz was threatened with half a century of jail time, and then killed himself. One dude was given half a year in prison for stealing a case of water.
Nobody, as far as I’m aware, has faced any criminal penalties for copyright-related infractions committed as part of their generative AI work.
I do not understand how, or why — either on a legal level, or a moral level.
Tech Always Wins
Last week, I mentioned the tragic case of the 16-year-old boy who was counseled by ChatGPT on the virtues of committing suicide, with the chatbot telling him that he didn’t owe his parents survival, providing practical advice on how to hide the marks of previous suicide attempts on his skin, and describing the most effective ways to kill himself.
Forgive me for writing in such stark, brutal terms — but I see no reason to cushion what is a grotesque, tragic case in soft language, as doing so would only help obfuscate the fact that a tech product created by a company now worth $500bn, and backed by Microsoft and Oracle, literally told a child how to kill himself.
I didn’t quite pose the question as starkly as I should have last week, so allow me to ask it again, in terms as blunt as those I used to describe the facts of the case:
How the fuck is it that nobody is in jail for this? Why is OpenAI facing only a civil suit, and not criminal charges?
You could say that OpenAI is a company that made a product, and thus it’s not as though a person told the child to harm himself — as was the case in 2017, when Michelle Carter was convicted of involuntary manslaughter for encouraging her boyfriend to kill himself, a crime for which she was sentenced to fifteen months in prison.
But here’s the thing! Company directors can — and do! — go to prison when their companies, or their products, harm or kill people. In April of this year, the owner of a paddleboarding company was sentenced to more than a decade in prison for leading a paddleboarding tour, on a dangerously swollen river, that led to the deaths of four people.
That was in the UK, so here’s an American example. A couple of years ago, the director of a trucking company was handed a decade in prison for his role in an explosion that took the life of one of his drivers.
While you could argue that the previous example involved other factors that contributed to the sentence — including separate charges for tax evasion and Covid relief fraud, and the fact that he knowingly told an employee to do something he knew was dangerous — I’d counter by saying that I don’t believe OpenAI, whether we’re talking about its leadership or its employees, was oblivious to the potential harms of its products.
That, incidentally, is the subject of one of my half-written newsletters, which I may publish tomorrow because I feel as though it follows the points raised in this one.
Sidenote: fuck it, yeah. I’m going to write it for tomorrow. Premium article on the weekend.
I want to make it clear that, on both sides of the Atlantic, there is a parallel justice system that advantages the tech industry and disadvantages ordinary people. Big tech is able to get away with the most appalling crimes — crimes that would see ordinary people sent to jail for a long, long time.
And I don’t know how to fix it. And that, I guess, is one of the reasons why I’ve been feeling a bit down lately.
