Mediocrity, Accountability, and Artificial Intelligence
The managerial class loves AI because it allows them to avoid responsibility for their failures. That should terrify us all.

Part of what makes our current world so frightening is that there are people empowered to make big decisions, and these people are often, at best, mediocre.
The thing about mediocre people isn't that they have a propensity to screw up — though they do — but that they seldom take accountability for their failures. Allow me to give you an example that illustrates this point, and has the benefit of being very, very, very funny.
In 2023, Clorox — the company that makes cleaning supplies — was hacked by a group dubbed Scattered Spider, resulting in $356m in damages and $49m in remediation expenses. According to filings with the SEC, the attack resulted in a production slowdown and "an elevated level of consumer product availability issues."
Not good. So, how did it happen? Well, Clorox — a company that had revenues of $7.1bn in 2024, with a profit of $280m — decided to outsource its helpdesk to Cognizant, a massive outsourcing firm, headquartered in the US but with most of its workforce in India, and with an… ahem… somewhat choppy reputation when it comes to security.
Cognizant is one of the “big five” outsourcing firms — often called the WITCH companies, based on the first initial of their names (Wipro, Infosys, Tata, Cognizant, HCL). These firms have enjoyed stratospheric growth over the past five years (Cognizant’s revenues grew from $16.6bn in 2020 to $19.7bn in 2024), in part due to the growing drive for offshoring within the corporate world. These companies are popular because they’re cheaper — although cheaper doesn’t necessarily translate into better, or even good.
You see where this is going. Clorox blames Cognizant for its 2023 security incident, and is now suing the company for damages. It alleges that the outsourced helpdesk staffers literally just handed over credentials to the attackers, without even verifying that they worked at the company. If you think I'm exaggerating, here is a snippet of an exchange between the hacker and the helpdesk, courtesy of Ars Technica's Nate Anderson.
Cybercriminal: I don’t have a password, so I can’t connect.
Cognizant Agent: Oh, ok. Ok. So let me provide the password to you ok?
Cybercriminal: Alright. Yep. Yeah, what’s the password?
Cognizant Agent: Just a minute. So it starts with the word "Welcome"...
Okay, so this is a company that cheaped out on its helpdesk team, when it could have easily afforded to hire in-house, and any savings it made from outsourcing are, almost certainly, now eclipsed many times over by the cost of this incident. Funny.
But not as funny as Cognizant's response, which basically boils down to: "no, you fucked up, we're not your security team. You're supposed to check that we don't do anything chaotically stupid, like give credentials to any randomer who asks."
I’m not joking. This is the statement Cognizant’s PR gave to Ars Technica:
A PR agency representing Cognizant reached out to us after publication with the following statement: "It is shocking that a corporation the size of Clorox had such an inept internal cybersecurity system to mitigate this attack. Clorox has tried to blame us for these failures, but the reality is that Clorox hired Cognizant for a narrow scope of help desk services which Cognizant reasonably performed. Cognizant did not manage cybersecurity for Clorox."
Remember that old meme where two Spider-Men point at each other? That's what this is. Except the Spider-Men are wearing business suits, and they're blaming each other for setting nearly $400m on fire. Neither side is taking responsibility for its role in the fuck-up.
No Clorox executives have resigned after apologizing for a tragedy that could have been easily avoided by hiring people in-house — people the company could monitor, vet, and train, and who actually had a stake in the company, rather than seeing it as one client among many that come and go over time.
Part of the lawsuit claims that Cognizant diverged from the training materials that Clorox provided the outsourcing firm. Quoting Ars Technica:
"Clorox says that it held regular meetings with Cognizant to ensure that everyone was following the same playbook. Cognizant gave 'explicit acknowledgments and consistent reassurance that it was following Clorox's credential support procedures.' But the cybercriminal calls in 2023 showed this to be a 'blatant lie,' says Clorox."
Gee, you know what could have avoided that? Actually hiring some help desk staff.
Similarly, no Cognizant execs have apologized for having staff that are so inept, they literally just create credentials for whoever asks, even setting them up with MFA (multi-factor authentication) and access to Okta (the platform that allows employees — or, in this case, shadowy cybercriminals — to access the various apps they need to do their job).
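What makes the Clorox exchange so damning is that the missing safeguard is trivial to describe. Here's a minimal sketch of the kind of check that was evidently absent — the directory, field names, and procedure are my own illustrative assumptions, not Clorox's or Cognizant's actual systems:

```python
# Hypothetical sketch of a help-desk credential-reset flow that refuses to
# hand out passwords without verifying the caller. All names and checks are
# illustrative assumptions, not any real company's procedure.

EMPLOYEE_DIRECTORY = {
    "jdoe": {"employee_id": "E1001", "phone_on_file": "+1-555-0101"},
}

def reset_password(username: str, claimed_employee_id: str,
                   callback_verified: bool) -> str:
    """Issue a one-time reset link only after every identity check passes."""
    record = EMPLOYEE_DIRECTORY.get(username)
    if record is None:
        raise PermissionError("unknown user: refuse and escalate")
    if record["employee_id"] != claimed_employee_id:
        raise PermissionError("employee ID mismatch: refuse and escalate")
    if not callback_verified:
        # Call back on the number already on file -- never one the
        # caller supplies mid-conversation.
        raise PermissionError("caller not verified via number on file")
    return f"one-time reset link issued for {username}"
```

The point of the sketch is that every branch is a refusal: the default outcome of a reset request is "no," and the agent has to positively establish identity to get to "yes." What the lawsuit describes is the inverse, where "yes" was the default.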
Accountability and the Business Idiots
My friend Ed Zitron wrote about the idea of the Business Idiot, which explains many of the moronic decisions we see companies of all stripes — but especially tech companies — make.
You've probably worked for a Business Idiot. Perhaps you know one. Or (unlikely, if you're reading this newsletter), you are one.
A Business Idiot is someone who is detached from the thing they do, or the work that their employees do, and they make no effort to understand either. A Business Idiot is someone who is driven exclusively by the need to provide shareholder value. A Business Idiot is… Well, I'll let Ed finish the rest.
"We live in the era of the symbolic executive, when "being good at stuff" matters far less than the appearance of doing stuff, where "what's useful" is dictated not by outputs or metrics that one can measure but rather the vibes passed between managers and executives that have worked their entire careers to escape the world of work. Our economy is run by people that don't participate in it and our tech companies are directed by people that don't experience the problems they allege to solve for their customers, as the modern executive is no longer a person with demands or responsibilities beyond their allegiance to shareholder value."
Oh, and Business Idiots rarely say sorry. Again, from Ed:
"While CEOs do get fired when things go badly, it's often after a prolonged period of decline and stagnancy, and almost always comes with some sort of payoff — and when I say "badly," I mean that growth has slowed to the point that even firing masses of people doesn't make things better."
Business Idiots are removed from what you, the worker, do. They don't understand it. They don't share your motivations. And, when things invariably go wrong, they never face any consequences.
I’d wager they don’t even feel guilt. That’s because, even if things go horribly wrong, as in the case with Clorox, and every other company that’s experienced some kind of avoidable catastrophe, or encountered a period of unstoppable decline, they’ve still acted within their fiduciary duties to shareholders.
It’s this fiduciary duty that explains why so much of what we use, and so much of what these companies do, is inherently mediocre. If building a better product — having a larger team that you treat well — means that your profit margins are even slightly smaller, then you, as an executive, are failing in your job.
Do you think that Facebook and Instagram would be as broken as they are if Meta didn’t feel a crippling pressure to express growth in perpetuity? Would Google be a better search product without the constant pressure of reaching certain revenue growth targets? Would companies, in general, be better if their boards weren’t beholden exclusively to the shareholders, and could put other considerations first — like the company’s long-term health, its customers and employees, and the products they make?
I’d also argue that many of the decisions these Business Idiots make, whether by design or as a consequence, shield the company or its leadership from any culpability when things go wrong.
Outsourcing is a great example of that. When something goes wrong with the outsourcer, the company that actually paid them to do the job — often laying off staff in the process — can simply wash their hands, blaming it on the ineptitude or the misconduct of the company to which they entrusted their business.
Here’s another example, also reportedly involving the Scattered Spider ransomware group, and another major outsourcing company, Tata Consultancy Services (the ‘T’ in the WITCH acronym).
On Monday, April 21, Marks and Spencer (a prestigious UK retailer that’s also known as M&S) fell victim to a ransomware attack that effectively crippled the company, preventing it from taking and fulfilling online orders or processing contactless payments. The attackers also stole customer data. At the start of July, M&S said it was still fixing issues caused by the attack, and that process would take as much as four weeks to complete.
Most reporting has pointed the finger of blame at Scattered Spider — a group whose members, allegedly, come from the UK and US. In early July, British police arrested four people who they claim belong to the group and were involved in the attack on M&S and other UK retailers (namely the Co-op and Harrods).
M&S has pointed the finger at a third-party supplier for allowing this attack to take place. Speaking to a parliamentary committee, its chairman, Archie Norman, said “There have been media reports [of] M&S leaving the back door open. We didn’t.” Norman also said that the attackers used social engineering — essentially, manipulating a human to compromise a computer system, rather than using sophisticated technical means — as with Clorox and Cognizant.
The following quote comes from his testimony to the House of Commons Business and Trade Sub-Committee.
“In our case, the initial entry, on 17 April, occurred through what people now call social engineering. As far as I can tell, that is a euphemism for impersonation, but it was sophisticated impersonation. They didn’t just rock up and say ‘Would you change my password?’ They appeared as an individual, with their details. Part of the point of entry in our case also involved a third party. That is just a reminder that that attack surface is very hard to defend.”
Tata, for what it’s worth, says it wasn’t responsible for the breach, though it did conduct an investigation. Reuters quoted Tata director Keki Mistry, speaking to a shareholder meeting, as saying: “As no TCS systems or users were compromised, none of our other customers are impacted.”
That’s curious, considering that, according to the BBC, M&S’s CEO and other company figures were sent a ransom email from the account of a Tata employee.
The [ransom] email was sent apparently using the account of an employee from the Indian IT giant Tata Consultancy Services (TCS) - which has provided IT services to M&S for over a decade.
The Indian IT worker based in London has an M&S email address but is a paid TCS employee.
It appears as though he himself was hacked in the attack.
TCS has previously said it is investigating whether it was the gateway for the cyber-attack.
The company has told the BBC that the email was not sent from its system and that it has nothing to do with the breach at M&S.
Reporting from The Times also blamed a contractor, although it didn’t name names, adding that the attackers were able to remain within the system for nearly three days. This was a colossal, colossal cock-up.
Now The Times can reveal that the hackers, thought to be from the Scattered Spider group, penetrated the retailer’s IT systems through a contractor.
“What went wrong was human error. Human error is a polite word for somebody making a colossal mistake,” a source said.
The hackers were able to work undetected in the systems for around 52 hours before the alarm was raised, insiders said, before emergency response teams defended M&S over a five-day “attack phase”.
Look, I’m not saying that Tata was the entry point into M&S’s systems — and the reason why Scattered Spider was able to inflict £300 million in damages. I have no insider knowledge, and the investigations into the breach — both internal and criminal — are likely still ongoing.
But if it is — and, again, note the word ‘if’ in that sentence — it would illustrate the double-edged sword of what happens when a company avoids accountability, and literally pays someone else to assume responsibility for something important. Because when that thing goes wrong, they can simply say “it wasn’t me, guv,” or, in the case of Cognizant, point the finger back at you.
But, again, I have to ask — what would have happened if M&S decided that, rather than spend a billion dollars on outsourcing, it spent a bit more to build its tech stack in-house, with a team that it hired, knew, and could vouch for, and, again, had an actual stake in the business?
It’s curious that, around the same time that M&S experienced its breach, Scattered Spider also launched a similar ransomware attack on the Co-op. And the Co-op also uses Tata for much of its outsourced IT work. Harrods was, as mentioned, similarly targeted around the same time, although I’ve yet to find any decent information on the makeup of its IT infrastructure.
Mediocrity as a Virtue
I’m going to talk about AI towards the end of this newsletter — and why I think generative AI is a terrifying prospect in the Age of the Unremarkable — but I want to go into a bit more detail about why the incentives that push companies towards mediocrity, and that punish excellence, are so pervasive, and how that mediocrity manifests itself.
Most people are told, from a young age, to aspire to be the best at whatever they try. That effort is, in its own way, a kind of reward. We're taught the difference between not being good at something and not giving a shit. This is a distinction I can personally identify with from my own childhood.
I’m dyspraxic — also called developmental coordination disorder in the US — and that means I am, in a nutshell, congenitally bad at sports. This, combined with the fact that I naturally shied away from anything even remotely athletic, made me something of an outlier among my classmates.
I fucking hated sports. PE was my least-favorite class, and I’d do anything to get out of it; even when I participated, it would be the most half-arsed participation imaginable. This, obviously, didn’t endear me to my teachers, who knew that I was unathletic and disinterested, but could also tell when I wasn’t giving my best — as shitty as that “best” might be.
It’s curious to see how as people get older and assume positions of power, they also become less interested in that distinction — especially considering the near-fetishization of “growth mindset” ideology in many tech companies.
The C-Suite doesn’t care about your best, or even good. It’s unconcerned with having a “good” business, or a “better” business. Nor does it care whether the quality of its products gets worse. Rather, it has figured out ways to transform mediocrity from a vice into a virtue. Something that, under the right circumstances (by which I mean shareholder value), can be excused.
It's funny that two of the most troubled companies of our era — Intel and Boeing — illustrate this point perfectly. Both companies were, between the 1990s and 2010s, riding high on their respective market dominance — Boeing in airplanes, and Intel in computer processors.
Boeing, having just merged with McDonnell Douglas, was the largest civil aviation company in the world. The 737 was the backbone of the short-haul airline market, and the 777 and 787 routinely carried passengers between continents.
Intel, meanwhile, was in an especially comfortable place. Not only did it own the factories that made its chips, but it also owned the underlying technology, and its designs were the de-facto standard for servers and laptops alike. Even its rivals, AMD (and, depending on when you look, Cyrix and VIA) used the x86 instruction set.
Neither company could see what was in front of them. Intel missed the rise of ubiquitous smartphones and the growing need for more energy-efficient chips. In 2006 — less than a year before Apple announced the iPhone — it sold its XScale unit to Marvell for a reported $600m. XScale chips used the same ARM technology as the chips powering your smartphone (and, perhaps, your laptop), and were already used by companies like Palm and BlackBerry.
If Intel had a bit more foresight — if it wasn’t obsessed with hollowing out the company for the sake of the shareholder class — it would have stuck with XScale, and perhaps emerged as a serious rival to the likes of Qualcomm and MediaTek. However, it was, according to Jon Stokes at Ars Technica, primarily concerned with “fat-trimming.”
But the bigger picture is that Intel is clearly in fat-trimming mode, and they're trying to refocus on their core businesses. They've had a rough few years, and their main competitor, AMD, is now in a position of strength that nobody at Intel would have forseen when the Pentium 4 was first launched.
A few years later, Intel recognized its mistake and tried to re-enter the smartphone market with a slimmed-down version of its Intel Atom chips. It was too late.
Sidenote: I actually bought the first Intel Atom phone released for the European market — the Orange San Diego. It was, without question, the worst piece of shit I ever used. I got rid of it in less than a year.
Boeing, meanwhile, thought it could continue making refreshed and stretched versions of the 737 (an aircraft that first entered production in 1966) rather than spend the billions required to create a brand-new design that’s suitable for the needs of the 21st century. A clean-sheet design, while providing long-term value, would have cost significantly more — and taken longer — than simply stretching the existing 737 airframe and shoving some new engines under its wings.
Both companies spent lavishly on dividends and stock buybacks, rather than investing in their technology and their infrastructure, allowing their rivals (AMD in Intel’s case, Airbus in Boeing’s) to leapfrog them to such an extent that neither company has managed to catch up. Between 2010 and 2024, Boeing redirected over $68bn to shareholders. Intel, meanwhile, has spent $152bn on buybacks over the past 35 years.
Sidenote: I think it’s worth being fair to Intel. In 2021, Intel hired Pat Gelsinger — a former engineer who helped design the 386 architecture — as its CEO. Gelsinger then began an impressive turnaround project, investing in capital-intensive projects that would allow the company to produce better, more competitive chips.
Gelsinger wanted to close the gap with TSMC and Samsung, and turn Intel into a foundry for fabless semiconductor companies (those companies that design, but don’t manufacture, their own silicon). As much as the company had missed and wasted opportunities, it had recognized its failures and was embarking upon an ambitious turnaround project that could have restored it to its original glory. And… uh. In December, Intel’s board fired Gelsinger, replacing him with David Zinsner and Michelle Johnston Holthaus — a finance person and a marketer, respectively — who have since instituted massive waves of layoffs and culled several long-term manufacturing projects.
This obsession with being “shareholder-first” companies, rather than “employee-first,” “customer-first,” or “innovation-first,” led both to pursue mediocrity with full force. Boeing, in particular, lost much of the culture of engineering-led innovation that had defined its pre-merger existence, and began selling off core parts of the company — including the division that manufactures the airframe itself, which became Spirit AeroSystems, and which Boeing is now, more than two decades later, trying to re-acquire.
Along the way, it laid off workers and brought in cheaper outsourced talent to replace them. Parts of the Boeing 737 Max’s software were written and tested by outsourced engineers making as little as $9 an hour, with said engineers “often from countries lacking a deep background in aerospace — notably India.”
One of Boeing’s software partners on the 737 Max was, if you’re curious, HCL — the ‘H’ in the WITCH acronym.
Both Intel and Boeing are fairly critical to US national security, and so I can't imagine either company failing — which, in their cases, I'd define as being sold for parts to a bunch of private equity firms. The companies are, quite literally, too big to fail.
But there's a difference between "too big to fail" and "impressive," or “doing some really important, groundbreaking work,” and any outsider can readily identify the complacency that sits at the heart of their ongoing existential crises. A company — or any entity, really — can be mediocre and essential at the same time.
For those looking for a non-American example of this phenomenon, you need only look at the UK. The Thatcherite belief that the free market should handle the functions of the state didn't disappear when she left office, nor when she shuffled off this mortal coil, nor when Labour — an ostensibly left-wing party — took office in 1997, and again in 2024. This belief has become a dogma, which has, in turn, allowed for the tolerance of objective mediocrity.
Let me give you an example. Britain's military is… to put it charitably, understaffed. Our lack of soldiers and officers is, in part, because a previous government decided to outsource military recruitment to a company called Capita (which Private Eye aptly calls "Crapita" and "the world's worst outsourcing company").
Crapita is neither cheaper nor more efficient than the previous in-house military recruiters. It's not uncommon to hear of people waiting more than a year to get accepted into the military, during which time they've found other — and often better-paid — jobs that don't involve dodging IEDs and bullets. Capita has routinely missed its recruitment targets — even when said targets have been reduced.
From a UK parliamentary committee:
In 2012, the Army contracted with Capita to transform its recruitment approach. The Army committed £1.3bn to a 10-year programme and partnership with Capita to manage its recruitment process. Since the contract began Capita has not recruited enough Army regulars and reserves in any year. In 2017-18, Capita recruited 6,948 fewer regular and reserve soldiers and officers than the Army’s target. The shortfall has been largest for regular soldiers. Since the contract began, Capita has missed the Army's annual target for regular soldiers by an average of 30%, compared to 4% in the preceding two years.
In April 2017, the Army agreed to reduce Capita’s recruitment targets by around 20% for the next three years as it believed Capita was insufficiently incentivised to improve performance. Over the last year, the Army and Capita have introduced some significant changes to their approach to recruitment, although these have not yet resulted in the Army’s requirements for new soldiers being met. The cost of the Capita contract has risen by 37% to £677 million. The Project will not achieve its planned savings of £267 million for the Ministry of Defence. The Army has begun to consider options for the successor contract to start in 2022 and has commissioned a review to understand the lessons from the Project.
Despite these obvious, ongoing failures, Capita received a contract extension in 2020 worth £140m. Earlier this year, the UK Ministry of Defence gave the plush recruitment contract to another outsourcing firm, Serco, which similarly has a dismal track record across the various jobs it’s been tasked with.
I expect Serco to deliver similarly dismal results — in part because these companies, generally speaking, just aren’t that good. Which raises the question: why do we pay billions to these companies when we know that they epitomize mediocrity?
Well, I’d wager the answer is partially because, just like with the private sector, outsourcing is an excellent way to avoid accountability for those at the top — by which, I mean, those occupying ministerial positions within the government. I also believe that the neo-Thatcherite belief in the free market is another major driving force behind this privatization of essential public services.
I also believe that, after forty years of this bullshit, the British state has been so hollowed out of talent, it’s no longer able to do many of the things it once handled in-house. With the government unable to offer salaries that compete with the private sector, bringing those competencies in-house is virtually impossible.
Then, we come to arguably the biggest miscarriage of justice in British history — the Post Office Horizon scandal, where thousands of subpostmasters were bankrupted, imprisoned, and had their good names tarnished within their local communities because of a computer system that was outsourced to Fujitsu, but ultimately was unfit for purpose.
If you read Nick Wallis's reporting — and his excellent book about the case — or watch any of the testimonies from the public inquiry, it becomes immediately apparent that most people on the inside knew that Horizon was fundamentally broken. And so, when cash started going "missing" from more than a thousand Post Offices, the first response should have been to investigate Horizon, rather than presume that literally thousands of subpostmasters had, all of a sudden, decided to become thieves.
To be clear, the Horizon scandal was driven by amorality, certainly, and alleged criminality, most likely, though not of the subpostmasters, but rather of the people working at the Post Office and Fujitsu. But the human catastrophe we learned about during the inquiry, and in Nick Wallis's reporting, and that from other dogged reporters from Computer Weekly and Private Eye, was also caused by a tolerance of mediocrity.
I'm simplifying here, but not by much. For the full story, I highly recommend you read Wallis's book.
Essentially, Horizon replaced a bunch of earlier systems, as well as some manual processes. The procurement and design process was a bit of a mess, and the people at the Post Office weren't particularly technical, and so when they were presented with a non-functional prototype, they said: "Yes please, as quick as you can."
What followed was an equally chaotic and rushed development process, staffed in no small part by junior talent, whose output was then pushed into production at breakneck speed.
The software had a bunch of undisclosed functionality (like the ability for a Fujitsu employee to remotely alter the records of each Post Office) and a myriad of bugs. These bugs made Horizon fundamentally unfit for its purpose — which was, essentially, to provide an accurate record of transactions at Post Offices. Here’s one example from a list published by the ACM:
“A messaging software bug called the “Callendar Square/Falkirk Bug” (first seen at a post office in the Callendar Square shopping center in Falkirk, Scotland) caused transactions to mistakenly be entered twice. If a customer withdrew £250 from a bank account via a local post office, the information about the transaction transmitted to Post Office central might indicate two £250 withdrawals. The central Post Office would then hold the local sub-postmaster responsible for the “missing” £250. This bug had its roots in faulty messaging software called Riposte provided by a company called Escher Group, Justice Fraser concluded. Riposte itself was buggy. It was a Horizon bolt-on intended to simplify the process of messaging the host computer. In some cases, it failed to synchronize those updates in a timely manner.”
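The failure mode the ACM describes, a transaction message delivered twice and counted twice, is the textbook argument for idempotent message handling. Here's a minimal sketch of both the bug and the standard fix; the message format is my own illustration, not Riposte's actual protocol:

```python
# Hypothetical sketch of how duplicate message delivery double-counts a
# withdrawal, and how deduplicating on a unique transaction ID prevents it.
# The message format is illustrative, not Riposte's actual protocol.

def naive_ledger(messages):
    """Counts every message, so a redelivered withdrawal is counted twice."""
    balance = 0
    for msg in messages:
        balance -= msg["amount"]
    return balance

def idempotent_ledger(messages):
    """Processes each transaction ID at most once, however often it arrives."""
    balance = 0
    seen = set()
    for msg in messages:
        if msg["txn_id"] in seen:
            continue  # duplicate delivery: ignore it
        seen.add(msg["txn_id"])
        balance -= msg["amount"]
    return balance

# One £250 withdrawal, delivered twice by a flaky messaging layer.
duplicated = [
    {"txn_id": "T-001", "amount": 250},
    {"txn_id": "T-001", "amount": 250},
]
```

With the naive ledger, the branch appears £250 short and the subpostmaster is on the hook for money that never went anywhere. The fix is decades-old distributed-systems hygiene: if your transport can redeliver, your consumer must deduplicate.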
Compounding matters further, most of the people who used this system weren't technical. Many subpostmasters were, effectively, semi-retired. They left their careers for the slower pace of running a village Post Office, or something. As a result, they lacked the confidence to challenge a system that was clearly malfunctioning, or the confidence to challenge Fujitsu and the Post Office, who insisted that Horizon was working perfectly. These people would pay these computer-generated shortfalls with their own funds, until they couldn’t any more — upon which they’d be bankrupted, arrested, and likely convicted.
As documents disclosed during the inquiry revealed, many of these issues were understood at the highest echelons of the Post Office, as well as within Fujitsu, and as such, any conviction obtained from Horizon data was unsafe. And yet, the existence of these bugs wasn't disclosed to the subpostmasters being prosecuted, or even really dealt with — in part because some of the people within these organizations are (most likely) criminals themselves, but also because they genuinely did not give a shit.
Indeed, many of the bugs weren’t fixed because it would have been too expensive. To date, the state has spent over £1bn compensating those affected, and the final bill will likely be much, much more.
The Horizon scandal is an example of what happens when mediocrity meets an absolute absence of accountability, or even morality.
Quality mattered less than shipping the product, no matter how flawed it was, or how broken. Quality was subordinate to creating the appearance of digital transformation within an organization that goes back centuries. And, because the people making the decisions at Fujitsu and the Post Office weren’t subpostmasters, they didn’t care.
Finishing with a contemporary example, there's Tea — an app where women share warnings and reviews about men they've dated, and one whose existence, admittedly, I was oblivious to until recently.
That changed when the company fell victim to a massive data breach, wherein third parties were able to download the photo IDs of its members from an unsecured online storage bucket. Many of these IDs later found their way to 4chan.
A second breach, discovered a few days later, leaked the contents of users' DMs. According to 404 Media, the data includes “multiple messages which appear to show women discussing their abortions,” as well as discussions of cheating spouses.
This cock-up will likely sink Tea — and rightfully so. The idea that a large, commercial business with tens of thousands of users (and possibly more) could leave a bucket of its customers' most sensitive data unprotected is, quite frankly, shocking. It's an elementary failure, and one that is easily avoided.
And, if this had happened in the EU, Tea's leadership would likely be facing serious criminal — not just civil — consequences. Not only did they fail to protect their customer data, but they also retained data — namely, the ID documents — for longer than necessary (that is, longer than the time it would take to verify a customer's identity).
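The retention failure, in particular, has a well-understood remedy: delete the document the moment it has served its purpose, with a scheduled purge as a backstop. A minimal sketch of that pattern, using an in-memory stand-in rather than Tea's actual backend (which is not public):

```python
# Hypothetical sketch of delete-after-verification retention -- the practice
# the Tea breach suggests was absent. The storage class is an illustrative
# in-memory stand-in, not any real company's system.

import time

class IdDocumentStore:
    def __init__(self, retention_seconds: float):
        self.retention_seconds = retention_seconds
        self._docs = {}  # user_id -> (document_bytes, stored_at)

    def store(self, user_id, document, now=None):
        self._docs[user_id] = (document, now if now is not None else time.time())

    def mark_verified(self, user_id):
        # Once identity is confirmed, the document has served its purpose:
        # delete it immediately rather than letting it accumulate.
        self._docs.pop(user_id, None)

    def purge_expired(self, now=None):
        """Backstop: delete anything older than the retention window."""
        now = now if now is not None else time.time()
        expired = [uid for uid, (_, stored_at) in self._docs.items()
                   if now - stored_at > self.retention_seconds]
        for uid in expired:
            del self._docs[uid]
        return len(expired)

    def held(self):
        return set(self._docs)
```

None of this is exotic engineering. A breach can only leak what you still hold, which is precisely why the GDPR's storage-limitation principle exists.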
It's still early days, but you have to wonder what happened behind the scenes — what corners were knowingly cut — for this to happen.
What's telling about all these examples — except, I add, in the case of the Post Office, where those most responsible were dragged before a live-streamed inquiry, and where many now face criminal investigations — is that none of those culpable for their respective failures has demonstrated any real accountability.
I've yet to hear anyone from Intel or Boeing say: "Yeah, it was a shit idea doing all those cutbacks." Or, perhaps, "Yeah, I screwed this one up; here are all my stock options back."
The Reason Why The Managerial Class Loves AI Is Why AI Terrifies Me
I took a long time to reach this point, in part because I think it’s so important to set the scene of why I’m so scared, and why you should be too.
Over the past 5,000 words, I’ve made the case that today’s business elite are fundamentally unconcerned with quality, and they’ll happily cut corners if doing so addresses their duties to their shareholders — namely, the need to show consistent, perpetual growth, and to maximise returns.
These are the “adults in the room” — those tasked with making big decisions, but with no interest in whether those decisions are, in the long-term, sensible.
The examples I’ve provided — from both the UK and the US, covering both the public and private sectors — collectively illustrate that these people aren’t concerned about real value (whether that be customer value, or employee value, or just long-term sustainability), but rather march to the beat of a short-termist drum, with shareholders and Wall Street analysts doing the drumming.
Moreover, the (relatively) wide historical range of the examples I provided show that this short-termism isn’t a transient fad, but rather something that’s deeply ingrained into the business world — and, I’d argue that it has been ever since Jack Welch ascended to the top of General Electric.
As the generative AI fad heats up, I feel duty-bound to warn you that things are going to get much worse.
So far, we’re seeing a cacophony of excitement from the managerial class about the prospect of AI replacing their human employees. Part of that stems from slimy opportunists like Jensen Huang and Dario Amodei trying to make generative AI seem more impressive (and more inevitable) than it actually is, with the other part coming from the moronic managerial class that’s bought into the hype, and is now set on amplifying it to whoever will listen.
Just to give you a few examples:
Last week, Jensen Huang said on the All-In Podcast — a show for tedious dipshits hosted by four of the most punchable faces in technology: Chamath Palihapitiya, Jason Calacanis, David Sacks, and David Friedberg — that “if you're not using AI, you're going to lose your job to somebody who uses AI.”
As Ed Zitron pointed out last week, Nvidia currently makes the vast majority of its revenue from selling GPUs to the likes of Microsoft, Amazon, CoreWeave, Oracle, and Google, so that they can run generative AI models for the likes of Anthropic and OpenAI.
It’s as self-serving as the CEO of BP saying that “If you don’t drive a car that gets less than four miles per gallon, you’re a big, stupid, doody-head.”
Shockingly, few publications actually point out that obvious bias.
In May, Axios credulously repeated claims by Dario Amodei — CEO of Anthropic — that AI could wipe out half of all entry-level jobs in the next five years. His evidence for this is… uh… don’t worry too much about that.
I have to echo something that Ed Zitron previously said. Well done to CNN’s Allison Morrow for actually pushing back on this claim, and pointing out that these bold, doomerist proclamations only serve to benefit those making them — namely those running companies that need to keep the fad going, so that they can continue to raise the untold billions to keep their companies alive.
And, because generative AI doesn’t generate any profit, and costs an insane amount to run, these companies have to raise untold billions every fucking year.
On June 17, Amazon CEO Andy Jassy said that AI would result in workforce reductions in the coming years, and encouraged workers to “be curious about AI.”
I think this claim stems from Amazon being both a backer of generative AI (both through its investments in AI infrastructure, and its backing of Anthropic), as well as the fact that Amazon is a miserly company that’s long shown a disregard for the people that work for it.
Source: Literally everyone who has ever driven an Amazon delivery van, or worked at an Amazon warehouse. If your employees routinely have to piss in bottles, you’re probably a shitty employer and you don’t care about your staff.
Klarna, the buy-now-pay-later company, ditched its human customer service staff because its CEO, Sebastian Siemiatkowski, was “of the opinion that AI can already do all of the jobs that we, as humans, do.”
Klarna eventually re-hired for many of the roles it cut, in part because generative AI sucks.
Then there’s Microsoft CEO Satya Nadella, who recently published a blog post addressing the company’s latest round of layoffs.
“Before anything else, I want to speak to what’s been weighing heavily on me, and what I know many of you are thinking about: the recent job eliminations. These decisions are among the most difficult we have to make. They affect people we’ve worked alongside, learned from, and shared countless moments with—our colleagues, teammates, and friends,” he wrote.
“I also want to acknowledge the uncertainty and seeming incongruence of the times we’re in. By every objective measure, Microsoft is thriving—our market performance, strategic positioning, and growth all point up and to the right. We’re investing more in CapEx than ever before. Our overall headcount is relatively unchanged, and some of the talent and expertise in our industry and at Microsoft is being recognized and rewarded at levels never seen before. And yet, at the same time, we’ve undergone layoffs.”
When discussing Microsoft’s priorities for the upcoming year, naturally AI came up.
“We will reimagine every layer of the tech stack for AI—infrastructure, to the app platform, to apps and agents. The key is to get the platform primitives right for these new workloads and for the next order of magnitude of scale. Our differentiation will come from how we bring these layers together to deliver end-to-end experiences and products, with the core ethos of a platform company that fosters ecosystem opportunity broadly. Getting both the product and platform right for the AI wave is our North Star,” he wrote.
I’m not going to comment on this, other than to tell you to read Edwin Evans-Thirlwell’s excellent retort: “Dear Microsoft CEO Satya Nadella, please prove that an AI didn’t write your insulting, vacuous blog about why you're laying off thousands during a time of huge profits.”
Thomas Claburn of The Register’s coverage is also very, very good. Here’s my favorite part of his write-up:
“We're just guessing here, but given that 1 in 3 people lack clean drinking water – a popular beverage at datacenters – a few billion among us probably have priorities other than kibitzing with Microsoft Copilot. As for those of us fortunate enough not to worry about such matters, interest in chatbot access probably takes a backseat behind having a job.”
I could go on. It’s not hard to find members of the managerial class that have gone on record to explain how they believe AI will shred the labor force. In his interview with Axios, Amodei even claimed that generative AI could “spike unemployment to 10-20%” by the end of the decade — a claim that, again, isn’t rooted in actual evidence.
As I’ve said, time and time again, I don’t believe AI can do what its boosters claim — and those boosters are, quite often, those who stand to benefit from generative AI adoption, and thus are about as believable as the owner of a coffee shop that claims to have “the world’s best coffee.”
These models hallucinate. They always will hallucinate, because, on a basic level, they don’t understand the text they produce. LLMs are probabilistic models, using math to guess which word follows another, without understanding the concepts that underpin those words — or, indeed, the concept of concepts.
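To make that concrete, here’s a minimal toy sketch of what “guessing the next word” means. Everything here — the vocabulary, the scores, the prompt — is made up purely for illustration; a real LLM has a vocabulary of ~100,000 tokens and billions of learned parameters, but the final step is the same: turn scores into probabilities, then roll the dice.

```python
import math
import random

# Illustrative toy vocabulary -- a real model's is vastly larger.
VOCAB = ["Paris", "London", "banana", "the"]

def softmax(scores):
    """Turn raw scores into a probability distribution that sums to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def next_token(scores):
    """Sample one token according to its probability.
    This is a weighted guess -- not a lookup of a known fact."""
    probs = softmax(scores)
    return random.choices(VOCAB, weights=probs, k=1)[0]

# Pretend the model scored these tokens as completions of
# "The capital of France is ..." (scores invented for this sketch):
scores = [5.0, 2.0, -1.0, 0.5]
probs = softmax(scores)
print(dict(zip(VOCAB, [round(p, 3) for p in probs])))

# "Paris" is merely the most probable guess. Some fraction of the
# time the sampler will emit "London" -- and there's your hallucination.
```

The point of the sketch: nowhere in this process is there a step where the model checks whether its output is true. It only knows which word is statistically likely to come next.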
They are guessing machines, and that means when faced with a novel problem — something they haven’t seen before in their training data — they shit the bed. This was demonstrated by a recent AI coding challenge, which used novel questions, and where the winning prompt engineer sailed to victory with the correct answers to just 7.5% of the questions on the test.
That’s not a typo, by the way.
Agents? You mean, the agents that fail 70% of multi-step office tasks — which, by the way, comprise pretty much every office task? Behave.
Generative AI fucking sucks, and the only reason you’d actually be enthusiastic about it is if you had some sort of vested interest, or if you grew up on a diet of lead paint chips and Glenn’s Vodka. There is nothing to indicate that this technology can do what its proponents claim, or that it ever will.
The problem is that the managerial class doesn’t care about this. It doesn’t care about quality. It doesn’t care about long-term sustainability, or customers, or employees. Its members work not for the company, but for the shareholders — and so it’s not hard to understand why many are so gung-ho about a technology that fails at literally everything it puts its hand to.
We know these people don’t care about people, or human employees, and they’re most excited about AI because of the “efficiencies” it’ll bring — which is, itself, a ghoulish euphemism for “firing lots of people,” and anyone who talks glowingly about the “efficiencies that AI will bring” deserves to be fired into the fucking sun.
We know these people don’t care about quality — they’re fine with mediocrity — and so the fact that AI simply produces garbage slop doesn’t matter to them.
None of this should come as a surprise. But we’ve spent less time thinking about how AI will provide cover for the existence of said mediocrity, and how it’ll act as a shield from any accountability.
That is fucking terrifying.
Remember how Clorox and Cognizant pointed fingers at each other after a catastrophic security breach that cost hundreds of millions of dollars, with neither taking responsibility for the role it played?
Just imagine what'll happen when engineering work is shifted to someone whose brain has already atrophied from years of "vibe coding" and the latest Claude model. And what happens if someone dies from a malfunctioning medical device that was “coded” by an AI model and a prompt engineer who doesn’t actually know how to code?
If that last bit sounds outlandish, allow me to quote Jensen Huang on the All-In Podcast.
"AI is the greatest technology equaliser of all time. Everybody's a programmer now. You used to have to know C, and C++, and Python… y'know, everyone in the future can program a computer, right?"
We’ve yet to see (at least, as far as we know) the first major security breach caused by a vibe-coded app. I do, however, think it’s only a matter of time until such a breach happens. And I think it’s entirely possible that said breach will occur within an app created by a large, established company — not just some hobbyist developer playing around with Claude Code.
I think it’s only a matter of time until generative AI kills someone. This isn’t me being histrionic. And I’m not just talking about someone with precarious mental health who turns to ChatGPT as a therapist, only for the model to go down a dark path. I’m talking about something else, though what, I’m not sure.
And I think the software — or the text, or the chatbot, or whatever AI-generated slop it may be — that kills said person might come from a business we’ve all heard of, and that we’ve all interacted with on some level. An actual major brand.
I also think that there will be warning signs before said death occurs, and they’ll be ignored because, again, the managerial class has an absolutely twisted set of priorities.
I think something bad is going to happen. And I think when it does, there’ll be a needlessly fraught and convoluted conversation about who is culpable. I believe that we’ll witness some Olympic-level buck-passing.
Because, as we've seen, Business Idiots will do anything to avoid responsibility when things go wrong — especially when the cock-up stems from a decision made by said Business Idiot.
The people who destroyed Boeing and Intel left as multi-millionaires, if not billionaires, and they’re still fabulously wealthy. Cognizant and Clorox are still pointing fingers at each other. The question of who is responsible for the M&S hack is still unresolved.
Companies are still making shit, stupid, short-term decisions that those in the company — those doing the actual work, the engineers and the techies — think are moronic. And when said decisions prove to be moronic, it’s not the managerial class that gets fired. It’s the people who do the actual work.
AI allows the managerial class to outsource not merely the actual production of something, but also the thinking behind it. It’s the ultimate shield for a managerial class that’s proven itself to be utterly allergic to accountability.
And, unlike an outsourcing firm, an AI won’t point fingers back at the company that runs it.
I’ve spent far too long thinking about how to end this newsletter. I wrote this line and then stared at it for twenty minutes, hoping that inspiration would strike and I’d be able to finish this thought with some hope.
What happens when the guilty party isn't a person, but a machine? Who do we prosecute? Who apologizes?
Sadly, I don’t have any hope to offer you. Nobody’s going to apologize — because nobody has apologized for anything previously, and so, why would they start now? These people are as mediocre as the work they’re willing to accept, and mediocre people rarely take accountability for their actions.
Liability — whether civil or criminal — remains an unsolved question. The EU was considering draft legislation that would assign responsibility to the operators of the models that caused harm (namely, the AI Liability Directive), but withdrew it in February of this year.
And so, I fear we’re heading to a really dark place — one where the only way we’ll see a reversal of course is if the generative AI industry implodes (which I believe is coming sooner rather than later), or if something really, really, really bad happens.
I’m talking about a company suffering an incident that threatens its short-term survival, or someone dies, or multiple people die.
We’re cooked.
Afterword
A couple of notes:
If you don’t already, follow me on Bluesky.
If you want to support this newsletter, consider signing up for a paid subscription. I don’t have any plans for any premium-only content (for now), but that’ll likely change over time.
I already have ten premium subscribers, which is insane considering that I started this newsletter six weeks ago!
I know I said in my last newsletter that my next post was going to be about how the Internet dies. That’s still coming, but I had a burst of inspiration last night and wanted to write something.
I’ve got some fun stories in the pipeline. By fun, I mean wholly depressing. If you haven’t already, subscribe so you don’t miss anything.
I’ll end with a bit of trivia. The first draft of this newsletter was 2,500 words — and, if published, would have been my shortest newsletter so far. I had some doubts, however, and I asked my good friend Justin Pot to read through my piece and check whether I’d threaded every needle.
He suggested some (genuinely modest) changes, which somehow resulted in me writing 7,500 words — or my longest newsletter yet.
I am, seemingly, incapable of writing short, pithy newsletters.
Justin Pot is a nice guy and has a good newsletter, and you should read his stuff.
To quote the philosopher Marilyn Monroe: “If you can't handle me at my worst, then you sure as hell don't deserve me at my best.”
Live, Laugh, Love babes. xx
