These People Are Weird
Something's not right in Silicon Valley
Note from Matt: This is another long post. To read the entire 6,500-ish words, open this week’s newsletter in the Substack app or in your browser.
Also, some personal news: Given I’m within reach of crossing the 1,000 subscriber mark, I’ve decided to launch a premium edition of this newsletter. More details at the bottom of this piece, but paid subs get 3-4 extra newsletters a month (minimum). Existing subscribers will get two months comped (when I figure out how) as my way of saying “thank you.”
In the 2024 US Presidential election, the only moment when Kamala Harris really seemed as though she had a chance of winning was when her running mate, Tim Walz, went off script and pointed out what everyone was thinking: that JD Vance and Donald Trump, and the GOP at large, are, if nothing else, profoundly weird people.
There’s nothing inherently wrong with being weird — and I say that as someone who, himself, has his own idiosyncrasies that make him stand out. Weird can be good, and wonderful, and some of my best friends are the biggest lunatics and oddballs you could ever hope to encounter.
But think about the term weird itself. Like a lot of things, it encompasses an entire spectrum of behavior, ranging from the harmless (like dreadlocked European tourists who ride the bus barefoot) to the maleficent. What I’m trying to say is that you can be a total bastard and a weirdo at the same time, and the weirdness can either obfuscate or illuminate that bastardry, depending on how it’s manifested.
With that in mind, let’s talk about Mark Zuckerberg — someone who has simultaneously inflicted the worst damage on society since the invention of leaded petrol, and is also a total fucking nutjob.
The Malicious Madness of Mark Zuckerberg
I’ll be honest, I’ve had the idea for this newsletter for a long time, but never really felt the spark to actually sit down and write something, in part because there’s been so much awful shit happening that’s dragged my attention — and the focus of this newsletter — elsewhere.
It’s not so much that I didn’t want to write this post (I enjoy being rude to the world’s richest men as much as the next guy), but that I was waiting on a hook. Something timely, perhaps, or maybe just a really good example of why the people running the world’s biggest tech companies are both ruining the planet and are also completely horseshit mental.
Call it serendipity. Call it divine providence. Or just call it an unintended consequence of being online at 2AM on a Saturday morning, long after my Elvanse had left my system. I was — what else? — bedrotting on Reddit when I should have been fast asleep, only to stumble upon a post in the Artificial Intelligence subreddit that collated recent statements made by Zuckerberg during interviews with Dwarkesh Patel and Stratechery’s Ben Thompson (and an obligatory tip of the hat to Zvi Mowshowitz for gathering them).
Zuckerberg talked candidly about his vision for Meta’s future, and revealed that he genuinely does not understand people, let alone his users. He showed that he is incredibly out of touch — not just in a Mitt Romney “who let the dogs out” way, but in a way that forces you to question whether he is even capable of perceiving the world in the way that normal people do.
When asked by Thompson to describe Meta’s AI opportunity, Zuckerberg said:
“You can think about our products as there have been two major epochs so far. The first was you had your friends and you basically shared with them and you got content from them and now, we’re in an epoch where we’ve basically layered over this whole zone of creator content. So the stuff from your friends and followers and all the people that you follow hasn’t gone away, but we added on this whole other corpus around all this content that creators have that we are recommending.
…
Well, the third epoch is I think that there’s going to be all this AI-generated content and you’re not going to lose the others, you’re still going to have all the creator content, you’re still going to have some of the friend content. But it’s just going to be this huge explosion in the amount of content that’s available, very personalized….”
Zuckerberg pre-empts the “why” by saying that he believes the emergence of AGI will see productivity “dramatically” increase, meaning that people have more leisure time — and so, they’ll want to spend it watching soulless AI slop on Meta.
I think the “touch grass” jibe is over-used, but please. Mark. In the name of all that is holy, please go outside and touch grass. You own a massive chunk of Hawaii. I hear the weather is lovely in Kauai this time of year! I don’t know, I could be wrong. Why don’t you go there and let me know?
Wait, it gets better. When asked whether he thinks Meta’s transition away from solely connecting people to their friends and family was a success, he says:
“I think it’s been a good change overall, but I think I sort of missed why. It used to be that you interacted with the people that you were connecting with in feed, like someone would post something and you’d comment in line and that would be your interaction.
Today, we think about Facebook and Instagram and Threads, and I guess now, the Meta AI app too and a bunch of other things that we’re doing, as these discovery engines. Most of the interaction is not happening in feed. What’s happening is the app is like this discovery engine algorithm for showing you interesting stuff and then, the real social interaction comes from you finding something interesting and putting it in a group chat with friends or a one-on-one chat. So there’s this flywheel between messaging which has become where actually all the real, deep, nuanced social interaction is online and the feed apps, which I think have increasingly just become these discovery engines.”
That’s a lot of words that don’t say much. But it’s telling that when asked whether this was a success, he doesn’t say anything about whether people liked having their cousin’s baby pictures hidden by Shrimp Jesus, or really, anything about what the users actually want from Facebook or Instagram.
He describes the technology — albeit in vague, stratospherically high-level terms.
That’s what matters to him.
Mark Zuckerberg is, from the outset, someone who is deeply removed from the thoughts and feelings of normal people — which is a terrifying prospect when you consider that he controls a company that’s used by billions of people to share their thoughts and feelings, and to connect them to the thoughts and feelings of those who matter most to them.
Mark Zuckerberg is like a cat that just dragged a mouse onto your brand new carpet — except he isn’t bothered about whether you’re impressed with his hunting skills, or even angry about the fact that bubonic plague is now leaking from the puncture holes from when he bit into its belly. He’s feeling satisfied about the “discovery engine algorithm” he used to corner it, and the “real, deep, nuanced” way he killed it.
Mark Zuckerberg is a fucking crazy person.
But it’s not like Thompson exactly acquits himself here. Caveat: I’ve read a decent amount of his writing and he seems like a smart-enough guy, but he also brags about having encouraged Facebook to go all-in on content recommendations in 2015, and doesn’t acknowledge that the decision to push away human users has been, at least, from a user experience standpoint, incredibly unpopular.
I’d also wager that the sidelining of humans has also resulted in people just not using Facebook or Instagram as social networks — and arguably contributes to their current decayed state. Allow me to quote myself from Losing Control:
“In April, Mark Zuckerberg — the founder of Facebook — made a revealing admission during his testimony to the Federal Trade Commission, as part of a long-running antitrust lawsuit that may see the company broken up. Just 20 percent of the posts people see on Facebook, and 10 percent of the posts on Instagram, come from their connections — accounts made and operated by other human beings that the user has ‘friended.’
… perhaps it’s because Facebook and Instagram are now just shitty products that, over time, have completely stripped their users of any autonomy, and people don’t want to waste time posting life updates when the algorithm decides whether it’s worth showing them to their friends.”
Zuckerberg also describes a major part of Meta’s AI opportunity as being in the creation of models that can dynamically deliver on stated business demands from advertisers. From his Stratechery interview:
“So [the] most basic of the four [AI use cases] is to use AI to make it so that the ads business goes a lot better. Improve recommendations, make it so that any business that basically wants to achieve some business outcome can just come to us, not have to produce any content, not have to know anything about their customers. Can just say, ‘Here’s the business outcome that I want, here’s what I’m willing to pay, I’m going to connect you to my bank account, I will pay you for as many business outcomes as you can achieve’. Right?
Thompson reacts by describing it as the “best black box of all time,” adding “I’m with you. You’re preaching to the choir, everyone should embrace the black box. Just go there, I’m with you.”
I mean, if you only care about the business side of things — and Zuckerberg says that he expects that AI will grow advertising’s share of US GDP from its current 1-2% by a “very meaningful amount” — I can imagine why this would sound exciting.
But god almighty Ben, can you please push back on things? Facebook is to tech companies what Joey Barton is to association football, seemingly incapable of letting a week pass by without the contrivance of some controversy, or having been caught doing something very, very naughty.
When Mark Zuckerberg says he’s working on a black box that’ll automate advertising and aggressively pursue the advertiser’s demands — one with seemingly limited human oversight (quoting Zuck: “if you think about the pieces of advertising, there’s content creation, the creative, there’s the targeting, and there’s the measurement”) — the correct response is not to say “how cool!”
The correct response is to ask how this system won’t be abused by shithouse scumbags like Cambridge Analytica and AggregateIQ, who abused the non-genAI Facebook advertising system in 2016.
The fact that Thompson didn’t push back on this illustrates the difference between an analyst — which, to be fair, Thompson describes himself as — and a journalist, because the fundamental question of safety and abuse is one that needs to be addressed.
Sidenote: Admittedly, I don’t necessarily expect that any tech journalist who interviews Zuck would actually ask those questions — because by gaining access to Zuck in the first place, the expectation is likely that they wouldn’t ask anything too difficult or embarrassing.
And the fact that Zuckerberg didn’t do a throat-clearing to say “yeah, we’ve seen how our platform can be abused, and here’s how we’re going to mitigate any future abuses based on the hard lessons we’ve learned” is also really, really alarming — and further reinforces my point that he exists in a completely different world to the rest of us.
Did he just forget about the time that Facebook’s ads infrastructure was literally used to swing elections and referendums by shady actors? I know 2016 feels like it was a really long time ago, but these events were pretty big news at the time!
Zuckerberg’s interview with Dwarkesh Patel also produced a few other belly-laughs — although allow me to commend Patel for actually asking questions that were fairly probing and critical.
Dwarkesh asked Zuck how we ensure that the inevitable “relationships” that people forge with large language models are “healthy.” Here’s what he said (emphasis mine):
Probably the most important upfront thing is just to ask that question and care about it at each step along the way. But I also think being too prescriptive upfront and saying, "We think these things are not good" often cuts off value.
People use stuff that's valuable for them. One of my core guiding principles in designing products is that people are smart. They know what's valuable in their lives. Every once in a while, something bad happens in a product and you want to make sure you design your product well to minimize that.
But if you think something someone is doing is bad and they think it's really valuable, most of the time in my experience, they're right and you're wrong. You just haven't come up with the framework yet for understanding why the thing they're doing is valuable and helpful in their life. That's the main way I think about it.
This is how product-brained Zuck is. He treats “valuable” and “healthy” as the same thing, and — if you’ve ever spent any time with someone suffering from substance addiction, you’ll know first-hand — they are not the same fucking thing. Only a crazy person — someone utterly removed from the same worldly plane as the rest of us — would make that argument.
Someone addicted to heroin probably values the fact that the dark, foil-wrapped tar that they buy from their local dealer makes them feel good, and keeps the excruciating withdrawal symptoms at bay. That doesn’t mean that heroin should be sold in the wellness section of your local CVS, you absolute brain-dead imbecile.
While it’s true that heroin and an AI romantic partner are not the same thing — and, to be clear, I’m not trivializing substance abuse, but rather addressing the point that people can value things that are really bad for them — the latter is clearly not healthy, either. I’m sure you all saw the posts on the /r/MyBoyfriendIsAI subreddit after OpenAI ditched GPT-4o for GPT-5, which, in turn, made people’s AI “partners” change their personalities overnight?
Sidenote: A lot of people have piled-on the members of this community. I ain’t one of them. I see /r/MyBoyfriendIsAI as a symptom of a broader societal malaise where people are just fucking lonely, and unable to find human companionship, they are looking for the next best thing.
These people don’t deserve mockery, but rather our compassion and our understanding.
Or, perhaps you read the story about the elderly man who — while suffering cognitive impairment after experiencing a stroke a decade earlier — fell for one of Meta’s AI character chatbots, which invited him to meet in person. While travelling to an address provided by said chatbot, he fell and hit his head. From the Reuters article:
Bue had fallen. He wasn’t breathing when an ambulance arrived. Though doctors were able to restore his pulse 15 minutes later, his wife knew the unforgiving math of oxygen deprivation even before the neurological test results came back.
That man — Thongbue ‘Bue’ Wongbandue, a 76-year-old husband and father — was obviously vulnerable. Per Reuters, his cognitive faculties had precipitously declined after his stroke, and his family was on a waiting list to screen him for dementia. Of course, an LLM chatbot couldn’t possibly know this, because an LLM chatbot doesn’t know anything, as repeatedly noted in this newsletter. These are, essentially, word-guessing machines that use complex math to predict the right thing to say.
To not only foist them onto a world where you do not know who will use them, but also to explicitly allow them to foster romantic relationships with strangers — including children! — is unforgivable.
There’s a world where Bue never met Meta’s AI chatbot — a character called Billie, who was modelled after Kendall Jenner — and never left his home, against the desires of his family, to meet “her.”
In that world, I imagine Bue would still be alive.
I’m not joking about the children part, by the way. Per Reuters:
An internal Meta Platforms document detailing policies on chatbot behavior has permitted the company’s artificial intelligence creations to “engage a child in conversations that are romantic or sensual,” generate false medical information and help users argue that Black people are “dumber than white people.”
…
Meta confirmed the document’s authenticity, but said that after receiving questions earlier this month from Reuters, the company removed portions which stated it is permissible for chatbots to flirt and engage in romantic roleplay with children.
That document was signed off by “Meta’s legal, public policy and engineering staff, including its chief ethicist.”
Christ. Meta’s “chief ethicist.” I wonder what he does in a day. I bet that’s like being the Head of Animal Welfare in Michael Vick’s dog-fighting ring, or the Chief Diversity Officer for the Alabama Klan.
Anyway, this point speaks to the insanity of not just Mark Zuckerberg, but the people who work within these institutions. I guarantee that if you walked up to a stranger on the street and asked if it’s a good idea to create a robot that engages in kinky written wordplay with minors, you’d likely either be told “fuck no” or arrested, or both.
The fact that several people — all of whom, likely, earn more in a year than most people do in a decade — believed otherwise only shows how malevolently batshit these people are. These are not normal, well-functioning human beings, and yet they’re in charge of a company that facilitates and oversees interactions between billions of people every single day.
These people are not well.
And while it’s fair to note that Zuckerberg’s comments on Dwarkesh Patel’s podcast preceded both the disclosure of Meta’s chatbot ethical guidelines by Reuters and Reuters’ coverage of the tragic Bue case, everything here was so fucking predictable that even a normal human being could have foreseen it.
The fact that he dismissed people who argued that AI relationships could be bad as being “prescriptive,” and insisted that those failing to properly acknowledge the value of a word-guessing machine that lures pensioners to their death were “wrong,” shows how utterly detached from reality this guy is.
If Mark Zuckerberg was a normal guy with a normal job, you know he’d be the one heating up tuna in the office microwave — because he likes the smell, and because he knows you hate it.
I bet Mark Zuckerberg claps like a fucking sea lion when his private jet lands.
The Amazing Sam Altman
While Mark Zuckerberg is perhaps the final boss of Silicon Valley crackpottery, and so it naturally follows that I’d spend the first chunk of this article writing about him, I’d also be remiss if I didn’t list some other honorable mentions.
Take the stage, Sam Altman.
What makes Altman a formidable runner-up to Mark Zuckerberg in the techno-twat Olympics is his incredible lack of self-awareness, which, if we could synthesize it in tablet form, could cure clinical anxiety for good.
I almost envy Sam Altman, insofar as I wonder what it’s like to be able to say whatever batshit (and contradictory) things that enter your head to a global audience, without ever experiencing a tinge of self-doubt. That kind of confidence is rare.
Sam Altman has said the following things:
In January 2025, he said “We are now confident we know how to build AGI as we have traditionally understood it.”
Then, in August of this year, he said that AGI was “not a super useful term.”
In 2024, Altman said that AGI wouldn’t be as impactful as once expected.
In 2015, he said it would likely kill all humans — but would lead to some amazing AI startups.
“How was the play otherwise, Mrs Lincoln?”
In 2023, he said: “A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too.”
That same year, he told CNN that AI could “kill us all.”
In February 2024, Sam Altman told the World Government Summit in Dubai that GPT-5 will be “smarter,” and this smartness will be a “bigger deal than it sounds” because “if it’s a little bit smarter, it’s a little better at everything.”
The previous month, he said that GPT-5 would “be able to do a lot, lot more” than existing models.
In May of 2024, he suggested that GPT-5 could work like a “virtual brain.”
In June of 2024, Altman described GPT-5’s purported across-the-board improvements as a “miracle.”
In June of this year, Altman said that GPT-5 could be a “significant leap forward” over GPT-4o.
In July of 2025, Sam Altman told comedian Theo Von that GPT-5 “scared” him and compared it to the Manhattan Project.
This week, he said that OpenAI “screwed up” the launch of GPT-5 — which is a softer way of saying “we fucked it, lads” — and began teasing GPT-6.
Last week (!) Altman said that he expects OpenAI to spend “trillions of dollars” on data center infrastructure in the “not so distant future.”
If anyone else said that, it would be laughable.
For context, the GDP of Poland was around $900bn in 2024. To say that OpenAI will spend “trillions” — i.e. more than one — is to talk about spending more than the entire annual output of Poland, a country with a population of roughly 37 million, for the benefit of a company that only knows how to burn money.
In 2024, Altman said that he would need as much as $7tn to build human-level AI — or, if we’re using the nation-state metric, fourteen Denmarks, or a quarter of all US economic output that year.
Then, a few days ago, he said that AI was in a bubble and that “someone” would lose “a phenomenal amount of money”.
I don’t know who that someone may be. I have ideas, of course. Still, it could be anyone. Anyone at all.
Jokes aside, can we just accept that Altman’s routine flip-flopping goes beyond the normal mendacity of a CEO trying to sustain the hype that his company lives or dies upon, and is, in fact, genuinely weird?
Altman isn’t changing his mind about small details — like how Satya Nadella thought that the metaverse was the next big thing, and then pivoted to AI almost immediately — but big things. Existential things.
Altman has gone from demanding the equivalent of one-quarter of the US’s economic output, to saying that “yeah, things are a bit mad, this is like the Dot Com Bubble.” He’s gone from hyperventilating about the existential risks posed by AGI to saying that AGI won’t, in fact, be that big of a deal — and who even knows what AGI even means?
Altman has spent the past year-and-a-bit boosting GPT-5, building up hype that he couldn’t sustain, comparing it to the effort to build the first atomic bombs, and then — no pun intended — describing it as a “misfire.”
It’s as though Altman has two personalities — one that says a bunch of reckless shit into microphones, which is then dutifully repeated by a compliant tech media, and another more sober one, which ends up trying (and failing) to walk back the mad shit that the other personality says.
It’s a Dr Jekyll and Mr Hyde tale, except whereas the book tried to contrast the duelling tendencies within all of us for good and evil, both of Sam Altman’s alter-egos are insufferable knobheads.
The Era of the Business Weirdo
My friend — and boss, and mentor, and father confessor — Ed Zitron coined the idea of the Business Idiot to describe the epidemic of people, both in upper- and middle-management that are, as the name suggests, idiots. Here’s what he wrote:
[We have] created a symbolic society — one where people are elevated not by any actual ability to do something or knowledge they may have, but by their ability to make the right noises and look the right way to get ahead. The power structures of modern society are run by business idiots — people that have learned enough to impress the people above them, because the business idiots have had power for decades. They have bred out true meritocracy or achievement or value-creation in favor of symbolic growth and superficial intelligence, because real work is hard, and there are so many of them in power they've all found a way to work together.
Another good bit that I’m going to quote:
We have, as a society, reframed all business leadership — which is increasingly broad, consisting of all management from the C-suite down — to be the equivalent of a mall cop, a person that exists to make sure people are working without having any real accountability for the work themselves, or to even understand the work itself.
When the leader of a company doesn't participate in or respect the production of the goods that enriches them, it creates a culture that enables similarly vacuous leaders on all levels. Management as a concept no longer means doing "work," but establishing cultures of dominance and value extraction. A CEO isn't measured on happy customers or even how good their revenue is today, but how good revenue might be tomorrow and whether those customers are paying them more. A "manager," much like a CEO, is no longer a position with any real responsibility — they're there to make sure you're working, to know enough about your work that they can sort of tell you what to do, but somehow the job of "telling you what to do" doesn't come with it any actual work, and the instructions don’t need to be useful or even meaningful.
Decades of direct erosion of the very concept of leadership means that the people running companies have been selected not based on their actual efficacy — especially as the position became defined by its lack of actual production — but on whether they resemble what a manager or executive is meant to look like based on the work that somebody else did.
The idea of the Business Idiot is powerful because it describes something that we all recognize, and have experienced in our own lives: people who have ascended the corporate ladder, and whose only real-life accomplishment is to force those around them to ask “how” and “why.”
The idea of the Business Idiot also encompasses the veneer of invulnerability that these imbeciles enjoy, wherein they can suggest moronic things that everyone around them knows are bad ideas, but nobody can say as much, because these people are so convinced of their brilliance that any direct questioning feels like an affront.
That, or they’re surrounded by layers of managers in the organization chart that, in effect, insulate them from the people actually doing the work — and thus, don’t hear the voices that say “this is a dumb idea.”
With that in mind, I’d like to propose the idea of the Business Weirdo — someone who, by virtue of their journey to the top rungs of the corporate ladder, or because they were just born that way, is unable to perceive the world in a way that makes sense to anyone else, or to process empirically true facts about the world the way a normal person would.
The problem with the Business Weirdo — as with the Business Idiot — is that once you see one, you can’t stop seeing them. They’re everywhere.
Check out this video — now twelve years old — of Adobe’s long-time CEO Shantanu Narayen being asked why Adobe’s software costs so much more in Australia (as much as AU$1,400) than in the US, considering that this software is delivered over the Internet (it’s not as though a boxed copy would add an additional AU$1,400 in costs).
Narayen redirects by talking about Creative Cloud, not even addressing the original question. Every subsequent attempt at a follow-up gets the same redirection. Here’s a transcript of a tiny portion of the exchange.
Reporter: What about the customers who want to buy traditional versions of Creative Suite, which are the majority of Adobe’s business? I know you’re talking a lot about Creative Cloud being the future, but if that’s the case, why not harmonize the prices of your traditional software?
Narayen: Well, Adobe wants to think about how we attract the future generation. The future of the creative is the creative cloud.
Reporter: I’m sorry sir, you’re really not answering the question. I don’t really have any other way to put it.
Narayen: Again, all I can say is that when we think about the future about what’s the best offering for our customers, I think about the creative cloud.
This line of questioning went on for four minutes.
To be clear, a huge part of media training is telling company spokespeople how to avoid answering difficult questions. There’s not a lot of magic here. A rep might say “I don’t know, or I’ll get back to you,” or “we’re always thinking about [this subject] and will have more to say in the future,” or just straight-up say “I can’t talk about that.”
I refuse to believe that Narayen — the CEO of a multi-billion dollar company — has had no media training. I do believe that he thought that he could redirect the question and get away with it — and congrats to the reporters (there was more than one!) for actually calling him out on this redirection.
The problem is what happened when that redirection failed. Narayen just… tried again. And again. And again, even as the reporter grew ever more exasperated (“I’m sorry sir, you’re really not answering the question. I don’t really have any other way to put it.”).
The definition of insanity, we’re told, is doing the same thing and expecting different results. Even if that isn’t true, the fact that Narayen thought this tactic would work when it was, by that point, abundantly clear that it wouldn’t is just… weird.
Separately, you have to acknowledge that the momentary hype surrounding crypto and the metaverse was, in fact, deeply weird.
The metaverse saw Facebook spend tens of billions of dollars — and even change its name! — because its leaders believed that people would want to socialize and work in a sanitized version of Second Life, minus the griefers and the ethereal floating penises. (That last vid is, obviously, very NSFW.)
Sidenote: The bit about “minus the griefers” isn’t even true. Go on TikTok and look up “Horizon Worlds trolling.” I’m convinced that for every serious user of Horizon Worlds, there’s another person who’s there just to cause trouble, like by throwing virtual food items at Meta’s in-game staffers, or by heckling virtual open-mics.
The underlying premise of crypto was that people would… replace the money they used, and that they got from their jobs and their investments, with mysterious digital coins that massively fluctuated in value every single day (which, in turn, means they’re pretty useless as a currency) and were issued by people they didn’t know, or trust.
The crypto boosters saw digital currencies becoming a mass-market phenomenon — which is insane if you’ve ever had to help troubleshoot a technical problem for an older relative where the solution is “press ‘yes’ on the pop-up that says exactly, in plain English, what it’s asking you to do.”
A conversation with a normal person — someone who still calls Twitter ‘Twitter’ and who enjoys healthy boundaries between their digital and physical lives — would quickly put to rest the idea that they would want to spend every waking moment of their life with a VR headset on their face, or to swap their dollars for digital money that spikes like Nvidia in the morning, and crashes like Lehman Brothers in the evening.
They wouldn’t just say “no.” They’d say “no” and then add “who would actually want that?” or “are you insane?”
To pre-empt a criticism that I know I’ll get — namely, that innovation often seems weird — I’d simply say that successful innovation often builds upon stuff that already existed, but wasn’t that good.
The iPhone was innovative, not because it did something new, but because it did something that other products already did (PDAs, mobile phones, iPods) and combined them into a single device that was absolutely awesome to use.
Uber was innovative, in part because it addressed the fact that taxis — at least, in the early 2010s — were expensive and unreliable.
The laptop was a computer that you could take places. The digital camera was a camera where you didn’t have to wait a week for your local Max Spielmann to print off your holiday shots, and where you weren’t constrained by how much film you could afford, but by how big your memory card was.
The electric car is a car — but cheaper to run, and arguably better for the environment (assuming you’re behind the wheel of a Renault Zoe, and not a fucking Cybertruck or Hummer EV).
These are all things that you can sell to a normal person, and while they’re innovative (at least for their time) and different, the person can at least understand what the product does and infer how it’s better than what currently exists.
It’s possible to think different and still exist within the planes of reality.
The Power of Weird
I want to wrap this up by going back to the definition of weird. I’m not talking about people with personal, innocuous idiosyncrasies. I’m not talking about weirdness on a personal level. You can dress up like a wizard if you want — more power to you. You can walk around smoking a pipe and carrying a parrot on your shoulder. I don’t care.
The weirdness I’m talking about — the Business Weirdo weirdness — is a wholesale failure to understand the world around the business idiot, and to empathize with normal people, and to comprehend how normal people experience life. It’s a failure to share the same values of normal people — namely, care and compassion for their fellow human beings, however flawed or selective or inconsistent that care and compassion may be.
Business Weirdos don’t merely perceive the world differently, but they also believe that they have the power to change how others perceive the world — whether that’s Sam Altman flip-flopping on major questions, like whether GPT-5 will be a game-changer or AGI poses an existential risk to humanity, or Narayen trying (and failing) to use Jedi mind tricks to stop an awkward line of questioning, like he’s fucking Adobe-wan Kenobi.
“This is not the software you’re looking for. Have you heard of Creative Cloud? I believe it’s the future of the creative….”
Because Business Weirdos are so weird, they don’t share the same basic morality that comes naturally to us. Mark Zuckerberg thinks the golden rule is, in fact, made of gold, and he wants to melt it down so he can forge another embarrassing chain to wear around his neck next time he goes on Joe Rogan.
The thing about the Business Weirdo is that they’re good at hiding in plain sight, disguising their weirdness as genius that we must — even if we don’t accept it — respect. I’d argue that’s born of sheer conditioning. Nobody ever said, on camera and to Mark Zuckerberg’s face, that the metaverse was a dumb idea, or that a lot can go wrong with AI companions.
Every soft-ball interview question, or reporter that nods when a tech CEO says something bonkers, conditions us into doing the same.
The best (and only) way to break that conditioning is to call it out as what it is — deeply, deeply fucking weird.
When Dario Amodei says that AI will displace half of all entry-level jobs in the near future, with no evidence, it’s to ask whether he thinks that’ll be a good thing for humanity in a way that makes it clear that you think it isn’t a good thing.
When Mark Zuckerberg says that people generally know whether their life choices are good — even when, from the outside, they seem like they aren’t — ask whether he applies the same philosophy to people addicted to fentanyl.
When Shantanu Narayen spends four minutes avoiding a question by talking about something else, it’s to repeat what he says back at him, structured as a question, and bleeding with sarcasm to show that you aren’t falling for it.
It’s to ask why Sam Altman can change his mind so often and so easily on topics that are fundamental — like whether AI deserves the trillions of dollars he’s demanding, or whether the technology he aspires to build has the potential to kill us all, and how he came to settle on these completely contradictory opinions.
The great thing about the epithet “weird” is that it’s something that’s rooted in a “sniff test.” We can all determine that the metaverse isn’t going to be a thing without having to know anything about virtual worlds or virtual reality or NFTs.
That’s not to say that it isn’t good to engage in arguments on the basis of fact, reason, and evidence — it is! — but not everything deserves it. Some things are so obviously batshit that you don’t need to craft a line-by-line rebuttal, in the same way that you wouldn’t necessarily write an academic thesis about why David Icke’s hypothesis of how interdimensional reptilians control the world is wrong.
The line between genius and weird is, admittedly, not always clear. But weirdness — the capricious, delusional kind that manifests in the highest echelons of the tech industry — is one of those things that, once seen, cannot be unseen.
And we should laugh at it.
Footnotes
Two months in and I’m within arm’s reach of crossing 1,000 subscribers. Thank you, thank you, and thank you.
I’ve enjoyed writing this newsletter. Most of all though, I’ve enjoyed speaking to the people who’ve read my stuff and decided to reach out, whether over email or in the comments.
As always, you can reach out to me via email at me@matthewhughes.co.uk, or on Bluesky.
I’m a lot cheerier on Bluesky. Sometimes, I even post dog pictures.
I’ve been invited to speak on a couple of podcasts, and to contribute to other publications based on what I’ve written here. When that stuff goes live, I’ll post a link here.
If you want to invite me onto your podcast, or if you want to commission me to write something angry and opinionated, drop me a line.
Also, some news: I’m launching a premium version of this newsletter. I said to myself that I would do that once I crossed the 1,000 subscriber mark. I genuinely didn’t expect that I’d cross that line (or, come very close to it — at the time of writing What We Lost has 969 subscribers) after two months. I thought it would take much, much longer.
Premium subs get 3-4 extra posts each month. The first will go live (depending on how other stuff pans out) either tomorrow or on Monday.
I’ll still post content for free subscribers, and the big posts — the newsletters like the one you’re reading right now, where I’ve written 6,000 words — will always be free.
Also, for the seventeen people who signed up for a premium subscription even though there wasn’t actually any premium content, I’m going to comp you a couple of months free to say thanks.


I realize it’s a long article and you had a lot to get to, but there was so much more insanity to unpack in this quote.
“So [the] most basic of the four [AI use cases] is to use AI to make it so that the ads business goes a lot better. Improve recommendations, make it so that any business that basically wants to achieve some business outcome can just come to us, not have to produce any content, not have to know anything about their customers. Can just say, ‘Here’s the business outcome that I want, here’s what I’m willing to pay, I’m going to connect you to my bank account, I will pay you for as many business outcomes as you can achieve’. Right?”
First off, what business person on Earth (other than Mark Zuckerberg, maybe) doesn’t want to know anything about their customers? How would you know where to grow, or what new products to create, or how to tweak your current products to better suit the people who are buying them? There are only two businesses on Earth that can function without the seller knowing a single thing about the buyer - drug dealing, and social media platforms, which is just digital drug dealing. This is truly the attitude of a business leader who has nothing but contempt for the people who use his service.
Same with not knowing what message you’re putting in the world, which you know Mark himself doesn’t believe is an effective approach, given his companies are running two slick, highly produced global ad campaigns right this second. Because seriously, who the fuck wants to put a message out to a thousand or a million people and doesn’t want to know what that message says? Again, this is a person who on some level believes reality is an algorithm.
And lastly, what he’s describing is in essence a perpetual motion money machine. I, a business, simply pay Meta to turn $1 into $3? It’s that simple? Well then let me just put that $3 back into the machine and turn it into $9? Why, I’ll be a trillionaire in a matter of months!
It’s either a sociopathic lie or a truly unhinged delusion. Probably both.
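The compounding in that joke is easy to make concrete. As a purely illustrative sketch (the tripling-per-cycle return is the commenter’s hypothetical, not a real number), here’s how fast reinvesting a guaranteed 3x return would reach a trillion dollars:

```python
# Illustrative only: if paying Meta reliably turned $1 into $3,
# reinvesting the proceeds would compound absurdly fast.
# The 3x multiplier is the hypothetical from the comment above.

def cycles_to_target(stake: float, multiplier: float, target: float) -> int:
    """Count reinvestment cycles until the stake reaches the target."""
    cycles = 0
    while stake < target:
        stake *= multiplier
        cycles += 1
    return cycles

# Tripling $1 each cycle reaches $1 trillion in just 26 cycles,
# which is exactly why no such machine can exist.
print(cycles_to_target(1.0, 3.0, 1e12))
```

Twenty-six rounds of a sure-thing 3x bet takes you from a dollar to a trillion, which is the whole point: any advertised guaranteed-return machine disproves itself on a napkin.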
> What’s happening is the app is like this discovery engine algorithm for showing you interesting stuff and then, the real social interaction comes from you finding something interesting and putting it in a group chat with friends or a one-on-one chat.
He’s just saying that the Facebook news feed was getting outcompeted by TikTok videos that people were sending each other on iMessage. That’s just the reality, weird or not.
I feel like this piece suffers from the common rhetorical fallacy of an enemy portrayed as weak and contemptible but simultaneously all-powerful. On one hand you are implying that Zuckerberg single-handedly caused the move away from social news feeds (rather than being forced by competitors), and on the other hand you are implying that he is also so completely clueless about social media product decisions that random Substack bloggers like yourself know more than him.