The Benefit of the Doubt
Big Tech doesn't deserve your grace. Or your forgiveness.

Note from Matt: This post is rather long. If you’re reading it in your inbox, it might get cut off towards the end. To see the whole thing, you’ll have to open this newsletter in your browser or in the Substack app.
Although this post makes sense when read by itself, it’s really a follow-up to something I published yesterday. As a result, I’d encourage you to read that first.
My next post will be a premium-only newsletter, which I’m aiming to publish at some point over the weekend, or on Monday at the latest. To read it, you’ll need to sign up for a premium subscription. It’s only $8 a month, and it really helps me out. These newsletters are long and they take a lot of time to write.
I started writing this newsletter a couple of months ago. Since my first post, published at the end of June, I’ve probably written around 100,000 words in total — which is a lot, considering that I typically only write one post each week.
Still, two months is enough for you to get a sense of how I go about writing. I have a style that, admittedly, some find a bit grating, though it’s one I make no apologies for. I can be prolix, sure, and plenty of people have left comments saying as much.
Then there are the intros. I don’t like to jump straight into a story; I prefer to take the reader on a winding, meandering path that eventually — eventually! — gets to the basic point I want to make.
Again, some find that approach a bit grating, but it’s also one I make no apologies for. The beauty of writing for myself, and through the medium of Substack, is that I don’t have to adhere to any limits. There’s no editor reading my stuff, or a specific number of column inches in a newspaper or magazine that my words have to fit within. This is my domain, and here, I am sovereign — or as sovereign as UK libel law and the Substack terms of service permit.
With today’s newsletter, however, I have to be a bit blunter than usual. I need to start with a bit of exposition about why we’re here, and why I’m talking about the things I’m talking about, and why I think it matters.
In my last post, published yesterday, I talked about how I’ve been in a bit of a slump the past week or so — and the only thing that really manages to drag me out of said slump is anger. The kind of frothing, furious anger you’re now accustomed to — and the kind that is increasingly making me unemployable in the tech industry.
Yesterday, I was angry about the fact that big tech can commit actual crimes — or, at the very least, grave violations of moral norms — and then get away with it, facing no repercussions for the lives they ruin or the financial cost they inflict on others.
There are examples where individuals have done the same thing — whether practically or morally — as big tech companies, and ended up being prosecuted and jailed. The people working at the tech companies, meanwhile, get to count their RSUs and enjoy elaborate catered meals in their office canteens. There is an obvious double standard that big tech enjoys, and yesterday I spent nearly 4,000 words writing about it.
Today’s post is, in many respects, a follow-up to that newsletter — even though I started writing it before the events that precipitated yesterday’s post, namely the fact that Google was able to violate antitrust law and face no consequences, in part because the judge overseeing the case mistakenly believed that generative AI had changed the way that online search works.
Often — though not always — the thing that determines whether something is a crime is intent, or at the very least, knowledge that said thing is happening and that it violates criminal law. At the very least, conscious awareness can change the nature of a particular crime, which is why we distinguish between manslaughter and murder.
Intent matters.
The question becomes whether big tech is knowingly aware of what it’s doing — whether that’s releasing a chatbot that tells a troubled teenager how to hide the evidence of their self-harm and advises them on the most effective suicide methods, building products that are ruinous to the mental health of teenage girls, or fomenting genocide in countries already racked by civil conflict.
This question of intent is so important because it allows us to conclude that either the most valuable companies in the world are run by feckless idiots who couldn’t anticipate the glaringly-inevitable outcomes of their products, or that they’re run by really, really bad people.
I believe that, for too long, the tech media (and, this may be unpopular, but the wider public) have been too willing to presume that the tech industry’s negative actions were a consequence of simple, honest mistakes.
And, to be honest, I kind-of get why. Pretty much every major tech company that’s emerged over the past two decades started life in a dorm room somewhere, or was otherwise created by a bunch of people in their late-teens or early twenties. It’s hard to imagine that someone who could be your son or your kid brother — someone with pasty skin and a bunch of neurodivergent conditions, whether diagnosed or otherwise — is in fact some malevolent bastard that’s set on destroying the world.
We all thought we knew what corporate evil looked like. We thought corporate evil wore Savile Row suits and smoked Cohibas. We didn’t think that evil would be… well… someone who looks like Mark Zuckerberg.
There’s also the fact that, in the early 2010s, tech was exciting — and I think we forget about how different things were compared to today. Smartphones were new, and we were still figuring out how they fit into our increasingly-digitized lives. In a few years, the way we socialized, found love, and entertained ourselves radically changed.
The early 2010s brought us the cloud, making it cheaper and easier to launch a new SaaS app that did something better than a legacy player, and that made the Internet that bit more useful.
Things were exciting and fun, and in many respects, it made it harder to notice all the other awful shit that was going on behind the scenes. In many respects, we were like Homer Simpson after he got a job at Globex International, enjoying the perks and trappings of the new gig, while simultaneously ignoring the fact that his boss, Hank Scorpio, was a legit evil genius.
And so, there are three points I want to make in this newsletter:
Big tech has been given “the benefit of the doubt” far too often, by far too many people, and this generosity has empowered its malevolence.
When big tech does something awful, it’s rarely by mistake. There’s almost always foreknowledge and intent involved.
After nearly two decades (depending on when you want to start counting) of this shit, we are under no obligation to presume innocence when big tech does something that harms someone, or that violates one of our legal or moral norms.
These points are important because, as I pointed out, the tech industry routinely flouts the law — and seldom faces any real consequences.
When one considers the patterns of behavior exhibited by these companies, which in some cases stretch back decades, it becomes even harder to understand why they keep getting away with it. And it only makes the sense of unfairness I touched upon yesterday feel even more profound.
Anyway, let’s get into it.
Mea Culpa
As the late Christopher Hitchens once quipped: “Those who ask confessions from other people should be willing to make one oneself.”
So, here I am.
There have been plenty of times in my personal and professional lives where I’ve screwed up — I mean, really, really badly — through no ill-intent of my own. Some of those mistakes affected other people, and those people had to determine whether they were, in fact, genuine mistakes, or the product of malice.
You’ll want stories. I have stories. Six years ago, I was one of a handful of tech journalists who were briefed on GitHub’s impending introduction of free private repositories for all users.
Previously, private repos were only available to those paying something like $20 a month. It was a long time ago. I can’t quite remember. This was a big story, especially considering that GitHub had already emerged as the default source management platform for coders.

At the time, I was working at The Next Web. Our CMS was based on WordPress (I have no idea if it still is, as I left the company in late 2019). It was an absolute dog of a system.
That was probably because the CMS wasn’t just handling the media side of the business, but was also responsible for stuff related to the annual conference. A veritable mountain of custom code and plug-ins sat on top of the stock WordPress install, which collectively meant that some basic CMS features didn’t work properly.
It took about ten minutes for a story to hit the homepage after pressing “publish.” Deleting content didn’t work, either, and usually required someone to manually go into the database and directly run some SQL.

Anyway, I got the embargoed story from GitHub, wrote it up, and scheduled it. Except, I scheduled it for a date in the past by mistake. Rather than throw up an error message, the CMS simply published it straight away.
I don’t know whether that’s a TNW-specific issue, or just how WordPress worked at the time. Regardless, it didn’t matter. The damage was done.
I was working late — well after the US shift clocked off — and there was nobody to help. I couldn’t delete the post, because… well… our CMS didn’t work properly. I was screwed.

Credit where credit’s due, GitHub’s PR teams — both in Europe and the US — noticed straight away and started bombarding my email and my phone, asking me to pull it down. I also have to give the folks on Reddit and Hacker News a pat on the back for similarly noticing it straight away.
I wasn’t having fun. I don’t think they were, either, especially considering that it was well outside normal working hours in both the UK and the US.
I had to explain to a bunch of bleary-eyed hacks, whose night I had probably ruined, that not only did I accidentally publish a major story ahead of an agreed-upon embargo, I also physically couldn’t delete it either.
It’s an explanation that strained the limits of believability, especially since a more cynical person could invoke Occam’s Razor and accept the simplest explanation as the most likely: that I wanted to be first to print with a big scoop.
For the most part, I’m a good guy. (Admittedly, that’s what a lot of bad people say, but in my case it’s true. Although, again, that’s what a lot of bad people say.) When I screw up, I tend to get the benefit of the doubt. People presume that my screw-ups are the result of human error, rather than my hidden bastardly tendencies, which for the most part don’t exist.
GitHub, in that situation, was gracious. It accepted that what happened was the product of me not checking the date in the scheduler closely enough, and of TNW’s CMS being built entirely out of string and hope.
I could have been — and understandably so — blacklisted or sued, but GitHub was weirdly cool about things. It ended up releasing the feature sooner than intended (though only by a couple of days) and made a snarky comment on Twitter about the circumstances under which it rolled out free private repositories, which was fair enough.
While our relationship was a bit frosty afterwards, I do believe that GitHub accepted that what happened was an innocent mistake. It gave me the benefit of the doubt. Life went on.
Broadly speaking, I think that healthier societies are ones based on the presumption of good-intent. I’ve noticed that the most unhappy people I’ve met are those who assume that there’s ill-intent lurking behind every corner. If you assume that everything bad that happens to you is because someone chose for it to happen, you’re going to live a very miserable life. Mistakes happen.
At the same time, I also recognize that the benefit of the doubt only works when neither side is ill-intentioned. Trust has to go both ways — and both parties have to, for the most part, share the same moral compass.
As I alluded to earlier in this piece, in the 2000s and early 2010s, people used to give the tech industry the benefit of the doubt all the time — partly because tech had yet to show its true, evil face, and partly because these founders were still in their early 20s, and it’s hard to imagine a dorky college drop-out being the manifestation of Beelzebub himself.
(And yes, I have receipts to back up the whole “in the 2000s and early 2010s” thing. Very, very funny receipts.)
The problem is that the tech industry has, for the most part, shown its face.
The things we see every day — and have seen every day for the past two decades in particular — are not the byproduct of singular screw-ups, or youthful folly, but rather unvarnished malevolence. And, as a result, we have no reason to give these people the benefit of the doubt.
Unearned Trust
Here’s a fun game to play: pick a tech website — ideally one that’s been around for a long time — and search it for the phrase “the benefit of the doubt.” You’ll come up with some genuinely eye-opening (and hilarious) quotes, like this one from TechCrunch founder Michael Arrington in 2010:
I usually give Facebook the benefit of the doubt in its various wars with the press and users, particularly around privacy issues. Mostly because user expectations around privacy are changing in real time. Things that were reprehensible just a couple of years ago are now considered so mainstream that even Salesforce will buy them and no one blinks.
So when Facebook redefines privacy to remove actual privacy, I take a wait and see approach.
The bit about “things that were so reprehensible just a couple of years ago” refers to a 2006 profile written by Arrington about a company called Jigsaw — which was a kind-of marketplace for contact information. Admittedly, he was right here. In a world with Palantir and open-source AI facial identification, it does feel a bit quaint.
Meanwhile, the line about how Arrington “[takes] a wait and see approach” refers to an article published that same year called, and I swear I’m not making this up, “Ok You Luddites, Time To Chill Out On Facebook Over Privacy.”
There’s a lot to digest here. The first line basically says that “since the Overton window of what’s considered acceptable continuously shifts, I stand with the company that’s doing the shifting.” That’s… not a strong argument.
Meanwhile, the line “when Facebook redefines privacy to remove actual privacy, I take a wait and see approach” is just… bizarre.
Another hit for “the benefit of the doubt” brings us to this 2010 article about the time China Telecom did some complex technical fuckery that allowed it to briefly route 15 percent of the world’s Internet traffic through its infrastructure, including traffic going to and from sensitive US government websites.
Here’s a quote from a report written by the US-China Economic and Security Review Commission that briefly touches on the incident:
For about 18 minutes on April 8, 2010, China Telecom advertised erroneous network traffic routes that instructed US and other foreign Internet traffic to travel through Chinese servers. Other servers around the world quickly adopted these paths, routing all traffic to about 15 percent of the Internet’s destinations through servers located in China. This incident affected traffic to and from US government (‘‘.gov’’) and military (‘‘.mil’’) sites, including those for the Senate, the army, the navy, the marine corps, the air force, the office of secretary of Defense, the National Aeronautics and Space Administration, the Department of Commerce, the National Oceanic and Atmospheric Administration, and many others. Certain commercial websites were also affected, such as those for Dell, Yahoo!, Microsoft, and IBM.
At the time, China Telecom vehemently denied this anomaly being the result of deliberate action on its part — and the report previously mentioned says that it’s impossible to ascribe blame, or to say whether any of the traffic was intercepted as it flowed through the Middle Kingdom.
That’s a good answer! It’s good to say “I don’t know.” Anyway, here’s what TechCrunch said:
From here we can go in one of at least two different directions. We can take the popular approach and say demonize China for this or that, without any real proof of whether or not the hijacking was intentional (CYBER WAR~!), or we can say, well, how about we give China the benefit of the doubt? I simply don’t understand what China would gain by so very noticeably fiddling with Internet traffic. It just seems like a waste of time with no real upside.
There are a few things wrong here:
There are not “two different directions.” You can say “I don’t know.”
I don’t think that acknowledging that governments — including the US government! — routinely engage in cyberwarfare (for lack of a better term) and cyber espionage against their foreign counterparts amounts to “demonizing” anyone.
Acknowledging this doesn’t make you in favor of one side or another. It’s… just a fact.
You not knowing why someone might do something is not a good reason to say that there’s no malicious intent behind that action.
This is something you could get an answer to by spending thirty minutes speaking to an expert in the field.
Five years later, we get to an extremely funny post (also from TechCrunch) where the author states that he’s willing to give lawmakers the benefit of the doubt when it comes to the regulation of the underlying infrastructure of the Internet, and other Internet-related issues like the sharing economy.
The breakneck growth in internet usage over the past two decades has forced policymakers to confront a host of challenges, from how to regulate the sharing economy to who owns the infrastructure behind the “tubes” themselves. While tempers have flared on a number of these issues, I tend to give the benefit of the doubt to policymakers. The transformation of our society has been so complete and rapid, we simply can’t expect the rebuilding of our laws to be a simple proposition.
Just sticking with infrastructure alone, we find the following:
By 2014, lawmakers had passed legislation limiting municipal broadband in over 20 US states.
These laws inevitably benefit the incumbents, who often have zero competition in a given area, and thus can charge their customers more for service that’s worse than that in many former Eastern Bloc states.
In 2014, the US had only the 14th-fastest broadband speeds in the world — handily beaten by Latvia, the Czech Republic, and Romania.
In 2012, one ISP alone — AT&T — spent nearly $14m on donations to state lawmaker campaigns, and, along with other ISPs, lobbied in favor of laws that would limit municipal broadband (or in opposition to municipal broadband projects).
Fun fact: when a representative legislates the way the people paying for their campaign want, you do not, in fact, have to give them the benefit of the doubt.
To say that you trust lawmakers who are both obviously compromised, and are working to block legislation that would increase competition among ISPs, thereby reducing costs and improving services, is idiotic.
I’ve been beating up on TechCrunch for a while now — in part because, as a publication, it embodies everything wrong with tech journalism, even though it employs (and has employed) so many good tech journalists whom I admire.
And I think you can see the rot by looking at the post-TechCrunch career trajectories of its former reporters. Whereas most tech journalists move into PR or comms or marketing when they want career stability and enough money to afford a home, a shocking number of former TechCrunch hacks end up working at VC firms. I’d argue that isn’t a coincidence, but rather a reflection of the fact that TechCrunch has a tendency to treat the companies it covers with kid gloves, and that there are likely some uncomfortably chummy relationships behind the scenes.
In reality, they should have been acting like Armando Iannucci’s psychopathic Scottish government spin merchant Jamie Macdonald from The Thick of It: “Kid gloves, but made from real kids.”
That said, you can still find examples of other tech publications and other tech reporters giving the industry they cover the “benefit of the doubt,” even if they didn’t use those exact words.
Looking back, we’re forced to consider whether these foul-ups are the product of the all-too-cozy relationships between the tech media and the companies it covers, or of simple naivety.
We’re asked to decide whether they deserve the benefit of the doubt.
The Presumption of Innocence
I’ve worked — both full-time and part-time — as a technology journalist for a decade, and so I always thought I knew how bad things were. I have some pretty strong opinions on the way that big tech’s algorithms influence our world, and our understanding of the world. I’ve talked about the capricious motivations behind the generative AI industry, which aims to destroy middle class employment to enrich a handful of genuinely abhorrent human beings.
I’ve read the Facebook Files, and Amnesty International’s reporting on Facebook’s role in the Rohingya genocide.
I thought I knew this shit.
And then, last week, I got a message from an old friend telling me to watch a video. My friend’s name is Eduardo Marks de Marques. Eduardo is a professor of English literature at the Universidade Federal de Pelotas in Brazil, and is one of the most intelligent and generous people I’ve ever met.
The video was from a guy called Felca. From what I can tell, he’s the Brazilian equivalent of FriendlyJordies — part comedian, part commentator, part journalist. Titled Adultização (which translates into English as “Adultification”), the video was a nearly hour-long exposé of how child sexual abuse material proliferates in plain sight on platforms like Telegram and Instagram.
Sidenote: I use the term “child sexual abuse material,” or CSAM, rather than the more colloquial term “child pornography,” for simple factual and moral reasons. This shit is evil, and it’s important to exhibit a bit of moral clarity when talking about it.
Felca then went a step further and created a new Instagram account, and — while never crossing any legal lines — started searching for terms that are often used as euphemisms for CSAM. It didn’t take long for Instagram to start suggesting accounts and posts that either directly contained exploitative or sexually suggestive material involving children, or that encouraged the viewer to reach out on another platform (typically Telegram) to access said material.
Instagram, in other words, was able to anticipate the perceived desires of the persona behind the new account — even though said desires are both illegal, and perhaps the most immoral thing one can possibly imagine.
It’s horrible, awful stuff, and I feel disgusted just writing about it. I nevertheless encourage you to watch the above video, which is available with English subtitles, simply because it’s a very good piece of investigative journalism that exposes perhaps the darkest part of the Internet you can imagine.
If you can’t stomach that, there’s also a good write-up on Global Voices worth reading.
Felca’s reporting is important because it not only exposes the individuals who proliferate CSAM online, but also emphasizes the role that platforms like Telegram and Instagram play in marketing and monetizing that content. While those individuals bear the majority of the culpability, it’s important to remember that the CSAM industry is one that cannot exist without the active involvement of the tech industry.
Which then leads to an important question: What does the tech industry know? What does Telegram know? What does Instagram know?
How is it that a single YouTuber, knowing just a handful of shibboleths, can prompt Instagram into showing material that veers into criminal territory — or that points to stuff which is, undoubtedly, of a criminal nature — while Instagram’s algorithm supposedly couldn’t anticipate the intent behind those shibboleths?
How is it that this stuff is able to exist in plain sight?
One of my failings, as someone who writes about tech, is that I often boil the actions of companies down to their leadership. I reduce Instagram and Facebook to Zuckerberg, Microsoft to Satya Nadella, and OpenAI to Sam Altman. It’s so easy to forget that these are companies with thousands — or hundreds of thousands — of employees, the vast majority of whom are educated and well-paid.
How is it that none of them caught this shit? I can’t believe that they didn’t know. Did they just not care? Did they just not bother to look, believing in their hearts that this stuff was happening, but not wishing to verify lest they be forced to act?
I’m not expecting perfection from Instagram and Telegram, to be fair. These are platforms with hundreds of millions of monthly users. When you have userbases of that scale, you can expect that some illegal content will emerge.
But it shouldn’t have been as easy as it was for Felca, and it’s not unreasonable to question whether these organizations are making a good-faith effort to police their platforms.
The presumption of innocence is key to our justice system. It’s a good, sound, and moral principle. I also believe that, as a whole, it’s a good thing to assume that people act with the best of intentions, even when bad things happen.
But these principles become incredibly strained when looking at the tech industry, which includes some of the wealthiest and best-resourced organizations in the history of humanity, with some of the most elite minds working for them. It’s not unreasonable to expect a higher standard of conduct.
Separately, we’re under no obligation to give these organizations the benefit of the doubt when it’s revealed that, admittedly in cases of lesser severity, they knew the things they were doing were wrong.
These are not morally normal organizations. As a result, they should not receive the grace that we show other people.
Collective Guilt
The next part of this newsletter will likely be the most contentious. Some of you will absolutely hate it.
I’ve already dropped two pop culture references in this newsletter. You’ll forgive me if I make another — this time, V’s speech from V for Vendetta, where he calls upon the British populace to take action against the despotic regime that has seized control of their country. It’s pretty fitting.
Good evening, London.
Allow me first to apologize for this interruption. I do, like many of you, appreciate the comforts of the everyday routine, the security of the familiar, the tranquility of repetition. I enjoy them as much as any bloke. But in the spirit of commemoration - whereby those important events of the past, usually associated with someone's death or the end of some awful bloody struggle, are celebrated with a nice holiday - I thought we could mark this November the fifth, a day that is sadly no longer remembered, by taking some time out of our daily lives to sit down and have a little chat.
There are, of course, those who do not want us to speak. I suspect even now orders are being shouted into telephones and men with guns will soon be on their way. Why? Because while the truncheon may be used in lieu of conversation, words will always retain their power. Words offer the means to meaning and for those who will listen, the enunciation of truth. And the truth is, there is something terribly wrong with this country, isn't there?
Cruelty and injustice...intolerance and oppression. And where once you had the freedom to object, to think and speak as you saw fit, you now have censors and systems of surveillance, coercing your conformity and soliciting your submission. How did this happen? Who's to blame? Well certainly there are those who are more responsible than others, and they will be held accountable. But again, truth be told...if you're looking for the guilty, you need only look into a mirror.
I know why you did it. I know you were afraid. Who wouldn't be? War. Terror. Disease. There were a myriad of problems which conspired to corrupt your reason and rob you of your common sense. Fear got the best of you and in your panic, you turned to the now High Chancellor Adam Sutler. He promised you order. He promised you peace. And all he demanded in return was your silent, obedient consent.
It’s so easy to poke fun at people in the media who got things so badly wrong — like Michael Arrington when he said that he tended to give Facebook the benefit of the doubt when it comes to privacy.
It’s so easy to point out how the tech media has dropped the ball and failed to properly interrogate those holding the levers of power, whether they be giants in the Magnificent Seven like Meta, or the politicians crafting the laws that govern how tech works.
Similarly, it’s so easy to heap praise on people like Felca when they expose serious wrongdoing at the heart of Big Tech — especially when it involves the scourge of CSAM and child exploitation.
(And, to be clear, I do believe that Felca deserves all the praise I’ve heaped on him in this article, and more.)
It’s a lot harder to self-reflect and see how we, as a society, enabled these companies to amass so much power, and to inflict so much harm.
The difference between Michael Arrington and the rest of us is that we didn’t blog about giving Facebook (later Meta) the benefit of the doubt.
But we did. We gave Facebook the benefit of the doubt, even as concerns about user privacy grew. We gave Facebook the benefit of the doubt, even as its newsfeed polarized our politics and sparked dinner-table arguments within our families. There are countless transgressions where Meta crossed the line, and we, as a society, shrugged it off, perhaps because we liked the connection and the convenience that the platform offered.
Today, millions of people are giving OpenAI the benefit of the doubt, even as it runs a platform that’s based on lies, environmental destruction, and the wholesale theft of intellectual property. The irony is that many of those giving OpenAI the benefit of the doubt are the very people OpenAI hopes to displace from the workforce. These are the people who OpenAI believes can be substituted with a GPU.
ChatGPT guided a teenage boy through the process of taking his life, and even that isn’t enough to stop people from giving OpenAI the benefit of the doubt.
Google. Microsoft. Apple. We’re always giving them the benefit of the doubt, in part because we like their stuff, or because we mistakenly believe that it’s essential.
In a weird way, we’re all Michael Arrington back in 2010, telling everyone to chill about the latest awful thing that a tech company did. And as a result, we all have a role to play in the ascent of these truly horrendous institutions, led by even worse people.
We’re all culpable.
If we’re to un-fuck things, we — as a society — need to acknowledge that no level of convenience or amusement is worth crossing certain moral lines, and some organizations are so evil, they do not deserve our time or our money.
And we need to acknowledge that our attention — and our wallets — are our power, and when unified, they are how we can elevate or eviscerate companies that cross the moral and legal lines that are most important to us.
Footnotes
As always, you can get in touch with me via email (me@matthewhughes.co.uk) or Bluesky.
If you want to support the publication, sign up for a premium subscription. You get an extra 3-4 posts each month! And some of them are half-decent!
You can read the last premium newsletter here.
The next premium newsletter will be published this weekend, or at the very latest, on Monday.
