Do you have a friend, colleague or family member who has started behaving strangely? Are you worried that they might have been recruited into a cult? No, not the so-called Tetrapod Cult, which definitely does not exist and is not a real thing in our timeline.

Phew! Now we’ve got that out of the way…
Perhaps you know the kind of cult I’m thinking about. Maybe there is some kind of new technomagical solution being talked up by the same grifters who were all over previous hype waves like blockchain, web3, NFTs and the “metaverse”? Now, barely pausing for breath, they have seamlessly pivoted to AI^H^H quantum^H^H slime mold computing. Yep, that’s it - slime molds are the new hotness. It’s time to get on board the slime train, or you’ll be left behind and everyone will point and laugh at you.
You can tell when something is a cult because it becomes impossible to question. It is simply inevitable, at least as far as people who have been brainwashed by the cult (or are using it for personal advancement) are concerned. Any problematic aspects are transitory and irrelevant. And anyway - everyone else is doing it, so don’t let the slime train leave you behind.
This is, inevitably, a post about generative AI and Large Language Models.
In the last couple of years I’ve found myself at a number of events where AI True Believers wax lyrical about the imagined benefits and sure-fire inevitability of our neon-lit and not at all dystopian cyberpunk AI future. Of course the beauty of “AI” is that it can mean whatever you want it to - a trick that was much harder to pull off for some other recent bubbles, like the unfortunately named Non-Fungible Tokens. Reader, they were So Very Fungible. They are by now Almost Entirely Funged.
It’s pretty much impossible to reason with someone whose entire identity, image, self-worth (or actual wealth and status) is inextricably wrapped up with a movement whose whole premise is based on beliefs rather than facts or evidence - basically, putting the vibes in vibe coding. But what you can potentially do is influence the other people in the room who are not fully paid-up cult members. After all, they have yet to fully commit their future and (crucially) their wallet to $THE_SHINY_NEW_THING. There is still hope for them. For now.
Here are a few examples of deceptively simple questions which you don’t need to be a cyberpunk technomancer like me to ask, and which stand some chance of cutting through. Maybe you can think of more? Let me know…
Reflections On Trusting Trust
We hear a lot about “trustworthy AI”. But we also hear that hallucinations are inevitable because LLMs are essentially autocomplete on steroids, and LLM outputs are non-deterministic - they literally come up with different stuff each time you ask them a question. OK, so they can seem quite plausible - until you ask them about something you know a lot about, at which point the cracks really start to show.
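If you want to see the non-determinism for yourself, here’s a toy sketch in plain Python - no actual LLM involved, and the token probabilities are entirely invented - of the temperature sampling that powers the autocomplete-on-steroids:

```python
import random

# Toy next-token probabilities for the prompt "The slime mold is..."
# (numbers invented for illustration; a real LLM picks from ~100k tokens)
next_token_probs = {
    "intelligent": 0.40,
    "inevitable": 0.25,
    "plotting": 0.20,
    "delicious": 0.15,
}

def sample_next_token(probs, temperature=1.0):
    """Sample one token. Higher temperature flattens the distribution
    (unlikely tokens get picked more often); lower temperature sharpens
    it towards the single most likely token."""
    weights = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    threshold = random.uniform(0, sum(weights.values()))
    cumulative = 0.0
    for tok, weight in weights.items():
        cumulative += weight
        if threshold <= cumulative:
            return tok
    return tok  # floating point edge case: fall back to the last token

# Ask the same "question" five times; expect different answers.
for _ in range(5):
    print("The slime mold is", sample_next_token(next_token_probs))
```

Run it twice and you’ll get two different transcripts. That’s delightful if you’re generating whimsical slime mold fan fiction, and rather less delightful for anything where the answer actually needs to be right.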
QUESTION: Can something with so many trivial and devastating failure modes ever truly be “trustworthy”? (and this is before we get onto the infosec disaster area that is Model Context Protocol)
The Only Way Is Ethics
Another common trope from AI boosters is that “ethical AI” is possible somehow. Possibly because if we keep using the phrase “ethical AI” enough, then people will start to believe that some kind of Magical Ethical Thing is taking place somewhere (you wouldn’t know where, the Magical Ethical Thing goes to a different where) that will negate the by now very well publicised genAI harms. We are ethical now, you see. We said so. So it must be true. Spoilers: The AI was not in fact ethical by any conventional definition of the term.
Harms, what harms? You know the ones, but let’s just list a few for fun: (not actually fun)
- Pushing up electricity prices, leaving human communities unable to afford power.
- Leeching the humans’ water supply for cooling and spewing out pollution.
- Helping themselves to whatever they can find on the Internet, whether or not they have permission from the humans who created it.
- Destroying human livelihoods by being just-good-enough (not actually good enough, if we’re honest) to replace real living people like artists and translators.
- Brutalising the humans involved in labelling and reinforcement learning work.
- Destroying the open web by aggressive crawling and encouraging enclosure of the digital commons.
- Facilitating the slopocalypse of garbage AI-generated websites, harassment via deepfakes and so on.
- Accelerating the enshittification of practically every product and service you can think of.
- Conning people into harming themselves and others via the entirely false idea that the AI really is “intelligent” and “thinks” and “knows” stuff, rather than (in reality) stringing statistically likely words and phrases together into a Digital Soup of Truthiness.
- Advancing the agenda of the worst people in the world. You know who. We’ll come back to them.
It is conceivable that even the most ardent bot fondler might reluctantly accept that it could be a tiny bit difficult to wave a Magical AI Wand and make all the bad stuff go away.
QUESTION: If we keep boosting the bots, then aren’t we saying (in essence): Well, it’s a shame about the catastrophic harms, but we don’t really care that much - so let’s just pretend it’s all good. OK?
Mecha-Hitler Will See You Now
Let’s imagine that you have decided to pursue your cursed genAI project in spite of the above. There’s just one small problem - the “AI bros” are some of the worst people in the world. It’s as though every time you write the inevitable Python program to do your Magical AI Thing, you add the following line at the top…
import fascism
Not “just messing about” fascism, or “poking fun at the libs” fascism. This is the real deal - we’re talking about people who just can’t seem to stop making public statements in support of evil and despotic regimes, actively working to destabilise democratically elected governments, and facilitating online abuse and harassment. Sometimes they even throw cheeky Roman Salutes (LOL!) and tell their chatbot to call itself Mecha-Hitler.
QUESTION: If we embrace genAI from fascists, doesn’t that make us a Nazi bar? (and let’s not even get started on “ethics” and “trust”)
We Have AI At Home
It’s OK, though, because we can burn vast amounts of carbon training our own foundation AI model. This will be fantastic and transformational because we will know and control key aspects like training data, model weights and system prompts. The AI bros seem strangely reluctant to talk about a lot of this stuff for some reason, but we don’t need them now because we have Tech Sovereignty.
It’s great that we can visit the AI sausage factory and inspect the ingredients for ourselves. However, our foundation model will still rely on datasets like Common Crawl, which is already hopelessly polluted with genAI slop. As the saying goes: garbage in, garbage out.
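To get a feel for why that matters, here’s a crude sketch of the feedback loop - a toy Gaussian stands in for the model, which is a heroic simplification, but the shape of the problem is the same. Each generation “trains” only on the previous generation’s output, and like a real sampler it favours likely outputs and drops the weird tails:

```python
import random
import statistics

# Generation 0: "real" data - human-made, full of messy variety.
data = [random.gauss(0.0, 1.0) for _ in range(10_000)]

for generation in range(8):
    mu = statistics.fmean(data)    # "train" a model on the current crawl
    sigma = statistics.stdev(data)
    print(f"generation {generation}: spread = {sigma:.3f}")
    # Generate the next web crawl from the fitted model, preferring
    # likely outputs and discarding the tails - so each generation
    # comes out a little blander than the last.
    samples = (random.gauss(mu, sigma) for _ in range(30_000))
    data = [x for x in samples if abs(x - mu) < 1.5 * sigma][:10_000]
```

Every generation looks perfectly plausible on its own; the variety only drains away over time, with the spread collapsing towards zero. Which is roughly what happens when next year’s training crawl is mostly this year’s slop.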
QUESTION: So how can we avoid model collapse, the AI equivalent of Mad Cow Disease?
Failure Is (Not?) An Option
The language of the AI boosters might be all about disruption and innovation, “accelerating adoption” of genAI through things like testbeds, trials and prototypes. The only thing is, these words and phrases imply (at least tacitly) that failure is possible. Perhaps, just maybe, genAI won’t actually work for some use cases? Maybe later, maybe never.
QUESTION: Before we bet the planet on genAI, won’t it be super important to track progress and figure out which (if any) trials and testbeds produce more good than they do harm?
At the time of writing in October 2025, genAI is showing every sign of blowing up the global economy with results that will be at least as spectacular as the Dot Com crash or the Global Financial Crisis of 2008. We don’t know exactly how big the crater will be, but there will certainly be room in it for all the people who uncritically promoted genAI in spite of the obvious harms and hazards. Enjoy your crater. It’s your hole. You’ve earned it!
Meanwhile I hope someone remembers to keep an eye on the slime molds, with their 720 sexes and their self-healing. I have a feeling they might be up to something…