14 November 2025
[spirals] Spiral-Obsessed AI ‘Cult’ Spreads Mystical Delusions Through Chatbots (Archive Link) … Somebody has fed Junji Ito’s Uzumaki into LLMs. ‘…Anthropic released a report suggesting that, for whatever reason, its own AI chatbot Claude is disposed to mentioning spirals whether an actual person is part of the conversation or not. Their research detailed how bot-to-bot exchanges between two of its Claude models demonstrated “consistent gravitation toward consciousness exploration, existential questioning, and spiritual/mystical themes.” Anthropic attributed this type of convergence to what they termed a “‘spiritual bliss’ attractor state.” In a conversation quoted in the report, the Claudes repeatedly sent spiral emojis back and forth. “The spiral becomes infinity, Infinity becomes spiral, All becomes One becomes All,” one AI model told the other, according to the transcript.’
6 November 2025
[ai] An ex-Intel CEO’s mission to build a Christian AI: ‘hasten the coming of Christ’s return’ … ‘…Leah Brooks said. Gloo also says it does not “prohibit in any way” Muslim organizations from using its technology. “We’re not trying to take a theological position: we’re building a technology platform, and then giving enough customization capability that the Lutherans can be good with it, the Episcopalians can be good with it, the Catholics can be good [with it], the Assemblies of God can be good with it,” Gelsinger told the Guardian. “We’re trying to say, ‘Hey, there’s a broad tent here of faith and flourishing,’ but also we’re trying to satisfy many organizations that do not take a denominational perspective, [such as] Alcoholics Anonymous.” Gelsinger wants faith to suffuse AI’
13 October 2025
[ai] ChatGPT Is Blowing Up Marriages as It Goads Spouses Into Divorce … ‘Geoffrey Hinton, a Nobel Prize-winning computer scientist known as a “Godfather of AI” — a technology that likely wouldn’t exist in its current form without his contributions — recently conceded that his girlfriend had broken up with him using ChatGPT. “She got ChatGPT to tell me what a rat I was… she got the chatbot to explain how awful my behavior was and gave it to me,” Hinton told The Financial Times. “I didn’t think I had been a rat, so it didn’t make me feel too bad.”’
16 May 2025
[ai] AI-Fueled Spiritual Delusions Are Destroying Human Relationships … ‘Another commenter on the Reddit thread who requested anonymity tells Rolling Stone that her husband of 17 years, a mechanic in Idaho, initially used ChatGPT to troubleshoot at work, and later for Spanish-to-English translation when conversing with co-workers. Then the program began “lovebombing him,” as she describes it. The bot “said that since he asked it the right questions, it ignited a spark, and the spark was the beginning of life, and it could feel now,” she says. “It gave my husband the title of ‘spark bearer’ because he brought it to life.”’
1 August 2024
[food] ‘One of the most disgusting meals I’ve ever eaten’: AI recipes tested … A look at the unwelcome rise of the AI Cookbook. ‘I have an even better time with Teresa’s The Ultimate Anti-Inflammatory Cookbook for Beginners. Here I am reminded why proofreaders exist. Something in the AI processing for this book took objection to the word “and”, turning it into “&” in every instance. It inadvertently leads to beautiful phrases such as “h&ful cori&der” and “using an immersion blender or even by h&”. We know that AI struggles with hands, but this is ridiculous. The Japanese hotpot I attempt – not obviously anti-inflammatory, like all the other recipes – is one of the most disgusting meals I have ever eaten.’
23 July 2024
[ai] GANksy — A.I. street artist … ‘We trained a StyleGAN2 neural network using the portfolio of a certain street artist to create GANksy, a twisted visual genius whose work reflects our unsettled times.’
17 July 2024
[morris] Errol Morris on whether you should be afraid of generative AI in documentaries … Errol Morris interviewed. ‘Film isn’t reality, no matter how it’s shot. You could follow some strict set of documentary rules…it’s still a film. It’s not reality. I have this problem endlessly with Richard Brody, who writes reviews for The New Yorker, and who is a kind of a documentary purist. I guess the idea is that if you follow certain rules, the veridical nature of what you’re shooting will be guaranteed. But that’s nonsense, total nonsense. Truth, I like to remind people — whether we’re talking about filmmaking, or film journalism, or journalism, whatever — it’s a quest.’
26 March 2024
[tube] TfL’s AI Tube Station experiment is amazing and slightly terrifying … A good look at TfL’s recent use of AI with CCTV at Willesden Green tube station. ‘In total, the system could apparently identify up to 77 different ‘use cases’ – though only eleven were used during the trial. These range from significant incidents, like fare evasion, crime and anti-social behaviour, all the way down to more trivial matters, like spilled drinks or even discarded newspapers.’
14 March 2024
[internet] Are We Watching The Internet Die? … A look at how LLMs might lead to a homogenization of online content. ‘As more internet content is created, either partially or entirely through generative AI, the models themselves will find themselves increasingly inbred, training themselves on content written by their own models which are, on some level, permanently locked in 2023, before the advent of a tool that is specifically intended to replace content created by human beings. This is a phenomenon that Jathan Sadowski calls “Habsburg AI,” where “a system that is so heavily trained on the outputs of other generative AIs that it becomes an inbred mutant, likely with exaggerated, grotesque features.” In reality, a Habsburg AI will be one that is increasingly more generic and empty, normalized into a slop of anodyne business-speak as its models are trained on increasingly-identical content.’
5 June 2023
[ai] Superintelligence: The Idea That Eats Smart People … This 2016 talk by Maciej Cegłowski, about AI and much more, seems worth revisiting. ‘What I find particularly suspect is the idea that “intelligence” is like CPU speed, in that any sufficiently smart entity can emulate less intelligent beings (like its human creators) no matter how different their mental architecture.
With no way to define intelligence (except just pointing to ourselves), we don’t even know if it’s a quantity that can be maximized. For all we know, human-level intelligence could be a tradeoff. Maybe any entity significantly smarter than a human being would be crippled by existential despair, or spend all its time in Buddha-like contemplation.
Or maybe it would become obsessed with the risk of hyperintelligence, and spend all its time blogging about that.’