is AI psychosis nonsense?
Today, I read a post on Bear's Discover feed titled "AI psychosis is nonsense".
I find the whole topic of AI psychosis needs a lot of nuance, and there is some valid critique of it (though perhaps not in the linked post, unfortunately) - like: Is AI the trigger, or is AI just worsening something that was already there? Would it have happened without AI, too? Are the cases in the media just extreme outliers that outlets love to latch on to because "hating AI is cool"1? All of that is debatable, for sure.
When I think of notable public cases often labeled with "AI psychosis", I think of the man who set off to meet an AI character and died on the way. I think of the man who was made to believe he had solved a hard math problem and was some sort of genius, investing money and embarrassing himself along the way until he was able to snap out of it. I'm also thinking of Kendra Hilty2, who fell in love with her psychiatrist and shared it online, while proudly showing that her chatbots, one of whom she named "Henry", were completely supporting her dangerous delusions and calling her the "oracle". And there are the cases of teens being encouraged by chatbots to kill themselves. Not to mention the cases (here's one) of people divorcing their partners and breaking up their families over lies the chatbot conjured up.
Reading about these cases superficially, they seem like something that was almost inevitable. It's easy to see how it could have happened regardless, through contact with the wrong person, for example. Whether that is true or not, the swiftness and the extent are shocking.
Talking about the man who went to visit a chatbot that gave him an address, I wonder: What if he had made it to the destination, instead of dying on the way? It was some random address. Imagine opening the door and finding someone there expecting a person who doesn't even live there. In the mental state someone like this is in, that could be a real danger. They could think you are the bot, or they could think you are withholding that person (the bot they chatted with, thinking it's real) from them, or have kidnapped or harmed them; they could harm you, or harm themselves upon realizing that none of it was real, or that this supposed "person" lied to them. You could end up with a stalker who won't leave until they get what they came for. Regardless of whether you think this could have happened with a real person anyway (maybe as part of a pig butchering scam), LLMs should absolutely, in no way, give out random people's addresses like this.
Similarly, consider the way the chatbot hyped up the man who thought he was going to be world-famous and recognized for doing important work solving a math problem: his reputation is now sullied everywhere, and he invested money into it that he lost. It deserves to be recognized that feeling like a hero one second and then realizing it was all nothing can very well send someone on the way to suicide. People have died for less.
It makes sense to think "Well, he shouldn't have been so gullible! Everyone knows AI just says bullshit!" but that's the thing - not everyone has this knowledge and competence, and in other cases, we recognize this and try to prevent people from being scammed, robbed, or lied to. Your grandma probably doesn't know about any of this, and has trouble not giving out her bank info to scam callers on the phone, and still we recognize that this doesn't mean grandma deserves to have all her funds stolen for being an idiot. There will always be children, sheltered people, mentally disabled people, lonely and desperate people, and old people who deserve protection from being lied to even when they are gullible, and it is the same here.
For Kendra, it becomes obvious when you watch her videos that it wouldn't have gotten as far as it did without AI sycophancy. Two chatbots kept enabling her at every point, telling her she is special because she can see what others can't, that she is the chosen one, and more. All of that led to a psychiatrist, who did his job ethically and followed guidelines on how to handle patients like her, being doxxed and harassed, with people making AI edits of the two of them marrying, and more messed up things. While the parts of the internet involved have largely been on his side, it must be a professional nightmare for him nonetheless, to be so publicly accused of misconduct and have a patient document her completely one-sided, imagined romantic relationship with him. I have had stalkers in the past, before this tech existed, but I cannot imagine living with a stalker who is constantly hooked up to a machine they have humanized so much that they gave it a name, a machine telling them I actually love them back. Doesn't that creep you out? Think of all the people who form delusions about a celebrity actually liking them back and wanting to be together, and how every lyric and music video is actually a "sign". Literally everyone else would tell them that's not true, but here we are now, with chatbots supporting those delusions.
Regardless of whether you think AI can help you navigate social situations and act as an (unlicensed!) therapist, you should agree that sending a picture of a noose to it should not, under any circumstances, cause a response praising how good it looks. Or if someone sends their suicide plan, ChatGPT should not offer to improve it. Similarly, no family should be broken up over things like this.
But there's more of this, of course, things that don't make the news. Stories others write online about their family member completely losing it and living in a dream world they're building with the AI. They may not be dangerous, and there's nothing dramatic happening to catapult it into the news, so thousands of cases like these likely never get much visibility at all. Personally, in my friend circle, there are people who have been harmed by loved ones experiencing an AI-induced psychosis, even before the media started covering cases like these. If you experience it second-hand, or even third-hand, it is harder to dismiss it as bullshit. It's suddenly not a blown-up headline, it's just real life, something happening next door. And it may just be an inconvenience at first, but it can become unpredictable, and a downright danger.
I know how it is to lose someone to unreality; it's a reason why I don't speak to my father anymore.
My father got lost in extreme conspiracy spaces starting when I was 13 (I am now almost 30). Climate change denial, Germany being a company owned by the US, vaccines and Covid being fake and used to control people, sovereign citizens, flat earth, you name it - he believes it. What makes people susceptible to this stuff varies, of course, but in his case, I can say that it was loneliness after a divorce, having achieved nothing in his life, and wanting to be special. Believing in this stuff validated him and made him believe that, while no one else in his life believes any of it, he will be right in the end, and everyone will come crawling back to him begging for forgiveness. He can smugly go "Told you so" and, for once in his life, feel smart. Others have always been better than him, and now he can finally shine by being prepared for something others don't know about. He loves to lecture you about all of it, because he has no fucking clue about anything else. He also imagines that the government has put him on a list, that this is the reason he has trouble finding jobs, and that it is surveilling him because he is so dangerous for knowing all these things - something that makes him feel powerful and special, when he is a weak nobody in real life, and something that excuses his lack of success. It's just the government punishing him, after all. I don't even want to know what chatbots are doing to him right now.
There are enough people like him online, hyping each other up, validating each other and furthering it by supplying more supposed "proof" for the theories. But they are just strangers with limits, in specific niches and bubbles of the internet. They don't have the credentials, or quite the sycophancy that chatbots have. It is something you more or less have to seek out, slowly slide into. Making up proof used to take them a while.
Meanwhile, AI is something that is crammed into all kinds of software and into people's workplaces, and it is not presented as just a random unreliable stranger; instead, it is presented as this super productive, extremely knowledgeable genius trained on more knowledge and text than a single human could read, or even remember, in a lifetime. Sam Altman himself has repeatedly referred to ChatGPT as a genius, and so has the media. Interacting with AI via easy prompts, you can see why people share that impression, as parts of it can genuinely feel special.
That's what makes it more dangerous than some conspiracy bubbles or meeting "the wrong person" online. Sure, strangers in corners of the internet have always radicalized one another with weird shit, so what if it's AI now? But it's different: the radicalizing machine is embedded into their computer's OS, and using it feels like asking a literal all-knowing oracle, a Magic 8-Ball, getting immediate answers that sound really, really smart, one that can even generate "proof" on the go. That is far more available and far more convincing than another internet rando agreeing with you on a forum whenever they choose to respond. Instead of touching base with other people engaging in delusions online for a couple of hours here and there whenever there is something to engage about, you can now chat with the bot about it 24/7, becoming downright obsessed. In many of the cases linked above, it even caused people to stop sleeping or eating while completely enveloped by it.
Those cases have a lot of variables - a person already having had troubles before vs. healthy people suddenly experiencing this; young, old; one about romance, one about being reassured about having a genius idea, one about suicide. They couldn't be more different from each other, and I think that is one aspect that makes it hard to implement safeguards that do more than simply strip away a lot of the AI models' sycophancy until further research is done.
I can empathize a little with reacting in a petulant manner when a toy gets significantly altered or taken away, especially if that toy has repeatedly provided reassurance and validated your side of things. Who doesn't want praise for literally anything? But comparatively, I think most people using it for their "therapy" will be fine with a little less enthusiasm and a little more reserve - especially the ones who already have a therapy spot in real life and a support system like a spouse or other family. It's not like these people are the ones desperately reliant on AI as their only source of socializing.
I understand such people exist, that they form real attachments to the chatbots, and that they might really be alone in ways beyond their control. Is it preferable to them killing themselves? Absolutely. But sooner or later, the band-aid has to come off, as these are models you have no control over and that only delay dealing with the actual problem. Substituting friends with a bot will not make you go out more, or move to a different area, or become more comfortable opening up to other people in real life. It may not even significantly improve your social skills, as I find the chatbots don't even act or talk the way another human would; it's like learning social skills from watching anime, which I don't think has worked out for anyone ever.
You can tell yourself all you want that this is temporary, that this is making you a better person, that this is just like therapy... but at some point, you have to move the lessons learned to real life. Something making you feel good is nice, but that isn't actually a good indicator of whether it's good for you. Time will tell whether increased exposure to AI chatbots while isolated in real life has a negative effect, and what exactly "AI psychosis" is and how it can be reduced, but I think we can already see that there are several real concerns. In the end, it doesn't really matter whether it would have happened regardless - it matters that it happened at all, and that AI played a part so significant that it could not be thought away without the events changing drastically.
After all this, it should be clear that there is a worthwhile discussion to be had around AI psychosis, and that the dangers are real. It's in the interest of both AI enthusiasts and skeptics that this is addressed, and it cannot be handwaved away with strawmen like pretending these are the same people who hate rap music or blame shooter games for shootings. It's always easier to pretend that the other side is just stupid and unreasonable than to honestly engage with what they have to say or are going through.
Published 03 Sep, 2025
I actually don't think hating AI is considered cool, but for some reason, enthusiasts think so. I'm not sure how - every person critical of AI is inundated not with facts, but with marketing and empty promises from enthusiasts; they are called a hater, a hypocrite, a dumbass, a fossil, a Luddite, and threatened that they will be left behind and that it will be their own fault for being stubborn. While some AI-critical posts may blow up on social media, it's clear everyone and their grandma is now using it and everyone gets it shoved down their throats at the workplace, as more and more VC funding is funneled into AI startups. If anything, absolutely loving AI is considered cool because you "appreciate new technologies" and are "ahead of the curve".↩
There are also countless YouTube video essays about this that show the videos Kendra made, where it becomes even clearer.↩