> I’ve seen 12 people hospitalized after losing touch with reality because of AI. [...] Here’s what “AI psychosis” looks like, and why it’s spreading fast
In another tweet from the same guy:
> 1. This actually isn't new or AI-specific. 2. And no AI does not causes psychosis.
This guy is clearly engagement farming. Don't support this kind of clickbait.
Disheartened by the comments in this thread, I wonder what motivates you all to dismiss these claims so aggressively.
Then I looked at your profile:
> Working at … to make AI
Oh, so that’s why you all are speedrunning the narcissists’ prayer? Self-serving bootlicking? Just embarrassing.
> A Narcissist’s Prayer
That didn’t happen.
And if it did, it wasn’t that bad.
And if it was, that’s not a big deal.
And if it is, that’s not my fault.
And if it was, I didn’t mean it.
And if I did
You deserved it.
The lady protests too much
I could see this. For certain personality archetypes, there are particular topics, terms, and phrases that, for whatever reason, ChatGPT seems to constantly direct the dialogue flow toward: "recursive", "compression", "universal". I was interested in computability theory well before 2022, but I noticed that these (and similar) terms kept appearing far more often than I would expect due to chance alone, even in unrelated queries.
I started searching and found news articles about LLM-induced psychosis and forum posts about people experiencing derealization. Almost all of these articles or posts included that word: "recursive". I suspect those with certain personality disorders (STPD or ScPD) may be particularly susceptible to this phenomenon. Combine eccentric, unusual, or obsessive thinking with a tool that continually reflects and confirms what you're saying right back at you, and that's a recipe for disaster.
The focus on "recursive" as a repeated, potentially triggering word is interesting and reflects how highly abstract thinkers might be especially tuned into certain linguistic structures, which LLMs amplify.
Cells. Interlinked.
Interlinked
i think it's more likely that it sounds technical but allows space for woo-woo ideas to flourish. the keywords used: "vibrations", then "quantum".
Other words they like are "reflection", "expansion", "compression". These are fundamental, abstract, semi-monadic terms that allow the user to bootstrap an abstract theory. A little bit of "insight" (aka linguistic rearranging) and I've got a theory out of nothing. How does it work? Well, reflection and recursion of course. None becomes one becomes many. Can't you see the structure?
It feels a lot like logical razzle dazzle to me. I bet if I'm on the right neurochemicals it feels amazing.
Vibration, frequency, quantum, energy. All things I've seen as well.
There's a somewhat significant group of people that are easily wooed by incorrectly used technical terms. So much so that they are willing to very confidently use the words incorrectly and get offended when you point that out to them.
I think pop-science journalism and media bear a lot of the blame here. In the quest to make things accessible and entertaining, they turned meaningful terms into magic incantations. They also simply lied and exaggerated implications. Those two things made it easy for grifters to sell magic quantum charms to ward off the bad frequencies.
There is such a thing as "recursive AI", where conversations with the model alter the model. Remember Microsoft Tay, from 2016? [1] That was a chatbot which learned from its chats. In about 24 hours it sounded like a hardcore neo-Nazi. Embarrassing. How did that work, anyway? LLMs were not a thing back then.
It's noteworthy that modern LLM systems lack global long-term memory. They go back to the read-only ground state for each new user session. That provides some safety from corporate embarrassment and quality degradation, but there's no hope of improvement from continued operation.
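To make the "read-only ground state" point concrete, here is a minimal sketch of the session model as I understand it (illustrative Python only, not any vendor's actual API; the class and method names are made up): the pretrained weights are shared and frozen, the only thing that grows is the per-session context, and that context is discarded when a new session begins.

    # Hypothetical sketch: frozen weights + per-session context only.
    class FrozenModel:
        """Stands in for pretrained weights that conversations never modify."""
        def generate(self, context: list[str]) -> str:
            # A real model would condition on the whole context window here.
            return f"(reply conditioned on {len(context)} prior messages)"

    class ChatSession:
        def __init__(self, model: FrozenModel):
            self.model = model            # shared, read-only ground state
            self.context: list[str] = []  # per-session memory only

        def send(self, user_msg: str) -> str:
            self.context.append(user_msg)
            reply = self.model.generate(self.context)
            self.context.append(reply)
            return reply

    model = FrozenModel()
    a = ChatSession(model)
    a.send("hello")         # context grows within this session...
    b = ChatSession(model)  # ...but a fresh session starts from scratch,
                            # and nothing users say ever alters `model`.

Tay, by contrast, effectively fed what users said back into its future behavior, which is what made it corruptible.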
There is a "Recursive AI" startup.[2] This will apparently come as a Unity (the 3D game engine) add-on, so game NPCs can have some smarts. That should be interesting. It's been done before. Here's a 2023 demo from using Replika and Unreal Engine.[3] The influencer manages to convince the NPCs that they are characters in a simulation, and gets them to talk about that. There's a whole line of AI development in the game industry that doesn't get mentioned much.
[1] https://www.cbsnews.com/news/microsoft-shuts-down-ai-chatbot...
[2] https://recursiveai.net/
[3] https://www.youtube.com/watch?v=4sCWf2VGdfc
This thread is informative, but boy, is that title click-baity. It isn't until the 7th post that he bothers to mention this:
"To be clear: as far as we know, AI doesn't cause psychosis. It UNMASKS it using whatever story your brain already knows."
Guess which part of the thread gets the headline. Also, this directly contradicts the opening line where he says "...losing touch with reality because of AI".
Which is it? I REALLY can't wait till the commentariat moves past AI.
> Also, this directly contradicts the opening line where he says "...losing touch with reality because of AI".
He addresses that in the next post:
> AI was the trigger, but not the gun.
One way of teasing that apart is to consider that AI didn't cause the underlying psychosis, but AI made it worse, so that AI caused the hospitalisation.
Or AI didn't cause the loose grip on reality, but it exacerbated that into completely losing touch with reality.
The poster also claims to be a psychiatrist but doesn't clarify that he's actually just a resident: https://psychiatry.ucsf.edu/rtp/residents
His other posts are click-baity and not what one would consider serious science journalism.
The ease of having a tool which can, at the drop of a hat, spin up a convincing narrative to fit your psychotic worldview, with plenty of examples to boot, does look like an accelerating trend.
Trying to convince someone not to do something, when they can pull a hundred counter-examples out of thin air for why they should, is legitimately worrying.
Every new technology is a mirror and we blame it for what we continue to be.
https://xcancel.com/KeithSakata/status/1954884361695719474
spoiler: he doesn't talk about any of those 12 people or what caused them to be hospitalized
That's good; patient privacy is a fairly basic concept in the field of medicine, but it's always good to double-check.
Patient privacy is a nightmare for everyone to navigate and the Clinton administration isn't hated enough for introducing it. I can understand if people want their HIV diagnoses private but there's surely a line to be drawn, perhaps south of HIV, but well north of "I caught the flu".
Do you want people's names published in HIV weekly?
Medical stuff should be 100% private, between you and your doctor.
He does a little in this post: https://x.com/KeithSakata/status/1946174432273178990
I mean, this stuff is pretty basic when it comes to delusions. Seems more likely that their inherent psychosis latched onto AI instead of being caused by it. These people would probably also deteriorate if they simply stumbled into any questionable part of the internet that reinforces their beliefs.
Totally, I think it's different to some degree in terms of the velocity.
In a traditional forum they may have to wait for others to engage, and that's not even guaranteed. Whereas with an llm you can just go back and forth continually, with something that never gets tired and is excited to communicate with you, reinforcing your beliefs.
Issue with technology accelerating nature...
Well put
I think the key difference here is that ChatGPT and its ilk give an unlimited stream of yes-you-are-the-always-correct-genius sycophancy, literally designed for engagement. The kinds of niche rabbit holes that existed before LLMs are generally either rate-limited by being a limited number of actual people with strongly similar views (doomsday preppers, niche cults, etc.), or so huge and chaotic that pure-strain sycophancy won't happen (reddit, 4chan).
It's effectively letting people talk to themselves but with an abstraction that makes it appear to be objective and coming from a 3rd party.
Oh right. Reading this from a random user who seemingly has no training in therapy or psychology makes sense.
Similar to reading how a database should be built from a web developer.
Considering how hard actual quality training of a psychologist is, this is even crazier.
There's nothing crazy with suspecting that causality has not been established. If we're not psychologists or psychiatrists, then we have even more cause to wait for clinical studies. If you are a psychologist or psychiatrist, you still might not be remotely equipped to run clinical studies.
If you don't want to be "crazy" then you need a higher threshold for accepting these anecdotes as generalizable causal theory, because otherwise you'd be incoherently jerked left and right all the time.
He does make that point further down. He also makes the point that in the past there was a similar syndrome around TV and radio, where schizophrenics would say the CIA (it was usually the CIA) was beaming thoughts into their brains.
Interestingly, no one is accusing ChatGPT of working for the CIA.
(Of course I have no idea if that's rational or delusional.)
Anyway - this really needs some hard data with a control group to see if more people are becoming psychotic, or whether it's the same number of psychotics using different tools/means.
> Seems more likely that their inherent psychosis latched onto AI instead of being caused by it.
This is what the author of the tweet thread says.
Isn't that exactly what this person points out in the thread linked in GP? They compare it directly to triggers in other decades.
OP's title and the original post insinuate that psychosis happened because of AI. As if it wouldn't have happened otherwise. That's a very bold claim.
The original post does not. Read the whole Twitter thread.
Wouldn't it be illegal to speak about them specifically? Patient privacy and all.
If this is the USA, you're allowed to talk about everything that happened; you just can't say who it happened to.
I grew up in a medical household, and there is a specific speech mode that doctors use when discussing patients (cases) that anonymises the individual. As it is part of practicing medicine, of conveying information to others, and of their own study and learning, it is quite common.
This is nothing new.
In ~2002 a person I knew in college was hospitalized for doing the same thing with much more primitive chatbots.
About a decade ago he left me a voicemail: he was in an institution, they allowed him access to chatbots and Python, and the spiral was happening again.
I sent an email to the institution. Of course, they couldn't respond to me because of HIPAA.
I wonder how often the same thing happens when people have an inner conversation (rather than with a chatbot).
I wonder how comparable this actually is to "American Nervousness" which I learned about on Derek Thompson's blog https://substack.com/@derekthompson/p-170457512
https://news.ycombinator.com/item?id=44861767
This and parent post claim to refute much of that article.
A chat becoming some kind of personal(ized) echo-chamber?
https://www.reddit.com/r/MyBoyfriendIsAI/
Way down the rabbit hole we go...
I was expecting your link to be to here: https://www.reddit.com/r/ChatGPT/comments/1kalae8/chatgpt_in...
With the story the other week of some people's ChatGPT threads being indexed by Google, I came across a ChatGPT thread related to conspiracy theories (in the title of the thread). Thinking it'd be benign, I started reading it a bit, and it was pretty clear the person chatting had some kind of mental disorder such as schizophrenia. It was a bit scary to see how the responses from ChatGPT encouraged and furthered delusions, feeding into their theories and helping them spiral further. The thread was hundreds of messages long, and it just went deeper and deeper. This was a situation I hadn't thought of, but given the sycophantic nature of some of these models, it's inevitable that they'll lead people further towards some dangerous tendencies or delusions.
So the takeaway is that there are a lot of people on the edge, and ChatGPT is better than most people at getting them past that little bump, because it’s willing to engage in sycophantic, delusional conversation when properly prompted.
I’m sure this would also happen if other people were willing to engage people in this fragile condition in this kind of delusional conversation.
The threadbait is so apparent. Anytime the OP replies to every comment, it's obvious.
Sounds like vulnerable people experiencing potentially temporary states of detachment from reality are having their issues exacerbated by something that's touted as a cure-all.
Possibly related, possibly meta: https://x.com/_opencv_
(Alt URLs: https://nitter.poast.org/_opencv_ https://xcancel.com/_opencv_)
(Edit: hmm, feels like we could do with an HN bot for this sort of thing! There is/was one for finding free versions of paywalled posts. Feels like a twitter/X equivalent should be easy mode.)
Did the television, books, internet or a pet ever do this?
> pet
Yes
https://academic.oup.com/schizophreniabulletin/article/50/3/...
> Our findings provide support for the hypothesis that cat exposure is associated with an increased risk of broadly defined schizophrenia-related disorders
https://www.sciencedirect.com/science/article/abs/pii/S00223...
> Our findings suggest childhood cat ownership has conditional associations with psychotic experiences in adulthood.
https://journals.plos.org/plosone/article?id=10.1371/journal...
> Exposure to household pets during infancy and childhood may be associated with altered rates of development of psychiatric disorders in later life.
Cats in particular are correlated with toxoplasmosis. As for other pets: IME, people who have been disappointed by humans, or who feel like they don't really fit into human society, like pets as an alternative source of emotional support. I don't really understand it, but that's the observation.
Pets? Probably not.
But there have always been crank forums online. Before that, there were cranks discovering and creating subcultures, selling/sending books and pamphlets to each other.
Books and internet, sort of? There is so much choice that almost everyone can find someone or something that agrees with whatever ideas they may have.
Don't forget rock music, especially when you play it backwards.
That will install Windows!
Yes.
Cognitive Security is going to be a field worth looking into.
Got the hunch that it's harder on younger people who haven't had as many experiences yet and are now able to get insights and media about anything from an AI, in such a way that it becomes part of their 'baseline' depiction of reality.
If we were capable of establishing a way to measure that baseline, it would make sense to me that 'cognitive security' would become a thing.
For now it seems, being in nature and keeping it low-tech would yield a pretty decent safety net.
conventional wisdom would say that cults are formed when a leader starts some calculated plan to turn up the charisma and such in some followers.
but... maybe that's causally backwards? what if some people have a latent disposition toward messianic delusions and encountering somebody that's sufficiently obsequious triggers their transformation?
i'm trying to think of situations where i've encountered people that are endlessly attentive and open-minded, always agreeing, and never suggesting that a particular idea is a little crazy. a "true follower" like that has been really rare until LLMs came along.
You'd casually call this letting success (or what have you) go to your head. It's even easier to lose touch when you're surrounded by yes men, and that's a job that AI is great at automating.
For people who don't click the link (and it's X, so I understand) or scroll down, a later part of the thread is quite important:
===
Historically, delusions follow culture:
1950s → “The CIA is watching”
1990s → “TV sends me secret messages”
2025 → “ChatGPT chose me”
To be clear: as far as we know, AI doesn't cause psychosis. It UNMASKS it using whatever story your brain already knows.
Most people I’ve seen with AI-psychosis had other stressors = sleep loss, drugs, mood episodes.
AI was the trigger, but not the gun.
Meaning there's no "AI-induced schizophrenia"
The uncomfortable truth is we’re all vulnerable.
The same traits that make you brilliant:
• pattern recognition
• abstract thinking
• intuition
They live right next to an evolutionary cliff edge. Most benefit from these traits. But a few get pushed over.
Didn't Snowden basically confirm the feds are actually watching?
Yes, but AI may be far worse in the implementation.
The CIA or TV angles you mention had a lot fewer "proof!" moments. They'd be less concrete, too.
But an AI which over and over and over confirms... that's what cults are made of. A group of people all fixated on the same worldview.
Just in this case, a cult of two.
Before he reaches the end of his 12-tweet thread he’s contradicted himself:
“I’ve seen 12 people hospitalized after losing touch with reality because of AI.” [#1]
“And no AI does not causes psychosis” [#12]
Being hospitalized due to AI is not the same as being made psychotic by it. Overdosing on a drug doesn't require addiction.
Pretty sure all the VCs and AI hype (wo)men are losing touch with reality as well...
They went first, but they're all so good looking and likeable...
These people would get hospitalized for any reason.
Utter nonsense on X. Some people should not be allowed out of their house alone, or be able to access anything like modern technology.
For example "I've seen 12 people hospitalised after using a toaster"