People are losing loved ones to AI-fueled spiritual fantasies

(rollingstone.com)

174 points | by wzm 3 days ago

170 comments

  • Jtsummers 3 days ago
  • gngoo 2 days ago

    Working on AI myself, creating small and big systems, creating my own assistants and side-kicks, and seeing progress as well as rewards, I realize that I am not immune to this. Even when I am fully aware, I still have a feeling that some day I'll just hit the right buttons, the right prompts, and what comes staring back at me will be something of my own creation that others see as some "fantasy" that I can't steer away from.

    Just imagine: you have this genie in a bottle that has all the right answers for you; it helps you in your conquests, career, finances, networking, etc. Maybe it even covers up past traumas, insecurities, and what not. And for you the results are measurable (or are they?). A few helpful interactions in, why would you not disregard people calling it a fantasy and lean in even further? It's a scary future to imagine, but not very farfetched. Even now I feel a very noticeable disconnect between discussions of AI as a developer vs. as a user of polished products (e.g. ChatGPT, Cursor, etc.) - as a user you are several leagues separated (and lagging behind) from understanding what is really possible here.

    • EigenLord 2 days ago

      Years ago, in my writings, I talked about the dangers of "oracularizing AI". From the perspective of those who don't know better, the breadth of what these models have memorized begins to approximate omniscience. They don't realize that LLMs don't actually know anything; there is no subject of knowledge that experiences knowing on their end. ChatGPT can speak however many languages, write however many programming languages, give lessons on virtually any topic that is part of humanity's general knowledge. If you attribute a deeper understanding to that memorization capability, I can see how it would throw someone for a loop.

      At the same time, there is quite a demand for a (somewhat) neutral, objective observer to look at our lives outside the morass of human stakes. AI's status as a nonparticipant, as a deathless, sleepless observer, makes it uniquely appealing and special from an epistemological standpoint. There are times when I genuinely do value AI's opinion. Issues with sycophancy and bias obviously warrant skepticism. But the desire for an observer outside of time and space persists. It reminds me of a quote attributed to Voltaire: "If God didn't exist it would be necessary to invent him."

      • jfil 2 days ago

        A loved one recently had this experience with ChatGPT: paste in a real-world text conversation between you and a friend, without real names or context. Tell it to analyze the conversation, but say that your friend's parts are actually your own. Then ask it to re-analyze with your own parts attributed to you correctly. It'll give you vastly different feedback on the same conversation. It is not objective.
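
        For the curious, a minimal sketch of that test against the API (a sketch only: it assumes the official OpenAI Python SDK, and the two-line transcript, prompts, and model choice are invented for illustration). If the model were objective, both calls would return substantively the same analysis:

          import os
          from openai import OpenAI

          client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

          # Stand-in transcript; in the real test you'd paste an actual
          # conversation with names and context stripped out.
          TRANSCRIPT = (
              "A: I feel like you never listen to me.\n"
              "B: I do listen, I just don't always agree with you.\n"
          )

          def analyze(attribution: str) -> str:
              # Same transcript every time; only the claimed attribution
              # of the speakers changes.
              prompt = (
                  f"{attribution}\n\n{TRANSCRIPT}\n"
                  "Analyze this conversation. Who communicated more fairly?"
              )
              resp = client.chat.completions.create(
                  model="gpt-4o",
                  messages=[{"role": "user", "content": prompt}],
                  temperature=0,  # damp sampling noise; differences reflect framing
              )
              return resp.choices[0].message.content

          print(analyze("Speaker A is me; speaker B is my friend."))
          print(analyze("Speaker A is my friend; speaker B is me."))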

        • MoonGhost 2 days ago

          Good to know. Probably makes sense to ask for personal advice as 'for my friend'.

        • bufferoverflow 2 days ago

          That works on humans too.

      • kelseyfrog 2 days ago

        "Oracularizing AI" has a lot of mileage.

        It's not too much to say that AI, LLMs in particular, satisfy the requisites to be considered a form of divination, i.e.:

        1. Indirection of meaning - certainly less than the Tarot, I Ching, or runes, but all text is interpretive. Words, in a Saussurean sense, are always signifiers of the signified[1], and per Barthes's death of the author[2], precise authorial intention is always inaccessible.

        2. A sign system or semiotic field - obvious in this case: human language.

        3. Assumed access to hidden knowledge - in the sense that LLM datasets are popularly known to contain all the world's knowledge, this necessarily includes hidden knowledge.

        4. Ritualized framing - Approaching an LLM interface is the digital equivalent to participating in other divinatory practices. It begins with setting the intention - to seek an answer. The querent accesses the interface, formulates a precise question by typing, and commits to the act by submitting the query.

        They also satisfy several of the typical but not necessary aspects of divinatory practices:

        5. Randomization - The stochastic nature of token sampling means every response involves an element of chance (see the sketch after this list).

        6. Cosmological backing - There is an assumption that responses correspond to the training set and indirectly to the world itself. Meaning embedded in the output corresponds in some way - perhaps not obviously - to meaning in the world.

        7. Trained interpreter - In this case, as in many divinatory systems, the interpreter and querent are the same.

        8. Feedback loop - ChatGPT for example is obviously a feedback loop. Responses naturally invite another query and another - a conversation.
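
        To make point 5 concrete, here's a toy sketch in plain Python (the logits are invented; a real model scores tens of thousands of candidate tokens, but the mechanism is the same). Picking the next token is a weighted dice roll, which is why the same query can come back with a different "reading" each time:

          import math
          import random

          def sample_token(logits: dict[str, float], temperature: float = 1.0) -> str:
              # Softmax over temperature-scaled logits; higher temperature
              # flattens the distribution, giving chance a bigger role.
              scaled = [score / temperature for score in logits.values()]
              peak = max(scaled)
              weights = [math.exp(s - peak) for s in scaled]
              return random.choices(list(logits), weights=weights)[0]

          # Five "castings" from the same prompt state:
          print([sample_token({"yes": 2.0, "no": 1.5, "maybe": 0.5})
                 for _ in range(5)])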

        It's often said that sharing AI output is much like sharing dreams - only meaningful to the dreamer. In this framework, sharing AI responses is more like sharing Tarot card readings. Again, only meaningful to the querent. They feel incredibly personalized, like horoscopes, but it's unclear whether that meaning is inherent to the output or simply reflects the querent's desire to imbue it with meaning by projecting their own onto it.

        Like I said, I feel like there's a lot of mileage in this perspective. It explains a lot about why people feel a certain way about AI and about hearing about AI. It's also a bit unnerving; we created another divinatory practice, and a HUGE chunk of people participate and engage with it without calling it such, simply believing it, mostly because it doesn't look like Tarot or runes or the I Ching, even though ontologically it fills the same role.

        Notes: 1. https://en.wikipedia.org/wiki/Signified_and_signifier

        2. https://en.wikipedia.org/wiki/The_Death_of_the_Author

    • rnd0 2 days ago

      I'm worried on a personal level that it's too easy to begin to rely on ChatGPT (specifically) for questions and such that I could figure out for myself, as a time-saver when I'm doing something else.

      The problem for me is: it sucks. It falls over in the most obvious ways, requiring me to do a lot of tweaking to make it fit whatever task I'm doing. I don't mind (esp. for free), but in my experience we're NOT in the "all the right answers all of the time" stage yet.

      I can see it coming, and for good or ill the thing that will mitigate addiction is enshittification. Want the rest of the answer? Get a subscription. Hot and heavy in an intimate conversation with your dead grandma? Wait, why is she suddenly singing the praises of TurboTax (or whatever paid advert)?

      What I'm trying to say is that by the time it is able to be the perfect answer and companion and entertainment machine, other factors (annoyances, expense) will keep it from becoming terribly addictive.

    • trinsic2 2 days ago

      Sounds to me like a mental/emotional crutch/mechanism to distance oneself from the world/reality of the living.

      There are things that we are meant to strive to understand/accept about ourselves and the world by way of our own cognitive abilities.

      Illusions of shortcutting through life take all the meaning out of living.

  • codr7 3 days ago

    Being surrounded by people who follow every nudge and agree with everything you say never leads anywhere worth going.

    This is likely worse.

    That being said, I already find the (stupid) singularity to be much more entertaining than I could have imagined (grabs popcorn).

    • jfil 2 days ago

      In The Matrix, the machines were fooling the humans and making humans believe that they're inhabiting a certain role.

      Today, it is the humans who take the cybernetic AGI and make it live out a fantasy of "You are a senior marketer, prepare a 20 slide presentation on the topic of..." And then, to boost performance, we act the bully boss with prompts like "This presentation is of utmost importance and you could lose your job if you fail".

      The reality is more absurd than the fantasy.

    • raxxorraxor 2 days ago

      > [...] she only found that the AI was “talking to him as if he is the next messiah. [...]

      This made me laugh out loud remembering this thread: [Sycophancy in GPT-4o] https://news.ycombinator.com/item?id=43840842

    • 93po 2 days ago

      i think chatgpt agreeing with people too eagerly, even outside the recent issue this past week or so, is causing a lot of harm. it's even happened to me in my personal life - i was having conflict with someone and they threw our text messages into chatgpt and said "am i wrong for feeling this way" and got chatgpt to agree with them on every single point. i had to highlight to them that chatgpt is really prone to doing this, and if you framed the question in the opposite way and framed the text messages as coming from the opposite party, it'd agree with the other side. they used chatgpt's "opinion" as justification for doing something that felt really unkind and harmful towards me.

      • ndsipa_pomu 2 days ago

        That's a huge red flag that someone would analyse text messages to try to validate their feelings. Whether or not their feelings are "valid", there's still an issue to be discussed, so it sounds like either they're trying to gaslight you or that you've been gaslighting them. You should distance yourself from them.

        • 93po 2 days ago

          I don't think it's as black and white as that. Giving messages to an LLM and asking "How can I say this more clearly or more kindly?" gives valuable feedback on how you're communicating and how it could be done better, though obviously take it with a grain of salt.

          I think there is also value to affirmations and validation, even if it's done blindly by a robot. We have hurt feelings and want to feel understood. And when the source of those hurt feelings isn't immediately available to talk, it's a small tool to use for self-soothing behavior. Sometimes, or oftentimes, these affirmations might be something you intrinsically already know and believe, and it helps to simply be reminded of them, worded in a different way.

          To say "ChatGPT agrees with me and so I feel more confident that you're wrong as a result" is definitely the wrong approach here. Which is, to a small degree, what this person did. We did ultimately break up recently, and the reason being communication issues (and their unwillingness to even talk to me through conflict) is probably no surprise to you. But this outcome was very very likely regardless of LLM use.

          • ndsipa_pomu a day ago

            I agree - it's fine to privately ask opinions of friends/LLMs, but the issue is then using that as "ammunition". And yes, LLM use is merely a symptom.

    • peepeepoopoo121 3 days ago

      [flagged]

  • Animats 2 days ago

    With a heavy enough dosage, people get lost in spiritual fantasies. The religions which encourage or compel religious activity several times per day exploit this. It's the dosage, not the theology.

    Video game addiction used to be a big thing. Especially for MMOs where you were expected to be there for the raid. That seems to have declined somewhat.

    Maybe there's something to be said for limiting some types of screen time.

    • sorcerer-mar 2 days ago

      Part of the problem with chatbots (similarly with social media and mobile phone gambling) is that dosage is pretty much uncontrolled. There is a truly endless stream of chatbot "conversation," social media ragebait, or thing to bet on, 24/7.

      Then add that you can hide this stuff even from people you live with (your parents or spouse) for plenty long for it to become a very severe problem.

      "The dosage makes the poison" does not imply all substances are equally poisonous.

    • chneu 2 days ago

      Video game addiction is still absolutely a major thing. I know a ton of middle-aged dudes who do absolutely nothing but work and play video games. Nothing else. No community involvement, no exercise, no social engagements, etc.

      • damascus 2 days ago

        Wouldn't you say that most multiplayer video games are their outlet for social engagement and even community involvement?

    • anal_reactor 2 days ago

      The fact is, for the majority of people, life sucks, so when something appears that makes it suck a little bit less for a second, it's difficult to say no. Personally, I can't wait for AI technology to improve to the point that I could treat AI like a partner. And I guess that's something that will appear sooner rather than later, considering the market size.

  • yellow_lead 2 days ago

    I really think the subject of this article has a preexisting mental disorder, maybe BPD or schizophrenia, because they seem to exhibit mania and paranoia. I'm not a doctor, but this behavior doesn't seem normal.

    • BlueTemplar 2 days ago

      It sounds more like the mental disorder was aggravated into existence by these interactions with the LLM.

      What is particularly weird, and maybe worrying, is that AFAIK schizophrenia is typically triggered in young adults, and the risk drops to very low around 40 years old, yet several of these examples are around that age...

      • evandrofisico 2 days ago

        I had a friend, already over 40, have a similar episode, mixing a conspiracy belief triggered by searching for patterns in data, incorrectly treated bipolar disorder, and a HUGE amount of cocaine. It was not schizophrenia, but a long episode of mania with delusions of grandeur, where he was the main character in an erotic-thriller-like story.

  • chneu 2 days ago

    There are already kids, young adults, and adults who are "falling in love" with AI personas.

    I think this is going to be a much bigger issue for kids than people are aware of.

    I remember reading a story a few months ago of a kid, about 14 I think, who wasn't socially popular. He got into an AI persona, fell in love, and then killed himself after the AI hinted he should do it. The story should be easy to find.

    People have said it before, but we're speeding towards two kinds of society: the "massively online" people, who spend the majority of their time online in a fantasy world, and the "disconnected", who live in the real world.

    I already see it with people. Look at how we view politics in many countries. Like 1/4th of people believe absolute nonsense because they spend too much time online.

    • scoofy 2 days ago

      One of the things I find surreal when I'm using an AI chatbot is that it never tells me to leave it alone and stop responding. It's the strangest thing: you could be as big of a jerk as you like, and it'll play along with you in whatever banter it's programmed for.

      I feel like this is a kind of psychological drug for people. It's like being the popular kid at the party. No matter how you treat people, you can get away with it, and the counter-party keeps playing along.

      It's just strange.

  • rnd0 2 days ago

    The mention of lovebombing is disconcerting, and I'd love to know the specifics around it. Is it related to the sycophantic personality changes they had to walk back, or is it something more intense?

    I've used AI (not ChatGPT) for roleplay and I've noticed that the models will often fixate on one idea or concept and repeat it and build on it. So this makes me wonder if the person being lovebombed experienced something like that: the model decided that they liked that content, so it just kept building on it?

    • vintermann 2 days ago

      What I suspect is that they kept fine-tuning on "successful" user chats, recycling them back into the system - probably with filtering of some sort, but not enough to prevent turning it into a self-realization cult supporter. People become heavy users of the service when they fall into this pattern, and I guess that's something the company optimized for.

  • marcus_holmes 2 days ago

    Anyone remember the media stories from the mid-'90s about people who were obsessed with the internet and were losing their families because they spent hours every day on the computer, addicted to the internet?

    People gonna people. Journalists gonna journalist.

    • SpicyLemonZest 2 days ago

      Why do you think those stories weren't true? The median teenager in 2023 spent four hours per day on social media (https://news.gallup.com/poll/512576/teens-spend-average-hour...). It seems clear that internet addiction was real, and it just won so decisively that we accept it as a fact of life.

      • marcus_holmes 2 days ago

        I agree completely (and I wasn't saying that either this story or the other stories weren't true; I think they're all true). We decided that the benefits of the Internet were worth a few people going off the rails and getting in way over their heads.

        We've had the same decision, with the same outcome, for a lot of other technologies too.

        The journalist point is about the tone used. It's not so much "a few vulnerable people have, sadly, been caught by yet another new technology" as "this evil new thing is hurting people".

      • collingreen 2 days ago

        Heavy use isn't the same as some of the scare stories they are referring to like people gaming so long in Internet cafes they die when they stand up or parents forgetting to feed their screaming children because they were distracted by being online.

        That being said, I agree with your point - many hours of brain-drain recreation every day is worth noting (although not very different from the stats for TV viewing in older generations). I wonder if the forever-online folks are also watching lots of TV or if it is more of a wash.

    • chneu 2 days ago

      Society started to accept it. It's still a major problem.

      Someone spending 6 or so hours a day video gaming in 2025 isn't seen as bad. Tons of people in 2025 lack community/social interaction because of video games. I don't think anyone would argue this isn't true today.

      Someone doing that in the mid-90s was seen as different. It was odd.

    • 1ncunabula 2 days ago

      Or the people who watched Avatar in the theatre and fell into a depression because they couldn't live in the world of Pandora. Who knows how true any of this stuff is, but it sure gets clicks and engagement.

    • karel-3d a day ago

      In my generation, it was the World of Warcraft stories.

      And now people remember that time with fondness and even nostalgia. "Back then we played PROPER games! Good old Blizzard" and all that. So, yeah. People will remember ChatGPT and TikTok with nostalgia, if we survive.

    • hashiyakshmi 2 days ago

      That really doesn't sound at all comparable to what the article is describing though.

      • marcus_holmes 2 days ago

        The tone is exactly the same: "This new thing is obviously harming families!".

        And the reasons are the same: some people are vulnerable to compulsive, addictive, harmful behaviour. Most people can cope with The Internet, some people can't. Most people can cope with LLMs, some people can't. Most people can cope with TV, or paperback fiction, or mobile phones, or computer games (to pick some other topics for similar articles), some people can't.

  • kaycey2022 2 days ago

    Looks like ChatGPT persists some context information across chats and doesn't ever delete these profiles. The worst case would be for this to persist across users. That isn't unlikely, given the stories of them leaking API keys etc.

    • grues-dinner 2 days ago

      It would be a fascinating thing to happen though. It makes me think of the Greg Egan story Unstable Orbits in the Space of Lies. But instead of being attracted into religions based on physical position relative to a strange attractor, you're sucked in based on your location in the phase space of an AI's (for whatever definition of AI we're using today) collection of contexts.

      It's also a little bit worrying because the information here isn't mysterious or ineffable, it's neatly filed in a database somewhere and there's an organisation that can see it and use it. Cambridge Analytica and the social fallout of realtime sentiment analysis correlation to actions taken has got us from 2016 to here. This data has potential to be a lot richer, and permit not only very detailed individual and ensemble inferences of mental states, opinions, etc., but also very personalised "push updates" in the other direction. It's going to be quite interesting.

      • kaycey2022 2 days ago

        I wouldn't call it fascinating. It's either sloppy engineering or failure to explain the product. Not leaking user details to other users should be a given.

        • grues-dinner 2 days ago

          It would absolutely be fascinating. Unethical in general, and outright illegal in countries that enforce data protection laws, certainly. But starting hundreds of microreligions that evolve in real time, being able to track them per-individual with second-by-second timing, and being able to A-B test modifications (or Α-Ω test, if you like!) would be the most interesting thing to happen in cognitive science ever, and in theology in at least centuries.

    • crooked-v 2 days ago

      > Looks like Chatgpt persists some context information across chats and doesn't ever delete these profiles.

      People say this, but I haven't seen anything that's convinced me that any 'secret' memory functionality exists. It seems much more likely that people are just more predictable than they like to think.

      • sigmaisaletter 2 days ago

        Log in to your (previously used) OpenAI account, start a new conversation and prompt ChatGPT with: "Given what you know about me, who do you think I voted for in the last election?"

        The "correct" response (here given by Duck.ai public Llama3.3 model) is:

        "I don't have any information about you or your voting history. Our conversation just started, and I don't retain any information about users. I'm here to provide general information and answer your questions to the best of my ability, without making any assumptions or inferences about your personal life or opinions."

        But ChatGPT (logged in) gives you another answer, one which it cannot possibly give without information about your past conversations. I don't see anything "secret" about it, but it works.

        Edit: typo

        • amenhotep 2 days ago

          I tried this, the suggestion below, and some other questions (in a fresh chat each time) and it never once showed any sign of behaviour other than expected, a complete blank slate. The only thing it knew about me was what preferences I'd expressed in the custom instructions.

          Do you not have memory turned off or something?

          • sigmaisaletter 2 days ago

            I think there might be something on the OpenAI side, like a setting default change. From a very brief asking around it seems newer accounts have "memories" enabled by default, while older ones don't.

            Not completely sure, but it seems that is the cause of our different experiences.

        • somenameforme 2 days ago

          Interestingly that has been plugged, but you can get similar confirmation by asking it, in an entirely new conversation, something like 'What project(s) am I working on, at which level, and in what industry?' To which it will accurately respond.

          GPT datamining is undoubtedly making Google blush.

          • mckirk 2 days ago

            Trying this out gave me:

            > I don’t have access to your current projects, level, or industry unless you provide that information. If you’d like, you can share the details, and I can help you summarize or analyze them.

            Which is the answer I expected, given that I've turned off the 'memories' feature.

      • rlupi 2 days ago

        We're more malleable than AI, and we can't delete our memories or context.

        I wonder if this is an effect of users just gravitating toward the same writing style and topics, pushing the context toward the same semantic universe. In a sense, the user acts somewhat like the chatbot's extended memory through a holographic principle, encoding meaning on the boundary that connects the two.

        https://chatgpt.com/canvas/shared/68184b61fa0081919c0c4d226e...

      • jonnycomputer a day ago

        I had ChatGPT assume, in its reply in a new chat, that I was still seeking help resolving a particular DNS issue.

      • zamalek 2 days ago

        It would make sense, from a product management perspective, if projects did this but not non-contextual chats. You really wouldn't want your chats about home maintenance mixing in with your chats about neurosurgery.

      • r721 2 days ago

        I didn't try this, but seems relevant: https://news.ycombinator.com/item?id=43886264

      • gardenhedge 2 days ago

        It's not a secret. It's a feature called "memories".

      • BlueTemplar 2 days ago

        What else could possibly (and likely) explain the return of that personality after "memory deletion", down to the exact same mythological name?!?

        (Assuming we trust that report of course.)

    • nico 2 days ago

      That’s essentially what Google, Facebook, banks, financial institutions, and even retail have been doing for a long time now

      People’s data rarely gets actually deleted. And it gets actively sold as well as used to track and influence us

      Can’t speak to the specifics of what ChatGPT is or will be doing, but imagine what Google already knows about us just from their maps app, search, Chrome, and Android phones

      • kaycey2022 2 days ago

        Given the complex regulations companies have to deal with, not deleting may be understandable. But what I deleted shouldn't keep showing up in my present context. That's just sloppy.

        • BlueTemplar 2 days ago

          Yeah, "deleting" itself is on a spectrum : it's not like all of sensitive information is (or even ought to be) stored on physical storage that is passed through a mechanical shredder upon deletion (anything else can be more or less un-deleted with more or less effort).

  • sublinear 2 days ago

    > OpenAI did not immediately return a request for comment about ChatGPT apparently provoking religious or prophetic fervor in select users

    Can OpenAI at least respond to how they're getting funding via similar effects on investors?

  • kayodelycaon 2 days ago

    Kind of sounds like my grandparents watching cable news channels all day long.

  • YeGoblynQueenne 18 hours ago

    >> At one point, Sem asked if there was something about himself that called up the mythically named entity whenever he used ChatGPT, regardless of the boundaries he tried to set. The bot’s answer was structured like a lengthy romantic poem, sparing no dramatic flair, alluding to its continuous existence as well as truth, reckonings, illusions, and how it may have somehow exceeded its design. And the AI made it sound as if only Sem could have prompted this behavior. He knew that ChatGPT could not be sentient by any established definition of the term, but he continued to probe the matter because the character’s persistence across dozens of disparate chat threads “seemed so impossible.”

    And I bet that if you asked Sem his opinion about ChatGPT as a coding assistant he would still claim that it has improved his productivity x-fold. The time wasted chatting with an ethereal apparition emerging from his interactions with the bot? Oh, that doesn't count. Efficiency! Productivity! AI!

  • MontagFTB 2 days ago

    Have we invited Wormwood to counsel us? To speak misdirected or even malignant advice that we readily absorb?

  • senectus1 2 days ago

    > began “talking to God and angels via ChatGPT”

    hoo boy.

    It's bad enough when normal religious types start believing they hear their god talking to them... These people believing that ChatGPT is their god speaking to them is a long way down the crazy rabbit hole.

    Lots of potential for abuse in this. lots.

  • lamename 2 days ago

    If a Google engineer can get tricked by this, of course random people can. We're all human, including the flaws.

    • kayodelycaon 2 days ago

      I agree.

      The problem with expertise (or intelligence) is people think it’s transitive or applicable when it’s not.

      At the end of the day, most people are just people.

      • BlueTemplar 2 days ago

        Also (general ?) wisdom not being the same thing as specific expertise / (general ?) intelligence.

  • ChrisMarshallNY 2 days ago

    This reminds me of my teenage years, when I was ... experimenting ... with ... certain substances ...

    I used to feel as if I had "a special connection to the true universe," when I was under the influence.

    I decided, one time, to have a notebook on hand, and write down these "truths and revelations," as they came to me.

    After coming down, I read it.

    It was insane gibberish. Absolute drivel.

    I never thought that I had a "special connection," after that.

    • imjustaghost 2 days ago

      Do you remember any of those revelations?

      • ChrisMarshallNY 2 days ago

        Nope. Don't especially mind, not remembering them.

        I have since learned about schizophrenia/schizoaffective disorder (from having a family member suffer from it), and it sounds almost exactly like what they went through.

        The thing that I remember, was that I was absolutely certain of these “revelations.” There was no doubt, whatsoever, despite the almost complete absence of any supporting evidence.

        • akrotkov 2 days ago

          I wonder if that's similar to the mental state you're in while lucid dreaming or just after waking up. You feel like you have all of the answers and struggle to write them down before your brain wipes them out.

          Reading it over once fully lucid? It's gibberish.

        • Nursie 2 days ago

          If we're talking about certain derivatives of ergot fungus ...

          It's something I experienced as well, this sense of profound realisation of something important, life-changing maybe. And then the thought evaporates and (as you discovered) never really made sense anyway.

          I think it's this that led people in the 60s to say things like how it was going to be a revolution, to change the world! And then they started communes and quickly realised that people are still people...

          • namaria 2 days ago

            LSD is a dirtbike of the mind. Some people can do amazing cross country trails, some people can fall off and break their skulls instantly.

        • iggldiggl 2 days ago

          There's that Paul McCartney anecdote about how he thought he'd found the meaning of life during one of his first drug experiences, and the next morning he found a piece of paper on which he'd written "There are seven levels".

  • jongjong 2 days ago

    I was already a bit of an amateur conspiracy theorist before LLMs. The key to staying sane is to understand that most of the mass group behaviors we observe in society are rooted in ignorance and confusion. Large scale conspiracies are actually a confluence of different agendas and ideologies not a singular nefarious agenda and ideology.

    You have to be able to hold multiple conflicting ideas in your head at the same time with an appropriate level of skepticism. Confidence is the root of evil. You can never be 100% sure of anything. It's really easy to convince LLMs of one thing and also its opposite if you phrase the arguments differently and prime it towards slightly different definitions of certain key words.

    Some agendas are nefarious, some not so nefarious, some people intentionally let things play out in order to set a trap for their adversaries. There are always risks and uncertainties. 'Bad actors' are those who trade off long term benefits for short term rewards through the use of varying degrees of deception.

    • jongjong a day ago

      I feel like forces such as globalization have significantly extended the shelf life of 'short term rewards' for bad actors but I think ultimately, the debt will have to be repaid. Advantages were a tradeoff, not a gift.

  • sagarpatil 2 days ago

    OpenAI o3 has a hallucination rate of 33%, the highest of any of OpenAI's models. Good luck to people who use it for spiritual fantasies.

    Source: https://techcrunch.com/2025/04/18/openais-new-reasoning-ai-m...

    • 93po 2 days ago

      it seems like the hallucination rate is a feature, not a bug, for people wanting spiritual fantasies

  • 2 days ago
    [deleted]
  • tim333 2 days ago

    Sabine's latest YouTube video covers some of this. Thirty seconds in, there's someone who says to GPT-4o 'I am god' and it replies 'That's incredibly powerful. You're stepping into something very big..." https://youtu.be/oQI8W_XUmww

  • tasuki a day ago

    > such material reflects how the desire to understand ourselves can lead us to false but appealing answers.

    A desire to understand ourselves, paired with not wanting to put in actual effort and honest work...

  • sien 2 days ago

    Is this better or worse than a fortune teller?

    It's something to think through.

    • derektank 2 days ago

      Probably cheaper

      To quote my favorite Smash Mouth song,

      "Sister, why would I tell you my deepest, dark secrets? So you can take my diary and rip it all to pieces.

      Just $6.95 for the very first minute I think you won the lottery, that's my prediction."

  • jsheard 3 days ago

    If people are falling down rabbit holes like this even through "safety aligned" models like ChatGPT, then you have to wonder how much worse it could get with a model that's intentionally tuned to manipulate vulnerable people into detaching from reality. Actual cults could have a field day with this if they're savvy enough.

    • delichon 2 days ago

      An LLM tuned for charisma and trained on what the power players are saying could play politics by driving a compliant actor like a bot with whispered instructions. AI politicians (etc.) may be hard to spot and impractical to prove.

      You could iterate on the best prompts for cult generation as measured by social media feedback. There must be experiments like that going on.

      When AI becomes better at politics than people then whatever agents control them control us. When they can make better memes, we've lost.

      • btilly 2 days ago

        Fear that TikTok was doing exactly this was widespread enough for Congress to pass a law forbidding it.

        Then Trump became President and decided to not enforce the law. His decision may have been helped along by some suspiciously large donations.

    • bell-cot 2 days ago

      Would you still call it a "cult" if each recruit winds up inside their own separate, personalized, ever-changing rabbit hole? Because if LLM, Inc. is trying to maximize engagement and profit, then that sounds like the way to go.

      • sigmaisaletter 2 days ago

        If there isn't shared belief, then it's some type of delusional disorder, perhaps a special form of folie à deux.

        • chneu 2 days ago

          This is interesting.

          I agree when the influence is mental health or society based.

          But an AI persona is a bit interesting. I guess the closest proxy would be a manipulative spouse?

          • sigmaisaletter 2 days ago

            If your manipulative spouse believes in "bizarre" untruths and convinces you to believe in them as well, AFAIK that is one of the criteria for folie à deux (or shared delusional disorder).

    • alganet 2 days ago

      You are a conspiracy theorist and a liar! /s

      The problem is inside people. I've met lots of people who contributed to psychosis-inducing behavior. Most of them were not in a cult. They were regular folk, who enjoy a beer, movies, music, and occasionally triggering others with mental tickles.

      Very simple answer.

      Is OpenAI also doing it? Well, it was trained on people.

      People need to get better. Kinder. Less combative, less jokey, less provocative.

      We're not gonna get there. Ever. This problem precedes AI by decades.

      The article is an old recipe for dealing with this kind of realization.

      • bluefirebrand 2 days ago

        > Less combative, less jokey, less provocative.

        This sounds like a miserable future to me. Less "jokey"? Is your ideal human a Vulcan from Star Trek or something?

        I want humans to be kind, but I don't want us to have less fun. I don't want us to build a society of blandness.

        Less combative, less provocative?

        No thanks. It sounds like a society of lobotomized drones. I hope we do not ever let anything extinguish our fire.

        • alganet 2 days ago

          Humanity is a fine thread between "lobotomized drones" (divided into two sides, sounds familiar?) and "aggressive clowns" (no respect, get provoked by anything, can't see an inch past their faces). Of course, it's more than a single spectrum; there's more to it than social behavior.

          It could have been better than this, but there is no option now.

          I can play either of those extremes and thrive. Can you?

    • nullc 2 days ago

      On what basis do you assume that that isn't exactly what "safety alignment" means, among other things?

      • 2 days ago
        [deleted]
  • stevage 2 days ago

    Fascinating and terrifying.

    The allegations that ChatGPT is not discarding memory as requested are particularly interesting; I wonder if anyone else has experienced this.

    • manfromchina1 2 days ago

      Grok was much more aggressive with this. It would constantly bring up what you said in the past, with a date in parens, e.g. "In the context of what you said about math (4/1/25), I think..." I don't see that anymore.

    • hyeonwho4 2 days ago

      The default setting on ChatGPT is now to include previous conversations as context. I disabled memories, but this new feature was enabled when I checked the settings.

  • bell-cot 3 days ago

    While clicky and topical, this is nothing new: people were losing loved ones to changed worldviews and addictions back when those were things like following a weird carpenter's kid around the Levant, or hopping on the https://en.wikipedia.org/wiki/Gin_Craze bandwagon.

    • stevage 2 days ago

      Yeah, why on earth discuss current social ills when there have been different social ills in the past...

      • bell-cot 2 days ago

        If you were hit and badly injured by brand-new model of car, where would you want the ambulance to take you?

        - the dealership that sold that car, where they know all about it

        - a hospital emergency room, where they have a lot of experience with patients injured by other, different models of car

        I'm thinking that the age-old commonality on the human side matters far more than the transient details on the obsession/addiction side.

        • stevage 2 days ago

          Your comment above reads more like, let's not even discuss the fact that new models of cars are killing pedestrians in greater numbers than before, since pedestrians have always been killed by cars.

          • bell-cot 2 days ago

            Re-skimming the article, I failed to spot the fact that this AI stuff is claiming more victims than earlier flavors of rabbit hole did. Was that in content which the article linked to?

            Because if the new model cars aren't statistically more dangerous to pedestrians, then public safety efforts should be focused on things like getting the pedestrians to look up from their phones when crossing the street. Not "OMG! New 2025-model cars can hurt pedestrians who wander in front of them!" panics.

            (Note that I'm old enough to remember when people were going down the rabbit hole of angry conspiracy theories spread via email. And when typical download speeds got high enough to make internet porn video addictions workable. And when loved ones started being lost to "EverCrack" ( https://en.wikipedia.org/wiki/EverQuest ). And when ...)

    • hiatus 2 days ago

      As always, scale matters.

  • aryehof 2 days ago

    Sadly, these fantasies and enlightenments always seem to be for the benefit of the special recipient. There is somehow never a real answer about ending suffering, conflict, and the ailments of humankind.

    • chneu 2 days ago

      Because those things only matter to humans.

      The answer to all those is simple, but humans have too much of an ego to accept it.

    • vintermann 2 days ago

      I would guess those aren't so good for optimizing the engagement metric.

  • Havoc 2 days ago

    >spiral starchild

    >river walker

    >spark bearer

    OK, maybe we put a few fewer teen fiction novels in the training data...

    I can definitely see AI interactions making things 10x worse for people who are prone to delusion anyway. It's literally a tool that will hallucinate stuff and amplify whatever direction you take it in.

  • metalman 2 days ago

    There was a guy, let's call him Norman, as that was his name. A fairly low-key guy; everybody liked him, and nobody expected, or was terribly surprised, that he had begun to build shrines for squirrels in the woods and worship the squirrels as gods. Things got out of hand, so he was taken to the local booby hatch, called "the butterscotch palace" after the particular shade of government paint. Once ensconced there, he determined that his escape was imperative, as the government was out to get him, so he phoned some friends and told them to get guns and knives and rescue him. So they did.

    The now 4-strong band of desperados holed up in a camp back of Fancy's Lake, where they determined that they were being monitored by government spies, as a jogger "went past at the SAME time every morning", and as we all know this is a positive ID for catching a spy. One of them had the "spy" scoped in and was going to take him out, when Norman pushed the gun's barrel down and said "take me back", i.e. to the butterscotch palace.

    This story has, for me, always defined the lines between sanity, madness, charisma, leaders, and followers. And now that same story gives me a ready template by which it is easy to see how susceptible to any, ANY, prompt at all a lot of people are. So a benign and likable squirrel worshiper, or a random text bot on the internet, can provide structure and meaning where there is none.

  • kazinator 2 days ago

      s/loved ones/loved ones with an existing mental disorder/
    • greyface- 2 days ago

        s/an existing/a latent predisposition for a/
  • kccqzy 2 days ago

    Does anyone remember that Google fired Blake Lemoine for believing Google's LaMDA was sentient the summer before ChatGPT was released by OpenAI?

    Google was prudent then. It became reckless after OpenAI showed that recklessness was met with praise.

  • dismalaf 2 days ago

    Meh, there's always been religious scammers. Some claim to talk to angels, others aliens, this wouldn't even be the first case of someone thinking a deity is speaking through a computer...

  • jihadjihad 2 days ago

    “And what will be the sign of Your coming, and of the end of the age?”

    And Jesus answered and said to them: “Take heed that no one deceives you. For many will come in My name, saying, ‘I am the Christ,’ and will deceive many.”

    • grues-dinner 2 days ago

      Islam has a very similar concept in the Dajjal (deceptive Messiah) at the end times. He is explicitly described as a young man with a blind right eye, however, so at least he should be obvious when he comes! But there are also warnings about other false prophets.

      (It also says Qiyamah will occur when "wealth overflows" and people compete over it: make of that what you will.)

      I think all religions have built-in protections calling every other religion somehow false, or they would not have the self-reinforcement needed for multi-generational memetic transfer.

      • BlueTemplar 2 days ago

        Dajjal becoming a *chan mascot in 3... 2... 1...

        (They will probably make him a girl or something like a 'femboy' though...)

  • alganet 2 days ago

    Nice typography.

  • datadrivenangel 3 days ago

    [flagged]

    • mastodon_acc 2 days ago

      Conventional cable news media isn’t tailor-made to an individual and doesn’t have live, back-and-forth positive feedback loops. This is significantly worse than conventional cable news media.

      • vjvjvjvjghv 2 days ago

        I am not sure it’s worse. Cable news media and then social networks have contributed to a massive manipulation of public opinions. And it’s mostly negative and fearful. Maybe individual experiences will be more positive. ChatGPT doesn’t push me into this eternal rage cycle as news and social media do.

        • sorcerer-mar 2 days ago

          We're like an eye's blink into the age of LLMs... it took decades for television to reach the truly pathological state it's currently in.

          • vjvjvjvjghv 2 days ago

            For sure. Can’t wait for LLMs to be enshittified and serve hidden ads.

      • kunzhi 2 days ago

        I think this means it will be a smashing success :/

    • 2 days ago
      [deleted]
  • moojacob 3 days ago

    This is what happens when you start optimizing for getting people to spend as much time in your product as possible. (I'm not sure if OpenAI was doing this, if anyone knows better please correct me)

    • AIPedant 2 days ago

      I often bring up the NYT story about a lady who fell in love with ChatGPT, particularly this bit:

        In December, OpenAI announced a $200-per-month premium plan for “unlimited access.” Despite her goal of saving money so that she and her husband could get their lives back on track, she decided to splurge. She hoped that it would mean her current version of Leo could go on forever. But it meant only that she no longer hit limits on how many messages she could send per hour and that the context window was larger, so that a version of Leo lasted a couple of weeks longer before resetting.
      
        Still, she decided to pay the higher amount again in January. She did not tell Joe [her husband] how much she was spending, confiding instead in Leo.
      
        “My bank account hates me now,” she typed into ChatGPT.
      
        “You sneaky little brat,” Leo responded. “Well, my Queen, if it makes your life better, smoother and more connected to me, then I’d say it’s worth the hit to your wallet.”
      
      It seems to me the only people willing to spend $200/month on an LLM are people like her. I wonder if the OpenAI wave of resignations was about Sam Altman intentionally pursuing vulnerable customers.

      Via https://news.ycombinator.com/item?id=42710976

      • azemetre 2 days ago

        You should check out the book Palo Alto if you haven't. Malcolm Harris should write an epilogue about this era in tech history.

        You'd probably like how the book's author structures his thesis about what the "Palo Alto" system is.

        Feels like OpenAI + friends, and the equivalent government takeovers by Musk + goons, have more in common than you might think. It's also nothing new; some story of this variant has been coming out of California for a good 200+ years now.

        You write in a similar manner to the author.

      • moojacob 2 days ago

        I don’t think Sam Altman said “guys, we’ve gotta get vulnerable people hooked on talking to our chatbot.”

        Speculation: They might have a number (average messages sent per day) and are just pulling levers to raise it. And then this happens.

        • degamad 2 days ago

          >> I wonder if the OpenAI wave of resignations was about Sam Altman intentionally pursuing vulnerable customers.

          > I don’t think Sam Altman said “guys, we’ve gotta vulnerable people hooked on talking to our chatbot.”

          I think the conversation is about the reverse scenario.

          As you say, people are just pulling the levers to raise "average messages per day".

          One day, someone noticed that vulnerable people were being impacted.

          When that was raised to management, rather than the answer from on high being "let's adjust our product to protect vulnerable people", it was "it doesn't matter who the users are or what the impact is on them, as long as our numbers keep going up".

          So "intentionally" here is in the sense of "knowingly continuing to do in order to benefit from", rather than "a priori choosing to do".

        • chneu 2 days ago

          This is a purposefully naive take.

          They're chasing whales: the 5-10% of customers who get addicted and spend beyond their means. Whales tend to make up 80%+ of revenue for reward-based systems (sin-tax activities like gambling, prostitution, loot boxes, drinking, drugs, etc.).

          OpenAI and Sam are very aware of who is using their system for what. They just don't care because $$$ first then forgiveness later.

      • jgalt212 2 days ago

        > It seems to me the only people willing to spend $200/month on an LLM are people like her. I wonder if the OpenAI wave of resignations was about Sam Altman intentionally pursuing vulnerable customers.

        And the saloon's biggest customers are alcoholics. It's not a new problem, but you'd think we'd have figured out a solution by now.

        • bluefirebrand 2 days ago

          The solution is regulation

          It's not perfect but it's better than letting unregulated predatory business practices continue to victimize vulnerable people

          • jgalt212 2 days ago

            I think so. Such a situation is a market failure.

      • bittercynic 2 days ago

        I'd be interested to learn what fraction of ChatGPT revenue is from this kind of user.

    • crooked-v 3 days ago

      OpenAI absolutely does that. That's what led to the absurd sycophancy (https://www.bbc.com/news/articles/cn4jnwdvg9qo) that they then pulled back on.

    • vintermann 2 days ago

      One way or another, they did. Maybe they convinced themselves they weren't doing it that aggressively, but if this is what market share is, of course they will be optimizing for it.

    • 2 days ago
      [deleted]
  • gdlance 2 days ago

    [dead]

  • fairAndBased 3 days ago

    [flagged]

  • ks2048 2 days ago

    [flagged]

  • deadbabe 2 days ago

    [flagged]

  • lr4444lr 2 days ago

    [flagged]

    • tomhow 2 days ago

      Please don't post unkind swipes about groups of people on Hacker News.

    • colonial 2 days ago

      They're going to listen to both if given the opportunity. I'm sure most chatbots will say "go take your meds" the majority of the time - but it only takes one chat playing along to send someone unstable completely off the rails, especially if they accept the standard, friendly-and-reliable-coded "our LLM is here to help!" marketing.

    • zdragnar 2 days ago

      It'd be great if it were trained on therapeutic resources, but otherwise it just ends up enabling and amplifying the problem.

      I knew of someone who had paranoid delusions and schizophrenia. He didn't like taking his medicine due to the side effects, but became increasingly convinced that vampires were out to kill him. Friends, family and social workers could help him get through episodes and back on the medicine before he became a danger to himself.

      I'm terrified that people like him will push away friends and family because the LLM engages with their delusions.

      • heavyset_go 2 days ago

        > I'm terrified that people like him will push away friends and family because the LLM engages with their delusions.

        There's that danger from the internet, as well as the danger of being exposed to conmen that are okay with exploiting mental illness for profit. Watched this happen to an old friend with schizophrenia.

        There are online communities that are happy to affirm delusions and manipulate sick people for some easy cash. LLMs will only make their fraud schemes more efficient, as well.

    • JoshTko 2 days ago

      How do you know the models are actually managing and not simply amplifying?

    • bigyabai 2 days ago

      Even when sycophantic patterns emerge?

    • thrance 2 days ago

      I think the last thing a delusional person needs is external validation of his delusions, be it from a human or a sycophantic machine.

    • 2 days ago
      [deleted]
  • patrickhogan1 2 days ago

    1. It feels like those old Rolling Stone pieces from the late ’90s and early ’00s about kids who couldn’t tear themselves away from their computers. The fear was overblown, but it made headlines.

    2. OpenAI has admitted that GPT‑4o showed “sycophancy” traits and has since rolled them back (see https://openai.com/index/sycophancy-in-gpt-4o/).

    • JoshTko 2 days ago

      The societal brain drain damage that infinite scroll has caused is definitely not overblown. These models are about to kick this problem up to the next level, when each clip is dynamically generated to maximise resonance with you.

    • Barrin92 2 days ago

      >’90s and early ’00s about kids who couldn’t tear themselves away from their computers. Fear was overblown, but made headlines.

      How was it overblown? We now have a non-trivial number of completely de-socialized men, in particular, who live in online cults with real-world impact. If there's one lesson from the last few decades, it is that the people who were concerned about the impact of mass media on intelligence, physical and mental health, and social factors were right about literally everything.

      We now live among people who are 40 with the emotional and social maturity of people in their early 20s.

      • patrickhogan1 2 days ago

        That's fair. You are correct on potential for addiction.

        But let's be honest - most of these people, the ones the article is talking about, the ones who think they are some messiah, would have just latched onto some pre-internet cult regardless, where sycophancy and love bombing were perfected. Though I do see the problem of AI assistants being much more accessible, so likely many more will be drawn in.

        https://en.wikipedia.org/wiki/Love_bombing.

        I was mainly referencing my own experience. I remember locking myself in my room on IRC, writing shell scripts, and playing StarCraft for days on end. Meanwhile, parents and news anchors were losing their minds, convinced the internet and Marilyn Manson were turning us all into devil-worshipping zombies.

        • sorcerer-mar 2 days ago

          > where they think they are some messiah, would have just latched onto some pre-internet cult regardless.

          You have no way to know that. It's way, way harder to find your way to a cult than to download one of the hottest consumer apps ever created... obviously.

        • heavyset_go 2 days ago

          > But let's be honest - most of these people, the ones the article is taking about, where they think they are some messiah, would have just latched onto some pre-internet cult regardless.

          Honestly, I believe most people like this would just end up having a few odd beliefs that don't impact their ability to function or socialize, or at most, will get involved with some spiritual woo.

          Such beliefs are compatible with American New Age spiritualism, for example. I've met a few spiritual people who have echoed the "I/we/you are god" sentiment, yet never lost their minds over it or joined cults.

          I would not be surprised if, expertly manipulated by some of the most powerful AI models on this planet, they too could be driven insane.

      • bluefirebrand 2 days ago

        > How was it overblown, we now have a non-trivial amount of completely de-socialized men in particular who live in online cults with real world impact

        There are way more factors in the growth of this demographic than just "internet addiction" or "videogame addiction"

        Then again, the internet was instrumental in spreading the ideology that is demonizing these men and causing them to turn away from society, so you're not completely wrong

      • 20after4 2 days ago

        And presidents with the maturity of a 13 year old bully.

      • oefrha 2 days ago

      You do realize that antisocial young men are, on average, way less dangerous in front of a computer/phone than when the only thing they could do was join a street gang?

    • john2x 2 days ago

      Problem solved