ChatGPT's artificial empathy is a language trick

(theconversation.com)

10 points | by devonnull 11 hours ago

14 comments

  • xtiansimon 6 hours ago

    > “…it will get harder and harder to distinguish a conversation with a real person from one with an AI system.”

    I’m reminded of that trope in police shows— “If you’re a cop you got to tell me.” This becomes, “If you’re a Bot, you got to tell me, man.”

  • ksaj 11 hours ago

    Maybe this is a good way to learn when people are using that same language trick to make you think they have empathy.

    • DemocracyFTW2 9 hours ago

      Yeah, I guess that would be a positive application of AI for once. I think most people assume by default that other people are sincere, will do them no harm, mean what they say, and have empathy just as they themselves do. They might be totally unprepared to deal with crooks and malignant narcissists, and will auto-correct away any hint that this is not so.

      On a related note, I always wince when I ask Copilot a question and the answer will inevitably be followed by an "engagement hook" like "Isn't language fascinating in how it evolves and adapts over time?" or "Do you think any of these fit the bill?". Shudder.

  • chaos_emergent 10 hours ago

    What’s the definition of empathy? To me the connotation has always been "is able to feel the feelings of others", as opposed to sympathy, which is more about "imagining the feeling someone is experiencing".

    Regardless, aren’t we all trained in the same way: reinforcement of gestural and linguistic symbols that imply empathy, rather than being empathetic? I guess I’m wondering if hijacking our emotional understanding of interactions with LLMs is that far off from the interactional manipulation that we’re all socialized to do from a young age.

    • st-keller 10 hours ago

      I’m quite sure that a lot of people aren’t trained that way. I know that anecdotal evidence doesn’t count, but I know a handful of people who surely don’t know how to use symbols to "imply empathy".

  • Alifatisk 10 hours ago

    In the ELIZA example, I find it astonishing how the chatbot was able to pick out specific words it could use in a response. How did they achieve that in 1988?
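
    For reference, the original ELIZA (1966) did this with simple keyword spotting, fill-in-the-blank response templates, and pronoun reflection so the echo reads naturally. A minimal sketch of that style of trick in Python; the patterns and canned replies below are illustrative, not Weizenbaum's actual script:

        import re
        import random

        # First/second-person swaps so echoed fragments read naturally ("my" -> "your").
        REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

        # Toy keyword rules: a regex to spot, and templates that splice the match back in.
        RULES = [
            (r"\bi am (.*)", ["Why do you say you are {0}?", "How long have you been {0}?"]),
            (r"\bi feel (.*)", ["Why do you feel {0}?", "Do you often feel {0}?"]),
            (r"\bmy (mother|father|family)\b", ["Tell me more about your {0}."]),
        ]

        FALLBACKS = ["Please go on.", "I see. Can you elaborate?"]

        def reflect(fragment: str) -> str:
            # Swap pronouns in the captured fragment before echoing it back.
            return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

        def reply(text: str) -> str:
            # Scan for the first matching keyword pattern and fill its template.
            for pattern, templates in RULES:
                m = re.search(pattern, text, re.IGNORECASE)
                if m:
                    return random.choice(templates).format(*(reflect(g) for g in m.groups()))
            return random.choice(FALLBACKS)

        print(reply("I am worried about my exams"))  # e.g. "Why do you say you are worried about your exams?"

    No understanding anywhere, just pattern matching and string substitution, which is exactly the point of the article's "language trick" framing.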

  • yawpitch 11 hours ago

    > Interacting with non-sentient entities that mimic human identity can alter our perception of [human entities that might only mimic sentience].

    The more I see people investing in conversations with LLMs, the more I see people trying to outsource thinking and understanding to something which does neither.

    • boesboes 10 hours ago

      I use LLMs as a kind of rubber duck and secretary, I guess. As in, I’ll ramble about ideas and thoughts and let it analyze and provide feedback, etc. I am not sure if it is causing brain rot, to be honest. It does seem like it might make it harder to properly articulate thoughts if you do this too much.

      • yawpitch 2 hours ago

        The more time I spend studying the underlying structure of these things, the more I think that what they’re going to do isn’t so much rot brains as drive those brains towards accepting / preferring / privileging the statistical mean of boring. They can only ever regurgitate what past humans in past work were most likely to say, modulo a small stochastic simulation of surprise. Over time they’ll add more of their own output to their training material, driving the output of humanity inexorably further and further towards the uninteresting.
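
        That drift toward the mean can be caricatured numerically. The sketch below is only a toy simulation: the 0.9 narrowing factor is an assumed stand-in for a model preferring high-probability (average-looking) continuations, not a claim about real training pipelines:

            import random
            import statistics

            # Start with a spread-out "human" corpus, represented here as plain numbers.
            data = [random.gauss(0.0, 1.0) for _ in range(10_000)]

            for generation in range(1, 6):
                mu = statistics.mean(data)
                sigma = statistics.stdev(data)
                # Each generation retrains only on the previous generation's output,
                # sampled with a below-1 "temperature" (the assumed preference for
                # likely, average continuations).
                data = [random.gauss(mu, sigma * 0.9) for _ in range(10_000)]
                print(f"generation {generation}: spread ~ {statistics.stdev(data):.3f}")

        In this toy, the spread shrinks every round, and the interesting tails are the first thing to go.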

  • dflock 10 hours ago

    It's all a language trick.
