> “…it will get harder and harder to distinguish a conversation with a real person from one with an AI system.”
I’m reminded of that trope in police shows— “If you’re a cop you got to tell me.” This becomes, “If you’re a Bot, you got to tell me, man.”
Maybe this is a good way to learn when people are using that same language trick to make you think they have empathy.
Yeah, I guess that would be a positive application of AI for once. I think most people assume by default that other people are sincere, will do them no harm, mean what they say, and have empathy just as they do. They may be totally unprepared to deal with crooks and malignant narcissists, and will auto-correct away any hint that this is not so.
On a related note, I always wince when I ask Copilot a question and the answer is inevitably followed by an "engagement hook" like "Isn't language fascinating in how it evolves and adapts over time?" or "Do you think any of these fit the bill?". Shudder.
What’s the definition of empathy? To me the connotation has always been, “is able to feel the feelings of others” as opposed to sympathy which is more about “imagining the feeling someone is experiencing”.
Regardless, aren’t we all trained in the same way: reinforcement of gestural and linguistic symbols that imply empathy, rather than being empathetic? I guess I’m wondering if hijacking our emotional understanding of interactions with LMs is that far off from the interaction manipulation that we’re all socialized to do from a young age.
I’m quite sure that a lot of people aren’t trained that way. I know that anecdotal evidence doesn’t count, but I know a handful of people who surely don’t know how to use symbols to "imply empathy".
In the ELIZA example, I find it astonishing how the chatbot was able to pick out specific words it could use in its responses. How did they achieve that in 1966?
It was basic pattern matching: a dictionary of keyword patterns, each mapped to canned response templates that splice the user's own words back in.
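For anyone curious, here is a minimal sketch of that mechanism in Python. The keyword patterns and templates are invented for illustration and are not taken from the original 1966 script, which also reflected pronouns ("my" → "your") and ranked keywords by priority, but the core loop is roughly this:

```python
# Hypothetical ELIZA-style keyword matching: regexes capture a fragment of the
# user's input, and a canned template reflects that fragment back.
import random
import re

PATTERNS = [
    (re.compile(r"\bI need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bI am (.*)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\bmy (mother|father|family)\b", re.I),
     ["Tell me more about your {0}.", "How do you feel about your {0}?"]),
]
FALLBACKS = ["Please go on.", "How does that make you feel?"]

def respond(user_input: str) -> str:
    # Try each keyword pattern in order; the first match wins.
    for pattern, templates in PATTERNS:
        match = pattern.search(user_input)
        if match:
            # Splice the captured words into a randomly chosen template.
            return random.choice(templates).format(*match.groups())
    # No keyword matched: fall back to a generic prompt.
    return random.choice(FALLBACKS)

if __name__ == "__main__":
    print(respond("I need a vacation"))  # e.g. "Why do you need a vacation?"
```

The trick is that the program never has to understand anything; echoing the matched fragment inside a question is enough to feel attentive.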
I always wondered the same; is the code available?
It was found around 2021 after having been considered lost: https://archive.org/details/eliza_1966_mad_slip_src
Likely more approachable is this reimplementation: https://github.com/anthay/ELIZA
> Interacting with non-sentient entities that mimic human identity can alter our perception of [human entities that might only mimic sentience].
The more I see people investing in conversations with LLMs, the more I see people trying to outsource thinking and understanding to something which does neither.
I use LLMs as a kind of rubber duck and secretary, I guess. As in, I’ll ramble about ideas and thoughts and let it analyze and provide feedback, etc. I am not sure if it is causing brain rot, to be honest. It does seem like it might make it harder to properly articulate thoughts if you do this too much.
The more time I spend studying the underlying structure of these things, the more I think that what they’re going to do isn’t so much rot brains as drive those brains towards accepting / preferring / privileging the statistical mean of boring. They can only ever regurgitate what past humans in past work were most likely to say, modulo a small stochastic simulation of surprise. Over time they’ll add more of their own output to their training material, driving the output of humanity inexorably further and further towards the uninteresting.
It's all a language trick.