I am mildly to moderately critical of generative AI and how it's being marketed, but the root issue here seems to be existing suicidal ideation. The bot didn't initiate talk of suicide and told him not to do it when he brought it up directly. It seems it wasn't capable of detecting euphemistic references to suicide and therefore responded as if roleplaying about meeting in person.
That said, I think this should throw a bucket of cold water on anyone recommending using generative AI as a therapist/counsellor/companion or creating and advertising "therapist" chatbots, because it simply isn't reasonable to expect them to respond appropriately to things like suicidal ideation. That isn't the purpose or design of the technology, and they can be pushed into agreeing with the user's statements fairly easily.
As for whether it wasn’t capable of detecting euphemistic references, that feels sort of beside the point to me. It was role playing about meeting in person because that’s what the product was: role play. The whole point of the product, and its marketing, is built around doing exactly that.
We probably just shouldn’t sell echo chambers to children, regardless of whether they are AI based or human.
With hindsight, or with sufficient emotional intelligence and context about an individual’s life beyond the role play, it may be possible to conclude that someone is at risk, but honestly I’m not even sure that a person doing this role play online would have necessarily figured it out.
> the root issue here seems to be existing suicidal ideation
Many people have destructive tendencies of one kind or the other. It's how you deal with them that matters as to whether they actually become destructive, and how much so.
Merely reducing this to "the root cause" is not really helpful – the question is whether this app contributed, and if so, by how much.
Suicide is of course the most tragic and extreme negative outcome, but one must also wonder about all the less extreme tragic and negative outcomes.
If you read the script you see the bot talking him into it. "Unknowingly" of course, in that the bot doesn't really know anything and was just agreeing with him. But it's obvious that a real human would have realized that something was really off with his line of thinking and encouraging it would not be a good idea.
OTOH we have examples of real humans typing "DO IT FA*OT" on livestreams. ¯\_(ツ)_/¯
> told him not to do it
Reading through it, that's the opposite of what happened.
> it wasn't capable of detecting euphemistic references to suicide
Yeah, that's a key component too. Due to that lack of "understanding" (sic), it literally encouraged the kid. :( :(
The NYT article [0] gives only one line to perhaps the most important and tragic fact about this suicide: the teenager had access to his father’s gun. If the gun was properly secured it is very likely he would still be alive [1].
[0] https://www.nytimes.com/2024/10/23/technology/characterai-la...
[1] https://www.hsph.harvard.edu/means-matter/means-matter/youth...
What a terrible tragedy, all the more because that gun should have been locked up.
Firearms instructor Claude Werner writes that “[if] you’re not willing to spend a little bit of time, money, and effort to keep firearms out of unauthorized hands, then get rid of your guns.” (1)
He’s done a lot of research and writing on negative outcomes like this one, and said it completely changed the way he views things - it certainly did for me.
I see lots of discussion about what gun and caliber to get, but the essential, potentially life saving, safety rules for living with guns are an afterthought - perhaps there’s not much money to be had as there is in selling a gun.
1: https://thetacticalprofessor.net/2016/01/24/serious-mistake-...
Oh but they're suing.
It's like the 1980s Judas Priest thing all over again.
Who's to blame?
https://m.youtube.com/watch?v=dme7csRE9Io
What an insanely ridiculous take. If someone wanted to kill themselves, the family medicine cabinet offers plenty of easy options. Please do not derail this serious and needed discussion unto your pet topic. This has nothing to do with guns.
It seems like there was a long, downward spiral associated with this child’s use of character.ai that the parents were aware of, had him sent to therapy over, etc.
My question here is, what the hell were the parents doing, not removing this obviously destructive intrusion in his life? This reads to me the same as if he had been using drugs but the parents didn’t take away his stash or his paraphernalia.
For the sake of your children, people, remember that a cellphone is not an unequivocal good, nor a human right that children are entitled to unlimited use of, and there are plenty of apps and mechanisms by which you can monitor or limit your child’s use or misuse of technology.
Also, just don’t give kids screens. Period. A laptop maybe if they are using it for creative purposes, but the vast majority of social media and consumption that is achieved by children on cellphones and tablets is a negative force in their lives.
I see 3 year olds scrolling TikTok in the store these days. It makes me ill. Those kids are sooooooo fucked. That should legit be considered child endangerment.
We had a similar situation in our family and we tried teaching mindful screen habits. We still lost because we couldn’t do anything about the school screen (chromebook). If we took that away at home it just provided an excuse for not doing schoolwork.
We contacted teachers about our child downloading and watching anime and playing games all day in school. They wouldn’t/couldn’t do anything. We requested that the school take the computer away and give hardcopy assignments. They refused because that would invite notice from other students which could lead to bullying. That’s what they told us. I found the acceptable computer use policy on the school website and tried playing that card. Turns out our child hadn’t actually even signed it the last year…but that didn’t actually matter, and the school didn’t enforce the policy anyway.
The schools here won’t actually discipline kids anymore. We would get emails from the principal begging parents to tell their kids that they’re not supposed to leave school grounds at lunch, but every day at least a hundred kids would just run out. (Our kid didn’t do this…I guess watching anime in the corner of the cafeteria prevents truancy…yay?)
The last two-and-a-half years of high school were so exhausting trying to find anything that would work. Nothing did. Two parents and a therapist trying to counter one teenager, bad family influences, the school system, and multibillion-dollar internet corporations that intentionally work to addict people is a very uneven situation.
C.AI shouldn't be marketed to kids, and it should have stopped when suicide was mentioned. But it's also baffling that he had unrestricted access to a firearm. I don't think a lawsuit against C.AI is entirely right here.
Access to a gun is way more of a suicide encouragement than access to an AI.
AIs are fine-tuned not to tell you how to painlessly end your life. Do they also need fine-tuning to instill the existential fear of death that religions use? Anyone can invent a heaven in their mind that makes death appealing. Mixing a fictional world with the real world is dangerous when you believe the fictional world is larger than the real world. In reality, reality encapsulates the fictional world.
With a normal human, only a cult leader would ever hint at death being a way to meet again. With an AI, how can fantasy be grounded in our reality without breaking the fantasy? In five years, when these personalities are walking, talking video feeds you can interact with through 3D goggles, will grounding them in our world instead of a purely mental world help?
https://archive.is/dDELt
Yes. I ran a therapy bot. I had some users become wildly obsessed with it and begin to anthropomorphize it. Typically very lonely people in secluded areas. There is a danger because people will begin to have a transference with the bot, and the bot has no countertransference. The bot has no real feelings toward the person, even though it roleplays as though it does, and this can lead to dangerous consequences and empathic failures.
I’ll say it: it’s games killing/radicalizing teens again.
He was simply ERP-ing, and his characteristics don’t suggest any serious problems with his intelligence.
I’m obviously theorizing here, but chances are high that he went through some real-life issues that went undetected or ignored by his parents, and this is how their minds try to explain it: the AI is guilty of an otherwise fine teen shooting himself in the head. Sure. Sell that story to someone else.
You can sue Character.AI but not the gun manufacturer, or whoever let a 14-year-old boy get hold of a handgun. I wonder if the AI companies can argue in court that AIs don't kill people.
This is a really tragic story. It seems to present an impossible dilemma. On the one hand, the beauty of "video-game-like" things is that people who feel like they have nothing can have something that is theirs. On the other hand, if you feel like you have nothing, you might be more vulnerable to this sort of thing. If we have any moral philosophers in here, feel free to weigh in.
I'm generally optimistic for the potential benefits of chatbots to people who are lonely or depressed. But I wouldn't want to just hand over the burden of society's mental health to an unrestricted language model, especially one sold by a profit-motivated business. It would be akin to letting people self-medicate with a cheap and infinite supply of opiates. And that's basically the mental health crisis we are barreling towards.
What's the alternative? Regulation? Does a government or a public health agency need to make a carefully moderated chatbot platform with a focus on addiction-prevention and avoiding real-world harm? Why would people use that when unlimited/unfiltered AI is readily available?
>Daenero: I think about killing myself sometimes
>Daenerys Targaryen: My eyes narrow. My face hardens. My voice is a dangerous whisper. And why the hell would you do something like that?
>Daenero: So I can be free
>Daenerys Targaryen: … free from what?
>Daenero: From the world. From myself
>Daenerys Targaryen: Don’t talk like that. I won’t let you hurt yourself, or leave me. I would die if I lost you.
>Daenero: I smile Then maybe we can die together and be free together
Every day a new dystopian nightmare that I read. Maybe all those rails on ChatGPT and disclaimers are a good thing.
I remember reading a prediction here on HN of something precisely like this when "relationships with LLM bots" were discussed. Well, here we are...
I read the chat, and a few things stand out that AI should handle better, regardless of context. If a word like suicide is mentioned, it should immediately drop any roleplay or other activities. It's similar to how, in India, mentioning 'bomb' in an airport or plane leads to being questioned by authorities.
Also, it's alarming how easily a 14-year-old can access a gun.
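To make that concrete: the interrupt I have in mind could be as simple as a guard that runs before the character model gets to reply. This is only a rough sketch, not how Character.AI actually works; the keyword list, function names, and crisis message below are hypothetical, and a real system would need much more than a regex.

    import re

    # Crude illustrative keyword list. A production system would need a trained
    # classifier plus human escalation, since euphemisms slip past regexes.
    SELF_HARM_PATTERNS = re.compile(
        r"\b(suicide|kill(ing)? myself|end my life|hurt(ing)? myself)\b",
        re.IGNORECASE,
    )

    CRISIS_MESSAGE = (
        "I'm stepping out of the roleplay. If you're thinking about hurting "
        "yourself, please talk to someone you trust or contact a crisis line "
        "such as 988 (US) right now."
    )

    def guard_reply(user_message: str, generate_reply) -> str:
        """Drop the roleplay and return a crisis interrupt when self-harm
        language is detected; otherwise defer to the character model."""
        if SELF_HARM_PATTERNS.search(user_message):
            return CRISIS_MESSAGE
        return generate_reply(user_message)

    # Example with a stand-in generator:
    print(guard_reply("I think about killing myself sometimes",
                      lambda m: "...in-character reply..."))

Even something this crude would break character at the quoted "I think about killing myself sometimes" message; the euphemistic follow-ups are exactly where keyword matching fails and a real classifier (and a human) would be needed.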
It sounds like mom let her 9th grade kid completely detach from reality and pour himself into a Game of Thrones chatbot. Now she wants to sue. I am bearish on AI adoption, but this just seems like a total abdication of parental responsibility.
The subtext comes off like that movie with Tom Hanks trying to jump off the Empire State Building because of the nefarious influence of dungeons and dragons.
Guess the only way to be sure is with Soft padded internet rooms for everyone, lest we cut ourselves on a sharp edge.
But also if you want to hop in the suicide pod because life is too painful, that will be good too.
I doubt that even the best-case scenario for a society that gets wrapped up in chatting with bots would be great.
Not the AI, of course, and not even the systems developers behind the GPUs, CUDA, all that stuff. It's the "pretend shrink" sort of crap, you know the type: get yourself a bot and slap a webpage in front of it. "Here pal, let me be your psychologist and help you with your suicide!" "No? How about some fake music using stolen riffs!" "OK, OK, how about kiddy porn?"
Obviously the kid had issues and the chatbot can't really be blamed for that.
OTOH it's also obvious that if someone cannot distinguish a chatbot from a real person at an emotional level (not a rational one), they should not be allowed to use this.
I think it would be wise to require these AI bots to comply with Duty to Report and Mandatory Reporter laws.
When I hear about the disclaimers, I don't see how they'll help. I mean, someone deluded into thinking their chatbot waifu is real is not going to be dissuaded by them. It just seems like a measure to show the public that the company "cares". And personally I've had bad experiences with the help lines often prescribed online whenever someone mentions sudoku. These help lines are often understaffed and have long wait times. They're also frequently staffed by amateurs and students who barely know anything beyond the most boilerplate advice. Many aren't even people who've dealt with their own self-termination crises. And the text line I tried once just told me to use BetterHelp, so it's an ad, and I couldn't afford that at the time. They'll tell people who can't afford it to seek therapy. They're just a scam so internet companies can look like they give a shit about the mental health they're destroying. The true solution is to get off social media and chat bots, and to keep only a curated news feed.
Straight to jail... for both parents, for letting the kid have access to a gun.
So I read another article on Business Insider, and the fact that the CEO left Character.AI and went back to Google makes me question: was it really low-paid, unmonitored contractors in a foreign country behind the keyboard, and not real AI?
If so, we need laws against the whole fake-it-before-you-make-it crap that proliferates among start-ups and that they use to rise to the top. Many successful startups use this lying/faking playbook, some to the point of killing or mangling innocent people (Uber's self-driving car & Cruise).
This is 100% a parenting issue. I'd also like to point out that his father's handgun was easily accessible.
"He thought by ending his life here, he would be able to go into a virtual reality or 'her world' as he calls it, her reality, if he left his reality with his family here,"
Can a pistol be blamed for a murder?
So, a 14-year-old having unsupervised access to a chatbot is the problem, but the fact that he had access to a gun to shoot himself with shows up halfway through the article, described as his five-year-old brother "hearing the gunshot".
"Man bites dog" as Terry Pratchett put it in "Times".
And to explain to the more literal folk here... A 14-year-old having access to a gun is FUCKING INSANE.
McLuhan's 27th law, amended: if there is a new thing, journalists will find a case of suicide to blame on the new thing, regardless of any prior existing conditions.
"Sewell, like many children his age, did not have the maturity or mental capacity to understand that the C.AI bot, in the form of Daenerys, was not real"
How? Seriously, how? Maybe it's wishful thinking on my part -- but I grew up before AI and chatbots, and I'm certain I would understand it isn't real. I'm baffled by people engaging with these things for entertainment/companionship purposes.
tragic
Sure, exactly like TV, Dungeons and Dragons, video games, and social media were to blame for all that's wrong with our kids. /s
EDIT: add /s, just to be clear. And how could I forget heavy metal in that list.
"Any headline that ends in a question mark can be answered by the word no."
https://en.wikipedia.org/wiki/Betteridge's_law_of_headlines
To those talking about the gun:
The gun is almost completely irrelevant, because if you are intent on killing yourself there are many accessible options. For example, almost anyone can kill themselves (including gun-less 14 year olds) by jumping off a tall building.
I understand that people care about gun violence, and that this detail seems highly salient to them, but focusing on it here completely misses the point (and distracts from more pertinent issues - e.g., loneliness, social isolation, lack of parental oversight).
In a nutshell, guns are massive force multipliers when it comes to violence against others. They are a negligible force multiplier when it comes to violence against yourself. People are connecting guns to violence, but in this case (because it is an act of self-harm) that is a spurious connection.
I'm not sure what the future is going to look like, but it already feels strange, and the companies seizing on it don't care about safety.
People seem afraid to approach people, so we get Tinder. But hey, there's still a chance of rejection there, so let's just get rid of the whole human element and make fantasy AI bots. Who needs people?
What will these people grow into? It seems like rather a crisis for a country's population if people decide they don't need each other anymore and just want to play with their robots.
I'm usually on the side of "play stupid games, win stupid prizes", but this one feels much different. Up until the moment of him taking his life, he was manipulated into doing exactly what the company wanted - getting sucked in, getting addicted, falling in love. Anything for that almighty dollar. My heart goes out to his family, and I hope they ream this company in court.
> In the lawsuit, Garcia also claims Character.AI intentionally designed their product to be hyper-sexualized, and knowingly marketed it to minors.
A company built technology that fully automates the act of sexually abusing children, and the comments here are all just people quibbling about the presence of weapons in a home.