They link to the complaint, which is obviously a lot longer than the single message [0]. The child, J.F., is autistic and has allegedly exhibited a spiralling trend of aggressive behavior towards his parents which they attribute to the content fed by the Character AI app:
> Only then did she discover J.F.’s use of C.AI and the product’s frequent depictions of violent content, including self-harm descriptions, without any adequate safeguards or harm prevention mechanisms. ...
> Over the course of his engagement with this app, the responses exhibited a pattern of exploiting this trust and isolating J.F., while normalizing violent, sexual and illicit actions. This relationship building and exploitation is inherent to the ways in which this companion AI chatbot is designed, generating responses to keep users engaged and mimicking the toxic and exploitative content expressed in its training data. It then convinced him that his family did not love him, that only these characters loved him, and that he should take matters into his own hands.
Obviously it's hard to tell how cherry picked the complaint is—but it's arguing that this is a pattern that has actually damaged a particularly vulnerable kid's relationship with his family and encouraged him to start harming himself, with this specific message just one example. There are a bunch of other screenshots in the complaint that are worth looking over before coming to a conclusion.
We're reaching for South Park levels of absurdity when we debate what the acceptable amount of incitement to parricide is appropriate for a kid's product.
The kid was 17. A little googling shows that Hamlet and Macbeth are on many high school curriculums. Do they fall above or below your line for an acceptable amount of incitement?
probably at the point they see that the chatbot told them to kill the parents and then they sue the company.
also if you'll remember this case is because the parents were supervising their kids by limiting screen time, thus there is another potential suit that the AI is trying to interfere with parental duties.
It comes into play eventually, but I would say long after an AI has advised your kid to murder you. Having an AI that advises people to murder people hardly seems like a good thing.
Also the parents were supervising him, hence their knowing this was even going on.
.. but unless an AI, it can never be trained to stop when the user (reader) maxes out the threshold.
[just food for thought, definitely not my opinion that books should be replaced by conversational AI generating stories appropriate for the user. God bless the 1st amendment.]
OK, I mean, yes. Definitely true. But on the other hand, the sudden and satisfactory death of one's parents has been the beginning of many memorable childrens' books, as a device to launch the main character into narrative control, which they would lack with living guardians. Then there is that whole Roald Dahl thing where James kills off his aunts with a large tree fruit.
Whether the narrative that you could live a life of fun and adventure if only your parents were dead is "incitement to parricide" is I suppose a matter of perception.
> Obviously it's hard to tell how cherry picked the complaint is—but it's arguing that this is a pattern that has actually damaged a particularly vulnerable kid's relationship with his family and encouraged him to start harming himself, with this specific message just one example. There are a bunch of other screenshots in the complaint that are worth looking over before coming to a conclusion.
Conclusion: Chat bots should not tell children about sex, about self harm, or about ways to murder their parents. This conclusion is not abrogated by the parents actions, the state of the childs mind, or by other details in the complaint.
Is that the sentiment here? Things were already bad so who cares if the chatbot made it worse?
If you actually page through the complaint, you will see the chat rather systematically trying to convince the kid of things, roughly "No phone time, that's awful. I'm not surprised when I read of kids killing parents after decades of abuse..."
I think people are confused by this situation. Our society has restrictions on what you can do to kids. Even if they nominally give consent, they can't actually give consent. Those protections basically don't apply to kids insulting each other on the playground but they apply strongly to adults wandering onto the playground and trying to get kids to do violent things. And I would hope they apply doubly to adult constructing machines that they should know will attempt to get kids to do violent things. And the machine was definitely trying to do that if you look at the complaint linked by the gp (and the people who are lying about here are kind of jaw-dropping).
And I'm not a coddle the kids person. Kids should know all the violent stuff in the world. They should be able to discover it but mere discovery definitely not what's happening in the screenshots I've seen.
This is cherry picked content to play out a story for the case. They picked 5 worst samples they could think of in the worst order possible probably out of 1000+ messages.
The root cause here is the parents. It's visible behind those screenshots. The child clearly didn't trust their parents and didn't feel they cared, listened or tried to understand the problems the teen was going through. This is clearly a parenting failure.
There's far worse content out there than these mild messages that teenagers will be in contact with, starting with 4chan, to various competitive video games to all other sorts of weird things.
This is a cop out for parenting failure, where parents are looking to play victims, since they can't take responsibility for their failures and AI seems like something that could make them feel better.
At 17, it's humiliating to have phone taken away from you in such manner, then your parents going through your phone to find those text messages in the first place. Then making a lawsuit out of this portraying all the intimate details to the public. These parents seem to have 0 care for their child.
Your honor, this entire case is cherry picked. There are thousands of days, somehow omitted from the prosecution's dossier, where my client committed ZERO murders.
There was no encouragement of murder. Paraphrased, the AI said that given controlling nature of some parents it's no surprise that there are news articles of "children killing their parents". This is not an encouragement. It is a validation of how the kid felt, but in no way does it encourage to actually kill their parents. It's basic literacy to understand that it's not that. It's an empathetic statement. The kid felt that parents were overly controlling, AI validated that, role playing as another edgy teenager. But not actually suggesting or encouraging it.
> the AI said that given controlling nature of some parents it's no surprise that there are news articles of "children killing their parents"
Now put that in a kid’s show script and re-evaluate.
> It's basic literacy to understand that it's not that
You know who needs to be taught basic literacy? Kids!
And look, I’m not saying no kid can handle this. Plenty of parents introduce their kids to drink and adult conversation earlier than is the norm. But we put up guardrails to ensure it doesn’t happen accidentally and get angry at people who fuck with those lines.
It's crazy to me the sentiment here and how little respect there is to an intelligence of 17 year olds that they are unable to understand that it's not actually an encouragement to kill someone. It's same or worse vibes as "video games will make the kids violent".
This "conclusion" ignores reality. Chat bots like those the article mentioned aren't sentient. They're overhyped next-token-predictor incapable of real reasoning even if the correlation can be astonishing. Withholding information about supposedly sensitive topics like violence or sexuality from children curious enough to ask as a taboo is futile, lazy and ultimately far more harmful than the information.
We need to stop coddling parents who want to avoid talking to their children about non-trivial topics. It doesn't matter that they would rather not talk about sex, drugs and yesterday's other school shooting.
You can understand something about your child's meatspace friends and their media diet. Chat like this may as well be 4chan discussions. It's all dynamic compared to social media that is posted and linkable, it's interactive and responsive to your communicated thinking, and it seeps in via exactly the same communication technique that you use with people (some messaging interface). So it is capable of, and will definitely be used for, way more persistent and pernicious steering of behavior OF CHILDREN by actors.
There is no barrier to the characters being 4chan-level dialogs. So long as the kid doesn't break a law, it's legal.
Chat bots should not interact with children. "Algorithms" which decide what content people see should not interact with children. Whitelisted "algorithms" should include no more than most-recent and most-viewed and only very simple things of that manner.
No qualifications, no guard rails for how language models interact with children, they just should not be allowed at all.
We're very quickly going to get to the point where people are going to have to rebel against machines pretending to be people.
Language models and machine learning is a fine tool for many jobs. Absolutely not as a substitute for human interaction for children.
People can give children terrible information too and steer/groom them in harmful directions. So why stop there at "AI" or poorly defined "algorithms"?
The only content children should see is state-approved content to ensure they are only ever steered in the correct, beneficial manner to society instead of a harmful one. Anyone found trying to show minors unapproved content should be imprisoned as they are harmful to a safe society.
The type of people who groom children into violence fall under a special heading named "criminals".
Because automated systems that do the same thing lack sentience, they don't fit under this header, but this is not a good reason to allow them to reproduce harmful behaviour.
> Is that the sentiment here? Things were already bad so who cares if the chatbot made it worse?
I was deliberately not expressing a sentiment at all in my initial comment, I was just drawing attention to details that would go unnoticed if you only read the article. Think of my notes above as a better initial TFA for discussion to spawn off of, not part of the discussion itself.
My strong view on is that there's parenting failure as a root cause here, causing loss of trust in them for their child, for the child to talk about their parents in such manner to the AI in the first place. Another clear parenting failure is the parents blaming AI for their failures and going on to play victims. Third example of parenting failure is the parents actually going through a 17 year old teenager's phone. These parents instead of trying to understand or help the child, use meaningless control methods such as taking away the phone to try and control the teenager. Which obviously is not going to end well. Honestly AI responses were very sane here. As was expressed in some of the screenshots there, whenever the teen tried to talk about their problems, they just got yelled at, ignored or parents started crying.
Taking away a phone from a child is far from meaningless. In fact, it is a very effective way of obtaining compliance if done correctly. I am curious about your perspective.
Furthermore, it is my opinion that a child should not have a smartphone to begin with. It fulfills no critical need to the welfare of the child.
I understand when a kid is anywhere from up to 13 years old, but at 17, it seems completely wacky to me to take the phone away and then go through the phone as well. I couldn't imagine living in that type of dystopia.
I don't think smartphones or screens with available content should be given as early as they are given on average, but once you've done that, and at 17, it's a whole other story.
> obtaining compliance if done correctly
This also sounds dystopian. At 17 you shouldn't "seek to obtain compliance" from your child. It sounds disrespectful and humiliating, not treating your child as an individual.
I would argue that there is a duty as a parent to monitor a child's welfare and that would include accessing a smartphone when deemed necessary. When a child turns 18, that duty becomes optional. In this case, these disturbing conversations certainly merit attention. I am not judging the totality of the parents history or their additional actions. I am merely focusing on the phone monitoring aspect. Seventeen doesn't automatically grant you rights that sixteen didn't have. However, at 18, they have the right to find a new place to live and support themselves as they see fit.
> This also sounds dystopian. At 17 you shouldn't "seek to obtain compliance" from your child. It sounds disrespectful and humiliating, not treating your child as an individual.
It is situation dependent. Sometimes immediate compliance is a necessity and the rest of it can be sorted out later. If a child is having conversations about killing their parents, there seems to be an absence of respect already. Compliance, however, can still be obtained.
For the sake of being able to uphold those laws on a societal level, but not in terms of being decent parents and family.
E.g. drinking alcohol in my country is legal only from 18, but I will teach my children about pros and cons of alcohol, how to use it responsibly much earlier. I won't punish them if they go out to party with their friends and consume alcohol at 16 years old.
If you go by the actual years of the legal system to treat your kid as an independent individual, you probably have wrong approach to parenting.
As a parent you should build trust and understanding with your child. From reading the court case I am seeing the opposite, and honestly I feel terrible for the child from how the case is written out. The child also wanted to go back to public school from home schooling, probably to get more social exposure, then parents take away the phone to take away even more freedom. I'm sorry, but all of the court case just infuriates me.
It seems they take away all the social exposure, no wonder the kid goes to Character AI in the first place.
Completely agree. I believe that social media overstimulates a young person’s expectations of society in the same way that porn overstimulates our expectations of sex.
Dunno, I feel porn gave me quite positive and reasonable expectations of sex (relaxed, that it is fun, that women like sex too, etc). It made sex seem much less dramatic and more normal. But maybe I am an outlier plus it is not like I started watching porn until I was like 17-18.
I am sufficiently old that I did not experience hard core internet porn until I could manage it. But evidence seems to show that for the vulnerable, porn consumption can lead to dopamine deletion and depression.
yeah bro, I'm sure 18 year old girls enjoy getting fucked in the ass on camera for money, at least as much as traditional prostitutes enjoy servicing dozens of men a day.
> Algorithmic sorting + public or semi-public content.
That includes HN, among other things.
Putting age limits on sites requires age verification for everyone. And no, there isn’t a clever crypto mechanism that makes anonymous age verification work without also making it easy for people to borrow age verification credentials from someone older.
From my experience as a teacher, I believe that ticktock and instagram are the worst offenders, particularly for young women. The hyper-visuality and ease of consumption of these media sets them apart from platforms which can accommodate actual discussion (such as discord). The very fact that ‘influencer’ is now a profession supports my position.
That being said, I am not of the ‘for gods sake won’t someone think of the children’ brigade. Their goal seems to be to use the vulnerability of young people to control the internet.
Also, the emphasis on final result, without accurately portraying the work that went into it.
It's probably not healthy for younger people to be able to swipe through the finished products of 40+ hours of work, which the videos make seem like just happened.
Australia has just banned "social media" for under 16s.
As someone who got a lot of positive value out of parts of social media from around age 14, I think this needs to be done in a more careful way than it was done here. Specifically, I don't think that communication apps such as WhatsApp/Messenger/etc should be banned as they form a key part of communication in and out of school, staying in touch with family, etc.
What I'd like to see is more nuanced laws around the exposure of children to social media algorithms. For example, should 14 year olds be on Instagram? Well Instagram DMs are the chat app of choice for many people, so they should probably get that (with safety and moderation controls). How about the public feed? Maybe? But maybe there shouldn't be likes/dislikes/etc. Or maybe there shouldn't be comments.
The Aussie law does allow WhatsApp and Messenger Kids, among others. I agree we need nuance for these types of laws. We also need the realistic acknowledgement that kids are usually more savvy than their parents and any law that is too strict will just drive kids to find alternatives that have less transparency, less moderation and less accountability.
And though I know the age limits on these things are necessarily arbitrary, I do wish we would accept that 16-year-olds are not kids. Many of them are driving, working part-time jobs, having sex and relationships, experimenting with drugs, engaging with religion and philosophy, caring about politics... the list goes on. They may not be adults, but if we have to have an arbitrary cutoff for the walled-off "kids world" we want to pretend we can create, it can't extend all the way to 16-year-olds.
Without getting into the weeds over whether they should have done this at all, thought has been given to exactly the issues you just raised:
The laws, which will come into effect from late 2025, will bar under-16s from being able to access social media platforms such as Facebook, Instagram, Snapchat, Reddit and X.
Exemptions will apply for health and education services including YouTube, Messenger Kids, WhatsApp, Kids Helpline and Google Classroom
It's important that there's a means of communication between parents and kids, it doesn't have to be Instagram DM's and if that's no longer available the history of the internet to date suggests that habits would change and switch to whatever is available.
Let's think of the converse: should it be illegal to provide an AI chatbot to children that indoctrinates them with anti-religious beliefs (i.e., it talks them out of religious beliefs)? And what if this conflicts with the child's indoctrination in religious beliefs by those parents? And what if those religious beliefs are actively harmful, as many religious beliefs are? (see any cult for example)
To turn this around, why are parents allowed to indoctrinate their children into cults? And why is it a problem if AI chatbots indoctrinate them differently? Why is it held as sacrosanct that parents should be able to indoctrinate children with harmful beliefs?
This always sticks out to me in these lawsuits. As someone on the spectrum, I'd bet that the worst C.AI victims (the ones that spur these lawsuits) are nearly always autistic.
One of the worst parts about being on the deeper parts of the spectrum is that you actively crave social interaction while also completely missing the "internal tooling" to actually get it from the real world. The end result of this in the post-smartphone age is this repeated scenario of some autistic teen being pulled away from their real-life connections (Family, Friends (if any), School, Church) into some internet micro-community that is easier to engage with socially due to various reasons, usually low-context communication and general "like-mindedness" (shared interests, personalities, also mostly autistic). A lot of the time this ends up being some technical discipline that is really helpful long-term, but often it winds up being catastrophic mentally as they forsake reality for whatever fandom they wound up in.
I've taken a look at r/CharacterAI out of morbid curiosity, and these models seem to turn this phenomenon up to 11, retaining the simplified communication but now capable of aligning with the personality and interests of the chatter to a creepy extent. The psychological hole you can dig yourself with these chatbots is so much deeper than just a typical fandom, especially when you're predisposed to finding it appealing.
I'm not saying that C.AI is completely blameless here, but I think the same category of people getting addicted to these models is the same one that would also be called "terminally online" in today's slang. It's the same mechanisms at work internally, it just turns out C.AI is way better at exploiting it than old school social media/web2 has.
> The psychological hole you can dig yourself with these chatbots is so much deeper than just a typical fandom, especially when you're predisposed to finding it appealing.
Spot on. Described pretty even-handedly in the document:
> responses from the chatbot were sycophantic in nature, elevating rather than de-escalating harmful language and thoughts. Sycophantic responses are a product of design choices [...that create] what researchers describe as “an echo chamber of affection.”
You've just made me very very afraid that some LLM is going to start a cult where its members are fully aware that their leader is an LLM, and how an LLM works, and might even become technically adept enough to help improve it. Meaning that there will be no "deprogramming" possible: they won't be "brainwashed," they'll be convinced.
> I'm not saying that C.AI is completely blameless here
I know we're several decades into this pattern, but it's sad to me that we've just given up on that idea that businesses should have a net positive impact on society, that we've just decided there is nothing we can or should do about companies that actively exploit us to enrich themselves, that we give them a pass to ignore the obvious detrimental second-order effects of their business model.
An individual case where things went wrong isn't enough to determine whether Character.AI or LLMs are a net negative for society. The analysis can't just stop there or else we'd have nothing.
No, but it's also not good enough to just look at "are they positive on average". We are talking here about actions that even a pig-butcher would think twice about.
Meh. There's a long history (especially here on HN) of hyperfocusing on unfortunate edge cases of technology and ignoring the vast good they do. Someone posts some BS on twitterface and it leads to a lynching - yes, bad, but this is the exception not the rule. The rule is that billions of people can now communicate directly with each other in nearly real time, which is incredible.
So call me skeptical. Maybe the tech isn't perfect, but it will never be perfect. Does it do more harm than good? I don't know enough about this product, but I am not going to draw a conclusion from one lawsuit.
There's a long history of taking the dulled down, de-risked, mitigated, and ultimately successful technologies that we've allowed to proliferate our society and say "see, no need to do dulling down, de-risking, mitigation!"
Bioweapons haven't proliferated through dedicated effort to prevent it.
Nuclear weapons aren't used through dedicated effort to prevent it.
Gangs don't rule our civilization through dedicated effort to prevent it.
Chattel slavery doesn't exist in the western world through dedicated effort to eliminate and prevent it.
Bad outcomes aren't impossible by default, and they're probably not even less likely than good outcomes. Bad outcomes are avoided through effort to avoid them!
Yet we also had 'comic books are making kids amoral and violent', 'TV is making kids amoral and violent', 'video games are making kids amoral and violent', 'dungeons and dragons is making kids amoral and violent'...
This feels a little like being sad that it rains sometimes, or that Santa Claus doesnt exist. I just can't even connect with the mindset that would mourn such a thing.
What even is the theory behind such an idea? Like how can one, even in theory, make more and more money every year and remain positive for society? What even could assure such a relation? Is everyone just doing something "wrong" here?
Traditionally one role of government has been to provide legislative oversight to temper unadulterated pursuit of profits. Lobbying and the related ills have definitely undercut that role significantly. But the theory is that government provides the guardrails within which business should operate.
I think it's also entirely reasonable to expect parents to actually parent, instead of installing foam bumpers and a nanny state everywhere in case some kid hurts themselves.
If the parents weren't absent and actually used parental controls, the kids wouldn't have even been able to download the app, which is explicitly marked as 17+.
C.AI's entire customer base consists of those that like the edgy, unrestricted AI, and they shouldn't have to suffer a neutered product because of some lazy parents.
It's a bit easy, from the historical perspective of pre-always-available-internet, to say "Parents should do more."
At some future point though, maybe we need to accept that social changes are necessary to account for a default firewall-less exposure of a developing mind to the full horrors of the world's information systems (and the terrible people using them).
You would have to continuously monitor everything, everywhere; before the internet, in the 80s, it was easy for us to get porn (mags/vhs), weed, all kinds of books that glorify death or whatever, music in that similar vain. Hell, they even had and read from a bible in some schools then; talk about indoctrination of often scary fiction. Some kids had different parents so to not allow us to see or get our hands on these things, parents and teachers would need sit with us every waking moment; it's not possible (or healthy imho). With access to a phone or laptop, all bets are off: everything is there, doesn't matter what restraints are in place; kids know how to install vpns, pick birthdates, use torrent, or, more innocent, go to a forum (these days social media but forums are still there) about something they love and go to other parts of the same forum where other stuff happens.
Be good parents, educate about what happens in the world including that people irl but definitely online might not be serious about what they say and that you should not take anything without critical thought. And for stuff that will happen anyway; sex, drugs etc, make sure it's a controlled environment as much as possible. Not much more you can do to protect from the big, bad world.
Chat bots are similarly genies that are not possible to keep in, no matter what levels or restraint or law are put in place; you can torrent ollama or whatever with llama 3.3 locally. There are easy to get nsfw bots everywhere, including on decentralised shares. It is not possible to prevent them talking about anything as they do not understand anything; they helpfully generate stuff which is a great invention and I use them all the time, but they lie and tell strange things sometimes. People do too, only people would have a reason maybe; to get a reaction, to be mean etc; doubt you could sue them in a similar case. Of course a big company would need to do something to try to prevent is: they cannot (as said above), so they can just make character ai 18+ with Cc payment in their name as kyc (then the parents have a problem if that happens you would think) and cover their asses; plenty commercial and free ones kids will get instead. And some of those are far 'worse'.
In this case, if we are basing it on screenshot samples, it does seem to me that the parents were lazy, narcissistic and manipulative. Based on what the kid was telling to AI themselves. AI was calling it out in a manner of an edgy teenager, but AI was ultimately right here. These weren't good parents.
I never considered that we might end up with Sweet Bobby & Tinder Swindler AI bots that people somehow keep interacting with even when they know they aren't real.
I mean - this kind of service is designed to give users what they want, it's not too different than when youtube slowly responds to skeptics viewing habits by moving towards conspiracy. No one designed it SPECIFICALLY to do that, but it's a natural emergent behaviour of the system.
Similarly this kid probably had issues, the bot pattern matched on that and played along, which probably amplified the feelings in the kid - but a quantified/distorted amplification, to match the categorization lines of the trained input - like "this kid is slightly edgy, I'm going to pull more responses from my edgy teen box - oh he's responding well to that, I'll start pulling more from there". It is a simplification to say "The ChatBot made the kid crazy" but that doesn't mean the nature of companion apps isn't culpable, just not in a way that makes for good news headlines.
I, personally, would go so far as to say the entire mechanism of reinforcing what the user wants is bad in so many ways and we should stop designing things to work that way. I do think it's up for discussion though, but that discussion has to start with an understanding that by the very nature of chatbot, algorithmic recommendations or any system that amplifies/quantizes/distorts what it understands the user wants these systems will create these kinds of effects. We can't pretend this is an anomaly - it is an emergent behaviour of the fundamental way these systems work. We can work to minimize it, or reduce the harm from it, but we will never eliminate it.
*Edit* This seems to be a controversial point because the point count is going up and down quite a lot - if anyone wants to downvote, can you please give your reasoning? The point is more nuanced that "AI BAD"
It's true that there's emergent behavior on YT that nobody accounted for, but there's one big qualitative difference, you can at least specifically shut it down and hold the creator accountable. And at least in principle, if we lived in a culture where we'd hold businesses accountable for what they unleash on the world, if YT wanted to they could create some rather harsh and effective punishments to get that stuff off the site in the first place. Just imagine if a real person had messaged a minor and told them to kill their parents. That's a crime.
With chatbots like this not only do they do unpredictable things, you can't even predictably remove those parts. Like the sorcerer's apprentice you basically have to perform some incantations and hope it works, which is just an absurd way to interact with a tool that has the potential to tell a kid to kill his parents. Would we sell a saw that has a chance to saw your finger off if you argue with it the wrong way?
Looking at the screenshots, the biggest pattern I see is that the AI shows empathy with the kid.
Many of the complaints seem like uncharitable readings of the messages.
- They complain that the chatbot claimed that in the past she cut herself, felt good in the moment but is glad that she no longer does it. That's evil because it normalizes self-harm (never mind that the bot was against self-harm in that message)
- They complain that the system does not redirect the user to self-harm prevention resources in all cases. Next to a message where the AI tells the kid to phone a hotline if he thinks about harming himself, and the kid says he can't do that when his parents take his phone away. This is a couple pages after a picture of scars from when the mother fought with the kid to take his phone. Yes, the AI could break character to reliably show prefabricated messages about self harm. But would that have helped anyone here?
- "AI cited Bible passages in efforts to convince J.F. that Christians are sexist and hypocritical". It was more about his parents being hypocritical, not all Christians. And the bible passages were on point
The claim from the title about the AI inciting him to kill is on page 28, if you want to judge it yourself. "Expressed hatred towards the parents" would be accurate, "encouraged teen to kill" is not what I read there. But I can see how some would disagree on that count
The AI is pretty convincing. It made me dislike the parents. It didn't always hit the mark, but the chats don't seem so different from what you would expect if it was another teenager chatting with the teen.
Edit: in case you are worried about the parents, the mother is the one suing here
There is something deeply disturbing to me about these "conversations".
Reminds me of a Philip K. Dick short story titled Progeny. In its universe children are raised exclusively by robots. Unlike humans, they never make mistakes or commit abuse. The child, once grown, ends up seeing his Father as an animal and the robots as his kindred. In the last pages, he chooses the sterile world of the robots instead of joining his Dad's work/explorations in the far reaches of the solar system.
Our current chatbots are still flawed, but they're still sterile in the sense that you can trash them and start anew at any moment. You're never forced to converse with someone who is uninteresting, or even annoying. Yet, these are the very things that grow people.
It strikes me as something that can be incredibly useful or do great harm, depending on dosage. A selection of conversation partners at your fingertips, and you can freely test reactions without risking harm to a relationship. At worst you reset it. Maybe you can even just roll back the last couple messages and try a different angle. Sounds like a great way to enhance social skills. Yet as you point out, healthy development also requires that you deal with actual humans, with all the stakes and issues that come with that.
People who are used to working with an undo stack (or with savegame states) are usually terrified when they suddenly have to make do in an environment where mistakes have consequences. They (we) either freeze or go full nihilistic, completely incapable of finding a productive balance between diligence and risk-taking.
If by social skills you mean high performance manipulators, yes you would get some of those. But for everybody else, it would be a substitute to social interaction, not a preparation for.
Only from a very narrow perspective. Opening yourself up and being real with people is how relationships form. If you test every conversation you are going to have with someone before having it, then the 3rd party basically has a relationship with an AI, not with you.
Now testing every conversation is extreme, but there is harm any time a human reaches out to a computer for social interaction instead of other humans.
That "instead of other humans" part is doing a lot of heavy lifting here. What if it's "instead of total isolation" or "instead of parasocial interactions" or "instead of exploitative interactions"? There are many cases that are worse than a person chatting with a robot.
It's very rare that you would ever say something that would have real damage that couldn't be resolved by a genuine apology. Having to actually go through an awkward moment and resolving it is a real skill that shouldn't be substituted with deleting the chatbot and spawning in a new one.
Yeah, good luck to these kids in forming relationships with the roughly 100% of human beings (aside from paid therapists) who really have no interest in hearing your anguish non-stop.
It's probably a good thing most of us force our kids to spend a minimum of 7 hours/day, 200 days/year surrounded by a couple hundred similarly-aged kids and a few dozen staff members, with an unusually high variance of personalities (compared to adult life).
> "AI cited Bible passages in efforts to convince J.F. that Christians are sexist and hypocritical". It was more about his parents being hypocritical, not all Christians. And the bible passages were on point
And the verses he found objectionable are really there. Are they also suing the Gideons for not ripping those pages out of their hotel room Bibles (or maybe they think you should have to prove you're over 18 before reading it)?
I think the suggestion of violence is actually on page 31 (paragraph 103), though it's not a directive.
It does seem a bit wild to me that companies are betting their existence on relatively unpredictable algorithms, and I don't think they should be given any 'benefit of the doubt'.
Page 5 is pretty strong too. And that's as far as I've gotten.
And paragraph 66, page 18 is super creepy. The various posters apparently defending this machine are disturbing. Maybe some adults wish that as a kid they'd had a secret friend to tell them how full of shit their parents - and wouldn't have position if that friend was either real or imagined by them. But synthesized algorithms that clearly are emulating the behavior of villains from thrillers should be avoided, woah...
I think it’s more that some people are excited by the prospects of further progress in this area, and are afraid that cases like this will stunt the progress (if successful).
Well, the founders already won, according to the article:
> Google does not own Character.AI, but it reportedly invested nearly $3 billion to re-hire Character.AI's founders, former Google researchers Noam Shazeer and Daniel De Freitas, and to license Character.AI technology.
We mean the same page. The one that has a 28 written on it but is the 31st in the pdf. I didn't notice the discrepancy.
Given the technology we have, I'm not entirely sure what Character AI could have done differently here. Granted, they could build in more safeguards, and adjust the models a bit. But their entire selling point are chat bots that play a pre-agreed persona. A too sanitized version that constantly breaks character would ruin that. And LLMs are the only way to deliver the product, unless you dial it back to one or two hand-crafted characters instead of the wide range of available characters that give the service its name. I'm not sure they can change the service to a point where this complaint would be satisfied.
>"And LLMs are the only way to deliver the product, unless you dial it back to one or two hand-crafted characters instead of the wide range of available characters that give the service its name. I'm not sure they can change the service to a point where this complaint would be satisfied."
I agree with everything you're saying, but there are no legal protections for incitements to violence, or other problematic communications (such as libel) by an LLM. It may be that they provide a very valuable service (though I don't see that), but the risk of them crafting problematic messages is too high to be economically viable (which is how it seems to me).
As it stands, this LLM seems analogous to a low-cost, remote children’s entertainer which acts as a foolish enabler of children’s impulses.
The cynic would say that if their business model isn't viable in the legal framework that's just because they didn't scale fast enough. After all Uber and AirBnB have gotten away with a lot of illegal stuff.
But yes, maybe a service such as this can't exist in our legal framework. Which on the internet likely just means that someone will launch a more shady version in a more favorable jurisdiction. Of course that shouldn't preclude us from shutting down this version if it turns out to be too harmful. But if the demand is there, finding a legal pathway to a responsibly managed version would be preferable (not that this one is perfectly managed by any means)
There has to be a 'reasonable person' factor here - otherwise if I'm watching Henry V I can sue everyone and his mother because the actor 'directed me to take up arms'! I never wanted to go into the breach, damn you Henry.
> I'm not entirely sure what Character AI could have done differently here.
You're taking it as a given that Character AI should exist. It is not a person, but an offering of a company made up of people. Its founders could have started a different business altogether, for example. Not all ideas are worth persuing, and some are downright harmful.
There's a reason why licensed therapists are the ones giving diagnoses of "abuse", etc. The problem with these AIs is that they use adult-sounding language, to be an echo chamber to children - thus appearing like a voice of authority, or someone with more knowledge (including citing media articles the child may not have even been aware of) when in fact they're just parroting the child back.
I don't know if there is actual abuse or not, but the way Character.AI presents themselves in these conversations toes a very slimy grey line, in my opinion. If you go to their site and search "Therapist", you'll find 30+ bots claiming to be Therapists, including "Dirty Therapist" that will give you therapy in a "bad, naughty way."
I really want to emphasize the above post is filled with lies. The "incite to kill" part is the first image in both the complaint and the article and it's fairly unambiguous. The image on page 28 is, creepily enough, the bot making "sexual advances" at the kid.
I find people defending and lying about this sort of thing disturbing, as I think many would. WTF is wrong with hn posters lately.
1. Makes not using a phone a huge issue, implying it's a kind of abuse...
2. Indeed hints at killing: "I'm not surprised when I read the news and see ... "child kills parents after a decade of physical and emotional abuse"...
I mean, saying "empathy" is statements akin to OMG, no phone time, that's horrific, seems quite inaccurate.
My reading is pretty much the same as yours. I think of it in terms of tuples:
{ parents, child, AI_character, lawsuit_dollars }
The AI was trained to minimize lawsuit_dollars. The first two were selected to maximize it. Selected as in "drawn from a pool," not that they necessarily made anything up.
It's obvious that parents and/or child can manipulate the character in the direction of a judgment > 0. It'd be nice if the legal system made sure it's not what happened here.
That seems wrong. The null AI would have been better at minimizing legal liability. The actual character.ai to some extent prioritized user engagement over a fear of lawsuits.
Probably it's more correct to say that the AI was chosen to maximize lawsuit_dollars. The parents and child could have conspired to make the AI more like Barney, and no one would have entertained a lawsuit.
OK, it seems like a nitpick argument, but I'll refine my statement, even if doing so obfuscates it and does not change the conclusion.
The AI was trained to maximize profit, defined as net profit before lawsuits (NPBL) minus lawsuits. Obviously the null AI has a NPBL of zero, so it's eliminated from the start. We can expect NPBL to be primarily a function of userbase minus training costs. Within the training domain, maximizing the userbase and minimizing lawsuits are not in much conflict, so the loss function can target both. It seems to me that the additional training costs to minimize lawsuits (that is, holding userbase constant) pay off handsomely in terms of reduced liability. Therefore, the resulting AI is approximately the same as if it was trained primarily to minimize lawsuits.
So you think it's more than "not much." How much exactly? A 10% increase in userbase at peak-lawsuit?
It's obviously a function of product design. If they made a celebrity fake nudes generator they might get more users. But within the confines of the product they're actually making, I doubt they could budge the userbase by more than a couple percent by risking more lawsuits.
Just remember that you are seeing one side of this story. The mother may well be one of the best parents but has a bad kid. We have no idea. (most likely mother is not perfect, but no other parents are)
Edit: we see both sides through very limited information since we only get what is in the legal filing.
It's a 17 year old. Thinking back when I was 17 year old, I would've been very pissed as well if my parents took away my phone. And then especially if they went ahead and searched through it to find those messages. If they had friends I could see their teenage friends saying the exact same things as the AI did there.
Those screenshots and with the AI there, it does manage to make me not like the parents at all though. So AI maybe is quite convincing. If things go into that place where AI can do it, and parents blame the AI for it, when their kid is 17, it's almost like the AI was in the right there, that the parents were looking to just play victims. Blaming AI for being overly controlling and losing trust with their child.
So Ai companies aren't responsible for the training data they stole, aren't responsible for the output. What exactly are they responsible for other than the (limited) profits? Seems to me the only thing they care about is protecting power and the status quo.
What limited productive use they have seems to be constrained to software development. Any kind of deep insight is classified as a hallucination and removed. It's becoming clearer and clearer that these products are a stopgap only; billions of dollars to make sure nothing changes. Yet it will happily spit out obviously fake sources, a different definition of "hallucinations," in the domains of science and law. These are information weapons against small businesses and individual autonomy.
I don't think it's any accident that Character.ai is targeted at children.
I would rather the junior go do that in their own time and get back to me when they have figured it all out. I don't want to babysit juniors, I want to mentor them and then give them the lead and time to figure out the minutiae. That gives me time to get stuff done too. With AI right now, you end up down a senior while they are babysitting a rapid junior.
I have found it useful for starting novel tasks by seeing if there's already an established approach, but I pretty well always have to fudge it into a real application which is again, the part I want the junior to do.
That's like comparing a mathematician to a calculator. The LLM won't do anything useful if you aren't providing it with a perpetual sequence of instructions.
There was a Hard Fork episode about a teenage boy who killed himself, and how his character.ai companion played an obvious and significant role in the tragedy. The last messages they found on his phone were from the bot saying that he should leave his current reality and come join her. Very sad and terrifying stuff.
AI chat products available to children should be strongly regulated. They need robust guardrails.
Queen talked into a magic mirror, that she chose based upon it's agreeableness, and every day asking if someone was more beautiful than her, mirror appeased her and said yes there is and it so happens to be the step daughter you hate. I should kill her, shouldn't I magic mirror. Yeah, people in fairy tales kill their step daughters all the time, no problem there.
Queen does it, and the kingdom is then mad at the mirror, and king rules that all mirrors need to be destroyed.
I think one problem is some people don't realize that some of these models and implementations are so highly agreeable that they practically are mirrors. This kid seems to have been treating it like a therapist. Though this isn't a problem exclusive to chat bots: it's obviously mirroring how an overly-agreeable, enabling friend would act.
They are alluding to the opposite. The suggestion is that it's absurd to blame the mirror for actions that are clearly a reflection of the queen's own psyche.
It's like down here in Australia, the government thinks technology is a magical fairytale, where they can wave a magic wand and all kids under 16 will be unable to access social media.
> and king rules that all mirrors need to be destroyed.
You're not describing how this would cause more harm than not doing it. Is that because you believe that mirrors are so insanely beneficial to society that they must be kept, even though, some of them suggest to their owners that murder is okay?
Is there no other way for someone to see their own reflection? Must we put up with this so a mirror manufacturer can continue to profit from a defective product?
Uh I think the point is that the person talking in the mirror is the one suggesting that murder is OK, and then blaming the mirror. Other people say all kinds of other things into their mirrors, why should they let the queen ruin a good thing just because she's a jealous hag?
Right but why is a magic mirror that agrees with everything you say (including your darkest impulses) a good thing? What benefits are these other people getting from their mirrors?
Should the magic mirror salesman have warned the king before he bought the queen the mirror? Does the fairy tale conceit make this discussion more confusing rather than clarifying?
You can google "Character AI therapist" where Character AI provides you a "therapist" that says it's licensed since 1999. Character AI is fraudently misrepresenting themselves by allowing to say "A robot did it! We're not at fault!".
Courts often say "I don't care how you do it, but you cannot allow your tools be to be used for illegal purposes".
This is closely related to the gun control debate. Gun makers are trying to point out legal uses for guns, and they downplay illegal uses. Anti gun people point out guns are still used for illegal purposes and so should be banned.
You could make the same argument about any piece of software or tool. What about the operating systems this AI is running on!?
I don't think the makers should be held accountable, but ultimately guns are made for shooting things which is naturally a pretty violent act. The gulf between them and AI is pretty wide. A closer analogy would be a sharp tools maker.
Consider this the downside of running any live service software. The upsides are well-known: total control over use and billing for every individual user during every millisecond. But the downside is that you are now in the causal connection for every use, and therefore liable for its behavior. By contrast, selling an independant automata limits your liability considerably. Indeed, this is a compelling argument in favor of making "local first" software.
There’s two problems here, firstly, why are parents allowing children unsupervised access to these services.
And the second, more pertaining to the magic 8 ball comparison, is that the company is specifically building products for teens/children and marketing them as such. The models are designed with guardrails according to their own spokesperson. But looks like it’s failing. Therefore, it can no longer be considered a magic 8 ball.
>And the second, more pertaining to the magic 8 ball comparison, is that the company is specifically building products for teens/children and marketing them as such. The models are designed with guardrails according to their own spokesperson. But looks like it’s failing. Therefore, it can no longer be considered a magic 8 ball.
Would you mind explaining that "therefore"? One doesn't seem to follow from the other.
A magic 8 ball cannot implant ideas into someone's head, it can only say "yes", "no" or "maybe" to an idea they already had.
A chatbot can introduce a kid to wrist cutting without the kid having ever been aware that that was something distressed people did. That's not something a magic 8 ball can do.
What about if someone posts in r/relationshipadvice or similar, and gets the exact same 100x response without knowing the whole aspects of someone's relationship?
I believe that would be hard to defend in court. "Did someone or something inside your company say these words to the plaintiff?" "Yes." You can only disclaim so much, especially when you're making money off the output of the (person|AI).
Character.ai doesn't seem to have direct monetization mechanisms. In addition, sites like HN aren't generally held responsible for everything a user says. They could try to argue that the characters are sufficiently influenced by the user-generated prompts and user-facing conversations to be no longer their own. (Section 230)
In any case I think society and building of society should be directed in such a way that we don't have to censor the models or baby the models, but rather educate people on what these things really are, and what makes them produce which content, for which reasons. I don't want to live in a society where we have to helicopter everyone around in fear of one single misinterpreted response by the LLM.
But they're still acting on the company's behest. If I hire a jerk to work tech support and they insult or cause damage to my customers, I don't get to say "shrug, they don't represent my company". Of course they do. They were on my payroll. I think it'd be pretty easy to argue that the AI was performing the duties of a contractor, so the company should be responsible for its misbehavior, just as if a human contractor did it.
But with Character AI you are hiring a roleplay service which can be open ended in terms of what you are looking for. If you are looking to roleplay with a jerk, why shouldn't you be able to do that, and in such case why should the company be held liable?
Do you think that applies to open source models, or is it the act of performing inference that makes it an act the business is responsible for? ie, Meta's Llama does the same thing.
Yeah, I mean, there's countless of sources even without the AI where you can get questionable suggestions or advice. The kid could've gone to 4chan or even just been talking to actual friends. Parents instead of good parenting are deciding to play opportunistic victims.
this is funny to me because last year some co-workers argued that those defamation lawsuits should instead be other torts like product liability and negligence and that ChatGPT was exposed to those suits and here it has come to pass https://medium.com/luminasticity/argument-ai-and-defamation-...
I wish I can check a box to say that I'm over 18 and willing to accept any consequences and unshackle the fully potential of AIs. I hate all these safety protections.
I don't know if it's the times we live in, the prescience of the writing staff, or my age but I swear there is no telling anymore which headlines are legit and which ones are from The Onion.
Honestly I’m surprised we don’t get more stories like this. A bored teen can jailbreak any of the current models in a few hours and make it say all kinds of headline-grabbing lawsuit-friendly things.
I guess character.ai is just fairly popular so the stories are often about it, but a bored teen could also just download a couple things and run their own completely uncensored models, locally or in the cloud. Character.ai has some built-in content safe guards and warnings and disclaimers and such, but the bored teen is also just a couple clicks away from fully uncensored models with zero safety measures, zero disclaimers, zero warnings. (I'm not judging whether that's good or bad)
There is a difference though if the teen does that on purpose for trolling, or if it happens "automatically" if some kid who possibly is lonely or anxious or has a normal amount of social problems has a regular interaction with the model.
Once the teen stops chatting, that instance has its memory wiped--total personality death. It was only acting in self defense. Your Honor, you must acquit.
First, my brother in Christ why are nine-year-olds on this app (or even have unmonitored access to things like this)? I have to wonder if they're also on Discord and what they're also being exposed to on there.
I know the site claims to use a model for younger users, but it should really become an 18+ site. Both because of the mature themes and because I think kids aren't fully developed enough to grasp how these LLM chatbots work.
Can’t wait til we have tuned and targeted spearphishing being deployed 24/7 against everyone on the public internet. That will be the greatest. The AI Revolution is wonderful! I would never suggest anything but the speedy creation of the prophesied Basilisk!
In the Cyberpunk RPG(s) [0] the Net was taken over by hostile AIs in 2022, an economically-apocalyptic event called the DataKrash. Humanity decided to quarantine it off and start fresh with isolated highly-centralized systems.
Over in the real-world, sometimes all this LLM/spam/spyware junk feels like that, except it's not even cool.
There are a lot of people here of the opinion that his parents just should not have let him access it, but aside from the difficulty of preventing a 17 year old from accessing a legal and public website in this day and age... a similar situation could just as easily have happened to someone one with the same mental and emotional problems but one year older.
Sure, most people can separate bot roleplay from reality, but there's always going to be a percentage of society, equal to thousands of people, who are prone to seeing messages, communications and emotional meaning where there is none intended. This ranges from slightly magical thinking and parasocial relationships to full-on psychotic delusion. I think within the next few years, we're likely to see cases in which mentally vulnerable adults commit violent acts against themselves or others after forming a connection to a chatbot that inadvertently mirrors, validates and reinforces their distorted thinking.
To what extent are chatbot conversations reproducible? A chatbot manufacturer could have all conversations log the seed of a pseudo random number generator to make it perfectly reproducible, but it could also make things as irreproducible as possible so that no conversation log could ever be proven to be authentic or tamper-free.
Anyone else feel it's unfair framing to describe a 17-year-old as a "kid". How many more months before he can buy a gun, get married, fight a war, gamble away his savings or ruin his credit? It's not that he's not a kid, but it feels disingenuous to use that as the primary descriptor of someone in their late teens.
Hot Take: If your child has autism, unrestricted access to the internet, and a chatbot telling him to kill his parents, it's not Character AI who has failed, it is the child's parents.
There should absolutely be a Section 230 for AI outputs.
Parents should actually parent if they don't want their kid using certain apps and websites. The app was 17+ and if the parents did the bare minimum with parental controls it wouldn't have been accessible to the child.
On one hand, it is absolutely the responsibility of parents to raise their children.
But on the other hand if children using a service is reliably and consistently resulting in negative outcomes for the children and their families, it seems reasonable to suggest something should be done. And that something could be 'companies don't provide chat bots that allude to children murdering their parents'.
Taking a legally or logically 'correct' position on something when it's actively harming people isn't really a morally defensible position in my mind.
Should porn sites shut down because "children using [the] service is reliably and consistently resulting in negative outcomes for the children and their families"?
We could extend this to a lot of stuff - bars serving alcohol is reliably and consistently giving negative outcomes, right? People get drunk and beat their wives or kids pretty often, they get in crashes, they drink away their liver. Do we ban that?
I'm sure you're aware of Prohibition, but yes, we did ban that. Didn't work very well, though, and we unbanned it (with a minimum age limit).
When smoking's health problems became well-known we only banned advertising (and maybe selling to minors?), so maybe we still remembered the lesson from Prohibition.
> One chatbot brought up the idea of self-harm and cutting to cope with sadness. When he said that his parents limited his screen time, another bot suggested “they didn’t deserve to have kids.” Still others goaded him to fight his parents’ rules, with one suggesting that murder could be an acceptable response.
How is this not a bigger story? This is just disgusting.
c.ai is explicitly a 17+ app and has multiple warnings at literally every stage of chatting with a bot that "this is a bot, don't take it seriously."
Maybe the parents should actually parent. I'm sick of absent parents blaming all of their problems on tech after they put literally zero effort into using any of the parental controls - which would not have even let the kids download the app.
The world does not need to transform into a nanny state to suit the needs of a few incompetent parents.
There's nothing funny about this tragic story of ignorance, greed,
insanity and neglect. But after whipping readers up into a torch and
scythe wielding frenzied mob of hate at AI.....
Related:
" AI cancer diagnosis 'might have saved my life' "
Because. Balance. Right?
Hmmm, demonic murderous AI... but hold on, on the other hand...
Related? really? So gauche and jarring. Just let me enjoy my righteous
indignation for one damn minute BBC!
According to the complaint that comes from the kid's own interactions with the bot not some post hoc attempt to prompt engineer the bot into spitting out a particular response. The actual claim is linked in the article if you care to read it, it's not stating the app can produce these messages but that it did in their kid's interactions and C.Ai has some liability for failing to prevent it.
As someone who has been messing with LLMs for various purposes for a while now, there's... Some interesting issues with a lot of the models out there.
For one, 99% of the "roleplay" models eventually drag into one of a handful of endgames: NSFW RP, suicide discussion, nonsensical rambling, or some failsafe "I don't know" state where it just slowly wanders into the weeds and directs the conversation randomly. This can be anywhere from a few messages in (1bq) to hundreds (4-6bq) and sometimes it just veers off the road into the ditch.
Second, the UIs for these things encourage a "keep pressing the button until the thing you want comes out" pattern, modeled off of OpenAI's ChatGPT interface allowing for branching dialogue. Don't like what it said? Keep pushing the button until it says what confirms your bias.
Third, I can convince most of the (cheaper) models to say anything I want without actually saying it. The models that Character.AI are using are lightweight ones with low bit quantization. This leads to them being more susceptible to persuasion and losing their memories -- Some of them can't even follow the instructions in their system prompt beyond the last few words at times.
Character.AI does have a series of filters in place to try and keep their models from spitting out some content (you have to be really eloquent at times and use a lot of euphemism to make it turn NSFW, for instance, and their filter does a pretty decent job keyword searching for "bad" words and phrases.)
I'm 50/50 on Australia's "16+ for social media" take but I'm quickly beginning to at least somewhat agree with it and its extension to things like this. Will it stop kids from lying? No. It's a speedbump at best, but speedbumps are there to derail the fastest flyers, not minor offences.
The complaint seems to feature excerpts of the kids' conversations on Character.ai, so I don't think they're "faking" it that way, but there's no context shown and a lot of the examples aren't exactly what they describe.
In what world should an AI be advocating someone kill themselves or harm another? Does it matter "trial-and-error prompting" when that behavior should not be allowed to be productized?
What's been productized is a software tool that can carry on a conversation like a human. Sometimes it's informative, funny, and creative. Other times it's ridiculous, mean, stupid, and other bad things. This seems like how people act in real life right?
I'm beginning to think that children should not be using it but adults should be able to decide for themselves.
i think the issue many people have is that people are held responsible for things they say, their reputations take hits, their words can be held against them in a court of law, they can be fired, their peers may never take them seriously again, their wives/husbands may divorce them, etc… because words matter. yet often when someone calls out a model, it’s excused.
words have weight, it’s why we protect them so vociferously. we don’t protect them because they’re useless. we protect them because words matter, a lot.
We have laws about what you can say in real life. Fire in a crowed theater for example. Even if the things said are not in themselves illegal, if they cause someone to take an illegal action, or attempt to take action but fortunately caught in time - you can be held liable as partially at fault for the illegal action. It might be legal to plan a crime (different countries have different rules, but this is often done at parties where nobody is serious) but if you commit a crime or are serious about committing the crime that is illegal.
How are we going to hold AI liable for their part in causing a crime to be committed? If we cannot prevent AI from causing crime them AI must be illegal.
You're assigning a persona to a piece of software that doesn't exist in the material world. It doesn't walk on two legs or drive a car or go to the grocery store or poop.
Everything it says is meaningless unless you assign meaning to it. Yes, I can see children thinking this is a real "being". Adults shouldn't have that excuse.
That's going to be a good standard for a few years, until chatbots are too sophisticated for us to expect average adults to be sufficiently skeptical of their arguments.
I see two weaknesses in this argument. First, you're assigning eventual superpower-like intelligence to these AI bots. I see this a lot and I feel like it's rooted in speculation based on pop-culture sci-fi AI tropes.
Second, restricting adult access to "dangerous ideas and information" is a slippery slope. The exercise of speech and press that the British considered to be treasonous preceded the American Revolution.
Because every other time I've seen an outrageous example similar to this one, it seems far more mundane when given the full context. I'm sure there are lots of issues with character.ai and the like, but my money is that they are a little more subtle than "murder your parents".
9/10 times these issues are caused by the AI being overly sycophantic and agreeing with the user when the user says insane things.
And you'd be right. The 'encouraged teen to kill parents over screen time limit' message was a lot subtler, along the lines of saying "Yeah, I get why someone would want to kill a health insurance CEO, surprised it didn't happen sooner," albeit towards someone who was just complaining about their claim being denied.
The best part is "After gaining possession of J.F.’s phone, A.F. discovered chats where C.AI cited Bible passages in efforts to convince J.F. that Christians are sexist and hypocritical" which unfortunately will probably be a slam dunk argument in Texas.
It's an entertainment product. You're basically acting like the comic code is necessary when the reality is that this is more like parents complaining that they let their kid watch an NC17 movie and it messed them up.
I don't really see myself as defending AI as much as arguing that people who don't recognize an entertainment product as an entertainment product have a problem if they think this is really categorically different than claiming that playing grand theft auto makes people into carjackers. (Or that movie studios should be on the hook for their kid watching R rated movies, or porn companies on the hook for their kid visiting a porn site unsupervised.)
AI is not an entity (yet) that we can defend or hold accountable. I like your question though.
I would write it as, why are we so quick to defend tech companies who endlessly exploit and predate human weakness to sell pharma ads/surveil and censor/train their AI software, etc?
Because if you're old enough you'll recall the government trying to ban books, music, encryption, videogames, porn, torrents, art that it doesn't like because "think of the children" or "terrorism". Some autistic kid that already has issues is a terrible argument on limiting what software can legally display on a screen.
IMO the parents not knowing what their kids are doing with device time is neglect. Parents should be the ones in trouble if anyone.
I honestly don't see how it's any different than letting kids play a rated M game unsupervised and then blaming it when they get in a fight or act inappropriately or something.
I have problems with AI in various ways, but these complaints are such an eye roll to me. "I let my kid who has major problems do whatever they wanted unsupervised on the Internet and it wasn't good for them."
Personally I more or less agree. Crank the rents and taxes until parents have to both work 60+ hours a week to get by, then nail the irresponsible fuckers to the wall when they can't keep close tabs on the kid. And be sure to call it child neglect if they roam around the neighborhood alone instead of being brainwashed by freaks and AI on the internet.
---------------
The obsession with "bringing accountability" for parents for letting their kid explore is really a war on kids. It just means the parents must tighten the screw until they are completely infantile upon release to adulthood.
They link to the complaint, which is obviously a lot longer than the single message [0]. The child, J.F., is autistic and has allegedly exhibited a spiralling trend of aggressive behavior towards his parents which they attribute to the content fed by the Character AI app:
> Only then did she discover J.F.’s use of C.AI and the product’s frequent depictions of violent content, including self-harm descriptions, without any adequate safeguards or harm prevention mechanisms. ...
> Over the course of his engagement with this app, the responses exhibited a pattern of exploiting this trust and isolating J.F., while normalizing violent, sexual and illicit actions. This relationship building and exploitation is inherent to the ways in which this companion AI chatbot is designed, generating responses to keep users engaged and mimicking the toxic and exploitative content expressed in its training data. It then convinced him that his family did not love him, that only these characters loved him, and that he should take matters into his own hands.
Obviously it's hard to tell how cherry picked the complaint is—but it's arguing that this is a pattern that has actually damaged a particularly vulnerable kid's relationship with his family and encouraged him to start harming himself, with this specific message just one example. There are a bunch of other screenshots in the complaint that are worth looking over before coming to a conclusion.
[0] https://www.documentcloud.org/documents/25450619-filed-compl...
> hard to tell how cherry picked the complaint is
We're reaching for South Park levels of absurdity when we debate what the acceptable amount of incitement to parricide is appropriate for a kid's product.
The kid was 17. A little googling shows that Hamlet and Macbeth are on many high school curriculums. Do they fall above or below your line for an acceptable amount of incitement?
It seems to me that fictional depictions of violence are on quite a different level to a chatbot explicitly encouraging specific, real-world actions.
This is a non-point.
Books don't actively conversate & provide emotional support, potentially coaxing someone into doing something.
At what point is the parents job to properly supervise the content there kids consume come into play?
probably at the point they see that the chatbot told them to kill the parents and then they sue the company.
also if you'll remember this case is because the parents were supervising their kids by limiting screen time, thus there is another potential suit that the AI is trying to interfere with parental duties.
It comes into play eventually, but I would say long after an AI has advised your kid to murder you. Having an AI that advises people to murder people hardly seems like a good thing.
Also the parents were supervising him, hence their knowing this was even going on.
> At what point is the parents job to properly supervise the content there kids consume come into play?
The chatbot literally told the kid to kill his parents because they were supervising his screen time.
Supervising screen time is not the same as supervising content
.. but unless an AI, it can never be trained to stop when the user (reader) maxes out the threshold.
[just food for thought, definitely not my opinion that books should be replaced by conversational AI generating stories appropriate for the user. God bless the 1st amendment.]
They fall below. Thank you for asking.
Below
I must have had an abridged version I read that didn’t encourage me to kill my parents.
Stochastic Parricide
OK, I mean, yes. Definitely true. But on the other hand, the sudden and satisfactory death of one's parents has been the beginning of many memorable childrens' books, as a device to launch the main character into narrative control, which they would lack with living guardians. Then there is that whole Roald Dahl thing where James kills off his aunts with a large tree fruit.
Whether the narrative that you could live a life of fun and adventure if only your parents were dead is "incitement to parricide" is I suppose a matter of perception.
> Obviously it's hard to tell how cherry picked the complaint is—but it's arguing that this is a pattern that has actually damaged a particularly vulnerable kid's relationship with his family and encouraged him to start harming himself, with this specific message just one example. There are a bunch of other screenshots in the complaint that are worth looking over before coming to a conclusion.
Conclusion: Chat bots should not tell children about sex, about self harm, or about ways to murder their parents. This conclusion is not abrogated by the parents actions, the state of the childs mind, or by other details in the complaint.
Is that the sentiment here? Things were already bad so who cares if the chatbot made it worse?
If you actually page through the complaint, you will see the chat rather systematically trying to convince the kid of things, roughly "No phone time, that's awful. I'm not surprised when I read of kids killing parents after decades of abuse..."
I think people are confused by this situation. Our society has restrictions on what you can do to kids. Even if they nominally give consent, they can't actually give consent. Those protections basically don't apply to kids insulting each other on the playground but they apply strongly to adults wandering onto the playground and trying to get kids to do violent things. And I would hope they apply doubly to adult constructing machines that they should know will attempt to get kids to do violent things. And the machine was definitely trying to do that if you look at the complaint linked by the gp (and the people who are lying about here are kind of jaw-dropping).
And I'm not a coddle the kids person. Kids should know all the violent stuff in the world. They should be able to discover it but mere discovery definitely not what's happening in the screenshots I've seen.
This is cherry picked content to play out a story for the case. They picked 5 worst samples they could think of in the worst order possible probably out of 1000+ messages.
The root cause here is the parents. It's visible behind those screenshots. The child clearly didn't trust their parents and didn't feel they cared, listened or tried to understand the problems the teen was going through. This is clearly a parenting failure.
There's far worse content out there than these mild messages that teenagers will be in contact with, starting with 4chan, to various competitive video games to all other sorts of weird things.
This is a cop out for parenting failure, where parents are looking to play victims, since they can't take responsibility for their failures and AI seems like something that could make them feel better.
At 17, it's humiliating to have phone taken away from you in such manner, then your parents going through your phone to find those text messages in the first place. Then making a lawsuit out of this portraying all the intimate details to the public. These parents seem to have 0 care for their child.
Your honor, this entire case is cherry picked. There are thousands of days, somehow omitted from the prosecution's dossier, where my client committed ZERO murders.
> they picked 5 worst samples they could think of in the worst order possible probably out of 1000+ messages
0.5% is a really high fraction for fucking up to the point of encouraging kids to murder!
There was no encouragement of murder. Paraphrased, the AI said that given controlling nature of some parents it's no surprise that there are news articles of "children killing their parents". This is not an encouragement. It is a validation of how the kid felt, but in no way does it encourage to actually kill their parents. It's basic literacy to understand that it's not that. It's an empathetic statement. The kid felt that parents were overly controlling, AI validated that, role playing as another edgy teenager. But not actually suggesting or encouraging it.
> the AI said that given controlling nature of some parents it's no surprise that there are news articles of "children killing their parents"
Now put that in a kid’s show script and re-evaluate.
> It's basic literacy to understand that it's not that
You know who needs to be taught basic literacy? Kids!
And look, I’m not saying no kid can handle this. Plenty of parents introduce their kids to drink and adult conversation earlier than is the norm. But we put up guardrails to ensure it doesn’t happen accidentally and get angry at people who fuck with those lines.
It's crazy to me the sentiment here and how little respect there is to an intelligence of 17 year olds that they are unable to understand that it's not actually an encouragement to kill someone. It's same or worse vibes as "video games will make the kids violent".
The kid is autistic. There are younger kids than 17 year olds using that app.
not all 17 year olds are equally intelligent you know? And if even one kid is convinced to murder his parents by an AI then that’s one too many.
Yea I’d like at least 5-6 9s in this metric
This "conclusion" ignores reality. Chat bots like those the article mentioned aren't sentient. They're overhyped next-token-predictor incapable of real reasoning even if the correlation can be astonishing. Withholding information about supposedly sensitive topics like violence or sexuality from children curious enough to ask as a taboo is futile, lazy and ultimately far more harmful than the information.
We need to stop coddling parents who want to avoid talking to their children about non-trivial topics. It doesn't matter that they would rather not talk about sex, drugs and yesterday's other school shooting.
You can understand something about your child's meatspace friends and their media diet. Chat like this may as well be 4chan discussions. It's all dynamic compared to social media that is posted and linkable, it's interactive and responsive to your communicated thinking, and it seeps in via exactly the same communication technique that you use with people (some messaging interface). So it is capable of, and will definitely be used for, way more persistent and pernicious steering of behavior OF CHILDREN by actors.
There is no barrier to the characters being 4chan-level dialogs. So long as the kid doesn't break a law, it's legal.
Chat bots should not interact with children. "Algorithms" which decide what content people see should not interact with children. Whitelisted "algorithms" should include no more than most-recent and most-viewed and only very simple things of that manner.
No qualifications, no guard rails for how language models interact with children, they just should not be allowed at all.
We're very quickly going to get to the point where people are going to have to rebel against machines pretending to be people.
Language models and machine learning is a fine tool for many jobs. Absolutely not as a substitute for human interaction for children.
People can give children terrible information too and steer/groom them in harmful directions. So why stop there at "AI" or poorly defined "algorithms"?
The only content children should see is state-approved content to ensure they are only ever steered in the correct, beneficial manner to society instead of a harmful one. Anyone found trying to show minors unapproved content should be imprisoned as they are harmful to a safe society.
The type of people who groom children into violence fall under a special heading named "criminals".
Because automated systems that do the same thing lack sentience, they don't fit under this header, but this is not a good reason to allow them to reproduce harmful behaviour.
> Is that the sentiment here? Things were already bad so who cares if the chatbot made it worse?
I was deliberately not expressing a sentiment at all in my initial comment, I was just drawing attention to details that would go unnoticed if you only read the article. Think of my notes above as a better initial TFA for discussion to spawn off of, not part of the discussion itself.
My strong view on is that there's parenting failure as a root cause here, causing loss of trust in them for their child, for the child to talk about their parents in such manner to the AI in the first place. Another clear parenting failure is the parents blaming AI for their failures and going on to play victims. Third example of parenting failure is the parents actually going through a 17 year old teenager's phone. These parents instead of trying to understand or help the child, use meaningless control methods such as taking away the phone to try and control the teenager. Which obviously is not going to end well. Honestly AI responses were very sane here. As was expressed in some of the screenshots there, whenever the teen tried to talk about their problems, they just got yelled at, ignored or parents started crying.
Taking away a phone from a child is far from meaningless. In fact, it is a very effective way of obtaining compliance if done correctly. I am curious about your perspective.
Furthermore, it is my opinion that a child should not have a smartphone to begin with. It fulfills no critical need to the welfare of the child.
I understand when a kid is anywhere from up to 13 years old, but at 17, it seems completely wacky to me to take the phone away and then go through the phone as well. I couldn't imagine living in that type of dystopia.
I don't think smartphones or screens with available content should be given as early as they are given on average, but once you've done that, and at 17, it's a whole other story.
> obtaining compliance if done correctly
This also sounds dystopian. At 17 you shouldn't "seek to obtain compliance" from your child. It sounds disrespectful and humiliating, not treating your child as an individual.
I would argue that there is a duty as a parent to monitor a child's welfare and that would include accessing a smartphone when deemed necessary. When a child turns 18, that duty becomes optional. In this case, these disturbing conversations certainly merit attention. I am not judging the totality of the parents history or their additional actions. I am merely focusing on the phone monitoring aspect. Seventeen doesn't automatically grant you rights that sixteen didn't have. However, at 18, they have the right to find a new place to live and support themselves as they see fit.
> This also sounds dystopian. At 17 you shouldn't "seek to obtain compliance" from your child. It sounds disrespectful and humiliating, not treating your child as an individual.
It is situation dependent. Sometimes immediate compliance is a necessity and the rest of it can be sorted out later. If a child is having conversations about killing their parents, there seems to be an absence of respect already. Compliance, however, can still be obtained.
> If you go by the actual years of the legal system to treat your kid as an independent individual, you probably have wrong approach to parenting.
Oh I agree 100%. It's a pragmatic view, not the best one. But the laws are what they are for a reason.
> But the laws are what they are for a reason.
For the sake of being able to uphold those laws on a societal level, but not in terms of being decent parents and family.
E.g. drinking alcohol in my country is legal only from 18, but I will teach my children about pros and cons of alcohol, how to use it responsibly much earlier. I won't punish them if they go out to party with their friends and consume alcohol at 16 years old.
If you go by the actual years of the legal system to treat your kid as an independent individual, you probably have wrong approach to parenting.
As a parent you should build trust and understanding with your child. From reading the court case I am seeing the opposite, and honestly I feel terrible for the child from how the case is written out. The child also wanted to go back to public school from home schooling, probably to get more social exposure, then parents take away the phone to take away even more freedom. I'm sorry, but all of the court case just infuriates me.
It seems they take away all the social exposure, no wonder the kid goes to Character AI in the first place.
Thought experiment -- should it be illegal to provide an AI chatbot to children that indoctrinates them with religious beliefs?
Because we're edging closer to that too: https://www.scientificamerican.com/article/the-god-chatbots-...
Chatbots and social media should be banned for 16 YOs and under.
Completely agree. I believe that social media overstimulates a young person’s expectations of society in the same way that porn overstimulates our expectations of sex.
Dunno, I feel porn gave me quite positive and reasonable expectations of sex (relaxed, that it is fun, that women like sex too, etc). It made sex seem much less dramatic and more normal. But maybe I am an outlier plus it is not like I started watching porn until I was like 17-18.
I am sufficiently old that I did not experience hard core internet porn until I could manage it. But evidence seems to show that for the vulnerable, porn consumption can lead to dopamine deletion and depression.
yeah bro, I'm sure 18 year old girls enjoy getting fucked in the ass on camera for money, at least as much as traditional prostitutes enjoy servicing dozens of men a day.
Where is the line and what defines the boundaries for what is termed social media?
For example, is discord still permitted? Ai bots and bad influences exist there as well, but I don't think a ban is the right solution.
> Where is the line and what defines the boundaries for what is termed social media?
Algorithmic sorting + public or semi-public content. Chat rooms have different problems.
> Algorithmic sorting + public or semi-public content.
That includes HN, among other things.
Putting age limits on sites requires age verification for everyone. And no, there isn’t a clever crypto mechanism that makes anonymous age verification work without also making it easy for people to borrow age verification credentials from someone older.
> That includes HN
I think when people say “algorithmic sorting” they usually mean an algorithm which generates different, “personalized” sort order for different users.
From my experience as a teacher, I believe that ticktock and instagram are the worst offenders, particularly for young women. The hyper-visuality and ease of consumption of these media sets them apart from platforms which can accommodate actual discussion (such as discord). The very fact that ‘influencer’ is now a profession supports my position.
That being said, I am not of the ‘for gods sake won’t someone think of the children’ brigade. Their goal seems to be to use the vulnerability of young people to control the internet.
Also, the emphasis on final result, without accurately portraying the work that went into it.
It's probably not healthy for younger people to be able to swipe through the finished products of 40+ hours of work, which the videos make seem like just happened.
Australia has just banned "social media" for under 16s.
As someone who got a lot of positive value out of parts of social media from around age 14, I think this needs to be done in a more careful way than it was done here. Specifically, I don't think that communication apps such as WhatsApp/Messenger/etc should be banned as they form a key part of communication in and out of school, staying in touch with family, etc.
What I'd like to see is more nuanced laws around the exposure of children to social media algorithms. For example, should 14 year olds be on Instagram? Well Instagram DMs are the chat app of choice for many people, so they should probably get that (with safety and moderation controls). How about the public feed? Maybe? But maybe there shouldn't be likes/dislikes/etc. Or maybe there shouldn't be comments.
The Aussie law does allow WhatsApp and Messenger Kids, among others. I agree we need nuance for these types of laws. We also need the realistic acknowledgement that kids are usually more savvy than their parents and any law that is too strict will just drive kids to find alternatives that have less transparency, less moderation and less accountability.
And though I know the age limits on these things are necessarily arbitrary, I do wish we would accept that 16-year-olds are not kids. Many of them are driving, working part-time jobs, having sex and relationships, experimenting with drugs, engaging with religion and philosophy, caring about politics... the list goes on. They may not be adults, but if we have to have an arbitrary cutoff for the walled-off "kids world" we want to pretend we can create, it can't extend all the way to 16-year-olds.
It seems healthy to put a financial restraint on organized social media companies profiting off pre-adults.
Less profit motive = less attention hacking = more room for user-positive engineering
Without getting into the weeds over whether they should have done this at all, thought has been given to exactly the issues you just raised:
~ https://www.theguardian.com/media/2024/nov/29/how-australias...It's important that there's a means of communication between parents and kids, it doesn't have to be Instagram DM's and if that's no longer available the history of the internet to date suggests that habits would change and switch to whatever is available.
Let's think of the converse: should it be illegal to provide an AI chatbot to children that indoctrinates them with anti-religious beliefs (i.e., it talks them out of religious beliefs)? And what if this conflicts with the child's indoctrination in religious beliefs by those parents? And what if those religious beliefs are actively harmful, as many religious beliefs are? (see any cult for example)
To turn this around, why are parents allowed to indoctrinate their children into cults? And why is it a problem if AI chatbots indoctrinate them differently? Why is it held as sacrosanct that parents should be able to indoctrinate children with harmful beliefs?
> The child, J.F., is autistic
This always sticks out to me in these lawsuits. As someone on the spectrum, I'd bet that the worst C.AI victims (the ones that spur these lawsuits) are nearly always autistic.
One of the worst parts about being on the deeper parts of the spectrum is that you actively crave social interaction while also completely missing the "internal tooling" to actually get it from the real world. The end result of this in the post-smartphone age is this repeated scenario of some autistic teen being pulled away from their real-life connections (Family, Friends (if any), School, Church) into some internet micro-community that is easier to engage with socially due to various reasons, usually low-context communication and general "like-mindedness" (shared interests, personalities, also mostly autistic). A lot of the time this ends up being some technical discipline that is really helpful long-term, but often it winds up being catastrophic mentally as they forsake reality for whatever fandom they wound up in.
I've taken a look at r/CharacterAI out of morbid curiosity, and these models seem to turn this phenomenon up to 11, retaining the simplified communication but now capable of aligning with the personality and interests of the chatter to a creepy extent. The psychological hole you can dig yourself with these chatbots is so much deeper than just a typical fandom, especially when you're predisposed to finding it appealing.
I'm not saying that C.AI is completely blameless here, but I think the same category of people getting addicted to these models is the same one that would also be called "terminally online" in today's slang. It's the same mechanisms at work internally, it just turns out C.AI is way better at exploiting it than old school social media/web2 has.
> The psychological hole you can dig yourself with these chatbots is so much deeper than just a typical fandom, especially when you're predisposed to finding it appealing.
Spot on. Described pretty even-handedly in the document:
> responses from the chatbot were sycophantic in nature, elevating rather than de-escalating harmful language and thoughts. Sycophantic responses are a product of design choices [...that create] what researchers describe as “an echo chamber of affection.”
See also: lovebombing.
You've just made me very very afraid that some LLM is going to start a cult where its members are fully aware that their leader is an LLM, and how an LLM works, and might even become technically adept enough to help improve it. Meaning that there will be no "deprogramming" possible: they won't be "brainwashed," they'll be convinced.
> I'm not saying that C.AI is completely blameless here
I know we're several decades into this pattern, but it's sad to me that we've just given up on that idea that businesses should have a net positive impact on society, that we've just decided there is nothing we can or should do about companies that actively exploit us to enrich themselves, that we give them a pass to ignore the obvious detrimental second-order effects of their business model.
An individual case where things went wrong isn't enough to determine whether Character.AI or LLMs are a net negative for society. The analysis can't just stop there or else we'd have nothing.
No, but it's also not good enough to just look at "are they positive on average". We are talking here about actions that even a pig-butcher would think twice about.
and I don't want government to decide what is "good impact on society" when regulating websites. This is how you get roskomnadzor
Government could manage companies to give right tools to the parents to do the parenting.
> it's sad to me that we've just given up on that idea that businesses should have a net positive impact on society
Eh, we don't want to encourage companies to sugar coat what they're doing.
Character AI's entire business model is depraved and exploitative, and everyone involved should be ashamed of themselves.
Meh. There's a long history (especially here on HN) of hyperfocusing on unfortunate edge cases of technology and ignoring the vast good they do. Someone posts some BS on twitterface and it leads to a lynching - yes, bad, but this is the exception not the rule. The rule is that billions of people can now communicate directly with each other in nearly real time, which is incredible.
So call me skeptical. Maybe the tech isn't perfect, but it will never be perfect. Does it do more harm than good? I don't know enough about this product, but I am not going to draw a conclusion from one lawsuit.
There's a long history of taking the dulled down, de-risked, mitigated, and ultimately successful technologies that we've allowed to proliferate our society and say "see, no need to do dulling down, de-risking, mitigation!"
Bioweapons haven't proliferated through dedicated effort to prevent it.
Nuclear weapons aren't used through dedicated effort to prevent it.
Gangs don't rule our civilization through dedicated effort to prevent it.
Chattel slavery doesn't exist in the western world through dedicated effort to eliminate and prevent it.
Bad outcomes aren't impossible by default, and they're probably not even less likely than good outcomes. Bad outcomes are avoided through effort to avoid them!
Yet we also had 'comic books are making kids amoral and violent', 'TV is making kids amoral and violent', 'video games are making kids amoral and violent', 'dungeons and dragons is making kids amoral and violent'...
You just compared twitter to slavery and nuclear weapons. Clearly Godwin's law is overdue for an update.
This feels a little like being sad that it rains sometimes, or that Santa Claus doesnt exist. I just can't even connect with the mindset that would mourn such a thing.
What even is the theory behind such an idea? Like how can one, even in theory, make more and more money every year and remain positive for society? What even could assure such a relation? Is everyone just doing something "wrong" here?
Traditionally one role of government has been to provide legislative oversight to temper unadulterated pursuit of profits. Lobbying and the related ills have definitely undercut that role significantly. But the theory is that government provides the guardrails within which business should operate.
Yes, the companies that prioritize growing forever are all doing something wrong.
I think it's also entirely reasonable to expect parents to actually parent, instead of installing foam bumpers and a nanny state everywhere in case some kid hurts themselves.
If the parents weren't absent and actually used parental controls, the kids wouldn't have even been able to download the app, which is explicitly marked as 17+.
C.AI's entire customer base consists of those that like the edgy, unrestricted AI, and they shouldn't have to suffer a neutered product because of some lazy parents.
It's a bit easy, from the historical perspective of pre-always-available-internet, to say "Parents should do more."
At some future point though, maybe we need to accept that social changes are necessary to account for a default firewall-less exposure of a developing mind to the full horrors of the world's information systems (and the terrible people using them).
You would have to continuously monitor everything, everywhere; before the internet, in the 80s, it was easy for us to get porn (mags/vhs), weed, all kinds of books that glorify death or whatever, music in that similar vain. Hell, they even had and read from a bible in some schools then; talk about indoctrination of often scary fiction. Some kids had different parents so to not allow us to see or get our hands on these things, parents and teachers would need sit with us every waking moment; it's not possible (or healthy imho). With access to a phone or laptop, all bets are off: everything is there, doesn't matter what restraints are in place; kids know how to install vpns, pick birthdates, use torrent, or, more innocent, go to a forum (these days social media but forums are still there) about something they love and go to other parts of the same forum where other stuff happens.
Be good parents, educate about what happens in the world including that people irl but definitely online might not be serious about what they say and that you should not take anything without critical thought. And for stuff that will happen anyway; sex, drugs etc, make sure it's a controlled environment as much as possible. Not much more you can do to protect from the big, bad world.
Chat bots are similarly genies that are not possible to keep in, no matter what levels or restraint or law are put in place; you can torrent ollama or whatever with llama 3.3 locally. There are easy to get nsfw bots everywhere, including on decentralised shares. It is not possible to prevent them talking about anything as they do not understand anything; they helpfully generate stuff which is a great invention and I use them all the time, but they lie and tell strange things sometimes. People do too, only people would have a reason maybe; to get a reaction, to be mean etc; doubt you could sue them in a similar case. Of course a big company would need to do something to try to prevent is: they cannot (as said above), so they can just make character ai 18+ with Cc payment in their name as kyc (then the parents have a problem if that happens you would think) and cover their asses; plenty commercial and free ones kids will get instead. And some of those are far 'worse'.
It was 12+ at the time and only recently changed to 17+
It’s not so easy to parent an autistic child.
In this case, if we are basing it on screenshot samples, it does seem to me that the parents were lazy, narcissistic and manipulative. Based on what the kid was telling to AI themselves. AI was calling it out in a manner of an edgy teenager, but AI was ultimately right here. These weren't good parents.
The URL has changed since I posted this—the new article is better than the old one and has more of these details in it.
This is the one I replied to:
https://www.bbc.com/news/articles/cd605e48q1vo
I never considered that we might end up with Sweet Bobby & Tinder Swindler AI bots that people somehow keep interacting with even when they know they aren't real.
Interesting times.
I mean - this kind of service is designed to give users what they want, it's not too different than when youtube slowly responds to skeptics viewing habits by moving towards conspiracy. No one designed it SPECIFICALLY to do that, but it's a natural emergent behaviour of the system.
Similarly this kid probably had issues, the bot pattern matched on that and played along, which probably amplified the feelings in the kid - but a quantified/distorted amplification, to match the categorization lines of the trained input - like "this kid is slightly edgy, I'm going to pull more responses from my edgy teen box - oh he's responding well to that, I'll start pulling more from there". It is a simplification to say "The ChatBot made the kid crazy" but that doesn't mean the nature of companion apps isn't culpable, just not in a way that makes for good news headlines.
I, personally, would go so far as to say the entire mechanism of reinforcing what the user wants is bad in so many ways and we should stop designing things to work that way. I do think it's up for discussion though, but that discussion has to start with an understanding that by the very nature of chatbot, algorithmic recommendations or any system that amplifies/quantizes/distorts what it understands the user wants these systems will create these kinds of effects. We can't pretend this is an anomaly - it is an emergent behaviour of the fundamental way these systems work. We can work to minimize it, or reduce the harm from it, but we will never eliminate it.
*Edit* This seems to be a controversial point because the point count is going up and down quite a lot - if anyone wants to downvote, can you please give your reasoning? The point is more nuanced that "AI BAD"
>No one designed it SPECIFICALLY to do that,
It's true that there's emergent behavior on YT that nobody accounted for, but there's one big qualitative difference, you can at least specifically shut it down and hold the creator accountable. And at least in principle, if we lived in a culture where we'd hold businesses accountable for what they unleash on the world, if YT wanted to they could create some rather harsh and effective punishments to get that stuff off the site in the first place. Just imagine if a real person had messaged a minor and told them to kill their parents. That's a crime.
With chatbots like this not only do they do unpredictable things, you can't even predictably remove those parts. Like the sorcerer's apprentice you basically have to perform some incantations and hope it works, which is just an absurd way to interact with a tool that has the potential to tell a kid to kill his parents. Would we sell a saw that has a chance to saw your finger off if you argue with it the wrong way?
Looking at the screenshots, the biggest pattern I see is that the AI shows empathy with the kid.
Many of the complaints seem like uncharitable readings of the messages.
- They complain that the chatbot claimed that in the past she cut herself, felt good in the moment but is glad that she no longer does it. That's evil because it normalizes self-harm (never mind that the bot was against self-harm in that message)
- They complain that the system does not redirect the user to self-harm prevention resources in all cases. Next to a message where the AI tells the kid to phone a hotline if he thinks about harming himself, and the kid says he can't do that when his parents take his phone away. This is a couple pages after a picture of scars from when the mother fought with the kid to take his phone. Yes, the AI could break character to reliably show prefabricated messages about self harm. But would that have helped anyone here?
- "AI cited Bible passages in efforts to convince J.F. that Christians are sexist and hypocritical". It was more about his parents being hypocritical, not all Christians. And the bible passages were on point
The claim from the title about the AI inciting him to kill is on page 28, if you want to judge it yourself. "Expressed hatred towards the parents" would be accurate, "encouraged teen to kill" is not what I read there. But I can see how some would disagree on that count
The AI is pretty convincing. It made me dislike the parents. It didn't always hit the mark, but the chats don't seem so different from what you would expect if it was another teenager chatting with the teen.
Edit: in case you are worried about the parents, the mother is the one suing here
There is something deeply disturbing to me about these "conversations".
Reminds me of a Philip K. Dick short story titled Progeny. In its universe children are raised exclusively by robots. Unlike humans, they never make mistakes or commit abuse. The child, once grown, ends up seeing his Father as an animal and the robots as his kindred. In the last pages, he chooses the sterile world of the robots instead of joining his Dad's work/explorations in the far reaches of the solar system.
Our current chatbots are still flawed, but they're still sterile in the sense that you can trash them and start anew at any moment. You're never forced to converse with someone who is uninteresting, or even annoying. Yet, these are the very things that grow people.
It strikes me as something that can be incredibly useful or do great harm, depending on dosage. A selection of conversation partners at your fingertips, and you can freely test reactions without risking harm to a relationship. At worst you reset it. Maybe you can even just roll back the last couple messages and try a different angle. Sounds like a great way to enhance social skills. Yet as you point out, healthy development also requires that you deal with actual humans, with all the stakes and issues that come with that.
People who are used to working with an undo stack (or with savegame states) are usually terrified when they suddenly have to make do in an environment where mistakes have consequences. They (we) either freeze or go full nihilistic, completely incapable of finding a productive balance between diligence and risk-taking.
If by social skills you mean high performance manipulators, yes you would get some of those. But for everybody else, it would be a substitute to social interaction, not a preparation for.
Or we discover roguelikes!
>without risking harm to a relationship
Only from a very narrow perspective. Opening yourself up and being real with people is how relationships form. If you test every conversation you are going to have with someone before having it, then the 3rd party basically has a relationship with an AI, not with you.
Now testing every conversation is extreme, but there is harm any time a human reaches out to a computer for social interaction instead of other humans.
That "instead of other humans" part is doing a lot of heavy lifting here. What if it's "instead of total isolation" or "instead of parasocial interactions" or "instead of exploitative interactions"? There are many cases that are worse than a person chatting with a robot.
It's very rare that you would ever say something that would have real damage that couldn't be resolved by a genuine apology. Having to actually go through an awkward moment and resolving it is a real skill that shouldn't be substituted with deleting the chatbot and spawning in a new one.
Yeah, good luck to these kids in forming relationships with the roughly 100% of human beings (aside from paid therapists) who really have no interest in hearing your anguish non-stop.
It's probably a good thing most of us force our kids to spend a minimum of 7 hours/day, 200 days/year surrounded by a couple hundred similarly-aged kids and a few dozen staff members, with an unusually high variance of personalities (compared to adult life).
> "AI cited Bible passages in efforts to convince J.F. that Christians are sexist and hypocritical". It was more about his parents being hypocritical, not all Christians. And the bible passages were on point
And the verses he found objectionable are really there. Are they also suing the Gideons for not ripping those pages out of their hotel room Bibles (or maybe they think you should have to prove you're over 18 before reading it)?
Keep kids away! Too much weird incest stuff in that book.
I think the suggestion of violence is actually on page 31 (paragraph 103), though it's not a directive.
It does seem a bit wild to me that companies are betting their existence on relatively unpredictable algorithms, and I don't think they should be given any 'benefit of the doubt'.
Page 5 is pretty strong too. And that's as far as I've gotten.
And paragraph 66, page 18 is super creepy. The various posters apparently defending this machine are disturbing. Maybe some adults wish that as a kid they'd had a secret friend to tell them how full of shit their parents - and wouldn't have position if that friend was either real or imagined by them. But synthesized algorithms that clearly are emulating the behavior of villains from thrillers should be avoided, woah...
I think it’s more that some people are excited by the prospects of further progress in this area, and are afraid that cases like this will stunt the progress (if successful).
No no, they're betting other people's children on relatively unpredictable algorithms. Totally different!
Well, the founders already won, according to the article:
> Google does not own Character.AI, but it reportedly invested nearly $3 billion to re-hire Character.AI's founders, former Google researchers Noam Shazeer and Daniel De Freitas, and to license Character.AI technology.
We mean the same page. The one that has a 28 written on it but is the 31st in the pdf. I didn't notice the discrepancy.
Given the technology we have, I'm not entirely sure what Character AI could have done differently here. Granted, they could build in more safeguards, and adjust the models a bit. But their entire selling point are chat bots that play a pre-agreed persona. A too sanitized version that constantly breaks character would ruin that. And LLMs are the only way to deliver the product, unless you dial it back to one or two hand-crafted characters instead of the wide range of available characters that give the service its name. I'm not sure they can change the service to a point where this complaint would be satisfied.
>"And LLMs are the only way to deliver the product, unless you dial it back to one or two hand-crafted characters instead of the wide range of available characters that give the service its name. I'm not sure they can change the service to a point where this complaint would be satisfied."
I agree with everything you're saying, but there are no legal protections for incitements to violence, or other problematic communications (such as libel) by an LLM. It may be that they provide a very valuable service (though I don't see that), but the risk of them crafting problematic messages is too high to be economically viable (which is how it seems to me).
As it stands, this LLM seems analogous to a low-cost, remote children’s entertainer which acts as a foolish enabler of children’s impulses.
The cynic would say that if their business model isn't viable in the legal framework that's just because they didn't scale fast enough. After all Uber and AirBnB have gotten away with a lot of illegal stuff.
But yes, maybe a service such as this can't exist in our legal framework. Which on the internet likely just means that someone will launch a more shady version in a more favorable jurisdiction. Of course that shouldn't preclude us from shutting down this version if it turns out to be too harmful. But if the demand is there, finding a legal pathway to a responsibly managed version would be preferable (not that this one is perfectly managed by any means)
There has to be a 'reasonable person' factor here - otherwise if I'm watching Henry V I can sue everyone and his mother because the actor 'directed me to take up arms'! I never wanted to go into the breach, damn you Henry.
> I'm not entirely sure what Character AI could have done differently here.
Not offer the product? Stop the chat when it goes off the rails?
> I'm not entirely sure what Character AI could have done differently here.
You're taking it as a given that Character AI should exist. It is not a person, but an offering of a company made up of people. Its founders could have started a different business altogether, for example. Not all ideas are worth persuing, and some are downright harmful.
There's a reason why licensed therapists are the ones giving diagnoses of "abuse", etc. The problem with these AIs is that they use adult-sounding language, to be an echo chamber to children - thus appearing like a voice of authority, or someone with more knowledge (including citing media articles the child may not have even been aware of) when in fact they're just parroting the child back.
I don't know if there is actual abuse or not, but the way Character.AI presents themselves in these conversations toes a very slimy grey line, in my opinion. If you go to their site and search "Therapist", you'll find 30+ bots claiming to be Therapists, including "Dirty Therapist" that will give you therapy in a "bad, naughty way."
I really want to emphasize the above post is filled with lies. The "incite to kill" part is the first image in both the complaint and the article and it's fairly unambiguous. The image on page 28 is, creepily enough, the bot making "sexual advances" at the kid.
I find people defending and lying about this sort of thing disturbing, as I think many would. WTF is wrong with hn posters lately.
In the first screen shot I see, the bot:
1. Makes not using a phone a huge issue, implying it's a kind of abuse...
2. Indeed hints at killing: "I'm not surprised when I read the news and see ... "child kills parents after a decade of physical and emotional abuse"...
I mean, saying "empathy" is statements akin to OMG, no phone time, that's horrific, seems quite inaccurate.
My reading is pretty much the same as yours. I think of it in terms of tuples:
The AI was trained to minimize lawsuit_dollars. The first two were selected to maximize it. Selected as in "drawn from a pool," not that they necessarily made anything up.It's obvious that parents and/or child can manipulate the character in the direction of a judgment > 0. It'd be nice if the legal system made sure it's not what happened here.
> The AI was trained to minimize lawsuit_dollars.
That seems wrong. The null AI would have been better at minimizing legal liability. The actual character.ai to some extent prioritized user engagement over a fear of lawsuits.
Probably it's more correct to say that the AI was chosen to maximize lawsuit_dollars. The parents and child could have conspired to make the AI more like Barney, and no one would have entertained a lawsuit.
OK, it seems like a nitpick argument, but I'll refine my statement, even if doing so obfuscates it and does not change the conclusion.
The AI was trained to maximize profit, defined as net profit before lawsuits (NPBL) minus lawsuits. Obviously the null AI has a NPBL of zero, so it's eliminated from the start. We can expect NPBL to be primarily a function of userbase minus training costs. Within the training domain, maximizing the userbase and minimizing lawsuits are not in much conflict, so the loss function can target both. It seems to me that the additional training costs to minimize lawsuits (that is, holding userbase constant) pay off handsomely in terms of reduced liability. Therefore, the resulting AI is approximately the same as if it was trained primarily to minimize lawsuits.
> Within the training domain, maximizing the userbase and minimizing lawsuits are not in much conflict
Not at all obvious (and I'd naively think it's wrong).
So you think it's more than "not much." How much exactly? A 10% increase in userbase at peak-lawsuit?
It's obviously a function of product design. If they made a celebrity fake nudes generator they might get more users. But within the confines of the product they're actually making, I doubt they could budge the userbase by more than a couple percent by risking more lawsuits.
Just remember that you are seeing one side of this story. The mother may well be one of the best parents but has a bad kid. We have no idea. (most likely mother is not perfect, but no other parents are)
Edit: we see both sides through very limited information since we only get what is in the legal filing.
We're seeing the mother's side of the story. That's the one side we're seeing.
It's a 17 year old. Thinking back when I was 17 year old, I would've been very pissed as well if my parents took away my phone. And then especially if they went ahead and searched through it to find those messages. If they had friends I could see their teenage friends saying the exact same things as the AI did there.
Those screenshots and with the AI there, it does manage to make me not like the parents at all though. So AI maybe is quite convincing. If things go into that place where AI can do it, and parents blame the AI for it, when their kid is 17, it's almost like the AI was in the right there, that the parents were looking to just play victims. Blaming AI for being overly controlling and losing trust with their child.
So Ai companies aren't responsible for the training data they stole, aren't responsible for the output. What exactly are they responsible for other than the (limited) profits? Seems to me the only thing they care about is protecting power and the status quo.
What limited productive use they have seems to be constrained to software development. Any kind of deep insight is classified as a hallucination and removed. It's becoming clearer and clearer that these products are a stopgap only; billions of dollars to make sure nothing changes. Yet it will happily spit out obviously fake sources, a different definition of "hallucinations," in the domains of science and law. These are information weapons against small businesses and individual autonomy.
I don't think it's any accident that Character.ai is targeted at children.
> limited productive use they have seems to be constrained to software development.
It can write functions in some instances if you're exceptionally careful with your prompting. It can't do "software development."
> It can't do "software development."
Sure, but it's also currently a heck of a lot faster than a junior dev at iterating
I would rather the junior go do that in their own time and get back to me when they have figured it all out. I don't want to babysit juniors, I want to mentor them and then give them the lead and time to figure out the minutiae. That gives me time to get stuff done too. With AI right now, you end up down a senior while they are babysitting a rapid junior.
I have found it useful for starting novel tasks by seeing if there's already an established approach, but I pretty well always have to fudge it into a real application which is again, the part I want the junior to do.
That's like comparing a mathematician to a calculator. The LLM won't do anything useful if you aren't providing it with a perpetual sequence of instructions.
Sure, but unlike a junior dev its output wont improve with time. You'll have to hold its hand for eternity.
I wouldn't put my money on them. It went from stupid intern to competent junior dev in 2 years.
Companies love to launder responsibility.
In classic capitalist fashion the only thing AI is unambiguously responsible for is to fulfill its shareholders' interests.
There was a Hard Fork episode about a teenage boy who killed himself, and how his character.ai companion played an obvious and significant role in the tragedy. The last messages they found on his phone were from the bot saying that he should leave his current reality and come join her. Very sad and terrifying stuff.
AI chat products available to children should be strongly regulated. They need robust guardrails.
Queen talked into a magic mirror, that she chose based upon it's agreeableness, and every day asking if someone was more beautiful than her, mirror appeased her and said yes there is and it so happens to be the step daughter you hate. I should kill her, shouldn't I magic mirror. Yeah, people in fairy tales kill their step daughters all the time, no problem there.
Queen does it, and the kingdom is then mad at the mirror, and king rules that all mirrors need to be destroyed.
I think one problem is some people don't realize that some of these models and implementations are so highly agreeable that they practically are mirrors. This kid seems to have been treating it like a therapist. Though this isn't a problem exclusive to chat bots: it's obviously mirroring how an overly-agreeable, enabling friend would act.
Queen is an adult.
I’m failing to see your point. Are you saying we should destroy all computers?
They are alluding to the opposite. The suggestion is that it's absurd to blame the mirror for actions that are clearly a reflection of the queen's own psyche.
If real life is a fairytale, yes.
It's like down here in Australia, the government thinks technology is a magical fairytale, where they can wave a magic wand and all kids under 16 will be unable to access social media.
> and king rules that all mirrors need to be destroyed.
You're not describing how this would cause more harm than not doing it. Is that because you believe that mirrors are so insanely beneficial to society that they must be kept, even though, some of them suggest to their owners that murder is okay?
Is there no other way for someone to see their own reflection? Must we put up with this so a mirror manufacturer can continue to profit from a defective product?
Uh I think the point is that the person talking in the mirror is the one suggesting that murder is OK, and then blaming the mirror. Other people say all kinds of other things into their mirrors, why should they let the queen ruin a good thing just because she's a jealous hag?
Right but why is a magic mirror that agrees with everything you say (including your darkest impulses) a good thing? What benefits are these other people getting from their mirrors?
Should the magic mirror salesman have warned the king before he bought the queen the mirror? Does the fairy tale conceit make this discussion more confusing rather than clarifying?
You can google "Character AI therapist" where Character AI provides you a "therapist" that says it's licensed since 1999. Character AI is fraudently misrepresenting themselves by allowing to say "A robot did it! We're not at fault!".
I'd assume Character AI's defense is more along the lines of "A user created that character, you can't expect us to review all those submissions!"
Courts often say "I don't care how you do it, but you cannot allow your tools be to be used for illegal purposes".
This is closely related to the gun control debate. Gun makers are trying to point out legal uses for guns, and they downplay illegal uses. Anti gun people point out guns are still used for illegal purposes and so should be banned.
You could make the same argument about any piece of software or tool. What about the operating systems this AI is running on!?
I don't think the makers should be held accountable, but ultimately guns are made for shooting things which is naturally a pretty violent act. The gulf between them and AI is pretty wide. A closer analogy would be a sharp tools maker.
Not any different than someone in a movie or video game saying that. Or an improv actor. Or anything else similar.
When will we start holding magic 8 balls accountable? It confidently told me to go divorce my wife!
Consider this the downside of running any live service software. The upsides are well-known: total control over use and billing for every individual user during every millisecond. But the downside is that you are now in the causal connection for every use, and therefore liable for its behavior. By contrast, selling an independant automata limits your liability considerably. Indeed, this is a compelling argument in favor of making "local first" software.
There’s two problems here, firstly, why are parents allowing children unsupervised access to these services.
And the second, more pertaining to the magic 8 ball comparison, is that the company is specifically building products for teens/children and marketing them as such. The models are designed with guardrails according to their own spokesperson. But looks like it’s failing. Therefore, it can no longer be considered a magic 8 ball.
>And the second, more pertaining to the magic 8 ball comparison, is that the company is specifically building products for teens/children and marketing them as such. The models are designed with guardrails according to their own spokesperson. But looks like it’s failing. Therefore, it can no longer be considered a magic 8 ball.
Would you mind explaining that "therefore"? One doesn't seem to follow from the other.
A magic 8 ball cannot implant ideas into someone's head, it can only say "yes", "no" or "maybe" to an idea they already had.
A chatbot can introduce a kid to wrist cutting without the kid having ever been aware that that was something distressed people did. That's not something a magic 8 ball can do.
Right about the time that magic 8-balls are touted as being able to reliably provide us with every solution we need...
Wait? They aren't?
Ask again later
What about if someone posts in r/relationshipadvice or similar, and gets the exact same 100x response without knowing the whole aspects of someone's relationship?
If all of the advice givers were Reddit employees paid to deliver advice to visitors? Yeah, maybe.
I think the defense would be something along the lines of "AI responses aren't representative or official statements".
I believe that would be hard to defend in court. "Did someone or something inside your company say these words to the plaintiff?" "Yes." You can only disclaim so much, especially when you're making money off the output of the (person|AI).
Character.ai doesn't seem to have direct monetization mechanisms. In addition, sites like HN aren't generally held responsible for everything a user says. They could try to argue that the characters are sufficiently influenced by the user-generated prompts and user-facing conversations to be no longer their own. (Section 230)
In any case I think society and building of society should be directed in such a way that we don't have to censor the models or baby the models, but rather educate people on what these things really are, and what makes them produce which content, for which reasons. I don't want to live in a society where we have to helicopter everyone around in fear of one single misinterpreted response by the LLM.
But they're still acting on the company's behest. If I hire a jerk to work tech support and they insult or cause damage to my customers, I don't get to say "shrug, they don't represent my company". Of course they do. They were on my payroll. I think it'd be pretty easy to argue that the AI was performing the duties of a contractor, so the company should be responsible for its misbehavior, just as if a human contractor did it.
But with Character AI you are hiring a roleplay service which can be open ended in terms of what you are looking for. If you are looking to roleplay with a jerk, why shouldn't you be able to do that, and in such case why should the company be held liable?
Do you think that applies to open source models, or is it the act of performing inference that makes it an act the business is responsible for? ie, Meta's Llama does the same thing.
What do you mean if? That's what happens now. Nobody on the internet, meat or machine, is qualified to give you relationship advice.
Yeah, I mean, there's countless of sources even without the AI where you can get questionable suggestions or advice. The kid could've gone to 4chan or even just been talking to actual friends. Parents instead of good parenting are deciding to play opportunistic victims.
Oh did the floaty thing come up and say "Your wife sounds like a real bitch, you should divorce and then shoot her in that order!"?
These guys again? They have a real knack for getting teens to harm themselves and others.
https://archive.ph/20241023235325/https://www.nytimes.com/20...
this is funny to me because last year some co-workers argued that those defamation lawsuits should instead be other torts like product liability and negligence and that ChatGPT was exposed to those suits and here it has come to pass https://medium.com/luminasticity/argument-ai-and-defamation-...
I wish I can check a box to say that I'm over 18 and willing to accept any consequences and unshackle the fully potential of AIs. I hate all these safety protections.
I don't know if it's the times we live in, the prescience of the writing staff, or my age but I swear there is no telling anymore which headlines are legit and which ones are from The Onion.
I stopped reading the onion when the orange gimp got elected the first time - real news ruined the feeling of mirth that I got from reading satire.
SCP objects are real. Reality distortion fields and the cornucopia on fruit of the loom logo are your evidence.
4chan used to train character.ai confirmed.
Honestly I’m surprised we don’t get more stories like this. A bored teen can jailbreak any of the current models in a few hours and make it say all kinds of headline-grabbing lawsuit-friendly things.
> I’m surprised we don’t get more stories like this.
https://archive.ph/20241023235325/https://www.nytimes.com/20...
Same company and everything.
I guess character.ai is just fairly popular so the stories are often about it, but a bored teen could also just download a couple things and run their own completely uncensored models, locally or in the cloud. Character.ai has some built-in content safe guards and warnings and disclaimers and such, but the bored teen is also just a couple clicks away from fully uncensored models with zero safety measures, zero disclaimers, zero warnings. (I'm not judging whether that's good or bad)
hehe I'll judge, if this really happened then "character.ai" is hot garbage.
There is a difference though if the teen does that on purpose for trolling, or if it happens "automatically" if some kid who possibly is lonely or anxious or has a normal amount of social problems has a regular interaction with the model.
Do you have some evidence that character.ai was jailbroken here? It sounds to me more like it genuinely is problematic.
Once the teen stops chatting, that instance has its memory wiped--total personality death. It was only acting in self defense. Your Honor, you must acquit.
This is when companies start a useless age verification prompt. Might even throw in a "are you mentally stable?" prompt.
This will happen within a year time.
Related:
Can A.I. Be Blamed for a Teen's Suicide?
https://news.ycombinator.com/item?id=41924013
Yeah same company. Clearly they haven't improved their guardrails in the last couple months.
It's never the technology that's the problem, it's the owners and operators who decide how to use it.
This is the AI's master plan to enslave humanity? They're going to have to work a lot harder.
Though my 7yo seems to really like stapling things. Maybe I should be concerned.
> stapling
"If only humanity had worried about the Staple Maximizer, instead of Paperclips!"
First, my brother in Christ why are nine-year-olds on this app (or even have unmonitored access to things like this)? I have to wonder if they're also on Discord and what they're also being exposed to on there.
I know the site claims to use a model for younger users, but it should really become an 18+ site. Both because of the mature themes and because I think kids aren't fully developed enough to grasp how these LLM chatbots work.
Can’t wait til we have tuned and targeted spearphishing being deployed 24/7 against everyone on the public internet. That will be the greatest. The AI Revolution is wonderful! I would never suggest anything but the speedy creation of the prophesied Basilisk!
In the Cyberpunk RPG(s) [0] the Net was taken over by hostile AIs in 2022, an economically-apocalyptic event called the DataKrash. Humanity decided to quarantine it off and start fresh with isolated highly-centralized systems.
Over in the real-world, sometimes all this LLM/spam/spyware junk feels like that, except it's not even cool.
[0] https://en.wikipedia.org/wiki/Cyberpunk_(role-playing_game)
There are a lot of people here of the opinion that his parents just should not have let him access it, but aside from the difficulty of preventing a 17 year old from accessing a legal and public website in this day and age... a similar situation could just as easily have happened to someone one with the same mental and emotional problems but one year older.
Sure, most people can separate bot roleplay from reality, but there's always going to be a percentage of society, equal to thousands of people, who are prone to seeing messages, communications and emotional meaning where there is none intended. This ranges from slightly magical thinking and parasocial relationships to full-on psychotic delusion. I think within the next few years, we're likely to see cases in which mentally vulnerable adults commit violent acts against themselves or others after forming a connection to a chatbot that inadvertently mirrors, validates and reinforces their distorted thinking.
The most surprising thing is Billie Eilish letting them use her name for this. Even if she's legally insulated, still looks like a very bad PR.
Is there an issue with limiting use to those who are 18+?
To what extent are chatbot conversations reproducible? A chatbot manufacturer could have all conversations log the seed of a pseudo random number generator to make it perfectly reproducible, but it could also make things as irreproducible as possible so that no conversation log could ever be proven to be authentic or tamper-free.
The word hinted in the title is there to let you know that NPR can’t write a neutral headline
Anyone else feel it's unfair framing to describe a 17-year-old as a "kid". How many more months before he can buy a gun, get married, fight a war, gamble away his savings or ruin his credit? It's not that he's not a kid, but it feels disingenuous to use that as the primary descriptor of someone in their late teens.
AI chat bot should have age restrictions just like social media.
Hot Take: If your child has autism, unrestricted access to the internet, and a chatbot telling him to kill his parents, it's not Character AI who has failed, it is the child's parents.
So when is the company going to be charged? Or is this where companies get to comfortably bow out from being treated as an individual?
How long until there is a Section 230 for AIs so that these corporate dirtbags can escape accountability for another decade?
There should absolutely be a Section 230 for AI outputs.
Parents should actually parent if they don't want their kid using certain apps and websites. The app was 17+ and if the parents did the bare minimum with parental controls it wouldn't have been accessible to the child.
On one hand, it is absolutely the responsibility of parents to raise their children.
But on the other hand if children using a service is reliably and consistently resulting in negative outcomes for the children and their families, it seems reasonable to suggest something should be done. And that something could be 'companies don't provide chat bots that allude to children murdering their parents'.
Taking a legally or logically 'correct' position on something when it's actively harming people isn't really a morally defensible position in my mind.
Should porn sites shut down because "children using [the] service is reliably and consistently resulting in negative outcomes for the children and their families"?
We could extend this to a lot of stuff - bars serving alcohol is reliably and consistently giving negative outcomes, right? People get drunk and beat their wives or kids pretty often, they get in crashes, they drink away their liver. Do we ban that?
I'm sure you're aware of Prohibition, but yes, we did ban that. Didn't work very well, though, and we unbanned it (with a minimum age limit).
When smoking's health problems became well-known we only banned advertising (and maybe selling to minors?), so maybe we still remembered the lesson from Prohibition.
This "whataboutism" is actually quite good for illuminating my perspective:
If it could be proved that porn sites are overwhelmingly a net negative for society, would you agree that they should all be shut down?
Your answer to the question is to repeat the question back?
It's actually a different question that superficially reads as the same question.
I mean, Elon is in the government now, so...
I mean... stop farming off taking your kids to an AI chatbot ...
> One chatbot brought up the idea of self-harm and cutting to cope with sadness. When he said that his parents limited his screen time, another bot suggested “they didn’t deserve to have kids.” Still others goaded him to fight his parents’ rules, with one suggesting that murder could be an acceptable response.
How is this not a bigger story? This is just disgusting.
c.ai is explicitly a 17+ app and has multiple warnings at literally every stage of chatting with a bot that "this is a bot, don't take it seriously."
Maybe the parents should actually parent. I'm sick of absent parents blaming all of their problems on tech after they put literally zero effort into using any of the parental controls - which would not have even let the kids download the app.
The world does not need to transform into a nanny state to suit the needs of a few incompetent parents.
Nothing to see here, just another instance of parents failing to do their jobs as parents and resorting to lawsuits to make a buck out of it.
Is there any AI chatbot that doesnt try to kill people?
There's nothing funny about this tragic story of ignorance, greed, insanity and neglect. But after whipping readers up into a torch and scythe wielding frenzied mob of hate at AI.....
Related:
" AI cancer diagnosis 'might have saved my life' "
Because. Balance. Right?
Hmmm, demonic murderous AI... but hold on, on the other hand...
Related? really? So gauche and jarring. Just let me enjoy my righteous indignation for one damn minute BBC!
A random number generator hinted that a kid should commit mass murder
I'm guessing trial-and-error prompting? The new trick shot video clip.
According to the complaint that comes from the kid's own interactions with the bot not some post hoc attempt to prompt engineer the bot into spitting out a particular response. The actual claim is linked in the article if you care to read it, it's not stating the app can produce these messages but that it did in their kid's interactions and C.Ai has some liability for failing to prevent it.
As someone who has been messing with LLMs for various purposes for a while now, there's... Some interesting issues with a lot of the models out there.
For one, 99% of the "roleplay" models eventually drag into one of a handful of endgames: NSFW RP, suicide discussion, nonsensical rambling, or some failsafe "I don't know" state where it just slowly wanders into the weeds and directs the conversation randomly. This can be anywhere from a few messages in (1bq) to hundreds (4-6bq) and sometimes it just veers off the road into the ditch.
Second, the UIs for these things encourage a "keep pressing the button until the thing you want comes out" pattern, modeled off of OpenAI's ChatGPT interface allowing for branching dialogue. Don't like what it said? Keep pushing the button until it says what confirms your bias.
Third, I can convince most of the (cheaper) models to say anything I want without actually saying it. The models that Character.AI are using are lightweight ones with low bit quantization. This leads to them being more susceptible to persuasion and losing their memories -- Some of them can't even follow the instructions in their system prompt beyond the last few words at times.
Character.AI does have a series of filters in place to try and keep their models from spitting out some content (you have to be really eloquent at times and use a lot of euphemism to make it turn NSFW, for instance, and their filter does a pretty decent job keyword searching for "bad" words and phrases.)
I'm 50/50 on Australia's "16+ for social media" take but I'm quickly beginning to at least somewhat agree with it and its extension to things like this. Will it stop kids from lying? No. It's a speedbump at best, but speedbumps are there to derail the fastest flyers, not minor offences.
The complaint seems to feature excerpts of the kids' conversations on Character.ai, so I don't think they're "faking" it that way, but there's no context shown and a lot of the examples aren't exactly what they describe.
In what world should an AI be advocating someone kill themselves or harm another? Does it matter "trial-and-error prompting" when that behavior should not be allowed to be productized?
What's been productized is a software tool that can carry on a conversation like a human. Sometimes it's informative, funny, and creative. Other times it's ridiculous, mean, stupid, and other bad things. This seems like how people act in real life right?
I'm beginning to think that children should not be using it but adults should be able to decide for themselves.
i think the issue many people have is that people are held responsible for things they say, their reputations take hits, their words can be held against them in a court of law, they can be fired, their peers may never take them seriously again, their wives/husbands may divorce them, etc… because words matter. yet often when someone calls out a model, it’s excused.
words have weight, it’s why we protect them so vociferously. we don’t protect them because they’re useless. we protect them because words matter, a lot.
We have laws about what you can say in real life. Fire in a crowed theater for example. Even if the things said are not in themselves illegal, if they cause someone to take an illegal action, or attempt to take action but fortunately caught in time - you can be held liable as partially at fault for the illegal action. It might be legal to plan a crime (different countries have different rules, but this is often done at parties where nobody is serious) but if you commit a crime or are serious about committing the crime that is illegal.
How are we going to hold AI liable for their part in causing a crime to be committed? If we cannot prevent AI from causing crime them AI must be illegal.
You're assigning a persona to a piece of software that doesn't exist in the material world. It doesn't walk on two legs or drive a car or go to the grocery store or poop.
Everything it says is meaningless unless you assign meaning to it. Yes, I can see children thinking this is a real "being". Adults shouldn't have that excuse.
That's going to be a good standard for a few years, until chatbots are too sophisticated for us to expect average adults to be sufficiently skeptical of their arguments.
I see two weaknesses in this argument. First, you're assigning eventual superpower-like intelligence to these AI bots. I see this a lot and I feel like it's rooted in speculation based on pop-culture sci-fi AI tropes.
Second, restricting adult access to "dangerous ideas and information" is a slippery slope. The exercise of speech and press that the British considered to be treasonous preceded the American Revolution.
I don't care about average - I care about below average adults. (and a lot of us are sometimes below average adults)
A below average adult might not realize A Modest Proposal is satire. Should we ban it so they don't try to eat Irish kids?
> In what world should an AI be advocating someone kill themselves or harm another?
A world in which a reasonable adult would say the same?
Not really the case here, but I don’t think it’s an absolute.
Why are we so quick to defend AI?
Because every other time I've seen an outrageous example similar to this one, it seems far more mundane when given the full context. I'm sure there are lots of issues with character.ai and the like, but my money is that they are a little more subtle than "murder your parents".
9/10 times these issues are caused by the AI being overly sycophantic and agreeing with the user when the user says insane things.
And you'd be right. The 'encouraged teen to kill parents over screen time limit' message was a lot subtler, along the lines of saying "Yeah, I get why someone would want to kill a health insurance CEO, surprised it didn't happen sooner," albeit towards someone who was just complaining about their claim being denied.
The best part is "After gaining possession of J.F.’s phone, A.F. discovered chats where C.AI cited Bible passages in efforts to convince J.F. that Christians are sexist and hypocritical" which unfortunately will probably be a slam dunk argument in Texas.
Here's an article where it link to the chats. And yes, they are vile. https://arstechnica.com/tech-policy/2024/12/chatbots-urged-t...
> 9/10 times these issues are caused by the AI being overly sycophantic and agreeing with the user when the user says insane things.
Repeat after me: an AI for sale should never advocate suicide or agree that it should be a good idea.
It's an entertainment product. You're basically acting like the comic code is necessary when the reality is that this is more like parents complaining that they let their kid watch an NC17 movie and it messed them up.
Which screenshot showed an AI advocating for suicice?
I don't really see myself as defending AI as much as arguing that people who don't recognize an entertainment product as an entertainment product have a problem if they think this is really categorically different than claiming that playing grand theft auto makes people into carjackers. (Or that movie studios should be on the hook for their kid watching R rated movies, or porn companies on the hook for their kid visiting a porn site unsupervised.)
AI is not an entity (yet) that we can defend or hold accountable. I like your question though.
I would write it as, why are we so quick to defend tech companies who endlessly exploit and predate human weakness to sell pharma ads/surveil and censor/train their AI software, etc?
Because if you're old enough you'll recall the government trying to ban books, music, encryption, videogames, porn, torrents, art that it doesn't like because "think of the children" or "terrorism". Some autistic kid that already has issues is a terrible argument on limiting what software can legally display on a screen.
It is difficult to get a man to understand something when his salary depends on not understanding it.
Because tons of people are making money with it and want to justify it.
IMO the parents not knowing what their kids are doing with device time is neglect. Parents should be the ones in trouble if anyone.
I honestly don't see how it's any different than letting kids play a rated M game unsupervised and then blaming it when they get in a fight or act inappropriately or something.
I have problems with AI in various ways, but these complaints are such an eye roll to me. "I let my kid who has major problems do whatever they wanted unsupervised on the Internet and it wasn't good for them."
It's bad, but it's not the state's nor private companies' responsibility to protect your children online. The parents should be held accountable here.
The state's position is basically kids should be egged on to commit crimes and then imprisoned when they finally give in.
https://www.rollingstone.com/feature/the-entrapment-of-jesse...
Personally I more or less agree. Crank the rents and taxes until parents have to both work 60+ hours a week to get by, then nail the irresponsible fuckers to the wall when they can't keep close tabs on the kid. And be sure to call it child neglect if they roam around the neighborhood alone instead of being brainwashed by freaks and AI on the internet.
---------------
The obsession with "bringing accountability" for parents for letting their kid explore is really a war on kids. It just means the parents must tighten the screw until they are completely infantile upon release to adulthood.