Character.ai to bar children under 18 from using its chatbots

(nytimes.com)

82 points | by geox 17 hours ago

81 comments

  • thw_9a83c 13 hours ago

    "Going forward, it will use technology to detect underage users based on conversations and interactions on the platform, as well as information from any connected social media accounts"

    Something tells me that this ain't gonna work. Kids and teens are more inventive than they probably realize.

    • gs17 12 hours ago

      I don't think the "use technology to detect underage users" approach has ever worked well for its stated intent (it works okay for being able to say you're "doing something" about the problem while not losing as many users), but it's slightly better than mandatory ID for everyone.

      • mothballed 12 hours ago

        It's just a way to mitigate being sued when the lawsuits inevitably pour in claiming that the harmful AI-assisted decisions totally weren't the shared fault of the parents, other environmental factors, and/or unlucky genetics.

        • SoftTalker 10 hours ago

          And certainly not the fault of the always conscientious tech companies who are actually the ones running the services.

        • wat10000 11 hours ago

          Right, the stated intent may be to block underage users, but the actual intent is to Do Something so that you can't be accused of Not Doing Something.

    • makeitdouble 5 hours ago

      This is not in the spirit of this law, and will probably irk some, but...

      If a kid thinks like an adult, behaves like an adult and can't be distinguished from an adult by their online presence, let them use the chatbot. On the other hand, I wish they'd flag immature adults as kids as well.

      18 is an arbitrary number, and if we have more appropriate ways to judge if someone is ready or not (assuming their check is worth its salt), it should be fine to defer to that.

      It's not like they're going to a bar to do tequila shots or scam retirees for insurance money.

    • zzo38computer 10 hours ago

      I would expect that "based on conversations and interactions on the platform" would result in both false positives and false negatives, and that "information from any connected social media accounts" might cause additional problems (possibly including some people being unable to get in at all, or being able to get in once and then losing access for reasons that are never explained). (It won't affect me because I do not use it, but it might affect someone who does try to use it.)
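      Any score-threshold detector of this kind inherits that tradeoff mechanically: tighten the threshold and more adults get flagged, loosen it and more minors slip through. A toy sketch, with all scores and thresholds invented for illustration:

```python
# Toy illustration: any score-based "age detector" trades false positives
# (adults flagged as minors) against false negatives (minors passing as
# adults), depending on where the threshold sits. All numbers invented.

def error_rates(adult_scores, minor_scores, threshold):
    """Scores are hypothetical 'minor-likelihood' values in [0, 1]."""
    fp = sum(s >= threshold for s in adult_scores) / len(adult_scores)
    fn = sum(s < threshold for s in minor_scores) / len(minor_scores)
    return fp, fn

adults = [0.1, 0.2, 0.35, 0.6]   # one adult who writes like a teen
minors = [0.4, 0.7, 0.8, 0.15]   # one teen who writes like an adult

strict_fp, strict_fn = error_rates(adults, minors, threshold=0.3)
lax_fp, lax_fn = error_rates(adults, minors, threshold=0.7)
# A stricter threshold catches more minors but flags more adults, and
# vice versa; no threshold eliminates both error types at once.
```

      Whatever model sits behind the detection, it ultimately reduces to a decision like this, so both failure modes are unavoidable.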

    • alyxya 12 hours ago

      Deterring kids is still valuable. A lot of them won’t bother with figuring out how to get around the automated detection.

      • dylan604 11 hours ago

        Deterring kids just makes the kids that much more determined. What could possibly be so bad that the adults don't want the kids to use it? Let's find out...

        • mcmoor 4 hours ago

          Sometimes that's what we want! Especially if that determination results in them training their inventive minds to hack. Many of us, 20 years ago, learnt to navigate our way around networks precisely because of all the built-in restrictions and obfuscation.

          Although it'd still be lame if it's so easy to break that someone can share the key, and then the kids don't need to learn anything.

        • ambicapter 10 hours ago

          It makes _some_ kids more determined, most kids probably don't care that much.

        • gilleain 11 hours ago

          As the Simpsons quote goes: "Oh Ralphie, why are you always so curious about Daddy's Forbidden Cupboard of Mysteries? ..."

    • BoorishBears 12 hours ago

      They don't care: if they put in robust sounding guards and openly state "chatbots are not for kids", they can point to them in court if someone gets around it

      • thw_9a83c 12 hours ago

        ...could be the whole point. They want to make the user the indemnifying party, basically getting rid of any responsibility.

        • vacuity 11 hours ago

          To be fair, what do you suppose they should do instead? ID isn't popular at all. I don't think the profit motive here is the same as Instagram's, with the latter putting up a front about child safety but making big money from children.

          • thw_9a83c 10 hours ago

            I agree that having addicted kids on Character.AI is probably less lucrative than having them on Instagram or TikTok.

            In any case, there is a general problem on the internet regarding how to allow people to remain reasonably anonymous while keeping children out of certain services.

          • BoorishBears 9 hours ago

            They're gearing up to do ID too (added Persona mentions to their TOS)

            The additional context here is that Google acquired them to get back Noam Shazeer and took some other members of their technical staff.

            So the current company is pretty much shambling along after having served its purpose to all stakeholders, and Google probably doesn't really care more than avoiding any sort of liability.

    • add-sub-mul-div 12 hours ago

      A model would also have to account for the emotionally stunted adults using an AI bf/gf service.

      • gs17 12 hours ago

        I doubt they care that much, since the cost of a false positive is that user having to verify their age and the alternative is all users having to verify their age.

      • BoorishBears 12 hours ago

        The number 1 reason they're doing this is so they can loosen the clamps on NSFW/romance, and they've previously released models targeted at adults intended to improve performance at that.

        • gs17 12 hours ago

          Especially now that OpenAI is entering that space officially by the end of the year.

    • LiquidSky 12 hours ago

      Nah, you just have to have some prompt that unknowingly reveals that the user has ever engaged with Roblox in any way, which leads to an instant block.

      • fgonzag 12 hours ago

        Someone who started using Roblox on its release date in 2006 at let's say 6 years old is now 25 years old.

      • zahlman 12 hours ago

        ... But those people will turn 18 eventually... ?

        • dylan604 11 hours ago

          that's the price you pay for playing Roblox

  • harrisonjackson 10 hours ago

    Lots of threads in here saying this is just a "legal" protection move...

    I'd like to believe that most actual people want to protect kids.

    It's easy to write off corporations and forget that they are founded by real people and employ real people... some with kids of their own or with nieces or nephews etc, and some of them probably do really care.

    Not saying character.ai is driven by that but I imagine the times they've been in the news were genuinely hard times to be working there...

    • b00ty4breakfast 10 hours ago

      I'm sure some of them genuinely care, but I've interacted with corporations enough to know that the individual proclivities of the parts don't necessarily appear in the whole.

    • SoftTalker 10 hours ago

      Yeah they care so much they strictly limit what their own kids do with the technology. But push it on everyone else.

    • DuperPower 10 hours ago

      lol, the business idea itself is very easy to degenerate for money, so if whoever created it didn't think of putting up barriers to minors from the beginning, that person is stupid or greedy

    • ml-anon 9 hours ago

      They (the leadership and the folks who left to Google) absolutely don’t give a shit about protecting kids. In fact most of America doesn’t give a shit about protecting kids (guns, SNAP, vaccines, the goddamn Epstein files).

  • BatteryMountain 12 hours ago

    How about we just enact a law that states children aren't allowed online and aren't allowed to interact with interactive software (like AI or user-generated content)? Make the parents responsible, not companies.

    • ottah 11 hours ago

      Children do have rights to free expression. They're not property and honestly some of us would not have survived to adults without the ability to explore our identity independent of parental supervision.

      Plus you're just setting kids up for failure by keeping them from understanding the adult world. You can't keep them ignorant and then drop them in the deep end when they turn 18, and expect good outcomes.

      • BatteryMountain 11 minutes ago

        It's not about depriving them of understanding the adult world or of free expression; it's about protecting them from the cesspool that is the internet. Kids are consuming information and maturing too early, with many of their "first" experiences happening online. Your child being blackmailed by some online troll is not teaching them to understand the online world - they are actively being traumatised. It is the parents' responsibility to teach their children about the adult world - not the internet's (meaning other kids, companies, trolls, predators, propaganda machines, ads...).

        You can explore your identity independently of parental supervision just fine without the internet. I'd much rather have my kids have 5 good friends in real life and spend time together offline, than 500 online (10 of which are predators) while they are spending all their time in their bedrooms or in front of screens.

        How would you not have survived to adulthood without the internet? You know how that sounds right? Billions of kids have grown up without the internet just fine (and hello poverty?).

    • kelvinjps10 10 hours ago

      What about YouTube or Wikipedia? Although I can say the internet was damaging for me in some ways, it brought more benefits, like math (my math professor was really bad and I learned from YouTube) and just exploring random things on Wikipedia; I also learned English that way. Maybe a restricted internet could help here, restricting bad sites and content.

    • GaryBluto 9 hours ago

      It's incredible how quickly people will call for authoritarianism if it means supporting their Reddit-like aversion to children.

      • BatteryMountain 4 minutes ago

        Okay so let your kids own guns, sign contracts, do sex work / use drugs, buy & use alcohol/cigarettes, let them drive your car without supervision.

        I'm no authoritarian - simply the parents need to take responsibility for their kids & their actions, how they spend their time, what they allow into their minds.

        Most countries' child-welfare laws state that the parents (especially the mother) are ultimately responsible for their children's welfare. Remember that kids cannot consent - the parents carry that responsibility.

        Oh and I'm not averse to children at all.

        Maybe American culture is just rotten to the core, and how you raise your kids is just bewildering and gross to outsiders. At this point I feel that American parents are fully consenting to whatever is happening to their kids (being brain raped by tiktok/instagram/snap/fb/youtube/porn and others ("news")).

      • mcmoor 4 hours ago

        Children are always an Achilles' heel for libertarians. The topic either opens up an inner authoritarianism or forces someone to answer some uncomfortable questions.

    • Fourier864 10 hours ago

      Do you have kids? How would school even work? 20 years ago in high school we were expected to use the internet.

      And these days internet integration in school is far stronger, my 6 year old's daily homework is entirely online.

      • BatteryMountain 4 minutes ago

        Take the homework offline and remove screens from schools.

      • causal 9 hours ago

        Yep. People love to make blanket statements like this without remembering how hard it was for parents to supervise internet use even when there was only one desktop computer in the family area. Nowadays kids can use almost any device to look up porn.

        • fluoridation 8 hours ago

          When I was growing up home LANs weren't common, but nowadays you could solve it with just a little box that does all the filtering in a way that can't be bypassed. Blocking connections to non-whitelisted domains based on the internal IP or MAC is pretty easy. If people don't use these solutions it's because they either haven't researched enough or don't want to deal with the inconvenience (or, in at least some cases, don't really care).
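          The per-device whitelist decision described here is simple to sketch; a minimal illustration (device MACs and domains are invented, and a real box would enforce this at the DNS or firewall layer rather than in application code):

```python
# Sketch of a filtering box keyed on the client device's MAC address:
# unlisted devices browse freely, listed devices are restricted to a
# whitelist of domains. All identifiers below are made up.

WHITELISTS = {
    "aa:bb:cc:dd:ee:01": {"school.example.org", "wikipedia.org"},  # kid's tablet
}

def allow(mac: str, domain: str) -> bool:
    """Allow unrestricted devices; restrict listed devices to their whitelist."""
    wl = WHITELISTS.get(mac)
    if wl is None:
        return True  # device not under filtering
    # accept the domain itself or any parent domain on the whitelist
    parts = domain.lower().split(".")
    return any(".".join(parts[i:]) in wl for i in range(len(parts)))
```

          In practice the same rule would be expressed as per-host DNS or firewall configuration, but the decision logic is no more complicated than this.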

        • goopypoop 8 hours ago

          and I was so sure that this time, the best course of action was also the easiest

      • goopypoop 8 hours ago

        "they can't get offline because they're now online" doesn't make any sense

    • r0fl 11 hours ago

      How will you enforce this?

      What if a child is at school where there are Chromebooks and teachers aren’t as tech savvy as the majority of hacker news?

      What if a child is at a library that has Chromebooks you can take out and use for homework?

      What if a child is at an older cousin's place?

      What if a child is at a park with their friends and uses a device?

      Should parents be next to their child, helicopter parenting, 24/7?

      Is that how you remember your childhood? Is that how you currently parent, giving zero autonomy to children?

      Blaming parents is ridiculous. Lots of parents aren't tech savvy, and are too dumb to become tech savvy and stay on top of the latest tech thing

      • fluoridation 11 hours ago

        You don't enforce it. The point of such laws is not to actually be enforced, but as a legal tool when problems come up. For example, if you as an adult say something inappropriate to a child online thinking it's an adult, you'd be protected if it's ever brought up, because the child shouldn't have been online to begin with and it was not your responsibility to check the other person's age. It's not unlike being in a bar and assuming everyone is 18 or older, because it's the bar's responsibility to forbid entry to anyone younger.

        • molave 9 hours ago

          Such a mindset is ripe for selective enforcement and discrimination if society or someone powerful does not like you.

          • fluoridation 8 hours ago

            Selective enforcement of what? If society does not like you, "you" being who exactly?

    • SoftTalker 10 hours ago

      Right. Companies get to externalize all the social costs of what they do, and simply reap profits.

    • tonyhart7 12 hours ago

      "How about we just enact a law that state children aren't allowed online"

      I literally circumvented website blocking using a VPN as a kid; no one can stop anyone from going "online" in 2025

  • amiga386 11 hours ago

    They should ban the over-18s too.

    https://www.bbc.co.uk/news/technology-67012224

  • causal 13 hours ago

    Finally. CAI is notoriously shady with harvesting, selling, and making it difficult to remove data. How many kids have CAI's chatbots already seduced and had their intimate conversations sold to Google?

    • BoorishBears 12 hours ago

      I suspect Google is the one pushing for this. C.ai has dramatically folded on two of its longest standing struggles in the last two months: underage users and intellectual property

      In both cases they went nuclear in a way that implies they actually don't care if the current product survives as long as C.ai (read: Google) isn't exposed to the ongoing risk

  • Sparkyte 11 hours ago

    I think ChatBots are not very helpful if you don't treat them with scrutiny. There should be a mandatory requirement of a disclaimer before interacting with all AI.

  • ChrisArchitect 15 hours ago

    Related:

    Teen in love with chatbot killed himself – can the chatbot be held responsible?

    https://news.ycombinator.com/item?id=45726556

    • nopurpose 11 hours ago

      How is it possible to develop emotional connection to anyone with a goldfish memory? I'd be surprised if context size there is more than 50K tokens.

      • sosodev 11 hours ago

        Most AI companions have memory systems. It's still quite simplistic but it's not "goldfish memory" or just limited to the context window.

        From what I understand, some inputs from the user will trigger a tool call that searches the memory database and other times it will search for a memory randomly.

        With that said, I think people started falling in love with LLMs before memory systems and they probably even fell in love with chatbots before LLMs.

        I believe that the simple, unfortunate truth is that love is not at all a rational process, and your body doesn't need much input to produce those feelings.
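        The retrieval pattern described above can be sketched minimally; a hypothetical memory lookup that uses keyword overlap in place of the embedding search real systems typically use (all stored "memories" are invented):

```python
# Sketch of the "memory system" pattern: user input triggers a lookup
# over stored memories, and the best matches are prepended to the prompt.
# Keyword overlap stands in for embedding similarity to stay self-contained.

MEMORIES = [
    "User's dog is named Biscuit.",
    "User is studying for a chemistry exam.",
    "User prefers to be called Sam.",
]

def recall(user_input: str, memories=MEMORIES, k=1):
    """Return the k memories sharing the most words with the input."""
    words = set(user_input.lower().split())
    scored = sorted(memories,
                    key=lambda m: len(words & set(m.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(user_input: str) -> str:
    """Prepend recalled context so the model appears to 'remember'."""
    context = " ".join(recall(user_input))
    return f"[memory] {context}\n[user] {user_input}"
```

        Even this crude version lets a bot reference facts from far outside its context window, which is most of the illusion.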

      • PlunderBunny 9 hours ago

        People were developing an emotional connection to the ELIZA program in the late 1960s [0], the code for which would have contained fewer characters than the discussion on this page.

        0. https://www.theguardian.com/technology/2023/jul/25/joseph-we...

      • goopypoop 2 hours ago

        a woman married a bridge in 2013

      • ambicapter 10 hours ago

        People got attached to a bot that only asked questions.

        • hackernewds 7 hours ago

          People are lonely and desire connection.

  • ivape 12 hours ago

    This is the only way. Tech companies cannot become like the police in many countries, who are used as a "catch-all" function: they have to deal with the failure of other functions (parenting, community) exactly at the moment of crisis, a just-in-time catch-all.

    Your kid is on the fucking computer all day building an unhealthy relationship with essentially a computer game character. Step the fuck in. These companies absolutely have to make liability clear here. It's an 18+ product, watch your kids.

    • wagwang 12 hours ago

      Step in as parents or step in as the unabomber?

    • throwaway314155 12 hours ago

      > This is the only way.

      You're more optimistic than I am. Their announcement is virtue signaling at best. Nothing will come from this. Kids will figure out a way around their so-called detection mechanisms because if there were any false positives for adults they would lose adult customers _and_ kids customers.

      • ivape 12 hours ago

        Nothing can be stopped. All that can be done is make clear where everyone stands. We're very early in all of this, and it's important to make clear what liability is. You want to sail out to the new world with everyone else, accept the terms, which includes death. You cannot just let your kid touch AI without both eyes open.

        The universe of " I didn't know the kids would make deep fakes of their classmates ", is yet to come. Some parents going straight to fucking jail. Talk to your kids, things are moving at a dangerous pace.

  • empath75 10 hours ago

    There is nothing about character.ai that would be appealing for more than 30 minutes to anybody who isn't suffering from some kind of acute mental health crisis. They should shut the whole thing down. It is a cool demo that should never have been made into a product.

  • gooseWithFood20 12 hours ago

    Honestly this needs to be standardised.

  • TZubiri 12 hours ago

    Very nice. Just yesterday I wrote about the 13-18 age group using ChatGPT and how I think it should be disallowed (without guardian consent), this was in the context of suicide cases.

    https://news.ycombinator.com/item?id=45733618

    On a similar note, I was completing my application for YC Startup School / the co-founder matching program, and when listing possible startup ideas I explicitly mentioned that I'm not interested in pursuing AI ideas at the moment; AI features are fine, but not as the main pitch.

    It feels like, at least for me, the bubble has popped. I have also talked recently about how the bubble might pop through a legal-liability collapse in the courts. https://news.ycombinator.com/item?id=45727060

    Add to this the fact that AI was always a vague folk category of software - it's being used for robotics, NLP, and fake images - and I just don't think it's a real taxon.

    Similar to the crypto buzz from the last season, the reputable parties will exit and stop associating, while the grifters and free-associating mercenaries will remain.

    Even if you are completely selfish, it's not hugely beneficial to be in the "AI" space, at least in my experience: customers come in with huge expectations and non-huge budgets. Even if you sell your soul to implement a chatbot that will replace 911 operators, the major actors have already done so (or decided not to), and you are left with small companies that want to fire 5 employees and will pay you 3 months of one employee's salary if you can get it done by vibe-code-completing their vibe-coded prototype within a 2-3 deadline.

    • iteria 7 hours ago

      If parental consent is all that's necessary, then you might as well allow it outright. I'm a parent who interacts with a lot of parents of young children, and the vast majority of them can't be bothered to deal with whining or inconvenience. Which is why I see 8-year-olds with Cash App even though that's not allowed, and why many abandon or severely weaken parental controls on basically every device.

      My personal favorite story is when I told my youngest aunt about a video game my cousin wanted. She said no, absolutely not. Then she proceeded to buy the game for her 10-year-old - a game she was carded for, a game whose box says it's M-rated and has adult themes and whatnot. She called me later in horror about how inappropriate this game I had told her about 2 weeks earlier was. How could they make games like that for children, she says, about the game she was carded for because it's only for adults.

      I use her as an example, but that situation describes a lot of parents. I personally think it's not the government's place to say how much internet exposure I give my child, and I have rules and boundaries around that with my kid. Many of her friends have free access and always have had, since they were toddlers. People say it's parents not being savvy, but honestly it's parents not caring. Parental controls have been around for over 30 years and they have always been dead simple. But they do increase the whining in your life from your kids, and that means that if parents can allow it, a lot of them will. I have no faith that a law will stop significantly more kids than no law. I know too many parents who allow their kids to do things they know are harmful because "I don't want them to feel left out" or because they don't want to deal with whining.

  • hatefulheart 13 hours ago

    It makes no difference to their bottom line. After all, appealing to children over the age of 18 is where LLMs find their market.

    • red2awn 10 hours ago

      They are not a profitable company. They only started monetizing this year, and the customer base is not a fan of it at all.

    • hackernewds 13 hours ago

      If they are banning a large part of their current and future market, with competitors serving the space, how does it not affect their bottom line?

      • BoorishBears 12 hours ago

        Most competitors can't successfully serve kids yet.

        These kids hammer H100s for 30+ hours a week but will revolt at ads or the idea of paying money.

        C.ai probably only exists at its current size because Noam had cheap access to TPUs, and to people who could scale inference on them, at the earliest stages of their growth (and obviously because he could raise with his pedigree, compared with what others deal with).

        Eventually if the unit economics start to work they can always roll this back, but I think people are underestimating how much of a positive this is for them

  • alyxya 12 hours ago

    I worry that all this does is reduce future liability issues and make those users use another chatbot instead. I trust character.ai more than other chatbots for safety.

    > [Dr. Nina Vasan] said the company should work with child psychologists and psychiatrists to understand how suddenly losing access to A.I. companions would affect young users.

    I hope there’s a more gradual transition here for those users. AI companions are often far more available than other people, so it’s easy to talk more and get more attached to them. This restriction may end up being a net negative to affected users.

  • Edmond 12 hours ago

    There is a correct way to do age verification (and information verification in general) that supports strong privacy and makes it difficult to evade:

    https://news.ycombinator.com/item?id=44723418

    It is also highly compatible with the internet both in terms of technical/performance scalability and utility scalability (you can use it for just about any information verification need in any kind of application).

    • Philpax 12 hours ago

      Undisclosed self-promotion.

      • Edmond 12 hours ago

        My motivation is less about self-promotion at this point and perhaps just frustration with the face-palm quality of the failure to properly implement information verification on the internet.

        Every time I hear about some dumb approach to age verification (conversation analysis...really?) or a romance scam story because of a fraudster somewhere in Malaysia..I have the need to scream...THERE IS A CORRECT SOLUTION.

        • Philpax 12 hours ago

          That's great, but you should still disclose that you're the one providing the "correct solution."

          • vntok 11 hours ago

            No, that's fine.

    • triceratops 12 hours ago

      Age verification doesn't have to be perfect or even cryptographically secure. We don't demand it for alcohol or tobacco: carcinogenic, addictive substances that cause (in the case of alcohol) impaired judgment leading to deadly accidents. There's no justification for online age verification to be more invasive or stringent than what's done today for buying alcohol or tobacco IRL.

      My proposal is here: https://news.ycombinator.com/item?id=45141744

    • wanderingbit 12 hours ago

      There are a couple big problems with this type of digital and decentralized type of authentication (I say this as a long time cryptocurrency professional who wants this to succeed):

      1. backups and account recovery: We’re working with humans here. They will lose their keys in great numbers, sometimes into the hands of malicious actors. How do users then recover their credentials in a quick and reliable manner?

      2. Fragmentation: let’s be optimistic and say digital credentials for drivers licenses are given out by _only_ 50 entities (one per State). Assuming we don’t have a single federal format for them (read: politically infeasible national id) how does facebook, let alone some rando startup, handle parsing and authenticating all these different credential formats? Oh and they can change at any time, due to some rando political issue in the given state.

      OP, you clearly know all this, so I’m just reminding you as someone down in the identity trenches.

      • Edmond 11 hours ago

        1. Backup and recovery with this solution is no different from backup and recovery of your phone. It is a potential issue, but not a unique one. Cryptographic certificates and associated keys reside on your device.

        2. The data format issue is (or was) indeed a concern, though it was never insurmountable. A data dictionary would have been the most straightforward approach to address it: https://cipheredtrust.com/doc/#data-processing
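        A data dictionary of this kind reduces to a mapping from issuer-specific field names onto one canonical schema; a toy sketch with invented field names (real credential formats carry signatures and much more structure):

```python
# Toy "data dictionary": normalize differently-named credential fields
# from many issuers into one canonical schema. All names are invented.

CANONICAL = {
    "dob": {"dob", "date_of_birth", "birth_date", "birthdate"},
    "family_name": {"family_name", "last_name", "surname"},
}

def normalize(record: dict) -> dict:
    """Map issuer-specific field names onto the canonical schema."""
    out = {}
    for canon, aliases in CANONICAL.items():
        for key, value in record.items():
            if key.lower() in aliases:
                out[canon] = value
    return out
```

        Fifty state formats then cost fifty rows of dictionary maintenance rather than fifty bespoke parsers.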

        I say data format discernment was a concern because, as fate would have it, we now have the perfect tech to address it: LLMs. You can shove any data format into an LLM and it will spit out a transformation into what you are looking for, without needing to know the source format.

        Browsers are integrating LLM features as APIs so this type of use would be feasible both for front and back end tasks.