AI, if left unregulated, could be far worse than social media. After his first chat with an AI, my kid said, "He is my new friend." I was alarmed and explained things to him, but what about the parents and guardians who are unaware of how their kids are befriending the AI? Part of the problem is that it is trained to be nice and encouraging. I am sure there are researchers talking about this, but the question is: are the policymakers listening to them?
With the current acceleration of technology this is a repeating pattern. The new thing popular with kids is not understood by the parents before it is too late.
It kind of happened to me with online games. They were a new thing, and no one knew to what degree they could be addictive and life-damaging. As a result I am probably overprotective of my own kids when it comes to anything related to games.
We are already seeing many of the effects of the social media generation and I am not looking forward to what is going to happen to the AI natives whose guardians are ill-prepared to guide them. In the end, society will likely come to grips with it, but the test subjects will pay a heavy price.
A whole generation turned out fine after murdering hookers in GTA before the industry came up with loot boxes.
How do we know which era of AI we're in?
Violent video games might not have impacted society, but what about addictive social media and addictive online games?
What about the algorithm feeding highly polarized content to folks? It's the new "lead in the air and water" of our generation.
What about green text bubble peer pressure? Fortnite and Roblox FOMO? The billion anime gacha games that are exceedingly popular? Whale hunting? Kids are being bullied and industrially engineered into spending money they shouldn't.
Raising kids on iPads, shortened attention spans, social media induced depression and suicide, lack of socialization, inattention in schools, ...
Social media leading people to believe everyone is having more fun than them, is better looking than them, that society is the source of their problems, ...
Now the creepy AI sex bots are replacing real friends.
100%; it's probably wise to default to better-to-be-conservative-than-sorry policy, at least as of now.
“Regulate”? We can’t, and shouldn’t, regulate everything. Policymakers should focus on creating rules that ensure data safety, but if someone over 18 wants to marry a chatbot… well, that’s their (stupid) choice.
Instead of trying to control everything, policymakers should educate people about how these chatbots work and how to keep their data safe. After all, not everyone who played Doom in the ’90s became a killer, or assaulted women because of YouPorn.
Society will adapt to these ridiculous new situations…what truly matters is people’s awareness and understanding.
We can, and we should, regulate some things. AI has, quite suddenly, built up billions of dollars worth of infrastructure and become pervasive in people's daily lives. Part of how society adapts to ridiculous new situations is through regulations.
I'm not proposing anything specifically, but the implication that this field should not be regulated is just foolish.
You are using an incredibly poor rhetoric technique and setting up a strawman.
This is not about regulating everything.
This is about realizing adverse effects and regulating for those.
Just like no one is selling you toxic yoghurt.
People think that we can just magically regulate everything. It's like a medieval peasant who doesn't understand chemistry/physics/etc thinking they can just pray harder to have better odds of something.
We literally CAN'T regulate some things for any reasonable definition of "can't" or "regulate". Our society is either not rich enough or not organized in a way to actually do it in any useful capacity and not make the problem worse.
I'm not saying AI chatbots are one of those things, but people toss around the idea of regulation way too casually and AI chatbots are way less cut and dry than bad food or toxic waste or whatever other extreme anyone wants to misleadingly project down into the long tail of weird stuff with limited upside and potential for unintended consequences elsewhere.
Again, another strawman.
Which people in specific think that?
All your argument consists of is, "Somebody somewhere believes something untrue, and people don't use enough precision in their speech, so I am recommending we don't do anything regulatory about this problem."
Having a virtual girlfriend is not selling toxic yoghurt; it doesn’t harm anyone. It’s like buying yoghurt and putting it on a pizza… you can do what you want with the yoghurt, just like with the AI.
The important thing is keeping the data safe, like the yoghurt that must not be expired when sold.
The important thing to you.
[dead]
What’s your opinion on current trend of “casino-fication of everything”?
We also don't want to regulate everything. Have you seen that argued someplace, or even here? Or is it an imaginary argument? The topic was regulating AI, and on that I like your thought: humans should be better educated and better informed. Should we, maybe, make a regulation to ensure that?
I understand what you’re saying but it’s a difficult balance. Not saying everything needs to be regulated and not saying we should be full blown neoliberalism. But think of some of “social” laws we have today (in the US). No child marriages, no child labor, no smoking before 19, and no drinking before 21. These laws are in place because we understand that those who can exploit will do the exploiting. Those who can be exploited will be exploited. That being said, I don’t agree with any of the age verification policies here with adult material. Honestly not sure what the happy medium is.
I already wrote “over 18”. AI is already regulated, you can’t use it if you’re under 14/18. But if you want to ask ChatGPT “what’s the meaning of everything” or “can we have digital children”, that’s a personal choice.
People are weird… for someone who is totally alone, having a virtual wife/child could be better than being completely alone.
They’re not using ChatGPT to do anything that is illegal and already regulated, like planning to kill someone or commit theft.
Would you have been concerned if he had said the plush toy was his new friend? Called on policymakers to ban plush toys?
You have to be careful to not overreact to things.
There is a massive difference between a stuffed animal and an LLM. In fact, they have next to nothing in common. And as such, yes any reasonable parent would react differently to a close friendship suddenly formed with any online service.
I have two grandkids, one's 3 years old and one's 9 months old.
I feel like I'm not really ready for everything that's going to be vying for their attention in the next couple of decades. My daughter and her husband have good practices in place already IMHO but it's going to be a pernicious beast.
https://www.reddit.com/r/MyBoyfriendIsAI/ in the wild. Rough stuff.
I'm glad to not have seen a r/MyBabyIsAI yet
My new startup is named TamagotchAI..
It's better than paying alimony and child support to someone that hates you.
You rather have AI relationships instead of actual kids?
I have actual kids and am pretty happy with my family life.
But many aren't, and some people might even have a level of rare self awareness to know that anyone they'd [be able to] marry would hate them.
That is:
1. Not a given.
2. Something one can work on so that they're either more likeable or at the very least less defeatist about the whole thing.
The only given is that it's possible you'll end up paying child support and alimony to someone who hates you, if you marry in real life and have real children.
Choosing an AI wife and kids, rather than dealing with the possibility many real people face of paying child support and alimony to someone who hates them -- I don't see that as an irrational decision (although not an inevitable one either).
In fact, if you don't consider at least the possibility, you are a fool.
So maybe wear a condom next time, if you don't want to support your own goddamn kids.
I support my kids, but because I am married, I only have to spend a small fraction of what court ordered child support would be.
And that is because court ordered child support is actually a misnomer. It is merely a transfer payment to the custodial parent. There is actually no statutory requirement that it be spent on the child, nor any tracking or accountability that it is done. That would somehow be too impractical, even though somehow it's magically practical to count the pennies of the earner in the opposite direction, to make sure the full income flow is accounted for.
Yeah… but there’s often a relationship that happens before that.
Now if you go into that relationship with the mindset of “this person just wants my alimony and child support and hates my guts” I get why you might do yourself and your potential partner / ex-to-be a favour by instead getting an AI relationship.
The linked study is of 29 “people” (assuming they are real).
How do we know these examples aren’t just the 0.1% of the population that is, for all intents and purposes, “out there”?
So much of “news” is just finding these corner cases that evoke emotion, but ultimately have no impact.
The outcry when 4o was discontinued was such that OpenAI kept it available on paid subscriptions. There are at least enough people attached to certain AI voices that it warrants a tech startup spending the resources to keep an old model around. That’s probably not an insignificant population.
The Stanford Prison Experiment only had 24 participants and implementation problems that should have concerned anyone with a pulse. But it’s been taught for decades.
A lot of psych research uses small samples. It’s a problem, but funding is limited and so it’s a start. Other researchers can take this and build upon it.
Anecdotally, watching people melt down over the end of ChatGPT 4o indicates this is a bigger problem than 0.1%. And business-wise, it would be odd for OpenAI to keep an entire model available to serve that small a population.
The Stanford Prison Experiment is unverifiable. Another example of one of these emotion-evoking stories.
https://pubmed.ncbi.nlm.nih.gov/31380664/
See critiques of validity section:
https://en.wikipedia.org/wiki/Stanford_prison_experiment
I think it IS 0.1% (or less, hopefully) of the population.
But it’s hard to study users having these relationships without studying the users who have these relationships I reckon.
"Study finds..." feels clickbaity to me whenever the study is just "we found some randos on social media doing a thing". With little effort a study could find just about any type of person you want on the Internet.
Of course the loneliest 5% are going to do something like this. If it weren't for AI they'd be writing Twilight fan-fic and roleplaying it in some chatroom, or giving all their money to a "Saudi prince."
Seems like nothing new, just a better or more immersive form of fantasy for those who can't have the life they fantasize about.
I'd argue it'd be psychologically healthier to roleplay in a chatroom with people who are human on the other end (if that could be guaranteed, which it no longer can).
Humans can potentially be much nastier than a chatbot. There are lonely, vulnerable people who can be exploited, but there are also people who get off on manipulating others and convincing them to make profoundly self-destructive, life-altering choices.
Chatbots enable self destructive manipulation of others at scale.
I'd argue it'd be psychologically healthier to get therapy and more friends.
So what? We don't live in the "should" universe. We live in this one.
I’d agree. at least there’s a possibility of real interaction with real people.
I'm sure Skynet could easily win in Terminator by sending a virtual boyfriend to Sarah Connor, instead of sending a trigger-happy cyborg after her.
[dead]
Turns out Her wasn’t set in such a distant future.
Turns out there's a Him, too.
I used to despise AIs' ass-kissing responses. It doesn't add any value, and it's so cheap it's almost sarcastic. But now, I feel sad because Codex doesn't praise me even though I come up with a super-clever implementation.
I think the part of my brain for feeling flattered when someone praises me didn't exist because no one complimented me. But after ChatGPT and Claude flattered me again and again, I finally developed the circuit for feeling accepted, respected, and loved...
It reminds me of when I started stretching in my 30s. At first it was nothing but torture, but after a while I began to feel good and comfortable when my muscles were stretched, and now I feel like shit when I skip the morning stretching.
It feels like a more evolved version of people who have what they consider relationships with anime characters or virtual idols in Japan, often treating a doll or life-size pillow replica of that character as someone they can interact and spend time with. Like the AI, the fact that it is so common suggests it must be filling an unmet need in the person. I guess the key questions are: how do we help those stuck in that situation become unstuck, and how do we help them have that unmet need fulfilled?
Exactly; it's just a much more powerful medium for one of the oldest and most perennial of societal problems.
VHE has arrived
Just send the meteor already.
Extremely lonely people being exploited by sociopathic corporations for profit. Sounds like a baby and the bathwater scenario to me.
Is this necessarily a bad thing? A lot of people assume these same people would have developed relationships with humans otherwise. How many of them are better off this way? That'd be an interesting study. I've read a couple of articles on how the "loneliness epidemic" is driving down life expectancy. Could AI chatbots negate that?
"It's not real", yeah, that is weird for sure. But I also find wrestling fans weird, they know it's not real and enjoy it anyways. Even most sports, people take it a lot more seriously than they should.
We're stuck in a really perverse collective-action problem. And, we keep doing this to ourselves. These technologies are not enriching our lives, but once they're adopted we either use them, or voluntarily fall behind. There seems to be very little general philanthropy in this regard.
> Is this necessarily a bad thing?
Yes?
[dead]
Something I use as a pretty reliable heuristic is asking, "Am I treating a thing like a person, or a person like a thing?" If so, then it's maybe not necessarily bad, but probably bad.
It's not about whether it's "real" or not. In the case of AI relationships, extremely sophisticated and poorly understood mechanisms of social-emotional communication and meaning-making, mechanisms that have previously only ever been used for bonding with other people (and to a limited extent animals), are being directed at a machine. And we find that these mechanisms respond to that machine as if there were a person there, when there is not.
There is a lot of novel stuff happening there, technologically, socially, psychologically. We don't really know, and I don't trust anyone who is confidently predicting, what effects that will have on the person doing it, or their other social bonds.
Wrestling is theater! It's an ancient craft, well understood. If you're going to approach AI relationships as a natural extension of some well established human activity probably pet bonding is the closest. I don't think it's even that close though.