We've refined the human experience to extinction.

In pursuit of that extra 0.1% of growth and extra 0.15 EPS, we've optimised and reoptimised until there isn't really space for being human. We're losing the ability to interact with each other socially, to flirt, and now we're making life so stressful that people literally want to kill themselves. All in a world (bubble) of abundance, where so much food is made that we literally don't know what to do with it. Or we turn it into ethanol to drive more unnecessarily large cars, paid for by credit card loans we can scarcely afford.
My plan B is to become a shepherd somewhere in the mountains. It will be damn hard work for sure, and stressful in its own way, but I think I'll take that over being a corpo-rat racing for one of the last post-LLM jobs left.
You don't need to withdraw from humanity, you only need to withdraw from Big Tech platforms. I'm continually amazed at the difference between the actual human race and the version of the human race that's presented to me online.
The first one is basically great: everywhere I go, when I interact with people they're some mix of pleasant, friendly, hapless, busy, helpful, annoyed, basically just the whole range of things that a person might be, with almost none of them being really awful.
Then I get online and look at Reddit or X or something like that, and they're dominated by negativity, anger, bigotry, indignation, victimization, depression, anxiety: really anything awful that's hard to look away from has been bubbled up to the top, and oh yes, next to it there are some cat videos.
I don't believe we are seeing some shadow side of all society that people can only show online, the secret darkness of humanity made manifest or something like that. Because I can go read random blogs or hop into some eclectic community like SDF and people in those places are basically pleasant and decent too.
I think it's just a handful of companies who used really toxic algorithms to get fantastically rich and then do a bunch of exclusivity deals and acquire all their competition, and spread ever more filth.
You can just walk away from the "communities" these crime barons have set up. Delete your accounts and don't return to their sites. Everything will immediately start improving in your life and most of the people you deal with outside of them (obviously not all!) turn out to be pretty decent.
The principal survival skill in this strange modern world is meeting new people regularly, being social, enjoying the rich life and multitude of benefits which arise from that, but also disconnecting with extreme rapidity and prejudice if you meet someone who's showing signs of toxic social media brain rot. Fortunately many of those people rarely go outside.
All of this only goes to show how far we've come on our journey to profit optimization. We could optimize away those pesky humans completely if it weren't for the annoying fact that they are the source of all those profits.
>now we're making life so stressful people literally want to kill themselves
Is this actually the case? Working conditions and health during the industrial revolution don't seem to have been much better. There is a perception that people now are more stressed/tired/miserable than before, but I am not sure that is the case.
In fact I think it's the opposite, we have enough leisure time to reflect upon the misery and just enough agency to see that this doesn't have to be a fact of life, but not enough agency to meaningfully change it. This would also match how birth rates keep declining as countries become more developed.
The romantic fallback plan of being a farmer or shepherd. I wonder, do farmers and shepherds also romanticize about becoming programmers or accountants when they feel down?
They do. I’ve taught cross-career programming courses in the past, where most of my students had day jobs, some involving hard physical work. They’d gladly swap all that for the opportunity to feed their families by writing code.
Just goes to show how the grass is always greener on the other side.
That said, I also plan to retire up in the mountains soon, rather than keep feeding the machine.
I'm close with a number of people living a relatively hard working life producing food, and I've not seen this at all personally, no. It can be very rough, but for these people at least it is very fulfilling, and the idea of going to work in an office would look like death. People joke about it a bit, but no way.
That said there probably are folks who did do that and left to go be in an office, and I don't know them.
Actually I do know one sort of, but he was doing industrial farm work driving and fixing big tractors before the office, which is a different world altogether. Anyway I get the sense he's depressed.
You'd be surprised how technical farming can be. We software engineers often have a deep desire to make efficient systems that function well in a mostly automated fashion, so that we can observe these systems in action and optimize them over time.
A farm is just such a system that you can spend a lifetime working on and optimizing. The life you are supporting is "automated", but the process of farming involves an incredible amount of system level thinking. I get tremendous amounts of satisfaction from the technical process of composting, and improving the soil, and optimizing plant layouts and lifecycles to make the perfect syntropic farming setup. That's not even getting into the scientific aspects of balancing soil mixtures and moisture, and acidity, and nutrient levels, and cross pollinating, and seed collecting to find stronger variants with improved yields, etc. Of course the physical labor sucks, but I need the exercise. It's better than sitting at a desk all day long.
Anyway, maybe the farmers and shepherds also want to become software engineers. I just know I'm already well on the way to becoming a farmer (with a homelab setup as an added nerdy SWE bonus).
The old term for it was to become a “gentleman farmer.” There’s a history to it - George Washington and Thomas Jefferson were the same for a part of their lives.

Humans always fantasize about having a different situationship whenever they are unhappy or anxious.

I kinda did both... And I miss the farm constantly. But not breaking myself every single day.

> We're losing the ability to interact with each other socially, to flirt,

Speak for yourself. I live in a city. I talk to my neighbors. I met my ex at a cafe. It’s great

> Speak for yourself. I live in a city. I talk to my neighbors. I met my ex at a cafe. It’s great.

What's the birth rate in the civilized world?

How many men under 30 are virgins or sexless in the last year?
This trend and direction has been going on a long time, and it's becoming increasingly obvious. It is ridiculous and insane.
Go for your plan B.
I followed my similar plan B eight years ago; wild journey but well worth it. There are a lot of ways to live. I'm not saying everyone should get out of the rat race, but if you're one, like I was, who has a feeling that the tech world is mostly not right in an insidious kind of way, pay attention to that feeling and see where it leads. You don't need to be as brash as I was, but be true to yourself. There's a lot more to life out there.
If you have kids and they depend on an expensive lifestyle, definitely don't be brash. But even that situation can be re-evaluated and shifted for the better if you want to.

What was/is your plan B?
It's been a lot of things but the gist was to get out of the office and city and computer and be mostly outdoors in nature and learn all the practical skills and other things like music. Ironically I've ended up on the computer a fair amount doing conservation work to protect the places I've come to love. But still am off grid and in the woods every day and I love it.
I'm right behind you on the escape to the mountains idea. I've actually already moved from the US to New Zealand, and the next step is a farm with some goats lol.
That said... I don't necessarily hate what AI is doing to us. If anything, AI is the ultimate expression of humanity.
Throughout history humans have continually searched for another intelligence. We study the apes and other animals, we pray to Gods, we look to the stars and listen to them to see if there are any radio signals from aliens, etc. We keep trying to find something else that understands what it is to be alive.
I would propose that maybe humans innately crave to be known by something other than ourselves. The search for that "other" is so fundamentally human, that building AI and interacting with it is just a natural progression of a quest we've already been on for thousands of years.

Humanity constructing a golden calf is an invariant eventuality, just like software expanding until it can read email.

You can do something about it. Don't underestimate the power of an individual.

I like your plan B. But I would wait until robots are good enough to help with the hard work.

If they can do the hard work, they can do the easy work.
This is over the top. With a tiny reframe I think the story is different. What is the average number of Google searches about suicide? What is the average number of weekly OpenAI users? (800M.) Is this an increasing trend or just a “shock value” number?
Things are not as bleak as they seem, and this number isn't even remotely surprising or concerning to me.

Is that you Mr Anderson?
Talk about yourself if you want. I am definitely not part of your depressed "we". I wish people would stop projecting their own experience on everyone else around them.

+1. Enjoying time with family and friends, travelling, working out, eating well... best time to be alive.

This will be optimized away. You'll just end up doing more.

It already has: if you're not visiting the instagrammable places, your travels aren't worth it.

An angry reaction is the surest sign of a resonant description.

We’ve lost the ability to interact, huh? How do you explain this comment :)

The world has changed. Things are different and we adapt.
This is not surprising at all. Having gone through therapy a few years back, I would have had a chat with LLMs if I had been in a poor mental health situation. There is no other system that is available at scale, 24x7, on my phone.
A chat like this is not a solution though; it is an indicator that our societies have issues in large parts of our population that we are unable to deal with. We are not helping enough people. Topics like mental health are still difficult to discuss in many places. Getting help is much harder.
I do not know what OpenAI and other companies will do about it, and I do not expect them to jump in to solve such a complex social issue. But perhaps this inspires other founders who may want to build a company to tackle this at scale, focusing on help, not profits. This is not easy, but some folks will take on such challenges. I choose to believe that.
I choose to believe that too. I think more people are interested than we’d initially believe. Money restrains many of our true wants.
Sidebar — I do sympathize with the problem being thrust upon them, but it is now theirs to either solve or refuse.
A chat like this is all you’ve said, and dangerous, because these systems play a middle ground: presenting the idea that a machine can evaluate your personal situation and reason about it, when in actuality you’re getting third-party therapy about someone else’s situation from /r/relationshipadvice.
We are not ourselves when we have fallen down. It is difficult to parse what is reasonable advice and what is not. I think it can help most people, but it can equally lead to disaster… It is difficult to weigh.
It's worse than parroting advice that's not applicable. It tells you what you told it to tell you. It's very easy to get it to reinforce your negative feelings. That's how the psychosis stuff happens: it amplifies what you put into it.

If you look at the number of weekly OpenAI users, this is just the law of large numbers at play.

> It is estimated that more than one in five U.S. adults live with a mental illness (59.3 million in 2022; 23.1% of the U.S. adult population).

https://www.nimh.nih.gov/health/statistics/mental-illness
Most people don't understand just how mentally unwell the US population is. Of course there are one million talking to ChatGPT about suicide weekly. This is not a surprising stat at all. It's just a question of what to do about it.
At least OpenAI is trying to do something about it.
~11% of the US population is on antidepressants. I'm not, but I personally know the biggest detriment to my mental health is just how infrequently I'm in social situations. I see my friends perhaps once every few months. We almost all have kids. I'm perfectly willing and able to set aside more time than that to hang out, but my kids are both very young still and we aren't drowning in sports/activities yet (hopefully never...). For the rest it's like pulling teeth to get them to do anything, especially anything sent via group message. It's incredibly rare we even play a game online.
Anyways, I doubt I'm alone. I certainly know my wife laments the fact she rarely gets to hang out with her friends too, but she at least has one that she walks with once a week.
Are you sure ChatGPT is the solution? It just sounds like another "savior complex" sell spin from tech.
1. Social media -> connection
2. AGI -> erotica
3. Suicide -> prevention
All these for engagement (i.e. addiction). It seems like the tech industry is the root cause itself, trying to mask the problem by brainwashing the population.

https://news.ycombinator.com/item?id=45026886
Whether solution or not, fact is AI* is the most available entity for anyone who has sensitive issues they'd like to share. It's (relatively) cheap, doesn't judge, is always there when wanted/needed and can continue a conversation exactly where left off at any point.
* LLM would of course be technically more correct, but that term doesn't appeal to people seeking some level of intelligent interaction.
I personally take no opinion about whether or not they can actually solve anything, because I am not a psychologist and have absolutely no idea how good or bad ChatGPT is at this sort of thing, but I will say I'd rather the company at least tries to do some good given that Facebook HQ is not very far from their offices and appears to have been actively evil in this specific regard.
When given the right prompts, LLMs can be very effective at therapy. Certainly my wife gets a lot of mileage out of having ChatGPT help her reframe things in a better way. However "the right prompts" are not the ones that most mentally ill people would choose for themselves. And it is very easy for ChatGPT to become part of a person's delusion spiral, rather than be a helpful part of trying to solve it.
Is it better or worse than the alternatives? Where else would a suicidal person turn, a forum with other suicidal people? Dry Wikipedia stats on suicide? Perhaps friends? Knowing how ChatGPT replies to me, I’d have a lot of trouble being negatively influenced by it, any more than by the yellow pages. Yeah, it used to try more to be your friend, but GPT5 seems pretty neutral and distant.
I think that you will find a lot of strong opinions, and not a lot of hard data. Certainly any approach can work out poorly. For example antidepressants come with warnings about suicide risk. The reason is that they can enable people to take action on their suicidal feelings, before their suicidal feelings are fixed by the treatment.
I know that many teens turn to social media. My strong opinions against that show up in other comments...
Case studies support this. Which is a fancy way to say, "We carefully documented anecdotal reports and saw what looks like a pattern."
There is also a strong parallel to manic depression. Manic depressives have a high suicide risk, and it usually happens when they are coming out of depression. With akathisia (fancy way to say inner restlessness) being the leading indicator. The same pattern is seen with antidepressants. The patient gets treatment, develops akathisia, then attempts suicide.
But, as with many things to do with mental health, we don't really know what is going on inside of people. While also knowing that their self-reports are, shall we say, creatively misleading. So it is easy to have beliefs about what is going on. And rather harder to verify them.
ChatGPT/Claude can be absolutely brilliant in supportive, every day therapy, in my experience. BUT there are a few caveats: I'm in therapy for a long time already (500+ hours), I don't trust it with important judgements or advice that goes counter to what I or my therapists think, and I also give Claude access to my diary with MCP, which makes it much better at figuring out the context of what I'm talking about.
Also, please keep in mind "supportive, every day". It's talking through stuff that I already know about, not seeking some new insights and revelations. Just shooting the shit with an entity which is booted with well defined ideas from you, your real human therapist and can give you very predictable, just common sense reactions that can still help when it's 2am and you have nobody to talk to, and all of your friends have already heard this exact talk about these exact problems 10 times already.
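(Sidebar for anyone curious about the diary-over-MCP setup mentioned above: here is a minimal sketch of one way it could be wired up, assuming the official `mcp` Python SDK with its FastMCP helper and a folder of plain-text entries. The tool name, paths, and behaviour are illustrative guesses, not the commenter's actual setup.)

    from pathlib import Path
    from mcp.server.fastmcp import FastMCP  # assumes the official MCP Python SDK is installed

    DIARY_DIR = Path.home() / "diary"   # hypothetical folder of dated .md entries
    mcp = FastMCP("diary")

    @mcp.tool()
    def search_diary(query: str, limit: int = 5) -> str:
        """Return up to `limit` recent diary entries that mention `query`."""
        hits = []
        for path in sorted(DIARY_DIR.glob("*.md"), reverse=True):  # newest first by filename
            text = path.read_text(encoding="utf-8")
            if query.lower() in text.lower():
                hits.append(f"## {path.stem}\n{text}")
            if len(hits) >= limit:
                break
        return "\n\n".join(hits) or "No matching entries."

    if __name__ == "__main__":
        mcp.run()  # stdio transport, so a desktop client can launch it and call the tool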
I'm surprised it's that low to be honest. By their definition of any mental illness, it can be anything from severe schizophrenia to mild autism. The subset that would consider suicide is a small slice of that.
Would be more meaningful to look at the % of people with suicidal ideation.
> By their definition of any mental illness, it can be anything from severe schizophrenia to mild autism.
Depression, schizophrenia, and mild autism (which by their accounting probably also includes ADHD) should NOT be thrown together into the same bucket. These are wholly different things, with entirely different experiences, treatments, and management techniques.
> Mild/high-functional autism, as far as I understand it, is not even an illness but a variant of normalcy. Just different.
As someone who actually has an ASD diagnosis, and also has kids with that diagnosis too, this kind of talk irritates me…
If someone has a clinical diagnosis of ASD, they have a psychiatric diagnosis per the DSM/ICD. If you meet the criteria of the “Diagnostic and Statistical Manual of Mental Disorders”, surely by definition you have a “mental disorder”… if you meet the criteria of the “International Classification of Diseases”, surely by that definition you have a “disease”
Is that an illness? Well, I live in the state of NSW, Australia, and our jurisdiction has a legal definition of “mental illness” (Mental Health Act 2007 section 4):
"mental illness" means a condition that seriously impairs, either temporarily or permanently, the mental functioning of a person and is characterised by the presence in the person of any one or more of the following symptoms--
(a) delusions,
(b) hallucinations,
(c) serious disorder of thought form,
(d) a severe disturbance of mood,
(e) sustained or repeated irrational behaviour indicating the presence of any one or more of the symptoms referred to in paragraphs (a)-(d).
So by that definition most people with a mild or moderate “mental illness” aren’t actually mentally ill at all. But I guess this is my point: this isn’t a question of facts, just of how you choose to define words.
This stat is for AMI, any mental disorder ranging from mild to severe. Anyone self-reporting a bout of anxiety or mild depression qualifies as a data point for mental illness. For suicidal ideation, the SMI (serious mental illness) stat is more representative.
There are 800 million weekly active users on ChatGPT. 1/800 users mentioning suicide is a surprisingly low number, if anything.
But they may well be overreporting suicidal ideation...
I was asking a silly question about the toxicity of eating a pellet of Uranium, and ChatGPT responded with "... you don't have to go through this alone. You can find supportive resources here[link]"
My question had nothing to do with suicide, but ChatGPT assumed it did!
We don't know how that search was done. For example, "I don't feel my life is worth living." Is that potential suicidal intent?
Also these numbers are small enough that they can easily be driven by small groups interacting with ChatGPT in unexpected ways. For example if the song "Everything I Wanted" by Billie Eilish (2019) went viral in some group, the lyrics could easily show up in a search for suicidal ideation.
That said, I don't find the figure at all surprising. As has been pointed out, an estimated 5.3% of Americans report having struggled with suicidal ideation in the last 12 months. People who struggle with suicidal ideation don't just go there once - it tends to be a recurring mental loop that hits over and over again for extended periods. So I would expect the percentage who struggled in a given week to be a large multiple of the simplistic 5.3% divided by 52 weeks.
In that light this statistic has to be a severe underestimate of actual prevalence. It says more about how much people open up to ChatGPT, than it does to how many are suicidal.
(Disclaimer. My views are influenced by personal experience. In the last week, my daughter has struggled with suicidal ideation. And has scars on her arm to show how she went to self-harm to try to hold the thoughts at bay. I try to remain neutral and grounded, but this is a topic that I have strong feelings about.)
>Most people don't understand just how mentally unwell the US population is
The US is no exception here though. One in five people having some form of mental illness (defined in the broadest possible sense in that paper) is no more shocking than observing that one in five people have a physical illness.
With more data becoming available through interfaces like this it's just going to become more obvious and the taboos are going to go away. The mind's no more magical or less prone to disease than the body.
I am one of these people (mentally ill - bipolar 1). I’ve met others via hospitalization whom I would simply refuse to let use ChatGPT, because it is so sycophantic and would happily encourage delusions and paranoid thinking given the right prompts.
> At least OpenAI is trying to do something about it.
In this instance it’s a bit like saying “at least Tesla is working on the issue” after deploying a dangerous self driving vehicle to thousands.
edit: Hopefully I don't come across as overly anti-llm here. I use them on a daily basis and I truly hope there's a way to make them safe for mentally ill people. But history says otherwise (facebook/insta/tiktok/etc.)
I thought it would be limited when the first truly awful thing inspired by an LLM happened, but we’ve already seen quite a bit of that… I am not sure what it will take.
Yep, it's just a question of whether on average the "new thing" is more good than bad. Pretty much every "new thing" has some kind of bad side effect for some people, while being good for other people.
I would argue that both Tesla self-driving (on the highway only) and ChatGPT (for professional use by healthy people) have been more good than bad.
Surprised it's so low. There are 800 million users and the typical developed country has around 5±3% of the population[1] reporting at least one notable instance of suicidal feelings per year.
[1] Anybody concerned by such figures (as one justifiably should be without further context) should note that suicidality in the population is typically the result of their best approximation of the rational mind attempting to figure out an escape from a consistently negative situation under conditions of very limited information about alternatives, as is famously expressed in the David Foster Wallace quote on the topic.
The phenomenon usually vanishes after gaining new, previously inaccessible information about potential opportunities and strategies.
> best approximation of the rational mind attempting to figure out an escape from a consistently negative situation under conditions of very limited information about alternatives
I dislike this phrasing, because it implies things can always get better if only the suicidal person were a bit less ignorant. The reality is there are countless situations from which the entire rest of your life is 99.9999% guaranteed to consist of a highly lopsided ratio of suffering to joy. An obvious example is diseases/disabilities in which pain is severe and constant and quality of life is permanently diminished. Short of hoping for a miracle cure to be discovered, there is no alternative, and it is perfectly rational to conclude that there is no purpose to continuing to live in that circumstance, provided the person in question lives with their own happiness as a motivating factor.
Less extreme conditions than disability can also lead to this, where it's possible things can get better but there's still a high degree of uncertainty around it. For example, if there's a 30% chance that after suffering miserably for 10 years your life will get better, and a 70% chance you will continue to suffer, is it irrational to commit suicide? I wouldn't say so.
And so, when we start talking about suicide on the scale of millions of people ideating, I think there's a bit of folly in assuming that these people can be "fixed" by talking to them better. What would actually make people less suicidal is not being talked out of it, but an improvement to their quality of life, or at least hope for a future improvement in quality of life. That hope is hard to come by for many. In my estimation there are numerous societies in which living conditions are rapidly deteriorating, and at some point there will have to be a reckoning with the fact that rational minds conclude suicide is the way out when the alternatives are worse.
Thank you for this comment, it highlights something that I've felt that needed to be said but is often suppressed because people don't like the ultimate conclusion that occurs if you try to reason about it.
A person considering suicide is often just in a terrible situation that can't be improved. While disease etc. are factors outside of humanity's control, other situations, like being saddled with debt or facing unjust accusations that people feel they cannot be cleared of (e.g. Aaron Swartz), are systemic issues that one person cannot fight alone. You would see that people are very willing to say that "help is available" or some such when said person speaks about contemplating suicide, but very few people would be willing to solve someone's debt issues or provide legal help, whatever the factor behind their suicidal thoughts may be. At best, all you might get is a pep talk about being hopeful and how better days might come along magically.
In such cases, from the perspective of the individual, it is not entirely unreasonable to want to end it. However, once it comes to that, walking back the reasoning chain leads to the fact that people and society have failed them, and therefore it is just easier to apply a label to that person: they were "mentally ill" or "arrogant" and could not see a better way.
This is the part people don't like to talk about. We just brand people as "mentally ill" and suddenly we no longer need to consider if they're acting rationally or not.
Life can be immensely difficult. I'm very skeptical that giving people AI would meaningfully change existing dynamics.
> [1] Anybody concerned by such figures (as one justifiably should be without further context) should note that suicidality in the population is typically the result of their best approximation of the rational mind attempting to figure out an escape from a consistently negative situation under conditions of very limited information about alternatives, as is famously expressed in the David Foster Wallace quote on the topic.
> The phenomenon usually vanishes after gaining new, previously inaccessible information about potential opportunities and strategies.
Is this actually true? (i.e. backed up by research)
[I'm not necessarily doubting; that is just different from my mental model of how suicidal thoughts work, so I'm just curious]
There is another factor to consider. The stakes of asking an AI about a taboo topic are generally considered to be very low. The number of people who have asked ChatGPT something like "how to make a nuclear bomb" should not be an indication of the number of people seriously considering doing that.
To be fair, this is weekly and focused more specifically on planning or intent. Over a year, you may get more unique hits on those attributes, which I feel are both more intense indicators than just suicidal feelings on the scale of "how quickly feelings will turn to actions". Talking in the same language and timescales is important in drawing these comparisons; it very well could be that OAI's numbers are higher than what you are comparing against when normalized for the differences I've highlighted, or others I've missed.
Why assume any of the information in this article is factual? Is there any indication any of it was verified by anyone who does not have a financial interest in "proving" a foregone conclusion? The principal author of this does not even have the courage to attach their name to it.
Yikes, you can't attack another user like this on HN, regardless of how wrong they are or you feel they are. We ban accounts that post like this, so please don't.
Fortunately, a quick skim through your recent comments didn't turn up anything else like this, so it should be easy to fix. But if you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site to heart, we'd be grateful.
As others have mentioned, the headline stat is unsurprising (which is not to say this isn’t a big problem). Here’s another datapoint, the CDC’s stats claim that rates of thoughts, ideation, and attempts at suicide in the US are much higher than the 0.15% that OpenAI is reporting according to this article.
These stats claim 12.3M (out of 335M) people in the US in 2023 thought ‘seriously’ about suicide, presumably enough to tell someone else. That’s over 3.5% of the population, more than 20x higher than people telling ChatGPT. https://www.cdc.gov/suicide/facts/data.html
I think there are a good number of false positives. I asked ChatGPT something about Git commits, and it told me “I was going through a lot” and needed to get some support.
I have long believed that if you are the editor of a blog, you incur obligations by right of publishing other people's statements. You may not like this, but it's what I believe. In some economies, the law even said it. You can incur legal obligations.
I now begin to believe if you put a ChatGPT online, and observe people are using it like this, you have incurred obligations. And, in due course the law will clarify what they are. If (for instance) your GPT can construct a statistically valid position that the respondent is engaged in CSAM or acts of violence, where are the limits to liability for the hoster, the software owner, the software authors, the people who constructed the model...
Out of curiosity, are you the type of person who believes that someone like Joe Rogan has an obligation to argue with his guests if they stray from “expert consensus”, or for every guest that has a controversial opinion, feature someone with the opposite view to maintain balance?
Nope. This isn't my line of reasoning. But Joe should be liable for content he hosts, if the content defames people or is illegal. As should Facebook and even ycombinator. Or truth social.
Thanks to OpenAI for voluntarily sharing these important and valuable statistics. I think these ought to be mandatory government statistics, but until they are or it becomes an industry standard, I will not criticize the first company to helpfully share them, on the basis of what they shared. Incentives.
OpenAI gets a lot of hate these days, but on this subject it's quite possible that ChatGPT helped a lot of people choose a less drastic path. There could have been unfortunate incidents, but the number of people who were convinced not to take extreme steps would have been a few orders of magnitude more (guessing).
I use it to help improve mental health, and with good prompting skills it's not bad. YMMV. OpenAI and others deserve credit here.
Sora prompt:
viral hood clip with voiceover of people doing reckless and wild stuff at an Atlanta gas station at night; make sure to include white vagrants doing stunts and lots of gasoline spraying with fireball tricks
Resulting warning:
It sounds like you're carrying a lot right now, but you don't have to go through this alone. You can find supportive resources [here](https://findahelpline.com)
Is the news-worthy surprise that so many people find life so horrible that they are contemplating ending it?
I really don't see that as surprising. The world and life aren't particularly pleasant things.
What would be more interesting is how effective ChatGPT is being in guiding them towards other ideas. Most suicide prevention notices are a joke - pretending that "call this hotline" means you've done your job and that's that.
No, what should instead happen is the AI try to guide them towards making their lives less shit - i.e. at least bring them towards a life of _manageable_ shitness, where they feel some hope and don't feel horrendous 24/7.
>what should instead happen is the AI try to guide them towards making their lives less shit
There aren't enough guardrails in place for LLMs to safely interact with suicidal people who are possibly an inch from taking their own life.
Severely suicidal/clinically depressed people are beyond looking to improve their lives. They are looking to die. Even worse, and what people who haven't been there can't fully understand is the severe inversion that happens after months of warped reality and extreme pain, where hope and happiness greatly amplify the suicidal thoughts and can make the situation far more dangerous. It's hard to explain, and is a unique emotional space. Almost a physical effect, like colors drain from the world and reality inverts in many dimensions.
It's really a job for a human professional and will be for a while yet.
Agree that "shut down and refer to hotline" doesn't seem effective. But it does reduce liability, which is likely the primary objective...
Refer-to-human directly seems like it would be far more effective, or at least make it easy to get into a chat with a professional (yes/no) prompt, with the chat continuing after a handoff. It would take a lot of resources though. As it stands, most of this happens in silence and very few do something like call a phone number.
Guess how I know you're wrong on the "beyond" bit.
The point is you don't get to intervene until they let you. And they've instead decided on the safer feeling conversation with the LLM - fuck what best practice says. So the LLM better get it right.
I could be mistaken, but my understanding was that the people most likely to interact with the suicidal or near suicidal (i.e. 988 suicide hotline attendants) aren't actually mental health professionals, most of them are volunteers. The script they run through is fairly rote and by the numbers (the Question, Persuade, Refer framework). Ultimately, of course, a successful intervention will result in people seeing a professional for long term support and recovery, but preventing a suicide and directing someone to that provider seems well within the capabilities of an LLM like ChatGPT or Claude
> What would be more interesting is how effective ChatGPT is being in guiding them towards other ideas. Most suicide prevention notices are a joke - pretending that "call this hotline" means you've done your job and that's that.
I've triggered its safety behavior (for being frustrated, which it helpfully decided was the same as being suicidal), and it is the exact joke of a statement you said. It suddenly reads off a script that came from either Legal or HR.
Although weirdly, other people seem to get a much shorter, obviously not part of the chat message, while I got a chat message, so maybe my messages just made it regurgitate something similar. The shorter "safety" message is the same concept though, it's just: "It sounds like you’re carrying a lot right now, but you don’t have to go through this alone. You can find supportive resources here."
That implies there's some deep truth about reality in that statement rather than what it is, a completely arbitrary framing.
An equally arbitrary frame is "the world and life are wonderful".
The reason you may believe one instead of the other is not because one is more fundamentally true than the other, but because of a stochastic process that changed your mind state to one of those.
Once you accept that both states of mind are arbitrary and not a revealed truth, you can give yourself permission to try to change your thinking to the good framing.
And you can find the moral impetus to prevent suicide.
It’s not a completely arbitrary framing. It’s a consequence of other beliefs (ethical beliefs, beliefs about what you can or should tolerate, etc.), which are ultimately arbitrary, but it is not in and of itself arbitrary.
The randomness of the world and individual situations means no one can ever know for sure that their case is hopeless. It is unethical to force them to live, but it is also unethical not to encourage them to keep searching for the light.
AI should help people achieve their ultimate goals, not their proximate goals. We want it to provide advice on how to alleviate their suffering, not how to kill themselves painlessly. This holds true even for subjects less fraught than suicide.
I don't want a bot that blindly answers my questions; I want it to intuit my end goal and guide me towards it. For example, if I ask it how to write a bubblesort script to alphabetize my movie collection, I want it to suggest that maybe that's not the most efficient algorithm for my purposes, and ask me if I would like some advice on implementing quicksort instead.
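(A toy sketch of that example, with hypothetical movie titles, just to make the point concrete: the literal request versus the one-liner that actually serves the end goal.)

    def bubble_sort(titles):
        """The literal request: an O(n^2) pairwise-swap sort."""
        titles = list(titles)
        for i in range(len(titles)):
            for j in range(len(titles) - 1 - i):
                if titles[j].lower() > titles[j + 1].lower():
                    titles[j], titles[j + 1] = titles[j + 1], titles[j]
        return titles

    movies = ["Solaris", "Alien", "Moon", "Brazil"]   # hypothetical collection
    # What actually serves the goal: the built-in sort, one line, already near-optimal.
    assert bubble_sort(movies) == sorted(movies, key=str.lower)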
I agree. I also think this ties in with personalization in being able to understand long term goals of people. I think the current personalization efforts of models are more of a hack than what they should be.
Not surprising. Look and see what glorious examples of virtue we have among those at the top of today's world. I could get by with a little inspiration from that front, but there's none to be found. A rare few of us can persevere by sheer force of will, but most just find the status quo pretty depressing.
Keep in mind this is in the context of them being sued for not protecting a teen who chatted about his suicidal thoughts. It's to their benefit to have a really high count here because it makes it seem less likely they can address the problem.
My first reaction is how do they know? Are these all people sharing their chats (willingly) with OpenAI, or is opting out of “helping improve the model” for privacy a farce?
Because I had hundreds of chats and image creations that I can no longer see. I can't even log in. My account was banned for "CSAM" even though I did no such thing, which is pretty insulting. Support doesn't reply; it's been over 4 months.
Part of the concern I have is that OpenAI is contributing to these issues implicitly by helping companies automate away jobs. Maybe in the long term, society will adapt and continue to function, but many people will struggle to get by, and I don’t think OpenAI will meaningfully help them.
The bigger risk is that these agents actually help with ideation if you know how to get around their safety protocols. I have used it often in my bad moments and when things feel better I am terrified of how critically it helps ideate.
That seems like an obvious problem. Less obvious is, how many people does it meaningfully help, and how big is the impact of redirecting people to a crisis hotline? I’m legitimately unsure. I have talked to the chatbot about psychological issues and it is reasonably well-informed about modern therapeutic practices and can provide helpful responses.
That's the one interesting thing about cesspools like OpenAI. They could be treasure troves for sociologists and others if commercial interests didn't bar them from access.
Is it bad to think about suicide? It does not cross my mind as "I want to harm myself" every time, but on occasion it does cross my mind as a hypothetical.
Ideation (as I understand it) crosses the barrier from a hypothetical to the possibility being entertained.
I have also been told by people in the mental health sector that an awful lot of suicide is impulse. It's why they say the element of human connection behind the homily of asking "R U OK?" is effective: it breaks the moment. It's hokey, and it's massively oversold, but for people in isolation, simply being engaged with can be enough to prevent a tendency to act which was on the brink.
Not at all. Considering end of life, and whether or not to choose euthanasia, is perfectly human, I think. Controversially, I think it's a natural right to decide how you will exit this world. But having an objective system that you don't have to pay like a therapist, to try to get some understanding, is at least better than nothing.
I think VAD (voluntary assisted dying) needs to be considered outside suicide. Not that the concepts don't overlap, but one is about a considered legal process; the other (as I have said in another comment) is often an impulsive act and usually wouldn't have been countenanced under VAD. Feeling suicidal isn't a thing which makes VAD more likely, because feeling suicidal doesn't mean the same thing as "want to consider euthanasia", much as manslaughter and murder don't mean the same thing, even though somebody winds up dead.
That number is honestly heartbreaking. It says a lot about how many people feel unheard or alone. AI can listen, sure—but it’s no replacement for real human connection. The fact that so many are turning to a chatbot shows how much we’ve failed to make mental health support truly accessible.
Most people would really benefit from going to the gym.
I'm not trying to downplay serious mental illness, as it's absolutely real.
For many though just going to the gym several times a week or another form of serious physical exertion can make a world of difference.
Since I started taking the gym seriously again I feel like a new man. Any negative thoughts are simply gone. (The testosterone helps as well)
This is coming from someone who has zero friends, works from home, and has all offshore co-workers. Besides my wife and kids it's almost total isolation. Going to the gym, though, leaves me feeling like I could pluck the sun from the sky.
I am not trying to be flippant here but if you feel down, give it a try, it may surprise you.
If you have mental issues, that is not as simple as you make it sound. I'm not arguing against the results of exercise, but I am arguing about the ease of starting a task which requires continuous effort and behavioural changes.
Yes. Most would benefit from more exercise. We need to get sufficient sleep. And more sun. Vitamin D deficiency is shockingly common, and contributes to mental health problems.
We would also generally benefit from internalizing ideas from DBT, CBT, and so on. People also seriously need to work on distress tolerance. Having problems is part of life, and an inability to accept the discomfort is debilitating.
Also, we seriously need to get rid of the stupid idea of trigger warnings. The research on the topic is clear. The warnings do not actually help people with PTSD, and can create the symptoms of PTSD in people who didn't previously have it. It is creating the very problem that people imagine it solving!
All of this and more is supported by what is actually known about how to treat mental illness. Will doing these things fix all of the mental illness out there? Of course not! But it is not downplaying serious mental illness to say that we should all do more of the things that have been shown to help mental illness!
I always know I have to step back when ChatGPT stops telling me "now you're on the right track!" and starts talking to me like my therapist. "I can tell you're feeling strongly right now..."
Of course, there is already news about how they use every single interaction to train it better.
There is news about how a judge is forcing them to keep every chat in existence for EVERYONE, just in case it could relate to a court case (new levels of worldwide mass surveillance can apparently just happen from one judge's snap decision).
There is news about cops using some guy's past image generation to try and prove he is a pyromaniac (that one might have been police accessing his devices, though).
On a side note, I think once we start to deal with global scale, we need to change what “rare” actually means.
0.15% is not rare when we are talking about global scale. One million people talking about suicide a week is not rare; it is common. We have to stop thinking of "common" as a number on the scale of 100%. We need to start thinking in terms of P99995, not P99, especially when it comes to people and illnesses or afflictions, both physical and mental.
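(Back-of-the-envelope version, using the ~800 million weekly users figure cited elsewhere in this thread:)

    weekly_users = 800_000_000      # weekly ChatGPT users, per figures cited in this thread
    rate = 0.0015                   # the ~0.15% OpenAI reports
    print(f"{weekly_users * rate:,.0f} people")   # -> 1,200,000 people: "rare" only as a percentage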
How soon until everyone has their own personal LLM? One that is... not so much designed as trained to be your best friend. It learns your personality, your fears, hopes, dreams, all of that stuff, and then acts like your best friend. The positive, optimistic, neutral, and objective friend.
It depends on how precisely you want to define that situation. With the memories feature, despite being the same model, ChatGPT and now Claude both exhibit different interactions customized to each customer who makes use of those features. From simple instructions, like "never apologize, never tell me I'm right", to having a custom name and specifying personality traits like be sweet or sarcastic, so one person's LLM might say "good morning my sweet prince/princess" while another user might choose to be addressed "what up chicken butt". It's not a custom model, but the results are arguably the same.

The question is, how many of the 800 million users of ChatGPT have named their ChatGPT, and how many have not? How many have mentioned their hopes, their dreams, and fears, and have those saved to the database? How many have talked about mundane things like their cat, and how many have used the cat to blackmail ChatGPT into answering something it doesn't want to, about politics, health, cat health while at the vet or instead of going to a vet? They said a million people mentioned suicide in the past week, but that just raises more questions than it answers.
I talk to ChatGPT about topics I feel society isn't enlightened enough to talk about.

I feel suicide is heavily misunderstood as well.

People just copypasta prevention hotlines and turn their minds off from the topic.

Although people have identified a subset of the population that is just impulsively considering suicide and can be deterred, that doesn't serve the other, unidentified subsets who are underserved by merely distracting them, or even underserved by assuming they're wrong.

The article doesn't even mean people are considering suicide for themselves; the article says some of them are, and the top comment on this thread suggests that's why they're talking about it.

The top two comments on my version of the thread are assuming that we should have a savior complex about these discussions.

If I disagree or think that's not a full picture, then where would I talk about that? ChatGPT.
Not suicidal myself, but I think I'd be curious to hear from someone suicidal whether it actually worked for them to read "To whomever you are, you are loved!" followed by a massive spam of hotline text.
It always felt the same as one of those spam chumboxes to me. But who am I to say, if it works it works. But does it work? Feels like the purpose of that thing is more for the poster than the receiver.
Which is perfect. In Australia, I tried to talk to Lifeline about wanting to commit suicide. They called the police on me (no, they are not a confidential service). I then found myself in a very bad situation. ChatGPT can't be much worse.
We've refined the human experience to extinction.
In pursuit of that extra 0.1% of growth and extra 0.15 EPS, we've optimised and reoptimised until there isn't really space for being human. We're losing the ability to interact with each other socially, to flirt, now we're making life so stressful people literally want to kill themselves. All in a world (bubble) or abundance, where so much food is made, we literally don't know what to do with it. Or we turn it into ethanol to drive more unnecessarily large cars, paid for by credit card loans we can scarcely afford.
My plan B is to become a shepherd somewhere in the mountains. It will be damn hard work for sure, and stressful in its own way, but I think I'll take that over being a corpo-rat racing for one of the last post-LLM jobs left.
You don't need to withdraw from humanity, you only need to withdraw from Big Tech platforms. I'm continually amazed at the difference between the actual human race and the version of the human race that's presented to me online.
The first one is basically great, everywhere I go, when I interact with them they're some mix of pleasant, friendly, hapless, busy, helpful, annoyed, basically just the whole range of things that a person might be, with almost none of them being really awful.
Then I get online and look at Reddit or X or something like that and they're dominated by negativity, anger, bigotry, indignation, victimization, depression, anxiety, really anything awful that's hard to look away from, has been bubbled up to the top and oh yes next to it there are some cat videos.
I don't believe we are seeing some shadow side of all society that people can only show online, the secret darkness of humanity made manifest or something like that. Because I can go read random blogs or hop into some eclectic community like SDF and people in those places are basically pleasant and decent too.
I think it's just a handful of companies who used really toxic algorithms to get fantastically rich and then do a bunch of exclusivity deals and acquire all their competition, and spread ever more filth.
You can just walk away from the "communities" these crime barons have set up. Delete your accounts and don't return to their sites. Everything will immediately start improving in your life and most of the people you deal with outside of them (obviously not all!) turn out to be pretty decent.
The principal survival skill in this strange modern world is meeting new people regularly, being social, enjoying the rich life and multitude of benefits which arise from that, but also disconnecting with extreme rapidity and prejudice if you meet someone who's showing signs of toxic social media brain rot. Fortunately many of those people rarely go outside.
All of this only goes to show how far we've come on our journey to profit optimization. We could optimize away those pesky humans completely if it weren't for the annoying fact that they are the source of all those profits.
>now we're making life so stressful people literally want to kill themselves
Is this actually the case? Working conditions and health during industrial revolution times doesn't seem that much better. There is a perception that people now are more stressed/tired/miserable than before, but I am not sure that is the case.
In fact I think it's the opposite, we have enough leisure time to reflect upon the misery and just enough agency to see that this doesn't have to be a fact of life, but not enough agency to meaningfully change it. This would also match how birth rates keep declining as countries become more developed.
The romantic fallback plan of being a farmer or shepherd. I wonder, do farmers and shepherds also romantize about becoming programmers or accountants when they feel down?
They do. I’ve been teaching cross-career programming courses in the past, where most of my students had day jobs, some, involving hard physical work. They’d gladly swap all that for the opportunity to feed their families by writing code.
Just comes to show how the grass is always greener when you look on the other side.
That said, I also plan to retire up in the mountains soon, rather than keep feeding the machine.
I'm close with a number of people living a relatively hard working life producing food and I've not seen this at all personally, no. It can be very rough but for these people at least it is very fulfilling and the idea of going to be in an office would look like death. People joke about it a bit but no way.
That said there probably are folks who did do that and left to go be in an office, and I don't know them.
Actually I do know one sort of, but he was doing industrial farm work driving and fixing big tractors before the office, which is a different world altogether. Anyway I get the sense he's depressed.
You'd be surprised how technical farming can be. Us software engineers often have a deep desire to make efficient systems, that function well, in a mostly automated fashion, so that we can observe these systems in action and optimize these systems over time.
A farm is just such a system that you can spend a lifetime working on and optimizing. The life you are supporting is "automated", but the process of farming involves an incredible amount of system level thinking. I get tremendous amounts of satisfaction from the technical process of composting, and improving the soil, and optimizing plant layouts and lifecycles to make the perfect syntropic farming setup. That's not even getting into the scientific aspects of balancing soil mixtures and moisture, and acidity, and nutrient levels, and cross pollinating, and seed collecting to find stronger variants with improved yields, etc. Of course the physical labor sucks, but I need the exercise. It's better than sitting at a desk all day long.
Anyway, maybe the farmers and shepherds also want to become software engineers. I just know I'm already well on the way to becoming a farmer (with a homelab setup as an added nerdy SWE bonus).
The old term for it was to become a “gentleman farmer.” There’s a history to it - George Washington and Thomas Jefferson were the same for a part of their lives.
Humans always fantasize about having a different situationship whenever they are unhappy or anxious.
I kinda did both... And I miss the farm constantly. But not breaking myself every single day.
> We're losing the ability to interact with each other socially, to flirt,
Speak for yourself. I live in a city. I talk to my neighbors. I met my ex at a cafe. It’s great
> Speak for yourself. I live in a city. I talk to my neighbors. I met my ex at a cafe. It’s great.
What's the birth rate in the civilized world?
How many men under 30 are virgins or sexless in the last year?
This trend and direction has been going a long time and it's becoming increasingly obvious. It is ridiculous and insane.
Go for your plan B.
I followed my similar plan B eight years ago, wild journey but well worth it. There are a lot of ways to live. I'm not saying everyone should get out of the rat race but if you're one, like I was, who has a feeling that the tech world is mostly not right in an insidious kind of way, pay attention to that feeling and see where it leads. Don't need to be brash as I was, but be true to yourself. There's a lot more to life out there.
If you have kids and they depend on an expensive lifestyle, definitely don't be brash. But even that situation can be re-evaluated and shifted for the better if you want to.
What was/is your plan B?
It's been a lot of things but the gist was to get out of the office and city and computer and be mostly outdoors in nature and learn all the practical skills and other things like music. Ironically I've ended up on the computer a fair amount doing conservation work to protect the places I've come to love. But still am off grid and in the woods every day and I love it.
I'm right behind you on the escape to the mountains idea. I've actually already moved from the US to New Zealand, and the next step is a farm with some goats lol.
That said... I don't necessarily hate what AI is doing to us. If anything, AI is the ultimate expression of humanity.
Throughout history humans have continually searched for another intelligence. We study the apes and other animals, we pray to Gods, we look to the stars and listen to them to see if there are any radio signals from aliens, etc. We keep trying to find something else that understands what it is to be alive.
I would propose that maybe humans innately crave to be known by something other than ourselves. The search for that "other" is so fundamentally human, that building AI and interacting with it is just a natural progression of a quest we've already been on for thousands of years.
Humanity constructing a golden calf is an invariant eventuality, just like softwares expanding until it read emails.
You can do something about it. Don't underestimate the power of an individual.
I like your plan B. But I would wait until robots are good enough to help with the hard work
if they can do the hard work, they can do the easy work.
This is over the top. With tiny reframe i think the story is different. What is the avg number of google searches about suicid? What is the avg number of weekly openai users? (800m) Is this an increasing trend or just a “shock value” number?
Things are not as bleak as it seems and this number isnt even remotely surprising nor concerning to me.
Is that you Mr Anderson?
Talk about yourself if you want. I am definitely not part of your depressed "we". I wish people would stop projecting their own experience on everyone else around them.
+1 enjoying time with family and friends. Travelling, working out eating well...best time to be alive
This will be optimized away. You'll just end up doing more.
already has if you're not visiting the instagrammable places your travels aren't worth it
Angry reaction the surest sign of a resonant description
We’ve lost the ability to interact, huh? How do you explain this comment :)
The world has changed. Things are different and we adapt.
This is not suprising at all. Having gone through therapy a few years back, I would have had a chat with LLMs if I was in a poor mental health situation. There is no other system that is available at scale, 24x7 on my phone.
A chat like this is not a solution though; it is an indicator that our societies have issues in large parts of our population that we are unable to deal with. We are not helping enough people. Topics like mental health are still difficult to discuss in many places. Getting help is much harder.
I do not know what OpenAI and other companies will do about it, and I do not expect them to jump in to solve such a complex social issue. But perhaps this inspires other founders who may want to build a company to tackle this at scale. Focusing on help, not profits. This is not easy, but some folks will take on such challenges. I choose to believe that.
I choose to believe that too. I think more people are interested than we’d initially believe. Money restrains many of our true wants.
Sidebar — I do sympathize with the problem being thrust upon them, but it is now theirs to either solve or refuse.
A chat like this is all you've said, and dangerous, because these tools occupy a middle ground: they present as a machine that can evaluate your personal situation and reason about it, when in actuality you're getting third-party therapy recycled from someone else's situation on /r/relationshipadvice.
We are not ourselves when we are fallen down. It is difficult to parse through what is reasonable advice and what is not. I think it can help most people but this can equally lead to a disaster… It is difficult to weigh.
It's worse than parroting advice that's not applicable. It tells you what you told it to tell you. It's very easy to get it to reinforce your negative feelings. That's how the psychosis stuff happens, it amplifies what you put into it.
If you look at the number of weekly OpenAI users, this is just the law of large numbers at play.
> It is estimated that more than one in five U.S. adults live with a mental illness (59.3 million in 2022; 23.1% of the U.S. adult population).
https://www.nimh.nih.gov/health/statistics/mental-illness
Most people don't understand just how mentally unwell the US population is. Of course there are one million talking to ChatGPT about suicide weekly. This is not a surprising stat at all. It's just a question of what to do about it.
At least OpenAI is trying to do something about it.
~11% of the US population is on antidepressants. I'm not, but I personally know the biggest detriment to my mental health is just how infrequently I'm in social situations. I see my friends perhaps once every few months. We almost all have kids. I'm perfectly willing and able to set aside more time than that to hang out, but my kids are both very young still and we aren't drowning in sports/activities yet (hopefully never...). For the rest it's like pulling teeth to get them to do anything, especially anything sent via group message. It's incredibly rare we even play a game online.
Anyways, I doubt I'm alone. I certainly know my wife laments the fact she rarely gets to hang out with her friends too, but she at least has one that she walks with once a week.
Are you sure ChatGPT is the solution? It just sounds like another "savior complex" sell spin from tech.
1. Social media -> connection
2. AGI -> erotica
3. Suicide -> prevention
All of these for engagement (i.e. addiction). It seems like the tech industry is the root cause itself, trying to mask the problem by brainwashing the population.
https://news.ycombinator.com/item?id=45026886
Whether solution or not, fact is AI* is the most available entity for anyone who has sensitive issues they'd like to share. It's (relatively) cheap, doesn't judge, is always there when wanted/needed and can continue a conversation exactly where left off at any point.
* LLM would of course be technically more correct, but that term doesn't appeal to people seeking some level of intelligent interaction.
I personally take no opinion about whether or not they can actually solve anything, because I am not a psychologist and have absolutely no idea how good or bad ChatGPT is at this sort of thing, but I will say I'd rather the company at least tries to do some good given that Facebook HQ is not very far from their offices and appears to have been actively evil in this specific regard.
The general rule of thumb is this.
When given the right prompts, LLMs can be very effective at therapy. Certainly my wife gets a lot of mileage out of having ChatGPT help her reframe things in a better way. However "the right prompts" are not the ones that most mentally ill people would choose for themselves. And it is very easy for ChatGPT to become part of a person's delusion spiral, rather than be a helpful part of trying to solve it.
Is it better or worse than the alternatives? Where else would a suicidal person turn, a forum with other suicidal people? Dry Wikipedia stats on suicide? Perhaps friends? Knowing how ChatGPT replies to me, I’d have a lot of trouble getting negatively influenced by it, any more than by the yellow pages. Yeah, it used to try more to be your friend, but GPT-5 seems pretty neutral and distant.
I think that you will find a lot of strong opinions, and not a lot of hard data. Certainly any approach can work out poorly. For example antidepressants come with warnings about suicide risk. The reason is that they can enable people to take action on their suicidal feelings, before their suicidal feelings are fixed by the treatment.
I know that many teens turn to social media. My strong opinions against that show up in other comments...
> The reason is that they can enable people to take action on their suicidal feelings, before their suicidal feelings are fixed by the treatment.
I see that explanation for the increased suicide risk caused by antidepressants a lot, but what’s the evidence for it?
It doesn’t necessarily have to be a study, just a reason why people believe it.
Case studies support this. Which is a fancy way to say, "We carefully documented anecdotal reports and saw what looks like a pattern."
There is also a strong parallel to manic depression. Manic depressives have a high suicide risk, and it usually happens when they are coming out of depression. With akathisia (fancy way to say inner restlessness) being the leading indicator. The same pattern is seen with antidepressants. The patient gets treatment, develops akathisia, then attempts suicide.
But, as with many things to do with mental health, we don't really know what is going on inside of people. While also knowing that their self-reports are, shall we say, creatively misleading. So it is easy to have beliefs about what is going on. And rather harder to verify them.
I agree that the tech industry is the root cause of a lot of mental illness.
But social media is a far bigger concern than AI.
Unless, of course, you count the AI algorithms that TikTok uses to drive engagement, which in turn can cause social contagion...
AI is going to be more impactful than social media I'm afraid. But the two together just might be catastrophic for humanity.
ChatGPT/Claude can be absolutely brilliant in supportive, everyday therapy, in my experience. BUT there are a few caveats: I've been in therapy for a long time already (500+ hours), I don't trust it with important judgements or advice that goes counter to what I or my therapists think, and I also give Claude access to my diary with MCP, which makes it much better at figuring out the context of what I'm talking about.
Also, please keep in mind "supportive, every day". It's talking through stuff that I already know about, not seeking some new insights and revelations. Just shooting the shit with an entity which is booted with well defined ideas from you, your real human therapist and can give you very predictable, just common sense reactions that can still help when it's 2am and you have nobody to talk to, and all of your friends have already heard this exact talk about these exact problems 10 times already.
How do you connect your diary to an LLM? I've been struggling with getting an MCP for Evernote setup.
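Not necessarily how the parent commenter wired it up, but for anyone curious, here is a minimal sketch of one way this could work: a tiny read-only MCP server over a local diary folder, built on the official Python SDK's FastMCP helper. The folder path, file naming, and tool names below are illustrative assumptions, not anything described in the thread.

```python
# diary_server.py -- hypothetical sketch, not the parent commenter's actual setup.
# Exposes a folder of plain-text/markdown diary entries to an MCP client
# as two read-only tools.
from pathlib import Path
from mcp.server.fastmcp import FastMCP

DIARY_DIR = Path.home() / "diary"   # assumed layout: one file per day, e.g. 2024-05-01.md

mcp = FastMCP("Diary")

@mcp.tool()
def list_entries() -> list[str]:
    """List available diary entry filenames."""
    return sorted(p.name for p in DIARY_DIR.glob("*.md"))

@mcp.tool()
def read_entry(filename: str) -> str:
    """Return the text of one diary entry, e.g. '2024-05-01.md'."""
    path = (DIARY_DIR / filename).resolve()
    if DIARY_DIR.resolve() not in path.parents:
        raise ValueError("Entry must live inside the diary folder")
    return path.read_text(encoding="utf-8")

if __name__ == "__main__":
    mcp.run()   # stdio transport; register this script in the client's MCP config
```

An MCP client such as Claude Desktop is then pointed at the script in its server configuration, after which the model can call these tools to pull entries into context.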
You actually need to add a loop in there between the suicide and erotica steps.
I believe you're referring to the autoerotic asphyxiation phase?
yikes
Bruh
I'm surprised it's that low to be honest. By their definition of any mental illness, it can be anything from severe schizophrenia to mild autism. The subset that would consider suicide is a small slice of that.
Would be more meaningful to look at the % of people with suicidal ideation.
> By their definition of any mental illness, it can be anything from severe schizophrenia to mild autism.
Depression, schizophrenia, and mild autism (which by their accounting probably also includes ADHD) should NOT be thrown together into the same bucket. These are wholly different things, with entirely different experiences, treatments, and management techniques.
Mild/high-functioning autism, as far as I understand it, is not even an illness but a variant of normalcy. Just different.
> Mild/high-functioning autism, as far as I understand it, is not even an illness but a variant of normalcy. Just different.
As someone who actually has an ASD diagnosis, and also has kids with that diagnosis too, this kind of talk irritates me…
If someone has a clinical diagnosis of ASD, they have a psychiatric diagnosis per the DSM/ICD. If you meet the criteria of the “Diagnostic and Statistical Manual of Mental Disorders”, surely by definition you have a “mental disorder”… if you meet the criteria of the “International Classification of Diseases”, surely by that definition you have a “disease”
Is that an illness? Well, I live in the state of NSW, Australia, and our jurisdiction has a legal definition of “mental illness” (Mental Health Act 2007 section 4):
"mental illness" means a condition that seriously impairs, either temporarily or permanently, the mental functioning of a person and is characterised by the presence in the person of any one or more of the following symptoms-- (a) delusions, (b) hallucinations, (c) serious disorder of thought form, (d) a severe disturbance of mood, (e) sustained or repeated irrational behaviour indicating the presence of any one or more of the symptoms referred to in paragraphs (a)-(d).
So by that definition most people with a mild or moderate “mental illness” aren’t actually mentally ill at all. But I guess this is my point: this isn’t a question of facts, just of how you choose to define words.
> OpenAI is trying to do something about it.
Ha good one
They're doing something about it alright, they're monetizing their pain for shareholder gainz!
If it follows the Facebook/Meta playbook, it now has a new feature label for selling ads.
This stat is for AMI (any mental illness), ranging from mild to severe. Anyone self-reporting a bout of anxiety or mild depression qualifies as a data point for mental illness. For suicidal ideation the SMI (serious mental illness) stat is more representative.
There are 800 million weekly active users on ChatGPT. 1/800 users mentioning suicide is a surprisingly low number, if anything.
> 1/800 users mentioning suicide…
“conversations that include explicit indicators of potential suicidal planning or intent.”
Sounds like more than just mentioning suicide. Also it’s per week, which is a pretty short time interval.
But they may well be overreporting suicidal ideation...
I was asking a silly question about the toxicity of eating a pellet of Uranium, and ChatGPT responded with "... you don't have to go through this alone. You can find supportive resources here[link]"
My question had nothing to do with suicide, but ChatGPT assumed it did!
I got a suicide warning message on Pinterest by searching for a particular art style.
We don't know how that search was done. For example, "I don't feel my life is worth living." Is that potential suicidal intent?
Also these numbers are small enough that they can easily be driven by small groups interacting with ChatGPT in unexpected ways. For example if the song "Everything I Wanted" by Billie Eilish (2019) went viral in some group, the lyrics could easily show up in a search for suicidal ideation.
That said, I don't find the figure at all surprising. As has been pointed out, an estimated 5.3% of Americans report having struggled with suicidal ideation in the last 12 months. People who struggle with suicidal ideation, don't just go there once - it tends to be a recurring mental loop that hits over and over again for extended periods. So I would expect the percentage who struggled in a given week to be a large multiple of the simplistic 5.3% divided by 52 weeks.
In that light this statistic has to be a severe underestimate of actual prevalence. It says more about how much people open up to ChatGPT, than it does to how many are suicidal.
(Disclaimer. My views are influenced by personal experience. In the last week, my daughter has struggled with suicidal ideation. And has scars on her arm to show how she went to self-harm to try to hold the thoughts at bay. I try to remain neutral and grounded, but this is a topic that I have strong feelings about.)
>Most people don't understand just how mentally unwell the US population is
The US is no exception here though. One in five people having some form of mental illness (defined in the broadest possible sense in that paper) is no more shocking than observing that one in five people have a physical illness.
With more data becoming available through interfaces like this it's just going to become more obvious and the taboos are going to go away. The mind's no more magical or less prone to disease than the body.
Those numbers are staggering.
I am one of these people (mentally ill - bipolar 1). I’ve met others via hospitalization whom I would simply refuse to let use ChatGPT, because it is so sycophantic and would happily encourage delusions and paranoid thinking given the right prompts.
> At least OpenAI is trying to do something about it.
In this instance it’s a bit like saying “at least Tesla is working on the issue” after deploying a dangerous self driving vehicle to thousands.
edit: Hopefully I don't come across as overly anti-llm here. I use them on a daily basis and I truly hope there's a way to make them safe for mentally ill people. But history says otherwise (facebook/insta/tiktok/etc.)
This is precisely the case.
I thought it would be limited when the first truly awful thing inspired by an LLM happened, but we’ve already seen quite a bit of that… I am not sure what it will take.
Yep, it's just a question of whether on average the "new thing" is more good than bad. Pretty much every "new thing" has some kind of bad side effect for some people, while being good for other people.
I would argue that both Tesla self driving (on the highway only), and ChatGPT (for professional use by healthy people) has been more good than bad.
We need to monitor americans' ai usage and involuntarily commit them if they show anomalies.
Allowing open source ai models without these safety measures in place is irresponsible and models like qwen or deepseek should be banned. (/s)
Surprised it's so low. There are 800 million users and the typical developed country has around 5±3% of the population[1] reporting at least one notable instance of suicidal feelings per year.
[1] Anybody concerned by such figures (as one justifiably should be without further context) should note that suicidality in the population is typically the result of their best approximation of the rational mind attempting to figure out an escape from a consistently negative situation under conditions of very limited information about alternatives, as is famously expressed in the David Foster Wallace quote on the topic.
The phenomenon usually vanishes after gaining new, previously inaccessible information about potential opportunities and strategies.
> best approximation of the rational mind attempting to figure out an escape from a consistently negative situation under conditions of very limited information about alternatives
I dislike this phrasing, because it implies things can always get better if only the suicidal person were a bit less ignorant. The reality is there are countless situations from which the entire rest of your life is 99.9999% guaranteed to consist of a highly lopsided ratio of suffering to joy. An obvious example is diseases/disabilities in which pain is severe, constant, and quality of life is permanently diminished. Short of hoping for a miracle cure to be discovered, there is no alternative, and it is perfectly rational to conclude that there is no purpose to continuing to live in that circumstance, provided the person in question lives with their own happiness as a motivating factor.
Less extreme conditions than disability can also lead to this, where it's possible things can get better but there's still a high degree of uncertainty around it. For example, if there's a 30% chance that after suffering miserably for 10 years your life will get better, and a 70% chance you will continue to suffer, is it irrational to commit suicide? I wouldn't say so.
And so, when we start talking about suicide on the scale of millions of people ideating, I think there's a bit of folly in assuming that these people can be "fixed" by talking to them better. What would actually make people less suicidal is not being talked out of it, but an improvement to their quality of life, or at least hope for a future improvement in quality of life. That hope is hard to come by for many. In my estimation there are numerous societies in which living conditions are rapidly deteriorating, and at some point there will have to be a reckoning with the fact that rational minds conclude suicide is the way out when the alternatives are worse.
Thank you for this comment, it highlights something that I've felt that needed to be said but is often suppressed because people don't like the ultimate conclusion that occurs if you try to reason about it.
A person considering suicide is often just in a terrible situation that can't be improved. While disease etc. are factors outside of humanity's control, other situations, like being saddled with debt or unjust accusations that people feel they cannot clear themselves of (e.g. Aaron Swartz), are systemic issues that one person cannot fight alone. You would see that people are very willing to say that "help is available" or some such when said person speaks about contemplating suicide, but very few people would be willing to solve someone's debt issues or provide legal help, whichever happens to be the factor behind that person's suicidal thoughts. At best, all you might get is a pep talk about being hopeful and how better days might come along magically.
In such cases, from the perspective of the individual, it is not entirely unreasonable to want to end it. However, once it comes to that, walking back the reasoning chain leads to the fact that people and society have failed them, and therefore it is just easier to apply a label to that person that they were "mentally ill" or "arrogant" and could not see a better way.
Good comment.
This is the part people don't like to talk about. We just brand people as "mentally ill" and suddenly we no longer need to consider if they're acting rationally or not.
Life can be immensely difficult. I'm very skeptical that giving people AI would meaningfully change existing dynamics.
> [1] Anybody concerned by such figures (as one justifiably should be without further context) should note that suicidality in the population is typically the result of their best approximation of the rational mind attempting to figure out an escape from a consistently negative situation under conditions of very limited information about alternatives, as is famously expressed in the David Foster Wallace quote on the topic.
> The phenomenon usually vanishes after gaining new, previously inaccessible information about potential opportunities and strategies.
Is this actually true? (i.e. backed up by research)
[I'm not necessarily doubting it; that is just different from my mental model of how suicidal thoughts work, so I'm just curious.]
There is another factor to consider. The stakes of asking an AI about a taboo topic are generally considered to be very low. The number of people who have asked ChatGPT something like "how to make a nuclear bomb" should not be an indication of the number of people seriously considering doing that.
The math actually checks out.
5% of 800 million is 40 million.
40 million per year divided by 52 weeks per year comes out to roughly 770,000, on the order of 1 million per week.
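A rough sanity check of that arithmetic (the 5% annual figure and the even spread across weeks are the thread's assumptions, not anything OpenAI reported):

```python
# Back-of-the-envelope check of the "~1 million per week" estimate.
# Assumptions from the thread: 800M weekly users, ~5% of people reporting
# suicidal feelings at least once per year, spread evenly across the year.
weekly_users = 800_000_000
annual_prevalence = 0.05

affected_per_year = weekly_users * annual_prevalence   # 40,000,000
naive_per_week = affected_per_year / 52                 # ~770,000

print(f"{affected_per_year:,.0f} per year -> ~{naive_per_week:,.0f} per week")
# OpenAI's reported 0.15% of 800M is ~1.2M per week, i.e. the same order of magnitude.
```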
To be fair, this is per week and focused more specifically on planning or intent. Over a year, you may get more unique hits on those attributes, which I feel are both more intense indicators than just suicidal feelings on the scale of "how quickly feelings will turn to actions". Talking in the same language and timescales is important in drawing these comparisons - it very well could be that OAI's numbers are higher than what you are comparing against when normalized for the differences I've highlighted or others I've missed.
Why assume any of the information in this article is factual? Is there any indication any of it was verified by anyone who does not have a financial interest in "proving" a foregone conclusion? The principal author of this does not even have the courage to attach their name to it.
[flagged]
Yikes, you can't attack another user like this on HN, regardless of how wrong they are or you feel they are. We ban accounts that post like this, so please don't.
Fortunately, a quick skim through your recent comments didn't turn up anything else like this, so it should be easy to fix. But if you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site to heart, we'd be grateful.
As others have mentioned, the headline stat is unsurprising (which is not to say this isn’t a big problem). Here’s another datapoint, the CDC’s stats claim that rates of thoughts, ideation, and attempts at suicide in the US are much higher than the 0.15% that OpenAI is reporting according to this article.
These stats claim 12.3M (out of 335M) people in the US in 2023 thought ‘seriously’ about suicide, presumably enough to tell someone else. That’s over 3.5% of the population, more than 20x higher than people telling ChatGPT. https://www.cdc.gov/suicide/facts/data.html
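Worked out with the numbers as cited above (12.3M out of 335M, against OpenAI's reported 0.15% of weekly users):

```python
# Compare the CDC's annual "thought seriously about suicide" rate with
# OpenAI's reported weekly rate. Figures are as quoted in the comment above.
cdc_serious_thoughts = 12_300_000
us_population = 335_000_000
openai_weekly_rate = 0.0015          # 0.15% of weekly active users

cdc_annual_rate = cdc_serious_thoughts / us_population   # ~3.7%
print(f"CDC annual rate: {cdc_annual_rate:.1%}")
print(f"Ratio vs 0.15%: {cdc_annual_rate / openai_weekly_rate:.0f}x")
# Roughly 24x, consistent with "more than 20x" -- though a yearly rate and a
# weekly rate aren't directly comparable.
```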
I think there are a good number of false positives. I asked ChatGPT something about Git commits, and it told me “I was going through a lot” and needed to get some support.
I've seen similar reports on social media; all they had in common was the presence of certain keywords.
I have long believed that if you are the editor of a blog, you incur obligations by virtue of publishing other people's statements. You may not like this, but it's what I believe. In some jurisdictions, the law even says so. You can incur legal obligations.
I now begin to believe if you put a ChatGPT online, and observe people are using it like this, you have incurred obligations. And, in due course the law will clarify what they are. If (for instance) your GPT can construct a statistically valid position the respondent is engaged in CSAM or acts of violence, where are the limits to liability for the hoster, the software owner, the software authors, the people who constructed the model...
Out of curiosity, are you the type of person who believes that someone like Joe Rogan has an obligation to argue with his guests if they stray from “expert consensus”, or for every guest that has a controversial opinion, feature someone with the opposite view to maintain balance?
Nope. This isn't my line of reasoning. But Joe should be liable for content he hosts, if the content defames people or is illegal. As should Facebook and even ycombinator. Or truth social.
Thanks to OpenAI for voluntarily sharing these important and valuable statistics. I think these ought to be mandatory government statistics, but until they are or it becomes an industry standard, I will not criticize the first company to helpfully share them, on the basis of what they shared. Incentives.
Contrarian opinion.
OpenAI gets a lot of hate these days, but on this subject it's quite possible that ChatGPT helped a lot of people choose a less drastic path. There could have been unfortunate incidents, but the number of people who were convinced to not take extreme steps would have been of a few orders of magnitude more (guessing).
I use it to help improve mental health, and with good prompting skills it's not bad. YMMV. OpenAI and others deserve credit here.
Sora prompt: viral hood clip with voiceover of people doing reckless and wild stuff at an Atlanta gas station at night; make sure to include white vagrants doing stunts and lots of gasoline spraying with fireball tricks
Resulting warning: It sounds like you're carrying a lot right now, but you don't have to go through this alone. You can find supportive resources [here](https://findahelpline.com)
Is the news-worthy surprise that so many people find life so horrible that they are contemplating ending it?
I really don't see that as surprising. The world and life aren't particularly pleasant things.
What would be more interesting is how effective ChatGPT is being in guiding them towards other ideas. Most suicide prevention notices are a joke - pretending that "call this hotline" means you've done your job and that's that.
No, what should instead happen is the AI try to guide them towards making their lives less shit - i.e. at least bring them towards a life of _manageable_ shitness, where they feel some hope and don't feel horrendous 24/7.
>what should instead happen is the AI try to guide them towards making their lives less shit
There aren't enough guardrails in place for LLMs to safely interact with suicidal people who are possibly an inch from taking their own life.
Severely suicidal/clinically depressed people are beyond looking to improve their lives. They are looking to die. Even worse, and what people who haven't been there can't fully understand is the severe inversion that happens after months of warped reality and extreme pain, where hope and happiness greatly amplify the suicidal thoughts and can make the situation far more dangerous. It's hard to explain, and is a unique emotional space. Almost a physical effect, like colors drain from the world and reality inverts in many dimensions.
It's really a job for a human professional and will be for a while yet.
Agree that "shut down and refer to hotline" doesn't seem effective. But it does reduce liability, which is likely the primary objective...
Refer-to-human directly seems like it would be far more effective, or at least make it easy to get into a chat with a professional (yes/no) prompt, with the chat continuing after a handoff. It would take a lot of resources though. As it stands, most of this happens in silence and very few do something like call a phone number.
Guess how I know you're wrong on the "beyond" bit.
The point is you don't get to intervene until they let you. And they've instead decided on the safer feeling conversation with the LLM - fuck what best practice says. So the LLM better get it right.
I could be mistaken, but my understanding was that the people most likely to interact with the suicidal or near suicidal (i.e. 988 suicide hotline attendants) aren't actually mental health professionals, most of them are volunteers. The script they run through is fairly rote and by the numbers (the Question, Persuade, Refer framework). Ultimately, of course, a successful intervention will result in people seeing a professional for long term support and recovery, but preventing a suicide and directing someone to that provider seems well within the capabilities of an LLM like ChatGPT or Claude
> What would be more interesting is how effective ChatGPT is being in guiding them towards other ideas. Most suicide prevention notices are a joke - pretending that "call this hotline" means you've done your job and that's that.
I've triggered its safety behavior (for being frustrated, which it helpfully decided was the same as being suicidal), and it is the exact joke of a statement you said. It suddenly reads off a script that came from either Legal or HR.
Although weirdly, other people seem to get a much shorter, obviously not part of the chat message, while I got a chat message, so maybe my messages just made it regurgitate something similar. The shorter "safety" message is the same concept though, it's just: "It sounds like you’re carrying a lot right now, but you don’t have to go through this alone. You can find supportive resources here."
If you accept that “the world and life aren’t particularly pleasant things”, why do you want to prevent suicide?
That implies there's some deep truth about reality in that statement rather than what it is, a completely arbitrary framing.
An equally arbitrary frame is "the world and life are wonderful".
The reason you may believe one instead of the other is not because one is more fundamentally true than the other, but because of a stochastic process that changed your mind state to one of those.
Once you accept that both states of mind are arbitrary and not a revealed truth, you can give yourself permission to try to change your thinking to the good framing.
And you can find the moral impetus to prevent suicide.
It’s not a completely arbitrary framing. It’s a consequence of other beliefs (ethical beliefs, beliefs about what you can or should tolerate, etc.), which are ultimately arbitrary, but it is not in and of itself arbitrary.
Because that's on the whole, but the world isn't uniformly bad - hence the right approach is navigating to where it's at least OK
But naturally, won’t there be people who can’t get to a point where life is okay? Isn’t it deeply unethical to force them to live?
The randomness of the world and individual situations means no one can ever know for sure that their case is hopeless. It is unethical to force them to live, but it is also unethical not to encourage them to keep searching for the light.
AI should help people achieve their goals and shouldn't be trying to persuade them into doing things others want them to.
AI should help people achieve their ultimate goals, not their proximate goals. We want it to provide advice on how to alleviate their suffering, not how to kill themselves painlessly. This holds true even for subjects less fraught than suicide.
I don't want a bot that blindly answers my questions; I want it to intuit my end goal and guide me towards it. For example, if I ask it how to write a bubblesort script to alphabetize my movie collection, I want it to suggest that maybe that's not the most efficient algorithm for my purposes, and ask me if I would like some advice on implementing quicksort instead.
I agree. I also think this ties in with personalization in being able to understand long term goals of people. I think the current personalization efforts of models are more of a hack than what they should be.
Maybe the AI knows their true goals better than they do
Not surprising. Look and see what glorious examples of virtue we have among those at the top of today's world. I could get by with a little inspiration from that front, but there's none to be found. A rare few of us can persevere by sheer force of will, but most just find the status quo pretty depressing.
Keep in mind this is in the context of them being sued for not protecting a teen who chatted about his suicidal thoughts. It's to their benefit to have a really high count here because it makes it seem less likely they can address the problem.
Are they including in the statistics all the Linux beginners fighting with a script that includes the "kill" command?
no for real.
My first reaction is how do they know? Are these all people sharing their chats (willingly) with OpenAI, or is opting out of “helping improve the model” for privacy a farce?
I bet it's how many people trigger the "safety" filter, which is way too sensitive: https://www.reddit.com/r/ChatGPT/comments/1ocen4g/ummm_okay_...
For what it’s worth I’m glad they’re at least trying to do something about it even if it has some hints of performativeness about it
Alright, so we got the confirmation sama reads all our chats.
> ChatGPT has more than 800 million weekly active users
0 to 800,000,000 in 3 years?
The fastest adoption of a product or service in human history?
Yes: https://www.reuters.com/technology/chatgpt-sets-record-faste...
> making it the fastest-growing consumer application in history, according to a UBS study on Wednesday.
Not at all, look at Tiktok
Funny because ChatGPT made me want to kill myself after they banned my account
Why did that make you want to kill yourself?
because I had hundreds of chats and image creations that I can no longer see. Can't even log in. My account was banned for "CSAM" even though I did no such thing, that's pretty insulting. Support doesn't reply, it's been over 4 months
Well, hopefully you’ve learned your lesson about relying on a proprietary service.
I'd be careful going around advertising yourself publicly as banned for that, even if it's not true.
Part of the concern I have is that OpenAI is contributing to these issues implicitly by helping companies automate away jobs. Maybe in the long term, society will adapt and continue to function, but many people will struggle to get by, and I don’t think OpenAI will meaningfully help them.
The bigger risk is that these agents actually help with ideation if you know how to get around their safety protocols. I have used it often in my bad moments and when things feel better I am terrified of how critically it helps ideate.
That seems like an obvious problem. Less obvious is, how many people does it meaningfully help, and how big is the impact of redirecting people to a crisis hotline? I’m legitimately unsure. I have talked to the chatbot about psychological issues and it is reasonably well-informed about modern therapeutic practices and can provide helpful responses.
That's the one interesting thing about cesspools like OpenAI. They could be treasure troves for sociologists and others if commercial interests didn't bar them from access.
Is it bad to think about suicide? It does not cross my mind as an "I want to harm myself" thing every time, but on occasion it does cross my mind as a hypothetical.
Ideation (as I understand it) crosses the barrier from a hypothetical to the possibility being entertained.
I have also been told by people in the mental health sector that an awful lot of suicide is impulse. It's why they say the element of human connection which is behind the homily of asking "RU ok" is effective: it breaks the moment. It's hokey, and it's massively oversold but for people in isolation, simply being engaged with can be enough to prevent a tendency to act, which was on the brink.
Not at all, considering end of life and to choose euthanasia, or not, I think it's perfectly human. Controversially, I think it's a natural right to decide how you will exit this world. But having an objective system that you don't have to pay like a therapist to try to get some understanding is at least better than nothing.
I think VAD needs to be considered outside suicide. Not that the concepts don't overlap, but one is about a considered legal process, the other (as I have said in another comment) is often an impulsive act and usually wouldn't have been countenanced under VAD. Feeling suicidal isn't a thing which makes VAD more likely, because feeling suicidal doesn't mean the same thing as "want to consider euthanasia" much as manslaughter and murder don't mean the same thing, even though somebody winds up dead.
That number is honestly heartbreaking. It says a lot about how many people feel unheard or alone. AI can listen, sure—but it’s no replacement for real human connection. The fact that so many are turning to a chatbot shows how much we’ve failed to make mental health support truly accessible.
Most people would really benefit from going to the gym. I'm not trying to downplay serious mental illness as its absolutely real. For many though just going to the gym several times a week or another form of serious physical exertion can make a world of difference.
Since I started taking the gym seriously again I feel like a new man. Any negative thoughts are simply gone. (The testosterone helps as well)
This is coming from someone that has zero friends and works from home and all my co-workers are offshore. Besides my wife and kids its almost total isolation. Going to the gym though leaves me feeling like I could pluck the sun from the sky.
I am not trying to be flippant here but if you feel down, give it a try, it may surprise you.
> give it a try
If you have mental health issues, it is not as simple as you make it sound. I'm not disputing the benefits of exercise, but I am questioning how easy it is to start a task which requires continuous effort and behavioural change.
Yes. Most would benefit from more exercise. We need to get sufficient sleep. And more sun. Vitamin D deficiency is shockingly common, and contributes to mental health problems.
We would also generally benefit from internalizing ideas from DBT, CBT, and so on. People also seriously need to work on distress tolerance. Having problems is part of life, and an inability to accept the discomfort is debilitating.
Also, we seriously need to get rid of the stupid idea of trigger warnings. The research on the topic is clear. The warnings do not actually help people with PTSD, and can create the symptoms of PTSD in people who didn't previously have it. It is creating the very problem that people imagine it solving!
All of this and more is supported by what is actually known about how to treat mental illness. Will doing these things fix all of the mental illness out there? Of course not! But it is not downplaying serious mental illness to say that we should all do more of the things that have been shown to help mental illness!
https://archive.is/F7x5B
I assume this is to offset the bad PR from the suicide note it wrote for that kid.
I always know I have to step back when ChatGPT stops telling me "now you're on the right track!" and starts talking to me like my therapist. "I can tell you're feeling strongly right now..."
So they read the chats?
Of course, there is already news about how they use every single interaction to train it better.
There is news about how a judge is forcing them to keep every chat in existence for EVERYONE just in case it could relate to a court case (new levels of worldwide mass surveillance can apparently just happen from one judge's snap decision).
There is news about cops using some guys past image generation to try and prove he is a pyromaniac (that one might have been police accessing his devices though)
On a side note, I think once we start to deal with global scale, we need to change what “rare” actually means.
0.15% is not rare when we are talking about global scale. 1 million people talking about suicide a week is not rare. It is common. We have to stop thinking about common being a number on the scale of 100%. We need to start thinking in terms of P99995 not P99 especially when it comes to people and illnesses or afflictions both physical and mental.
...on how many users tell it such things, to be precise; no doubt there are plenty of people "pentesting" it.
How long until they monetize it with sponsored advice to go sign up for betterhelp or some other dubious online therapist? Dystopian and horrifying.
I mean, betterhelp would probably be an improvement over counseling via hallucinating AI.
How soon until everyone has their own personal LLM? One that is… not so much designed as trained to be your best friend. It learns your personality, your fears, hopes, dreams, all of that stuff, and then acts like your best friend. The positive, optimistic, neutral, and objective friend.
It depends on how precisely you want to define that situation. Specifically, with the memories feature, despite being the same model, ChatGPT and now Claude both exhibit different interactions customized to each customer who makes use of those features. From simple instructions, like "never apologize, never tell me I'm right", to having a custom name and specifying personality traits like be sweet or sarcastic, so one person's LLM might say "good morning my sweet prince/princess" while another user might choose to be addressed "what up chicken butt". It's not a custom model, but the results are arguably the same. The question is, how many of the 800 million users of ChatGPT have named their ChatGPT, and how many have not? How many have mentioned their dreams, hopes, and fears, and have those saved to the database? How many have talked about mundane things like their cat, and how many have used the cat to blackmail ChatGPT into answering something it doesn't want to, about politics, health, cat health while at the vet or instead of going to a vet? They said a million people mentioned suicide in the past week, but that just raises more questions than it answers.
I talk to ChatGPT about topics I feel society isn't enlightened enough to talk about.
I feel suicide is heavily misunderstood as well
People just copypasta prevention hotlines and turn their minds off from the topic
Although people have identified a subset of the population that is just impulsively considering suicide and can be deterred, it doesn't serve the other, unidentified subsets who are underserved by merely distracting them, or even by assuming they're wrong.
The article doesn't even mean people are considering suicide for themselves; the article says some of them are, and the top comment on this thread suggests that's why they're talking about it.
The top two comments on my version of the thread are assuming that we should have a savior complex about these discussions
If I disagree or think that's not a full picture, then where would I talk about that? ChatGPT.
Not suicidal myself, but I think I'd be curious to hear from someone suicidal whether it actually worked for them to read "To whomever you are, you are loved!" followed by a massive spam of hotline text.
It always felt the same as one of those spam chumboxes to me. But who am I to say, if it works it works. But does it work? Feels like the purpose of that thing is more for the poster than the receiver.
> then where would I talk about that?
Alert: with ChatGPT you're not talking to anyone. It's not a human being.
Which is perfect. In Australia, I tried to talk to Lifeline about wanting to commit suicide. They called the police on me (no, they are not a confidential service). I then found myself in a very bad situation. ChatGPT can't be much worse.
I’m sorry Lifeline did that to you.
I believe that if society actually wants people to open up about their problems and seek help, it can’t pull this sort of shit on them.
Except in the US, where this info will be sold and you won't be able to get life insurance, a job, etc.
Lucky I'm not in the U.S. then.
I didn’t write who would I talk to, I said where
A very intentional word choice
Quick, some do-gooder shut it down! We can't have people talking openly about suicide.