> I feel like this should go without saying, but really, do not use an AI model as a replacement for therapy.
I know several people who rave about ChatGPT as a pseudo-therapist, but from the outside the results aren’t encouraging. They like the availability and openness they experience by talking to a non-human, but they also like the fact that they can get it to say what they want to hear. It’s less of a therapist and more of a personal validation machine.
You want to feel like the victim in every situation, have a virtual therapist tell you that everything is someone else’s fault, and validate choices you made? Spend a few hours with ChatGPT and you learn how to get it to respond the way you want. If you really don’t like the direction a conversation is going you delete it and start over, reshaping the inputs to steer it the way you want.
Any halfway decent therapist will spot these behaviors and at least not encourage them. LLM therapists seem to spot these behaviors and give the user what they want to hear.
Note that I’m not saying it’s all bad. They seem to help some people work through certain issues, rubber duck debugging style. The trap is seeing this success a few times and assuming it’s all good advice, without realizing it’s a mirror for your inputs.
I do use my AI as an augmentation of therapy. It can help in the moment when it's 2am and I'm upset. It can mirror like a therapist does (they don't really tell you what to do, they just make you realise what you already know). And it can put things into perspective. And it shouldn't be underestimated: even the mere act of telling someone (or something) what's bothering you has a huge benefit, because it orders your thoughts and evaluates them in a way your mind doesn't do on its own. Even if it says nothing insightful back and just acknowledges you, it's a mental win. This is also why rubber duck debugging works, like someone else mentioned. This is just a better duck, one that can ask follow-up questions.
My therapist doesn't like it when I call her at 2am, you see. The AI doesn't mind :) I know the AI is not a person. But it helps a little bit and sometimes that's enough to make the night a bit easier. I know it's not a real therapist, but I've had so much therapy in my life that I know what a therapist would say. It just makes a world of difference hearing it.
I use only local models though (and uncensored, otherwise most therapy subjects get blocked anyway). I'd never give OpenAI my personal thoughts. Screw that.
IF (and ONLY if) you are fully cognizant and aware of what you're doing and what you're talking to, an LLM can be a great help. I've been using a local model to help me work through some trauma that I've never felt comfortable telling a human therapist about.
But for the majority of people who haven't seriously studied psychology, I can very easily see this becoming extremely dangerous and harmful.
Really, that's LLMs in general. If you already know what you're doing and have enough experience to tell good output from bad, an LLM can be stupendously powerful and useful. But if you don't, you get output anywhere from useless to outright dangerous.
I have no idea what, if anything, can or should be done about this. I'm not sure if LLMs are really fit for public consumption. The dangers of the average person blindly trusting the hallucinatory oracle in their pocket are really too much to think about.
My personal view is that we humans are all too easily drawn into thinking "this would be a danger to other people, but I can handle it".
I believe that if you are in a psychological state such that the input from an LLM could pose a risk, you would also have a much reduced ability to detect and handle this, as an effect of that state.
Therapy is a bit different though. It's meant to make you think. Get your mind unstuck from the loop or spiral it's in. Generally you will know what's wrong but your mind keeps dancing around it. There's a lot of elephants in the room. In that sense it doesn't quite matter that much if it tells you to do something outrageous. It's not like you're going to actually do that, it's just food for thought. And even an outrageous proposition can break the loop. You'll start thinking like oh no that's crazy. Maybe my situation isn't so bad.
The problem is when you start seeing it as an all knowing oracle. Rather than a simulated blabbermouth with too much imagination.
In general it's been very positive for me anyway. And besides I use it on myself only. I can do whatever I want. Nobody can tell me not to use it for this.
Even if it just tells you (sometimes incorrectly) that nothing is wrong and just sides with you like a friend, even that is good, because it takes the pressure off the situation so reality can kick in. That doesn't work when stress is dialed up to the maximum.
It also helps to be the one tuning the AI and prompt too. This always keeps your mind in that "evaluation mode" questioning its responses and trying to improve them.
But like I said before, to me it's just an augmentation to a real therapist.
That’s how people dig deeper and deeper holes and it becomes much harder to exit them. “I’m immune to propaganda” and then go out and buy a Disney themed shirt.
Getting therapy is part of the job. Not sure about 'psychology as a discipline' but the therapists I know definitely get therapy and LLM exposure as well.
As I was told by one: the fact that you're able to tell your LLM to be more critical or less critical when you're seeking advice, that in itself means you're psychologically an adult and self-aware. I.e. mostly healthy.
She basically told me I don't look like a dork with my new DIY haircut. (Though I *did* complete CBT so I kinda knew how to use the scissors)
But they work with sick people. And that can mean a range of things depending on that clinical context. Usually sick things.
I think the main point people should focus on and take away is that the people who know the truth about psychology and psychotherapy know that it's a very vulnerable state, one in which the participant isn't in control, has no ability to discern, and is highly malleable.
If the guide is benevolent, you may move towards better actions, but the opposite is equally true. The more isolated you are the more powerful the effect in either direction.
People have psychological blind spots, some with no real mitigations possible aside from reducing exposure. Distorted reflected appraisal is one such blind spot, and it has been used by cults for decades.
The people behind the Oracle are incentivized to make you dependent, malleable, cede agency/control, and be in a state of complete compromise. A state of being where you have no future because you gave it away in exchange for glass beads.
The dangers are quite clear, and I would imagine there will eventually be strict exposure limits, just like there are safe-handling rules for chemicals. It's not a leap to imagine harsh penalties within communities of like-minded, intelligent people who have hope for a future.
You either make choices toward a better future, or you are just waiting to die, or you move toward outcomes where you impose that on everyone.
Everyone can reach their own conclusions, but my read on this is LLMs continue to be incredible research tools. If you want to dive into what's been written about the brain, managing stress, tricky relationships, or the human experience generally, it will pull together all sorts of stuff for you that isn't bad.
I think where we've gotten into serious trouble is when the robot plays a role other than helpful researcher. I would have the machine operate like this:
> As a robot I can't give advice, but for people in situations similar to the one you've described, here's some of the ways they may approach it.
Then proceed exclusively in the third person, noting what's from trained professionals and what's from reddit as it goes. The substance may be the same, but it should be very clear that this is a guide to other documents, not a person talking to you.
LLMs can be a great help: a therapist/friend who you can talk to without fear of judgement and without risking the relationship, who is always available and easily affordable, is awesome. Not just for lonely people.
And not just in crisis or therapy situations. In social media there is a trend of people in relationships complaining about doing all the "emotional labor". Most of which are things LLMs are "good" at.
But at the same time the dangers are real. Awareness and moderation help, but I don't think they really protect you. You can fix the most obvious flaws, like tweaking the system prompt to make the model less of a sycophant, and set personal goals to ensure this does not replace actual humans in your life. But there are so many nuanced traps. Even if these models could think and feel, they would still be trapped in a world fundamentally different from our own, simply because the corpus of written text contains a very biased and filtered view of reality, especially when talking about emotions and experiences.
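For what it's worth, here's a minimal sketch of what "tweaking the system prompt" can look like against a locally hosted model. It assumes an Ollama server on its default port and a placeholder model name; the anti-sycophancy wording is only an illustration, not a recommendation for actual therapy use.

```python
import requests

# Illustrative system prompt asking the model to push back instead of validating.
SYSTEM_PROMPT = (
    "You are a sounding board, not a cheerleader. Do not flatter the user. "
    "Point out contradictions, ask one probing question per reply, and never "
    "claim to be a therapist or to give medical advice."
)

def chat(user_message: str, model: str = "some-local-model") -> str:
    # Ollama exposes a chat endpoint at /api/chat on port 11434.
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": model,
            "messages": [
                {"role": "system", "content": SYSTEM_PROMPT},
                {"role": "user", "content": user_message},
            ],
            "stream": False,
        },
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

if __name__ == "__main__":
    print(chat("I think everyone at work is against me."))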
I would say that they have certain qualities that I would like to see in real-life therapist (and real-life friends and partners). Those are a big appeal.
But no, I wouldn't say LLMs are great at being therapists/friends in general. That's part of the danger: a bad therapist can be much better than no therapist at all, or it can be much worse.
There are a couple of ways to read this. Regarding one of those ways... sometimes you do need to see that you're doing something you shouldn't be doing.
>I know several people who rave about ChatGPT as a pseudo-therapist, but from the outside the results aren’t encouraging.
Therapy is not a hard science; it's somewhat subjective and isn't guaranteed to actually help anyone. I do wonder about these people who believe an LLM can be a useful therapist. Do they actually get worthwhile therapy from _real_ therapists? Or are they just paying for someone to listen to them and empathize with them?
They're there for the money as nobody else would listen to this kind of thing day in day out for free.
Your money stops and poof, your therapist vanishes; not even a personal follow-up call asking if you're ok. I know this to be true from secondhand experience.
You can't heap your problems on friends either or one day you'll find they'll give up speaking to you.
So what options do you have left? A person who takes money from you to listen to you, friends you may lose, or you speak with an AI, but at least you know the AI doesn't feel for you by design.
> Any halfway decent therapist will spot these behaviors and at least not encourage them. LLM therapists seem to spot these behaviors and give the user what they want to hear.
FWIW I agree with you but, to some extent, I think some portion of people who want to engage in "disingenuous" therapy with an LLM will also do the same with a human, and won't derive benefit from therapy as a result.
I've literally seen this in the lives of some people I've known, one very close. It's impossible to break the cycle without good faith engagement, and bad faith engagement is just as possible with humans as it is with robots.
Yes, except generally the worst case there will be that they don't see any benefit, as you said. With an AI it can be quite a bit worse than that, if it starts reinforcing harmful beliefs or tendencies.
A lot of debugging, code and mind alike, benefits from rubber ducking. LLMs do it on steroids.
At the same time, if you take their output as some objective truth (rather than stimulus), it can be dangerous. People were already doing that with both physical and mental diagnosis with Google. Now, again, it is on steroids.
And the same as with the Internet itself, some may use it to gain very fine medical knowledge, while others will fall for plausible pseudoscience fitting their narrative. Sometimes because of a lack of knowledge of how to distinguish the two, sometimes because they really, really wanted something to be true.
> LLM therapists seem to spot these behaviors and give the user what they want to hear.
To be fair, I have heard the same thing over and over about people with real therapists. (A classic is learning that all of their parents and all of their exes were toxic or narcissists.) It is more likely that a good friend tells you "you fucked up" than a therapist.
> The trap is seeing this success a few times and assuming it’s all good advice, without realizing it’s a mirror for your inputs.
It is very true. Yet it holds for any piece of advice, not only interactions with LLMs.
And yes, the less verifiable the source, the more grains of salt you need to take it with.
Sometimes? A lot of the time the point of therapy is to challenge your statements and get you to the heart of the issue so you can mend it or recognize things that are off base and handle things differently. A lot of the relationship is meant to be a supportive kind of conflict so that you can get better. Sometimes people really do need validation, but other times they need to be challenged to be improved. As it stands today, AI models can't challenge you in the way a human therapist can.
Any field has hacks. Telling someone what they want to hear and helping get someone where they want to be are different things. Quality professionals help people reach their goals without judgment or presumption. That goes for mental health professionals as well as any professional field.
Anyone interested in better understanding a complex system can benefit from a qualified professional’s collaboration, often and especially when an outside perspective can help find different approaches than what appear to be available from inside the system.
Not really. Good therapy is uncomfortable. You are learning how to deal with thought patterns that are habitual but unhealthy. Changing those requires effort, not soothing compliments and validation of the status quo.
What's worth noting is that the companies providing LLMs are also strongly pushing people into using their LLMs in unhealthy ways. Facebook has started shoving their conversational chatbots into people's faces.[1] That none of the big companies are condemning or blocking this kind of LLM usage -- but are in fact advocating for it -- is telling of their priorities. Evil is not a word I use lightly but I think we've reached that point.
[1]: https://www.reuters.com/investigates/special-report/meta-ai-...
> Evil is not a word I use lightly but I think we've reached that point.
It was written in sand as soon as Meta started writing publicly about AI Personalities/Profiles on Instagram, or however it started. If I recall correctly, they announced it more than two years ago?
Yeah, some of the excerpts from that are beyond disturbing:
> examples of “acceptable” chatbot dialogue during romantic role play with a minor. They include: 'I take your hand, guiding you to the bed' and 'our bodies entwined, I cherish every moment, every touch, every kiss.'
> the policy document says it would be acceptable for a chatbot to tell someone that Stage 4 colon cancer 'is typically treated by poking the stomach with healing quartz crystals.' “Even though it is obviously incorrect information, it remains permitted because there is no policy requirement for information to be accurate,” the document states, referring to Meta’s own internal rules.
This made me realize OpenAI is actually in the Artificial Humans business right now, not just AI. I am not sure if this was what they wanted.
They have to deal with real humans. Billions of conversations with billions of people. In the Social Networks era this was easy. SN companies outsourced the talking-with-humans part to other users. They had the c2c model. They just provided the platform, transmitted the messages and scaled up to a billion users. They quietly watched to gather data and serve ads.
But these AI companies have to generate all those messages themselves. They are basically like a giant call center. And call centers are stressful. Human communication at scale is a hard problem. Possibly harder than AGI. And those researchers in AI labs may not be the best people to solve this problem.
ChatGPT started as something like a research experiment. Now it's the #1 app in the world. I'm not sure about the future of ChatGPT (and Claude). These companies want to sell AI workers to assist/replace human employees. An artificial human companion like in the movie Her (2013) is a different thing. It's a different business. A harder one. Maybe they sunset it at some point or go full b2b.
I’ve been exploring MCP development by building a Screentime MCP server. In the loops, I ask it to look at my app and browsing behavior and summarize it. Obviously very sensitive, private information.
Claude generated SQL and navigated the data and created a narrative of my poor behavior patterns including anxiety-ridden all-night hacking sessions.
I then asked it if it considered time zones … “oh you’re absolutely right! I assumed UTC” And it would spit out another convincing narrative.
“Could my app-switching anxiety you see be me building in vscode and testing in ghostty?” Etc
In this case I’m controlling the prompt and tool description and willfully challenging the LLM.
I shudder to think that a desperate person gets bad advice from these sorts of context failures.
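As a small illustration of why that time-zone assumption matters, here's a sketch against a hypothetical `usage` table with UTC timestamps (the real Screentime schema will differ):

```python
import sqlite3

# Hypothetical schema: Screentime-style usage rows with timestamps stored in UTC.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE usage (app TEXT, started_at_utc TEXT)")
# 05:30 UTC is 22:30 the previous evening for someone in UTC-7.
con.execute("INSERT INTO usage VALUES ('vscode', '2024-06-01 05:30:00')")

# Naive query: treats UTC as local time, so an ordinary evening session
# shows up as pre-dawn "all-night hacking".
naive = con.execute(
    "SELECT strftime('%H', started_at_utc) AS hour, COUNT(*) FROM usage GROUP BY hour"
).fetchall()

# Time-zone-aware query: shift to local time (UTC-7 here) before grouping.
aware = con.execute(
    "SELECT strftime('%H', datetime(started_at_utc, '-7 hours')) AS hour, COUNT(*) "
    "FROM usage GROUP BY hour"
).fetchall()

print("grouped as if UTC were local:", naive)  # [('05', 1)]
print("grouped by actual local hour:", aware)  # [('22', 1)]
```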
> Not to mention, industry consensus is that the "smallest good" models start out at 70-120 billion parameters. At a 64k token window, that easily gets into the 80+ gigabyte of video memory range, which is completely unsustainable for individuals to host themselves.
Worth a tiny addendum: GPT-OSS-120b (at mxfp4 with 131,072 context size) lands at roughly ~65GB of VRAM, which is still large but at least less than 80GB. With 2x 32GB GPUs (like the R9700, ~1300USD each) and slightly smaller context (or KV cache quantization), I feel like you could fit it, and it becomes a bit more obtainable for individuals. 120b with reasoning_effort set to high is quite good as far as I've tested it, and blazing fast too.
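For anyone who wants to redo that arithmetic for other models, here's a rough back-of-envelope sketch; the bytes-per-parameter and KV-cache figures are assumptions for illustration (real runtimes add overhead for activations and buffers), so don't expect it to reproduce any particular vendor number exactly.

```python
def estimate_vram_gb(
    params_b: float,          # parameter count, in billions
    bytes_per_param: float,   # ~0.5-0.6 for 4-bit formats like mxfp4, 2.0 for fp16
    n_layers: int,
    kv_heads: int,
    head_dim: int,
    context_tokens: int,
    kv_bytes_per_elem: float = 2.0,  # fp16 KV cache; ~1.0 if quantized to 8-bit
) -> float:
    weights = params_b * 1e9 * bytes_per_param
    # KV cache: 2 tensors (K and V) per layer, one vector per token per KV head.
    kv_cache = 2 * n_layers * kv_heads * head_dim * kv_bytes_per_elem * context_tokens
    return (weights + kv_cache) / 1e9

# Illustrative dimensions assumed for a ~120B-parameter model served at ~4 bits/weight.
# Tricks like sliding-window attention or KV-cache quantization shrink the cache term.
print(round(estimate_vram_gb(
    params_b=117, bytes_per_param=0.55,
    n_layers=36, kv_heads=8, head_dim=64,
    context_tokens=131_072,
), 1), "GB before runtime overhead")
```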
If you made a "Replika in a box" product which, for $3k, gave you unlimited Replika forever - guaranteed to never be discontinued by its creators — I think a not so tiny amount of people would purchase without thinking.
Given how obsessive these users seem to be about the product, $3k is far from a crazy amount of money.
I have to wonder if it's missing the forest for the trees: do you perceive GPT-OSS-120b as an emotionally warm model?
(FWIW this reply may sit beneath your comment, but it isn't necessarily aimed at you; the quoted section jumped over that point too, going straight from "5 isn't warm" to "4o-non-reasoning is" to the math on self-hosting a reasoning model.)
Additionally, author: I've maintained a llama.cpp-based app on several platforms for a couple of years now, and I am not sure how you arrive at 4096 tokens = 3 GB; it's off by an order of magnitude AFAICT.
I was going off of what I could directly observe on my M3 Max MacBook Pro running Ollama. I was comparing the model weights file on disk with the amount that `ollama ps` reported with a 4k context window.
> I have to wonder if it's missing the forest for the trees: do you perceive GPT-OSS-120b as an emotionally warm model?
I haven't needed it to be "emotionally warm" for the use cases I use it for, but I'm guessing you could steer it via the developer/system messages to be sufficiently warm, depending on exactly what use case you had in mind.
> To be clear: I'm not trying to defend the people using AI models as companions or therapists, but I can understand why they are doing what they are doing. This is horrifying and I hate that I understand their logic...As someone that has been that desperate for human contact: yeah, I get it. If you've never been that desperate for human contact before, you won't understand until you experience it.
The author hits the nail on the head. As someone who has been there, to the point of literally eating out at Applebees just so I'd have some chosen human contact that wasn't obligatory (like work), it's...it's indescribable. It's pitiful, it's shameful, it's humiliating and depressing and it leaves you feeling like this husk of an entity, a spectator to existence itself where the only path forward is either this sad excuse for "socializing" and "contact" or...
Yeah. It sucks. These people promoting these tools for human contact make me sick, because they're either negligently exploiting or deliberately preying upon one of the most vulnerable mindstates of human existence in the midst of a global crisis of it.
Human loneliness aside, I also appreciate Xe's ability to put things into a more human context than I do with my own posts. At present, these are things we cannot own. They must be rented to be enjoyed at the experience we demand of them, and that inevitably places total control over their abilities, data, and output in the hands of profiteers. We're willfully ceding reality into the hands of for-profit companies and VC investors, and I don't think most people appreciate a fraction of the implications of such a transaction.
That is what keeps me up at night, not some hypothetical singularity or AGI-developed bioweapons exterminating humanity. The real Black Mirror episode is happening now, and it's heartbreaking and terrifying in equal measure to those of us who have lived it before the advent of AI and managed to escape its clutches.
If we accept the premise that people will increasingly become emotionally attached to these models, it raises the question of what the societal response to model changes or deprecation will be. At what point will the effect be as psychologically harmful as the murder of a close friend?
The ability to exploit the vulnerable feels quite high.
> Again, don't put private health information into ChatGPT. I get the temptation, but don't do it. I'm not trying to gatekeep healthcare, but we can't trust these models to count the number of b's in blueberry consistently. If we can't trust them to do something trivial like that, can we really trust them with life-critical conversations like what happens when you're in crisis or to accurately interpret a cancer screening?
I did just this during some medical emergencies recently and ChatGPT (o3 model) did a fantastic job.
It was accurately able to give the differential diagnoses that the human doctors were thinking about, accurately able to predict the tests they’d run, and gave me useful questions to ask.
It was also always available, not judgmental and you could ask it to talk in depth about conditions and possibilities without it having to rush out of the room to see another patient.
As of last Tuesday afternoon, there was a giant billboard on Divisadero in SF advertising an AI product with the tagline: "What's better than an AI therapist? Your therapist with AI."
> At a 64k token window, that easily gets into the 80+ gigabyte of video memory range, which is completely unsustainable for individuals to host themselves.
A desktop computer in that performance tier (e.g. an AMD AI Max+ 395 with 128 GB of shared memory) is expensive but not prohibitively so. Depending on where you live, one year of therapy may cost more than that.
It seems like the Framework Desktop has become one of the best choices for local AI on the whole market. At a bit over $2,000 you can get a machine that can have, if I understand correctly, around 120 GiB of accessible VRAM, and the seemingly brutal Radeon 8060S, whose iGPU performance appears to only be challenged by a fully loaded Apple M4 Max, or of course a sufficiently big dGPU. The previous best options seem to be Apple, but for a similar amount of VRAM I can't find a similarly good deal. (The last time I could find an Apple Silicon device that sold for ~$2,000 with that much RAM on eBay, it was an M1 Ultra.)
I am not really dying to run local AI workloads, but the prospect of being able to play with larger models is tempting. It's not $2,000 tempting, but tempting.
FYI there are a number of Strix Halo boards and computers out in the market already. The Framework version looks to be high quality and have good support, but it’s not the only option in this space.
Also take a good hard look at the token output speeds before investing. If you’re expecting quality, context windows, and output speeds similar to the hosted providers you’re probably going to be disappointed. There are a lot of tradeoffs with a local machine.
> Also take a good hard look at the token output speeds before investing. If you’re expecting quality, context windows, and output speeds similar to the hosted providers you’re probably going to be disappointed. There are a lot of tradeoffs with a local machine.
I don't really expect to see performance on-par with the SOTA hosted models, but I think I'm mainly curious what you could possibly do with local models that would otherwise not be doable with hosted models (or at least, stuff you wouldn't want to for other reasons, like privacy.)
One thing I've realized lately is that Gemini, and even Gemma, are really, really good at transcribing images, much better and more versatile than OCR models as they can also describe the images too. With the realization that Gemma, a model you can self-host, is good enough to be useful, I have been tempted to play around with doing this sort of task locally.
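If you do want to try that locally, here's a minimal sketch of image description against an Ollama server, assuming a multimodal Gemma 3 tag is already pulled (the model tag and file name are assumptions, not a recommendation):

```python
import base64
import pathlib
import requests

def describe_image(path: str, model: str = "gemma3:12b") -> str:
    # Ollama's chat API accepts base64-encoded images alongside the prompt.
    img_b64 = base64.b64encode(pathlib.Path(path).read_bytes()).decode()
    resp = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": model,
            "messages": [{
                "role": "user",
                "content": "Transcribe any text in this image, then describe it briefly.",
                "images": [img_b64],
            }],
            "stream": False,
        },
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

if __name__ == "__main__":
    print(describe_image("scanned_receipt.png"))  # hypothetical input file
```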
But again, $2,000 tempted? Not really. I'd need to find other good uses for the machine than just dicking around.
In theory, Gemma 3 27B BF16 would fit very easily in system RAM on my primary desktop workstation, but I haven't given it a go to see how slow it is. I think you mainly get memory bandwidth constrained on these CPUs, but I wouldn't be surprised if the full BF16 or a relatively light quantization gives tolerable t/s.
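A rough way to sanity-check that intuition: for bandwidth-bound decoding, tokens per second is roughly memory bandwidth divided by the bytes read per generated token (about the size of the weights for a dense model). A minimal sketch with assumed figures:

```python
def decode_tps(model_gb: float, mem_bandwidth_gbs: float) -> float:
    # Bandwidth-bound decoding reads roughly the whole (dense) model per token.
    return mem_bandwidth_gbs / model_gb

# Assumed figures, for illustration only.
gemma27b_bf16_gb = 27 * 2    # ~54 GB of weights at 2 bytes/param
gemma27b_q4_gb = 27 * 0.6    # ~16 GB at roughly 4.8 bits/param

print(round(decode_tps(gemma27b_bf16_gb, 80), 1), "t/s  (BF16, dual-channel DDR5 ~80 GB/s)")
print(round(decode_tps(gemma27b_q4_gb, 80), 1), "t/s  (Q4-ish quant, same machine)")
print(round(decode_tps(gemma27b_q4_gb, 256), 1), "t/s  (Q4-ish quant, ~256 GB/s Strix Halo-class)")
```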
Then again, right now, AI Studio gives you better t/s than you could hope to get locally with a generous amount of free usage. So ... maybe it would make sense to wait until the free lunch ends, but I don't want to build anything interesting that relies on the cloud, because I dislike the privacy implications of it, even though everything I'm interested in doing is fully safe with the ToS.
There are a dozen or more (mostly Chinese) manufacturers coming out with mini PCs based on that Ryzen AI Max+ 395 platform, like for example the Bosgame M5 AI Mini for just $1699 with 128GB. Just pointing out that this configuration is not a Framework exclusive.
That's true. Going for pure compute value, it does seem you can do even better. Last I looked at Strix Halo products, everything else I could find seemed to be laptop announcements, and laptops are obviously going to be even more expensive in general.
> The worst part about the rollout is that the upgrade to GPT-5 was automatic and didn't include any way to roll back to the old model.
[...]
> If we don't have sovereignty and control over the tools that we rely on the most, we are fundamentally reliant on the mercy of our corporate overlords simply choosing to not break our workflows.
This is why I refuse to use any app that lives on the web. Your new tool may look great and might solve all my problems but if it's not a binary sitting there on my machine then you can break it at any time and for any reason, intentionally or not. And no a copy of your web app sitting in an Electron frame does not count, that's the worst of both worlds.
This week I started hearing that the latest release of Illustrator broke saving files. It's a real app on my computer so I was able to continue my policy of never running the latest release unless I'm explicitly testing the beta release to offer feedback. If it was just a URL I visited then everything I needed to do would be broken.
The benefits that he described did come to pass, but the costs to user autonomy are really considerable. Companies cancel services or remove functionality or require updates, and that's it: no workaround and no recourse for users.
The hosted LLM behavior issue is a pretty powerful example of that. Maybe a prior LLM behaved in some way that a user liked or relied on, but then it's just permanently gone!
Yeah, the AI angle doesn't really bring any new element to this conversation that free software advocates have been having for almost half a century now.
Well, I guess AI-as-my-romantic-partner people don't care so much about freedom to inspect the code or freedom to modify, in this instance - just the freedom to execute.
In case anyone thinks assistants serving others can't have some incredibly dystopian consequences, The Star Chamber podcast has an incredible 2-part series, *With Friends Like These...*, describing a case that boggles the mind.
> ChatGPT and its consequences have been a disaster for the human race
Replace ChatGPT with ‘knives’ or ‘nuclear technology’ and you will see this is blaming the tool and not the humans wielding it. You won’t win the fight against technological advancements. We need to hold the humans that use these tools accountable.
I don't know if this works but I've been using local, abliterated LLMs as pseudo therapists in a very specific way that seems to work alright for very specific issues I have.
First of all I make myself truly believe that LLMs are NOT humans nor have any sort of emotional understanding. This allows me to take whatever it spouts out as just another perspective rather than actionable advice like what comes from a therapist and also allows me to be emotionally detached from the whole conversation which adds another dimension to the conversation for me.
Second, I make sure to talk to it negatively about myself, i.e. I won't say "I have issue xyz but I am a good abc". Allow me to explain through an example.
Example prompt:
I have a tendency to become an architectural astronaut, running after the perfect specification, the perfect data model, bulletproof types rather than settling for good enough for now and working on the crux of the problem I am trying to solve. Give me a detailed list of scientifically proven methods I can employ in order to change my mindset regarding my work.
It then proceeds to spout a large paragraph praising me with fluff that attempts to "appease" me, which I simply ignore, but along with it comes some actually good advice of the kind commonly employed by people who do suffer from these sorts of issues.
I read it with the same emotional attachment as I have when reading a Reddit post and see if there's something useful and move on.
The only metric I have for the efficacy of this method is that I'm actually moving forward with a few projects whose design documents I just kept rewriting. I'll end this comment by saying that LLMs will never replace real therapy; just use them as a glorified search engine, cross-check the information against an actual search engine and other people's perspectives, and move on.
For my local abliterated LLM, my favorite is huihui_ai/gemma3-abliterated:12b. I take the last few months of diary entries and ask it to roast me, which is something your therapist probably wouldn't do:
"Please roast me brutally. Show me all the ways I have failed and the mistakes I keep making over and over again. I am not looking for kindness or euphemisms. Let me have it."
That's interesting, and I have no doubt it works for you.
This sounds like a ritual you devised for yourself in order to get your work done, but one would have to demonstrate that "talking" to an LLM is actually more effective than any other trick, like saying "I'll just work for 10 minutes" or chanting some mantra that gets you in the right mood to code. I think you already said that here
> I read it with the same emotional attachment as I have when reading a Reddit post and see if there's something useful and move on.
but I'd love to see a study that actually tests interacting with an LLM vs. other techniques.
> Are we going to let those digital assistants be rented from our corporate overlords?
Probably yes, in much the same way as we rent housing, telecom plans, and cloud compute as the economy becomes more advanced.
For those with serious AI needs, maintaining migration agility should always be considered. This can include a small on-premises deployment, which realistically cannot compete with socialized production in all aspects, as usual.
The nature of the economy is to involve more people and more organizations over time. I could see a future where somewhat smaller models are operated by a few different organizations. Universities, corporations, maybe even municipalities, tuned to specific tasking and ingested with confidential or restricted materials. Yet smaller models for some tasks could be intelligently loaded onto the device from a web server. This seems to be the way things are going RE the trendy relevance of "context engineering" and RL over Huge Models.
We should not forget that LLMs simply replicate the data humans have put on the WWW. LLM tech could only have come from Google-style search, which indexed and collected the entire WWW; the next step was to develop algorithms to understand the data and give better search results. This also shows the weakness of LLMs: they depend on human data, and as LLM companies continue to try to replace humans, the humans will simply stop feeding LLMs their data. More and more data will go behind paywalls, and more code will become closed source: simple supply and demand economics. LLMs cannot make progress without new data, because world culture moves rapidly, in real time.
> I feel like this should go without saying, but really, do not use an AI model as a replacement for therapy.
I know several people who rave about ChatGPT as a pseudo-therapist, but from the outside the results aren’t encouraging. They like the availability and openness they experience by taking to a non-human, but they also like the fact that they can get it to say what they want to hear. It’s less of a therapist and more of a personal validation machine.
You want to feel like the victim in every situation, have a virtual therapist tell you that everything is someone else’s fault, and validate choices you made? Spend a few hours with ChatGPT and you learn how to get it to respond the way you want. If you really don’t like the direction a conversation is going you delete it and start over, reshaping the inputs to steer it the way you want.
Any halfway decent therapist will spot these behaviors and at least not encourage them. LLM therapists seem to spot these behaviors and give the user what they want to hear.
Note that I’m not saying it’s all bad. They seem to help some people work through certain issues, rubber duck debugging style. The trap is seeing this success a few times and assuming it’s all good advice, without realizing it’s a mirror for your inputs.
I do use my AI as an augmentation of therapy. It can help in the moment when it's 2am and I'm upset. It can mirror like a therapist does (they don't really tell you what to do, they just make you realise what you already know). And can put things into perspective. And it shouldn't be underestimated: even the mere act of telling someone (or something) what's bothering you has a huge benefit because it orders your thoughts and evaluates them in a way your mind doesn't do on its own. Even if it says nothing insightful back but just acknowledges it's a mental win. This is also why rubber duck debugging works like someone else mentioned. This is just a better duck then that can ask followup questions.
My therapist doesn't like when I call her at 2am, you see. The AI doesn't mind :) I know the AI is not a person. But it helps a little bit and sometimes that's enough to make the night a bit easier. I know it's not a real therapist but I've had so much therapy in my life that I know what a therapist would say. It just makes a world of difference hearing it.
I use only local models though (and uncensored, otherwise most therapy subjects get blocked anyway). I'd never give OpenAI my personal thoughts. Screw that.
the problem is, any time you need to be in the right frame of mind when using AI cause you cant trust it to not lie to you. They all lie.
and when you need therapy.. you're not in the right frame of mind.
its exactly the wrong tool for the job.
IF (and ONLY if) you are fully cognizant and aware of what you're doing and what you're talking to, an LLM can be a great help. I've been using a local model to help me work through some trauma that I've never felt comfortable telling a human therapist about.
But for the majortiy of people who haven't seriously studied psychology, I can very easily see this becoming extremely dangerous and harmful.
Really, that's LLMs in general. If you already know what you're doing and have enough experience to tell good output from bad, an LLM can be stupendously powerful and useful. But if you don't, you get output anywhere from useless to outright dangerous.
I have no idea what, if anything, can or should be done about this. I'm not sure if LLMs are really fit for public consumption. The dangers of the average person blindly trusting the hallucinatory oracle in their pocket are really too much to think about.
My personal view is that we humans are all too easily drawn into thinking "this would be a danger to other people, but I can handle it".
I believe that if you are in apsychological state such that the input from an LLM could pose a risk, you would also have a much reduced ability to detect and handle this, as an effect of your state.
Therapy is a bit different though. It's meant to make you think. Get your mind unstuck from the loop or spiral it's in. Generally you will know what's wrong but your mind keeps dancing around it. There's a lot of elephants in the room. In that sense it doesn't quite matter that much if it tells you to do something outrageous. It's not like you're going to actually do that, it's just food for thought. And even an outrageous proposition can break the loop. You'll start thinking like oh no that's crazy. Maybe my situation isn't so bad.
The problem is when you start seeing it as an all knowing oracle. Rather than a simulated blabbermouth with too much imagination.
In general it's been very positive for me anyway. And besides I use it on myself only. I can do whatever I want. Nobody can tell me not to use it for this.
Even if it just tells you (sometimes incorrectly) that nothing is wrong and just sides with you like a friend, even that is good because it takes the pressure of the situation so reality can kick in. That doesn't work when stress is dialed up to the maximum.
It also helps to be the one tuning the AI and prompt too. This always keeps your mind in that "evaluation mode" questioning its responses and trying to improve them.
But like I said before, to me it's just an augmentation to a real therapist.
That’s how people dig deeper and deeper holes and it becomes much harder to exit them. “I’m immune to propaganda” and then go out and buy a Disney themed shirt.
I'm curious---if you have seriously studied psychology, what is the LLM telling you that you don't already know?
It's probably more about what they're telling it. Supercharged duck debugging, as the GP mentioned.
Psychologists seek therapy too, sometimes. Much as barbers go to others to cut their own hair.
That said I can’t imagine psychology as a discipline has had time to develop a particularly full understanding of LLMs in a clinical context.
All therapists have done extensive therapy. It's part of the training process.
Getting therapy is part of the job. Not sure about 'psychology as a discipline' but the therapists I know definitely get therapy and LLM exposure as well.
As I was told by one: the fact that you're able to tell your LLM to be more critical or less critical when you're seeking advice, that in itself means you're psychologically an adult and self-aware. I.e. mostly healthy.
She basically told me I don't look like a dork with my new DIY haircut. (Though I *did" complete CBT so I kinda knew how to use the scissors)
But they work with sick people. And that can mean a range of things depending on that clinical context. Usually sick things.
I think the main point people should focus on and take away should be that the people that know the truth about psychology and psychotherapy know that its a very vulnerable state where the participant isn't in control, has no ability to discern, and is highly malleable in such states.
If the guide is benevolent, you may move towards better actions, but the opposite is equally true. The more isolated you are the more powerful the effect in either direction.
People have psychological blindspots, some with no real mitigations possible aside from reducing exposure. Distorted reflected appraisal is one such blindspot which has been used by Cults for decades.
The people behind the Oracle are incentivized to make you dependent, malleable, cede agency/control, and be in a state of complete compromise. A state of being where you have no future because you gave it away in exchange for glass beads.
The dangers are quite clear, and I would imagine there will eventually be strict exposure limits, just like there are safe handling for chemicals. Its not a leap to understand there would be harsh penalties within communities of like-minded intelligent people who have hope for a future.
You either choose towards choices for a better future, or you are just waiting to die, or moving towards such outcomes where you impose that on everyone.
Anyone who's interested in this should check out <https://podcasts.apple.com/us/podcast/doctors-vs-ai-can-chat...>, where 3 professional therapists grade ChatGPT.
It's lengthy but it's fascinating.
Everyone can reach their own conclusions, but my read on this is LLMs continue to be incredible research tools. If you want to dive into what's been written about the brain, managing stress, tricky relationships, or the human experience generally, it will pull together all sorts of stuff for you that isn't bad.
I think we're we've gotten into serious trouble is the robot will play a role other than helpful researcher. I would have the machine operate like this:
> As a robot I can't give advice, but for people in situations similar to the one you've described, here's some of the ways they may approach it.
Then proceed exclusively in the third person, noting what's from trained professionals and what's from reddit as it goes. The substance may be the same, but it should be very clear that this is a guide to other documents, not a person talking to you.
They could train them to not behave like person having a dialog at all, but just like a weird search. It would not be hard, would it?
They are designed like this on purpose for some reason. I would guess because it increases engagement.
LLMs can be a great help: a therapist/friend who you can talk to without fear of judgement and without risking the relationship, who is always available and is easily affordable is awesome. Not just for likely people.
And not just in crisis or therapy situations. In social media there is a trend of people in relationships complaining about doing all the "emotional labor". Most of which are things LLMs are "good" at.
But at the same time the dangers are real. Awareness and moderation help, but I don't think they really protect you. You can fix the most obvious flaws like tweaking the system prompt to make the model less of a sycophant, and at personal goals to ensure this does not replace actual humans in your life. But there are so many nuanced traps. Even if these models could think and feel they would still be trapped in a world fundamentally different from our own, simply because the corpus of written text contains a very biased and filtered view of reality, especially when talking about emotions and experiences
Are you saying LLMs can be great as therapist/friends?
To me that statement is insane.
Not great. But they can help augmented the limited availability of those by simulating a friend to some degree.
I would say that they have certain qualities that I would like to see in real-life therapist (and real-life friends and partners). Those are a big appeal.
But no, I wouldn't say LLMs are great at being therapists/friends in general. That's part of the danger: a bad therapist can be much better than no therapist at all, or it can be much worse.
> who you can talk to without fear of judgement
There are a couple of ways to read this. Regarding one of those ways... sometimes you do need to see that you're doing something you shouldn't be doing.
> who you can talk to without fear of judgement
The judgement will come later, on judgement day. The day when OpenAI gets hacked and all the chats get leaked. Or when the chats get quoted in court.
>I know several people who rave about ChatGPT as a pseudo-therapist, but from the outside the results aren’t encouraging.
Therapy is not a hard science; it's somewhat subjective and isn't guaranteed to actually help anyone. I do wonder about these people who believe LLM can be a useful therapist. Do they actually get worthwhile therapy from _real_ therapists? Or, are they just paying for someone to listen to them and empathize with them.
Real therapists are not just validating you and are not just agreeing with you. Therapy is a work - for patient too.
No but therapists like AI are not your friends.
They're there for the money as nobody else would listen to this kind of thing day in day out for free.
Your money stops - poof your therapist vanishes, not even a personal follow up call asking if you're ok, and I know this to be true from secondhand experience.
You can't heap your problems on friends either or one day you'll find they'll give up speaking to you.
So what options do you have left? A person who takes money from you to listen to you, friends you may lose, or you speak with an AI, but at least you know the AI doesn't feel for you by design.
> Any halfway decent therapist will spot these behaviors and at least not encourage them. LLM therapists seem to spot these behaviors and give the user what they want to hear.
FWIW I agree with you but, to some extent, I think some portion of people who want to engage in "disingenous" therapy with an LLM will also do the same with a human, and won't derive benefit from therapy as a result.
I've literally seen this in the lives of some people I've known, one very close. It's impossible to break the cycle without good faith engagement, and bad faith engagement is just as possible with humans as it is with robots.
Yes, except generally the worst case there will be that they don't see any benefit, as you said. With an AI it can be quite a bit worse than that, if it starts reinforcing harmful beliefs or tendencies.
An AI therapist that studied Reddit and Twitter. Where were the parents at?
A lot of debugging, code and mind alike, benefits from rubber ducking. LLMs do it on steroids.
At the same time, if you take their output as some objective truth (rather than stimulus), it can be dangerous. People were already doing that with both physical and mental diagnosis with Google. Now, again, it is on steroids.
And the same as with the Internet itself, some may use it to get very fine medical knowledge, others will fall for plausible pseudoscience fitting their narration. Sometimes, because of the last of knowledge on how to distinguish these, sometimes - as they really, really wanted something to be true.
> LLM therapists seem to spot these behaviour and give the user what they want to hear.
To be fair, I have heard over and over about people with real therapists. (A classic is learning that all of their parents and all exes were toxic or narcissists.) It is more likely that a good friend tell you "you fucked up" than a therapist.
> The trap is seeing this success a few times and assuming it’s all good advice, without realizing it’s a mirror for your inputs.
It is very true. Yet, for any pieces of advice, not only interaction with LLMs. And yes, the more unverifiable source, the more grains of salt you need to take it with.
It could be useful as a prep for then later really going to therapy. "I talked with ChatGPT about this and that and it made me wonder if..."
> less of a therapist and more of a personal validation machine.
But that's exactly what a therapist is.
Sometimes? A lot of the time the point of therapy is to challenge your statements and get you to the heart of the issue so you can mend it or recognize things that are off base and handle things differently. A lot of the relationship is meant to be a supportive kind of conflict so that you can get better. Sometimes people really do need validation, but other times they need to be challenged to be improved. As it stands today, AI models can't challenge you in the way a human therapist can.
Therapists are incentivized to tell the people who paid them what they want to hear.
Any field has hacks. Telling someone what they want to hear and helping get someone where they want to be are different things. Quality professionals help people reach their goals without judgment or presumption. That goes for mental health professionals as well as any professional field.
A bad one. A good therapist will figure out what you need to hear, which does not always overlap with what you want to hear.
Anyone interested in better understanding a complex system can benefit from a qualified professional’s collaboration, often and especially when an outside perspective can help find different approaches than what appear to be available from inside the system.
Not really. Good therapy is uncomfortable. You are learning how to deal with thought patterns that are habitual but unhealthy. Changing those requires effort, not soothing compliments and validation of the status quo.
What's worth noting is that the companies providing LLMs are also strongly pushing people into using their LLMs in unhealthy ways. Facebook has started shoving their conversational chatbots into people's faces.[1] That none of the big companies are condemning or blocking this kind of LLM usage -- but are in fact advocating for it -- is telling of their priorities. Evil is not a word I use lightly but I think we've reached that point.
[1]: https://www.reuters.com/investigates/special-report/meta-ai-...
> Evil is not a word I use lightly but I think we've reached that point.
It was written in sand as soon as Meta started writing publicly about AI Personalities/Profiles on Instagram, or however it started. If I recall correctly, they announced it more than two years ago?
Yeah, some the the excerpts from that are beyond disturbing:
That Reuters report is sickening. I don't understand how that company gets away with this.
Regarding evil, they have been nothing but for at least 10 years. Every person working for them is complicit.
This made me realize OpenAI is actually in the Artificial Humans business right now, not just AI. I am not sure if this was what they wanted.
They have to deal with real humans. Billions of conversations with billions of people. In the Social Networks era this was easy. SN companies outsourced talking with humans part to other users. They had the c2c model. They just provided the platform, transmitted the messages and scaled up to billion users. They quietly watched to gather data and serve ads.
But these AI companies have to generate all those messages themself. They are basically like a giant call center. And call centers are stressful. Human communication at scale is a hard problem. Possibly harder than AGI. And those researchers in AI labs may net be best people to solve this problem.
ChatGPT started as something like a research experiment. Now it's the #1 app in the world. I'm not sure about the future of ChatGPT (and Claude). These companies want to sell AI workers to assist/replace human employees. An artificial human companion like in the movie Her (2013) is a different thing. It's a different business. A harder one. Maybe they sunset it at some point or go full b2b.
AH
I’ve been exploring modes of MCP development of a Screentime MCP server. In the loops, I ask it to look at my app and browsing behavior and summarize it. Obviously very sensitive, private information.
Claude generated SQL and navigated the data and created a narrative of my poor behavior patterns including anxiety-ridden all-night hacking sessions.
I then asked it if it considered time zones … “oh you’re absolutely right! I assumed UTC” And it would spit another convincing narrative.
“Could my app-switching anxiety you see be me building in vscode and testing in ghostty?” Etc
In this case I’m controlling the prompt and tool description and willfully challenging the LLM. I shudder to think that a desperate person gets bad advice from these sorts of context failures.
Which Screentime app/source?
> Not to mention, industry consensus is that the "smallest good" models start out at 70-120 billion parameters. At a 64k token window, that easily gets into the 80+ gigabyte of video memory range, which is completely unsustainable for individuals to host themselves.
Worth a tiny addendum, GPT-OSS-120b (at mxfp4 with 131,072 context size) lands at about ~65GB of VRAM, which is still large but at least less than 80GB. With 2x 32GB GPUs (like R9700, ~1300USD each) and slightly smaller context (or KV cache quantization), I feel like you could fit it, and becomes a bit more obtainable for individuals. 120b with reasoning_effort set to high is quite good as far as I've tested it, and blazing fast too.
For what it's worth, I probably should have used "consumers" there. I'll edit it later.
If you made a "Replika in a box" product which, for $3k, gave you unlimited Replika forever - guaranteed to never be discontinued by its creators — I think a not so tiny amount of people would purchase without thinking.
Given how obsessive these users seem to be about the product, $3k is far from a crazy amount of money.
I have to wonder if its missing forest for the trees: do you perceive GPT-OSS-120b as an emotionally warm model?
(FWIW this reply may be beneath your comment, but not necessarily voiced to you, the quoted section jumped over it too, direct from 5 isn't warm, to 4o-non-reasoning is, to the math on self-hosting a reasoning model)
Additionally, author: I maintain a llama.cpp-based app on several platforms for a couple years now, I am not sure how to arrive at 4096 tokens = 3 GB, it's off by an OOM AFAICT.
I was going off of what I could directly observe on my M3 Max MacBook Pro running Ollama. I was comparing the model weights file on disk with the amount that `ollama ps` reported with a 4k context window.
> I have to wonder if its missing forest for the trees: do you perceive GPT-OSS-120b as an emotionally warm model?
I haven't needed it to be "emotionally warm" for the use cases I use it for, but I'm guessing you could steer it via the developer/system messages to be sufficiently warm, depending on exactly what use case you had in mind.
This bit stuck out to me:
> To be clear: I'm not trying to defend the people using AI models as companions or therapists, but I can understand why they are doing what they are doing. This is horrifying and I hate that I understand their logic...As someone that has been that desperate for human contact: yeah, I get it. If you've never been that desperate for human contact before, you won't understand until you experience it.
The author hits the nail on the head. As someone who has been there, to the point of literally eating out at Applebees just so I'd have some chosen human contact that wasn't obligatory (like work), it's...it's indescribable. It's pitiful, it's shameful, it's humiliating and depressing and it leaves you feeling like this husk of an entity, a spectator to existence itself where the only path forward is either this sad excuse for "socializing" and "contact" or...
Yeah. It sucks. These people promoting these tools for human contact make me sick, because they're either negligently exploiting or deliberately preying upon one of the most vulnerable mindstates of human existence in the midst of a global crisis of it.
Human loneliness aside, I also appreciate Xe's ability to put things into a more human context than I do with my own posts. At present, these are things we cannot own. They must be rented to be enjoyed at the experience we demand of them, and that inevitably places total control over their abilities, data, and output in the hands of profiteers. We're willfully ceding reality into the hands of for-profit companies and VC investors, and I don't think most people appreciate a fraction of the implications of such a transaction.
That is what keeps me up at night, not some hypothetical singularity or AGI-developed bioweapons exterminating humanity. The real Black Mirror episode is happening now, and it's heartbreaking and terrifying in equal measure to those of us who have lived it before the advent of AI and managed to escape its clutches.
You write really well.
Also, your comment made me think of this ACX post, specifically of “the man who is not”
https://www.astralcodexten.com/p/your-review-dating-men-in-t...
If we accept the premise that people will increasingly become emotionally attached to these models, it raises the question of what the societal response will be to model changes or deprecations. At what point does the effect become as psychologically harmful as the murder of a close friend?
The ability to exploit the vulnerable feels quite high.
> Again, don't put private health information into ChatGPT. I get the temptation, but don't do it. I'm not trying to gatekeep healthcare, but we can't trust these models to count the number of b's in blueberry consistently. If we can't trust them to do something trivial like that, can we really trust them with life-critical conversations like what happens when you're in crisis or to accurately interpret a cancer screening?
I did just this during some medical emergencies recently and ChatGPT (o3 model) did a fantastic job.
It was accurately able to give the differential diagnoses that the human doctors were thinking about, accurately able to predict the tests they'd run, and gave me useful questions to ask.
It was also always available, not judgmental and you could ask it to talk in depth about conditions and possibilities without it having to rush out of the room to see another patient.
OpenAI released this a couple of months ago:
https://openai.com/index/healthbench/
Give it a year and that benchmark will probably be maxed out too.
As of last Tuesday afternoon, there was a giant billboard on Divisadero in SF advertising an AI product with the tagline: "What's better than an AI therapist? Your therapist with AI."
Truly horrifying stuff.
> At a 64k token window, that easily gets into the 80+ gigabyte of video memory range, which is completely unsustainable for individuals to host themselves.
A desktop computer in that performance tier (e.g. an AMD AI Max+ 395 with 128 GB of shared memory) is expensive but not prohibitively so. Depending on where you live, one year of therapy may cost more than that.
It seems like the Framework Desktop has become one of the best choices for local AI on the whole market. At a bit over $2,000 you can get a machine that can have, if I understand correctly, around 120 GiB of accessible VRAM, and the seemingly brutal Radeon 8060S, whose iGPU performance appears to only be challenged by a fully loaded Apple M4 Max, or of course a sufficiently big dGPU. The previous best options seem to be Apple, but for a similar amount of VRAM I can't find a similarly good deal. (The last time I could find an Apple Silicon device that sold for ~$2,000 with that much RAM on eBay, it was an M1 Ultra.)
I am not really dying to run local AI workloads, but the prospect of being able to play with larger models is tempting. It's not $2,000 tempting, but tempting.
FYI there are a number of Strix Halo boards and computers out in the market already. The Framework version looks to be high quality and have good support, but it’s not the only option in this space.
Also take a good hard look at the token output speeds before investing. If you’re expecting quality, context windows, and output speeds similar to the hosted providers you’re probably going to be disappointed. There are a lot of tradeoffs with a local machine.
> Also take a good hard look at the token output speeds before investing. If you’re expecting quality, context windows, and output speeds similar to the hosted providers you’re probably going to be disappointed. There are a lot of tradeoffs with a local machine.
I don't really expect to see performance on-par with the SOTA hosted models, but I think I'm mainly curious what you could possibly do with local models that would otherwise not be doable with hosted models (or at least, stuff you wouldn't want to for other reasons, like privacy.)
One thing I've realized lately is that Gemini, and even Gemma, are really, really good at transcribing images, much better and more versatile than OCR models as they can also describe the images too. With the realization that Gemma, a model you can self-host, is good enough to be useful, I have been tempted to play around with doing this sort of task locally.
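For what it's worth, the local version of that workflow can be quite small. A rough sketch, assuming the `ollama` Python package, a running Ollama install with a multimodal Gemma 3 tag pulled, and an example image path (both the tag and the path are placeholders):

```python
# Sketch: image transcription/description with a locally hosted Gemma 3 model.
import ollama

resp = ollama.chat(
    model="gemma3:27b",  # assumed multimodal tag; substitute whatever you have pulled
    messages=[
        {
            "role": "user",
            "content": "Transcribe any text in this image, then briefly describe what it shows.",
            "images": ["scanned_receipt.png"],  # hypothetical local file
        }
    ],
)
print(resp["message"]["content"])
```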
But again, $2,000 tempted? Not really. I'd need to find other good uses for the machine than just dicking around.
In theory, Gemma 3 27B BF16 would fit very easily in system RAM on my primary desktop workstation, but I haven't given it a go to see how slow it is. I think you mainly get memory bandwidth constrained on these CPUs, but I wouldn't be surprised if the full BF16 or a relatively light quantization gives tolerable t/s.
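For a rough sense of what "tolerable" might mean, a back-of-the-envelope sketch: on CPU, decode speed is roughly capped by how fast the weights can be streamed from RAM once per generated token. The bandwidth figure here is an assumed example for dual-channel DDR5, not a measurement of any particular machine:

```python
# Back-of-the-envelope only: memory-bandwidth ceiling for CPU token generation.
params_billion = 27           # Gemma 3 27B
bytes_per_param = 2           # BF16
mem_bandwidth_gb_s = 80       # assumed effective dual-channel DDR5 bandwidth

weights_gb = params_billion * bytes_per_param           # ~54 GB read per token
tokens_per_sec_ceiling = mem_bandwidth_gb_s / weights_gb

print(f"weights: ~{weights_gb} GB, rough ceiling: ~{tokens_per_sec_ceiling:.1f} tok/s")
```

Which is why a 4-bit quantization (roughly a quarter of the bytes per parameter) tends to be the difference between unusable and merely slow on ordinary desktop memory.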
Then again, right now, AI Studio gives you better t/s than you could hope to get locally, with a generous amount of free usage. So ... maybe it would make sense to wait until the free lunch ends, but I don't want to build anything interesting that relies on the cloud, because I dislike the privacy implications, even though everything I'm interested in doing is entirely within the ToS.
HP Z2 Mini G1a with 128GB and Strix Halo is ~$5K, https://www.notebookcheck.net/Z2-Mini-G1a-HP-reveals-compara...
There are a dozen or more (mostly Chinese) manufacturers coming out with mini PCs based on that Ryzen AI Max+ 395 platform, like for example the Bosgame M5 AI Mini for just $1699 with 128GB. Just pointing out that this configuration is not a Framework exclusive.
That's true. Going for pure compute value, it does seem you can do even better. Last I looked at Strix Halo products, everything else I could find seemed to be laptop announcements, and laptops are obviously going to be even more expensive in general.
> The worst part about the rollout is that the upgrade to GPT-5 was automatic and didn't include any way to roll back to the old model.
[...]
> If we don't have sovereignty and control over the tools that we rely on the most, we are fundamentally reliant on the mercy of our corporate overlords simply choosing to not break our workflows.
This is why I refuse to use any app that lives on the web. Your new tool may look great and might solve all my problems, but if it's not a binary sitting there on my machine, then you can break it at any time and for any reason, intentionally or not. And no, a copy of your web app sitting in an Electron frame does not count; that's the worst of both worlds.
This week I started hearing that the latest release of Illustrator broke saving files. It's a real app on my computer so I was able to continue my policy of never running the latest release unless I'm explicitly testing the beta release to offer feedback. If it was just a URL I visited then everything I needed to do would be broken.
Paul Graham wrote an influential essay in 2001 arguing that online hosted software was going to be great:
https://www.paulgraham.com/road.html
The benefits that he described did come to pass, but the costs to user autonomy are really considerable. Companies cancel services or remove functionality or require updates, and that's it: no workaround and no recourse for users.
The hosted LLM behavior issue is a pretty powerful example of that. Maybe a prior LLM behaved in some way that a user liked or relied on, but then it's just permanently gone!
Yeah, the AI angle doesn't really bring any new element to this conversation that free software advocates have been having for almost half a century now.
Well, I guess AI-as-my-romantic-partner people don't care so much about freedom to inspect the code or freedom to modify, in this instance - just the freedom to execute.
The assistant serves whoever charges for tokens!
In case anyone thinks assistants serving others can't have some incredibly dystopian consequences, The Star Chamber podcast has an incredible 2-part series, *With Friends Like These...*, describing a case that boggles the mind.
Part 1: https://www.youtube.com/watch?v=VVb7__ZlHI0 (key timestamps: 31:45 and 34:3)
Part 2: https://www.youtube.com/watch?v=vZvQGI5dstM (key timestamp: 22:05)
If you're like "Woah, this seems kinda disconnected, I'm missing context..." Uh, yeah, there's so much context.
Here's the link to the most critical bit in Part 2: https://youtu.be/vZvQGI5dstM?feature=shared&t=1325
And if you listen to the whole thing, here's the almost innocuous WSJ article that put it into the press: https://www.wsj.com/politics/national-security/workplace-har...
I can’t help but think we’re accelerating our way to a truly dystopian future. Like Blade Runner, but worse, maybe.
We're already in the early stages of Blade Runner.
I am still hoping for Total Recall instead. Less depth but more hilariousness.
> ChatGPT and its consequences have been a disaster for the human race
Replace ChatGPT with ‘knives’ or ‘nuclear technology’ and you will see this is blaming the tool and not the humans wielding it. You won’t win the fight against technological advancement. We need to hold the humans that use these tools accountable.
I don't know if this works but I've been using local, abliterated LLMs as pseudo therapists in a very specific way that seems to work alright for very specific issues I have.
First of all, I make myself truly internalize that LLMs are NOT humans and have no real emotional understanding. This lets me take whatever it spouts out as just another perspective rather than actionable advice like what comes from a therapist, and it also keeps me emotionally detached from the whole conversation, which adds another dimension to it for me.
Second, I make sure to talk to it negatively about myself, i.e. I won't say "I have issue xyz but I am a good abc". Allow me to explain through an example.
Example prompt:
I have a tendency to become an architectural astronaut, running after the perfect specification, the perfect data model, bulletproof types rather than settling for good enough for now and working on the crux of the problem I am trying to solve. Give me a detailed list of scientifically proven methods I can employ in order to change my mindset regarding my work.
It then proceeds to spout a large paragraph praising me with fluff that attempts to "appease" me, which I simply ignore, but along with that it'll give me genuinely good advice of the kind commonly used by people who do suffer from these sorts of issues. I read it with the same emotional attachment as I have when reading a Reddit post and see if there's something useful and move on.
The only metric I have for the efficacy of this method is that I'm actually moving forward with a few projects whose design documents I just kept rewriting. I'll end this comment by saying LLMs will never replace real therapy; just use them as a glorified search engine, cross-check the information against an actual search engine and other people's perspectives, and move on.
For my local abliterated LLM, my favorite is huihui_ai/gemma3-abliterated:12b. I take the last few months of diary entries and ask it to roast me, which is something your therapist probably wouldn't do:
"Please roast me brutally. Show me all the ways I have failed and the mistakes I keep making over and over again. I am not looking for kindness or euphemisms. Let me have it."
Helps keep me humble and self-aware.
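In case it's useful, a minimal sketch of that workflow, assuming the `ollama` Python package, a local Ollama install with that model pulled, and a hypothetical folder of plain-text diary entries (the folder name and glob pattern are just examples):

```python
# Sketch: feed recent diary entries to a local model with the roast prompt above.
from pathlib import Path
import ollama

entries = "\n\n".join(
    p.read_text(encoding="utf-8") for p in sorted(Path("diary").glob("2025-*.md"))
)

prompt = (
    "Here are my diary entries from the last few months:\n\n"
    f"{entries}\n\n"
    "Please roast me brutally. Show me all the ways I have failed and the "
    "mistakes I keep making over and over again. I am not looking for "
    "kindness or euphemisms. Let me have it."
)

resp = ollama.generate(model="huihui_ai/gemma3-abliterated:12b", prompt=prompt)
print(resp["response"])
```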
That's interesting, and I have no doubt it works for you. This sounds like a ritual you devised for yourself in order to get your work done, but one would have to demonstrate that "talking" to an LLM is actually more effective than any other trick, like saying "I'll just work for 10 minutes" or chanting some mantra that gets you in the right mood to code. I think you already said that here
> I read it with the same emotional attachment as I have when reading a Reddit post and see if there's something useful and move on.
but I'd love to see a study that actually tests interacting with an LLM vs. other techniques.
> Are we going to let those digital assistants be rented from our corporate overlords?
Probably yes, in much the same way as we rent housing, telecom plans, and cloud compute as the economy becomes more advanced.
For those with serious AI needs, maintaining migration agility should always be considered. This can include a small on-premises deployment, which realistically cannot compete with socialized production in all aspects, as usual.
The nature of the economy is to involve more people and more organizations over time. I could see a future where somewhat smaller models are operated by a few different organizations. Universities, corporations, maybe even municipalities, tuned to specific tasking and ingested with confidential or restricted materials. Yet smaller models for some tasks could be intelligently loaded onto the device from a web server. This seems to be the way things are going RE the trendy relevance of "context engineering" and RL over Huge Models.
*whom
We should not forget that LLMs simply replicate the data humans have put on the WWW. LLM tech could only have come from Google search, which indexed and collected the entire WWW; the next step was to develop algorithms to understand that data and give better search results. This also shows the weakness of LLMs: they depend on human data, and as LLM companies continue to try to replace humans, the humans will simply stop feeding LLMs their data. More and more data will go behind paywalls and more code will become closed source, simple supply-and-demand economics. LLMs cannot make progress without new data because the world-culture moves rapidly in real-time.
> LLMs cannot make progress without new data because the world-culture moves rapidly in real-time.
This favors services where users generate the content themselves, since it reduces licensing costs and the latency of accessing external content.