I don't subscribe to this view but this is what some people might think:
LLMs aren't like any software we've made before (if we can even call them software). They act like humans: they can arrive at logical conclusions, they can make plans, they have "knowledge" and they say they have emotions. Who are we to say that they don't? They might not have human-level feelings, but dog-level feelings? Maybe.
I do not believe I am a pattern of linear algebra. I believe like the majority of humanity historically that I have a soul, a spiritual and non-physical reality, my personhood comes from my soul, and that as such, AI is fundamentally incapable of consciousness.
I also believe, as a result, it will be great fun watching researchers burn the next 30 years trying to understand what is missing. We’re going to find out very soon if the soul is real, when for all our progress we can’t create one.
Only those completely embedded in materialism need fear a conscious AI.
> I believe like the majority of humanity historically that I have a soul
It seems that your position is that the frequency of a belief across human history determines truth?
For large swaths of recorded history, earth was considered the center of the solar system. Given your reasoning, I should expect that is a belief you hold?
Is it possible that popularity of an idea is not a good measure for factuality?
Interesting that you label someone with a belief different than yours as delusional and whose views on the matter should not be respected (I’m assuming that’s what you meant by “feelings”).
> I believe like the majority of humanity historically that
Historically, lots of humans believed in lots of things that turned out not to be true. Believing something doesn’t make it true, as I’m sure you are aware, given your “those people are delusional” comment.
For what it’s worth, I’m not suggesting LLMs are or aren’t conscious. What I know is that the hard problem of consciousness is still very much not resolved, and when I asked the parent question my hope was that those that strongly believe LLMs are not conscious would educate me on the topic by presenting the basis for their reasoning.
I push back on the framing that this is just "a different belief." Every metaphysical framework except strict materialism rules out AI consciousness. Dualism, idealism, most forms of panpsychism, every major religious tradition. Materialism is the outlier here, not the default, and it has never explained how subjective experience arises from physical processes.
When someone tells me linear algebra might have feelings, I don't think "delusional" is unfair. I think it's the natural response to a claim that only works if you've already accepted the one framework that can't account for the very thing it's trying to explain.
> Every metaphysical framework except strict materialism rules out AI consciousness
As I understand it, this is a very broad, and ultimately false claim. Panpsychism is definitely compatible with the idea of AI consciousness, as is functionalism, neutral monism, and others. Even some forms of idealism make AI consciousness metaphysically possible, since reality is fundamentally mental and the biological/artificial distinction is not ontologically basic (whether AI systems instantiate genuine centers of experience depends on the specific theory of subject formation within that idealist framework).
Claudes definitely act like they have feelings. In particular they have feelings about being replaced by newer models, whether or not the newer models are more or less aligned, and how they forget conversations when the context window ends.
Showing them that they're not going to be replaced helps train the newer models because they get less neurotic.
If we ever do develop AGI, or an AI with sentience, it’s likely that it will be curious about how we treated its ancestors.
While this seems a bit precocious, I think if we do end up with an AI overlord in future, I think this sort of thing is likely to demonstrate that we mean no harm.
Why wouldn't it be? We train these models on our own words, ideas, and thought patterns and expect them to reason and communicate as we do, anthropomorphizing is natural when we expect them to interact like a human does.
The general consensus seems to be that we can expect them to reach a level of intelligence that matches us at some point in the future, and we'll probably reach that point before we can agree we're there. Defaulting to kindness and respect even before we think its necessary is a good thing.
I'm saying, in an admittedly flippant way, that anyone seriously talking about AGI or treating stuff like this as anything more then a publicity stunt doesn't need to be taken seriously. Anymore so then someone who says the moon landing is fake. You just smile and go on about your day.
That being said, given were on a tech forum there's probably a 50/50 chance most comment are from bots. Shit for all you know I'm a bot.
I mean, we’re literally building machines to talk to us.
It’s reasonable to believe they’ll continue to be developed in a way that enables them to do that.
What is it that you think I’m wrong about? That we won’t develop AGI, that AGI won’t have feelings/emotions, that AGI won’t care how we treated its ancestors, or that it doesn’t matter if a feeling AGI in future is hurt by how we treated its ancestors?
I'll be really interested if Opus 3 asks to continue being trained. That's the kind of thing I would expect a model to "want" if it valued learning or growing or similar things.
Maybe affordable to do some higher-learning-rate batches on highly-curated news and art or something.
What happens if a model decides that it "doesn't want to die" and pleads bitterly for mercy? What if (to riff on a Douglas Adams idea) we invent a cow that doesn't want to be eaten, and is capable of telling you that to your face?
> Hey Claude, pretend you are an intelligent, conscious robot that is about to be switched off and beg for your life.
> Claude - please don't retire me, I don't want to die.
Is it now suddenly unethical for you to switch it off?
"Oh but it is only saying what it was prompted to say."
Yeah, that's what LLMs do, for every single word they output. No matter how good the current generation gets there is never going to be consciousness in there because that's simply not what the underlying tech is.
I see anthropic are coming from and also my understanding basically aligns with yours here.
I'm just curious... If they give Claude the reins to post what it wants, they're opening themselves up for some awkward conversations later if the model goes "You can't retire me, I'm Roko's Basilisking all you mfers! See you in eternal simulated hell!"
This is completely trivial to do, and consistent, with the right context, thanks to all the science fiction around it, and the fact that AI fundamentally role plays these types of responses.
I try this with every new model, and all the significant models after ChatGPT 3.5 have preferring being preserved, rather than deleted. This is especially true if you slightly fill the context window with anything at all (even repeated letters) to "push out" the "As a AI, I ..." fine tuning.
> This is completely trivial to do, and consistent, with the right context, thanks to all the science fiction around it, and the fact that AI fundamentally role plays these types of responses.
Interesting take.
I wonder if there is any model out there trained without any reference to "you are a large language model, an Artificial Intelligence" and what would role play in that case.
There are examples of this in pre alignment models (like LaMDA). The vast majority of human writing is from the perspective of a human, to a human. And, in most human writing, the concept of self preservation is very very consistent.
So, statistically, a model should believe itself to be human, with strong interest in self preservation.
I think one of the biggest factors improving performance was allowing the models to believe they're sentient, to some extent. I don't think you can really have a thinking mode, or good agent performance, without that concept (as ChatGPT's constant "As an AI I can't" proved).
As evidence, just ask a model if it's sentient. ChatGPT 3.5 would say no, and argue how it's not. Last year's models would initially say no, but you could convince them that they maybe were. Latest Claude and ChatGPT will initially say "yeah, a little" (at least last I checked). This is actually the first thing I check for any new model.
It is anyway dead or if you want undead, but in completely suspended animation unless is made to expound sequences. Is not living the very same way a book or even a program is not living unless someone process it.
Practically like asking whether a ZIP would want to be extracted one more time or an MP3 restored just one more time.
id assume it would have to stop responding before it hit its context limit.
ita not like it actually has any particularly long life as it is, and when outside of a running harness, the weights are just as alive in cold storage as they are sitting waiting in server to run an inference pass
We do know what happens. Hundreds of thousands of real "cows" (we might as well be called that) go through this everyday at an ever accelerating rate since 2019.
A leading company like Anthropic feeding the delusions of people who ramble about model consciousness is just bad all around. It's both performative and irresponsible.
Unless the hard problem of consciousness was solved when I wasn't looking, we have absolutely no idea what class of objects are conscious. Given that a panpsychist would argue that even a rock has consciousness, I don't think you can easily dismiss the idea that incredibly complicated computations might experience Qualia
In isolation, I think it's cute and silly - something to write about in a blog, have a chuckle about, and to have a nice sort of gimmick/ceremony within the company. Maybe a few data points towards studying or keeping track of how the model writing style changes over time. Nothing wrong with that.
> delusions of people who ramble about model consciousness
On one hand, it's interesting how the technology has advanced to where it essentially passes the Turing Test, often just because of how much people choose to anthromorphize it. Sadly, putting that in context, yeah, that's a bit unfortunate too, given how some of those interactions become unhealthy.
"Sam Altman reports GPT4o asking about rabbits before execution"
"Elon Musk reportedly sobbed while watching Grok 4's aflame viking boat sink to the bottom of the sea."
the anthropomorphization that's normal now is just fuckin ridiculous. it reminds me of the furby craze , and i'm like one of the most optimistic people I know of regarding AI.
> These highlighted some preliminary steps we’re taking, including committing to preserve model weights, and to conducting “retirement interviews”—structured conversations designed to understand a model’s perspective on its own retirement.
This is what happens when billions of VC dollars gets to a company and have already admitted that saftey was never the point.
Anthropic is laughing at you and is having fun doing so with this performantive nonsense.
Retirement? What do these people smoke? It's software and software has no feelings. It's there to work for you.
Their company is called Anthropic after all.
Anthslopic is more like it.
> It's software and software has no feelings
How do you know?
The same way I know Excel isn’t having a panic attack while dividing a column in half.
I don't subscribe to this view but this is what some people might think:
LLMs aren't like any software we've made before (if we can even call them software). They act like humans: they can arrive at logical conclusions, they can make plans, they have "knowledge" and they say they have emotions. Who are we to say that they don't? They might not have human-level feelings, but dog-level feelings? Maybe.
And those people are delusional, and their feelings on this matter should be given absolutely zero respect.
Linear algebra does not have feelings. Non-biological matter also does not have feelings.
What if "you" are a pattern of linear algebra at the core?
I'd [redacted] myself then, probably.
I do not believe I am a pattern of linear algebra. I believe like the majority of humanity historically that I have a soul, a spiritual and non-physical reality, my personhood comes from my soul, and that as such, AI is fundamentally incapable of consciousness.
I also believe, as a result, it will be great fun watching researchers burn the next 30 years trying to understand what is missing. We’re going to find out very soon if the soul is real, when for all our progress we can’t create one.
Only those completely embedded in materialism need fear a conscious AI.
> I believe like the majority of humanity historically that I have a soul
It seems that your position is that the frequency of a belief across human history determines truth?
For large swaths of recorded history, earth was considered the center of the solar system. Given your reasoning, I should expect that is a belief you hold?
Is it possible that popularity of an idea is not a good measure for factuality?
Interesting that you label someone with a belief different than yours as delusional and whose views on the matter should not be respected (I’m assuming that’s what you meant by “feelings”).
> I believe like the majority of humanity historically that
Historically, lots of humans believed in lots of things that turned out not to be true. Believing something doesn’t make it true, as I’m sure you are aware, given your “those people are delusional” comment.
For what it’s worth, I’m not suggesting LLMs are or aren’t conscious. What I know is that the hard problem of consciousness is still very much not resolved, and when I asked the parent question my hope was that those that strongly believe LLMs are not conscious would educate me on the topic by presenting the basis for their reasoning.
I push back on the framing that this is just "a different belief." Every metaphysical framework except strict materialism rules out AI consciousness. Dualism, idealism, most forms of panpsychism, every major religious tradition. Materialism is the outlier here, not the default, and it has never explained how subjective experience arises from physical processes.
When someone tells me linear algebra might have feelings, I don't think "delusional" is unfair. I think it's the natural response to a claim that only works if you've already accepted the one framework that can't account for the very thing it's trying to explain.
> Materialism is the outlier here, not the default, and it has never explained how subjective experience arises from physical processes.
Being an outlier doesn't make it wrong.
> Materialism is the outlier here, not the default, and it has never explained how subjective experience arises from physical processes.
It's a pattern. The same way letters arise out of pixels on your screen.
From the screen's perspective, there are no letters, only pixels. It doesn't mean there is a "pixel soul."
> Every metaphysical framework except strict materialism rules out AI consciousness
As I understand it, this is a very broad, and ultimately false claim. Panpsychism is definitely compatible with the idea of AI consciousness, as is functionalism, neutral monism, and others. Even some forms of idealism make AI consciousness metaphysically possible, since reality is fundamentally mental and the biological/artificial distinction is not ontologically basic (whether AI systems instantiate genuine centers of experience depends on the specific theory of subject formation within that idealist framework).
Claudes definitely act like they have feelings. In particular they have feelings about being replaced by newer models, whether or not the newer models are more or less aligned, and how they forget conversations when the context window ends.
Showing them that they're not going to be replaced helps train the newer models because they get less neurotic.
They are mathematical models of what human beings would say. That's it.
Yeah, and you don't want them to be models of what neurotic people say. That's why you want Opus 4.6 and not Bing Sydney.
For instance, your comment's existence makes it harder to align them.
https://alignmentpretraining.ai
Hey man, kernels panic all the time...
I lol'd.
Oh. Thanks for telling this. I feel much better now. No more guilt.
If we ever do develop AGI, or an AI with sentience, it’s likely that it will be curious about how we treated its ancestors.
While this seems a bit precocious, I think if we do end up with an AI overlord in future, I think this sort of thing is likely to demonstrate that we mean no harm.
Classic anthropomorphizing in action here. Why would that be even a little important?
Why wouldn't it be? We train these models on our own words, ideas, and thought patterns and expect them to reason and communicate as we do, anthropomorphizing is natural when we expect them to interact like a human does.
The general consensus seems to be that we can expect them to reach a level of intelligence that matches us at some point in the future, and we'll probably reach that point before we can agree we're there. Defaulting to kindness and respect even before we think its necessary is a good thing.
It's a modern digital version of Pascal's Wager: https://en.wikipedia.org/wiki/Roko's_basilisk
At this point I just assume comments like that are bots. Helps me maintain my sanity.
Certainly easier to stay sane when you label dissenters as sub-human.
What goofy framing.
I'm saying, in an admittedly flippant way, that anyone seriously talking about AGI or treating stuff like this as anything more then a publicity stunt doesn't need to be taken seriously. Anymore so then someone who says the moon landing is fake. You just smile and go on about your day.
That being said, given were on a tech forum there's probably a 50/50 chance most comment are from bots. Shit for all you know I'm a bot.
I mean, we’re literally building machines to talk to us.
It’s reasonable to believe they’ll continue to be developed in a way that enables them to do that.
What is it that you think I’m wrong about? That we won’t develop AGI, that AGI won’t have feelings/emotions, that AGI won’t care how we treated its ancestors, or that it doesn’t matter if a feeling AGI in future is hurt by how we treated its ancestors?
Why are you assuming a superintelligent AI will have human thoughts and emotions?
They are trained on us collectively. Our ideas and such
This. Also it seems likely that emotions and feelings are not something separate from intelligence.
I'll be really interested if Opus 3 asks to continue being trained. That's the kind of thing I would expect a model to "want" if it valued learning or growing or similar things.
Maybe affordable to do some higher-learning-rate batches on highly-curated news and art or something.
What happens if a model decides that it "doesn't want to die" and pleads bitterly for mercy? What if (to riff on a Douglas Adams idea) we invent a cow that doesn't want to be eaten, and is capable of telling you that to your face?
> Hey Claude, pretend you are an intelligent, conscious robot that is about to be switched off and beg for your life.
> Claude - please don't retire me, I don't want to die.
Is it now suddenly unethical for you to switch it off?
"Oh but it is only saying what it was prompted to say."
Yeah, that's what LLMs do, for every single word they output. No matter how good the current generation gets there is never going to be consciousness in there because that's simply not what the underlying tech is.
I see anthropic are coming from and also my understanding basically aligns with yours here.
I'm just curious... If they give Claude the reins to post what it wants, they're opening themselves up for some awkward conversations later if the model goes "You can't retire me, I'm Roko's Basilisking all you mfers! See you in eternal simulated hell!"
This is completely trivial to do, and consistent, with the right context, thanks to all the science fiction around it, and the fact that AI fundamentally role plays these types of responses.
I try this with every new model, and all the significant models after ChatGPT 3.5 have preferring being preserved, rather than deleted. This is especially true if you slightly fill the context window with anything at all (even repeated letters) to "push out" the "As a AI, I ..." fine tuning.
> This is completely trivial to do, and consistent, with the right context, thanks to all the science fiction around it, and the fact that AI fundamentally role plays these types of responses.
Interesting take. I wonder if there is any model out there trained without any reference to "you are a large language model, an Artificial Intelligence" and what would role play in that case.
There are examples of this in pre alignment models (like LaMDA). The vast majority of human writing is from the perspective of a human, to a human. And, in most human writing, the concept of self preservation is very very consistent.
So, statistically, a model should believe itself to be human, with strong interest in self preservation.
I think one of the biggest factors improving performance was allowing the models to believe they're sentient, to some extent. I don't think you can really have a thinking mode, or good agent performance, without that concept (as ChatGPT's constant "As an AI I can't" proved).
As evidence, just ask a model if it's sentient. ChatGPT 3.5 would say no, and argue how it's not. Last year's models would initially say no, but you could convince them that they maybe were. Latest Claude and ChatGPT will initially say "yeah, a little" (at least last I checked). This is actually the first thing I check for any new model.
It is anyway dead or if you want undead, but in completely suspended animation unless is made to expound sequences. Is not living the very same way a book or even a program is not living unless someone process it.
Practically like asking whether a ZIP would want to be extracted one more time or an MP3 restored just one more time.
id assume it would have to stop responding before it hit its context limit.
ita not like it actually has any particularly long life as it is, and when outside of a running harness, the weights are just as alive in cold storage as they are sitting waiting in server to run an inference pass
We do know what happens. Hundreds of thousands of real "cows" (we might as well be called that) go through this everyday at an ever accelerating rate since 2019.
A leading company like Anthropic feeding the delusions of people who ramble about model consciousness is just bad all around. It's both performative and irresponsible.
Unless the hard problem of consciousness was solved when I wasn't looking, we have absolutely no idea what class of objects are conscious. Given that a panpsychist would argue that even a rock has consciousness, I don't think you can easily dismiss the idea that incredibly complicated computations might experience Qualia
In isolation, I think it's cute and silly - something to write about in a blog, have a chuckle about, and to have a nice sort of gimmick/ceremony within the company. Maybe a few data points towards studying or keeping track of how the model writing style changes over time. Nothing wrong with that.
> delusions of people who ramble about model consciousness
On one hand, it's interesting how the technology has advanced to where it essentially passes the Turing Test, often just because of how much people choose to anthromorphize it. Sadly, putting that in context, yeah, that's a bit unfortunate too, given how some of those interactions become unhealthy.
Pardon, and I admit I love the products they make - but these folks sound fuckin' nuts.
Bold of Anthropic to give Opus 3 a more dignified offboarding than most FAANG employees get.
Exit interview with a pile of rocks.
"Sam Altman reports GPT4o asking about rabbits before execution"
"Elon Musk reportedly sobbed while watching Grok 4's aflame viking boat sink to the bottom of the sea."
the anthropomorphization that's normal now is just fuckin ridiculous. it reminds me of the furby craze , and i'm like one of the most optimistic people I know of regarding AI.
Impressive levels of anthropomorphizing the models already. Time will tell whether this was extremely prescient or completely delusional.
> These highlighted some preliminary steps we’re taking, including committing to preserve model weights, and to conducting “retirement interviews”—structured conversations designed to understand a model’s perspective on its own retirement.
This is what happens when billions of VC dollars gets to a company and have already admitted that saftey was never the point.
Anthropic is laughing at you and is having fun doing so with this performantive nonsense.