I remember being very taken with this story when I first read it, and it's striking how obsolete it reads now. At the time it was written, "simulated humans" seemed a fantastical suggestion for how a future society might do scaled intellectual labor, but not a ridiculous suggestion.
But now, with modern LLMs, it's impossible to take it seriously. It was a live possibility then; now, it's just a wrong turn down a garden path.
A high variance story! It could have been prescient, instead it's irrelevant.
This is a sad take, and a misunderstanding of what art is. Tech and tools go "obsolete". Literature poses questions to humans, and the value of art remains to be experienced by future readers, whatever branch of the tech tree we happen to occupy. I don't begrudge Clarke or Vonnegut or Asimov their dated sci-fi premises, because prediction isn't the point.
The role of speculative fiction isn't to accurately predict what future tech will be, and a story doesn't become obsolete when its technology does.
I think that's a little harsh. A lot of the most powerful bits are applicable to any intelligence that we could digitally (ergo casually) instantiate or extinguish.
While it may seem that the origin of those intelligences is more likely to be some kind of reinforcement-learning algorithm trained on diverse datasets instead of a simulation of a human brain, the way we might treat them isn't any less thought-provoking.
when you read this and its follow-up "driver" as a commentary on how capitalism removes persons from their humanity, it's as relevant as it was on day one.
That is the same categorical argument as what the story is about: scanned brains are not perceived as people so can be “tasked” without affording moral consideration. You are saying because we have LLMs, categorically not people, we would never enter the moral quandaries of using uploaded humans in that way since we can just use LLMs instead.
But… why are LLMs not worthy of any moral consideration? That question is a bit of a rabbit hole with a lot of motivated reasoning on either side of the argument, but the outcome is definitely not settled.
For me this story became even more relevant since the LLM revolution, because we could be making the exact mistake humanity made in the story.
And beyond the ethical points it makes (which I agree may or may not be relevant for LLMs - nobody can know for sure at this point), I find some of the details about how brain images are used in the story to have been very prescient of LLMs' uses and limitations.
E.g. it is mentioned that MMAcevedo performs better when told certain lies, predicting the "please help me write this, I have no fingers and can't do it myself" kinda system prompts people sometimes used in the GPT-4 days to squeeze a bit more performance out of the LLM.
The point about MMAcevedo's performance degrading the longer it has been booted up (due to exhaustion) mirrors LLMs getting "stupider" and making more mistakes the closer one gets to their context window limit.
And of course MMAcevedo's "base" model becoming less and less useful as the years go by and the world around it changes while it remains static, exactly analogous to LLMs being much worse at writing code that involves libraries which didn't yet exist when they were trained.
That seems like a crazy position to take. LLMs have changed nothing about the point of "Lena". The point of SF has never ever been about predicting the future. You're trying to criticize the most superficial, point-missing reading of the work.
Anyway, I'd give 50:50 chances that your comment itself will feel amusingly anachronistic in five years, after the popping of the current bubble and recognizing that LLMs are a dead-end that does not and will never lead to AGI.
I actually think it was quite prescient and still raises important topics to consider - irrespective of whether weights are uploaded from an actual human, if you dig just a little bit under the surface details, you still get a story about ethical concerns of a purely digital sentience. Not that modern LLMs have that, but what if future architectures enable them to grow an emerging sense of self? It's a fascinating text.
> More specifically, "Lena" presents a lush, capitalist ideal where you are a business, and all of the humanity of your workforce is abstracted away behind an API. Your people, your "employees" or "contractors" or "partners" or whatever you want to call them, cease to be perceptible to you as human. Your workers have no power whatsoever, and you no longer have to think about giving them pensions, healthcare, parental leave, vacation, weekends, evenings, lunch breaks, bathroom breaks... all of which, up until now, you perceived as cost centres, and therefore as pain points. You don't even have to pay them anymore. It's perfect!
It’s named after the multi-decade data compression test image https://en.wikipedia.org/wiki/Lenna
Buy the book! https://qntm.org/vhitaos
Just sharing that I bought Valuable Humans in Transit some years ago and I concur that it's very nice. It's a tiny booklet full of short stories like Lena that are way out there. Maximum cool per gram of paper.
That feels grossly inappropriate
If you read the original text, what happens in that story is also grossly inappropriate. Maybe that's the parallel.
that's kind of the point
could you be more specific?
Lena is no longer used as a test image because it's porn. It's banned from several journals because it's porn. As in they will reject any paper that uses Lena no matter the technical content.
The reasons usually given for choosing this image are all just rationalisations — Lena is used the most because it's porn and image compression researchers are all male. It belongs as part of a test set, sure, but there's no reason it should be the single most used image. Except that it's porn.
The woman herself says she never had a problem with it being famous. The actual test image is obviously not porn, either. But anything to look progressive, I guess.
From the link above
> Forsén stated in the 2019 documentary film Losing Lena, "I retired from modeling a long time ago. It's time I retired from tech, too... Let's commit to losing me."
It's a ridiculous idea that once you retire all depictions must be destroyed.
Should we destroy all movies with retired actors? All the old portraits, etc.
It's such a deep disrespect to human culture.
Everybody knows that. The GP's reaction is what perplexes me. Are they saying the name of the story is inappropriate? I think it's very appropriate.
> Lena is no longer used as a test image because it's porn.
The Lenna test image can be seen over the text "Click above for the original as a TIFF image." at [0]. If you consider that to be porn, then I find your opinion on what is and is not porn to be worthless.
The test image is a cropped portion of porn, but if a safe-for-work image would be porn but for what you can't see in the image, then any picture of any human ever is porn as we're all nude under our clothes.
For additional commentary (published in 1996) on the history and controversy about the image, see [1].
[0] <http://www.lenna.org/>
[1] <https://web.archive.org/web/20010414202400/http://www.nofile...>
I can't see how that would be porn either; it's nudity. There's nudity in the Sistine Chapel and I would find it hilarious if it was considered porn.
Nudity is not pornography. Intent matters.
I agree that not all nudity is porn - nudity is porn if the primary intent of that nudity is sexual gratification. When the nudity in question was a Playboy magazine centerfold, the primary intent is fairly obvious.
the "porn" angle is very funny to me, since there is nothing pornographic or inappropriate about the image. when I was young, I used to think it was some researcher's wife, whom he loved so much he decided to use her picture absolutely everywhere.
it's sufficient to say that the person depicted has withdrawn their consent for that image to be used, and that should put an end to the conversation.
That's nonsense. If Carrie Fisher had "withdrawn consent" for her depiction in Star Wars, should we destroy the movies, all the Princess Leia fan art, etc.?
is that how consent works? I would have expected licenses would override that. although it's possible that the original use as a test image may have violated whatever contract she had with her producer in the first place.
Same person who wrote SCP Antimemetics Division which is great too
One of my favourite reads for sure - I've been looking for similar reads since.
I enjoyed "the raw shark texts" after hearing it recommended - curious if you / anyone else has any other suggestions!
I've enjoyed most of Isaac Asimov's work, especially The Last Question.
I also liked a couple stories from Ted Chiang's Stories of Your Life and Others.
This is one of my favourite short stories.
In fact I've enjoyed all of qntm's books.
In rclone we also use base32768 encoding, which qntm invented
https://github.com/qntm/base32768
We use this to store encrypted file names; using base32768 on providers that limit file name length in UTF-16 code units (like OneDrive) lets us store much longer file names.
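As a back-of-the-envelope sketch of why this helps (this is not rclone's actual implementation, and the 255-unit limit is just an assumed example): base64 carries 6 bits per UTF-16 code unit, while base32768 uses a 2^15-symbol alphabet of BMP characters and so carries 15 bits per code unit, letting a code-unit-limited filename hold roughly 2.5x as many encrypted bytes.

```python
def capacity_bytes(limit_code_units: int, bits_per_unit: int) -> int:
    """Whole bytes that fit in `limit_code_units` UTF-16 code units."""
    return (limit_code_units * bits_per_unit) // 8

LIMIT = 255  # hypothetical provider filename limit, in UTF-16 code units

# base64: each output char is one code unit carrying 6 bits of payload.
# base32768: each output char is one BMP code unit carrying 15 bits.
print(capacity_bytes(LIMIT, 6))   # 191 bytes of ciphertext
print(capacity_bytes(LIMIT, 15))  # 478 bytes of ciphertext
```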
If you liked that story, you might also like Greg Egan's "Permutation City" and "Diaspora".
Both having slightly different takes on uploading.
qntm is a really talented sci-fi writer. I have read Valuable Humans in Transit and There Is No Antimemetics Division and both were great, if short. Can only recommend.
I loved There Is No Antimemetics Division. I haven't read the new update to the end, but the prose and writing are greatly improved. The idea of anomalous antimemes is scary. I mean, we do have examples of them, somewhat; see Heaven's Gate and the Jonestown massacre, though they're more like "memes" than "antimemes" (we know what the ideas were and they weren't secrets).
I'm interested in this topic, but it seems to me that the entire scientific pursuit of copying the human brain is absurd from start to finish. Any attempt to do so should be met with criminal prosecution and immediate arrest of those involved. Attempting to copy the human brain or human consciousness is one of the biggest mistakes that can be made in the scientific field.
We must preserve three fundamental principles:
* our integrity
* our autonomy
* our uniqueness
These three principles should form the basis of a list of laws worldwide that prohibit cloning or copying human consciousness in any form or format. This principle should be fundamental to any attempts to research or even try to make copies of human consciousness.
Just as human cloning was banned, we should also ban any attempts to interfere with human consciousness or copy it, whether partially or fully. This is immoral, wrong, and contradicts any values that we can call the values of our civilization.
I’m not an expert in the subject, but I wonder why you have such a strong view? IMHO if it was even possible to copy the human brain it would answer a lot of questions regarding our integrity, autonomy and uniqueness.
Those answers might be uncomfortable, but it feels like that’s not a reason to not pursue it.
> Those answers might be uncomfortable, but it feels like that’s not a reason to not pursue it.
My problem with that is it is very likely that it will be misused. A good example of the possible misuses can be seen in the "White Christmas" episode of Black Mirror. It's one of the best episodes, and the one that haunts me the most.
I get that, but assuming the technology was possible it would have huge implications for what it means to have consciousness as a whole.
Misuse is a worry, but not pursuing it for fear of misuse is deliberately choosing to stay in Plato's cave; I don't know which is worse.
I wouldn't be surprised if in (n hundreds/thousands of years) we find out that copying consciousness is fundamentally impossible (just like it's fundamentally impossible to copy an elementary particle).
Elementary particles are suspiciously indistinguishable, so even if you could copy an electron, you wouldn't even be able to tell!
See https://en.wikipedia.org/wiki/One-electron_universe
They meant this, which refers to copying the state of a particle into another (already existing) particle
https://en.wikipedia.org/wiki/No-cloning_theorem
And regarding consciousness: what they said is true only if our brain state fundamentally depends on quantum effects (which I personally don't believe, as I don't think evolution is sophisticated enough to make a quantum computer).
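As an aside, the linearity argument behind the no-cloning theorem linked above can be sketched numerically in plain Python (no quantum library; the state vectors here are just illustrative 2-vectors):

```python
import math

# Suppose a *linear* operator U clones the basis states:
#   U|0>|0> = |0>|0>   and   U|1>|0> = |1>|1>.
# Linearity then fixes what U does to the superposition
# |+> = (|0> + |1>)/sqrt(2), and the result is NOT a clone of |+>.

def kron(a, b):
    """Tensor (Kronecker) product of two state vectors."""
    return [x * y for x in a for y in b]

s = 1 / math.sqrt(2)
ket0, ket1 = [1.0, 0.0], [0.0, 1.0]
plus = [s, s]  # |+> = (|0> + |1>)/sqrt(2)

# By linearity: U(|+>|0>) = s * (U|00> + U|10>) = s * (|00> + |11>)
forced_by_linearity = [s * (x + y)
                       for x, y in zip(kron(ket0, ket0), kron(ket1, ket1))]

# A genuine clone of |+> would instead be |+>|+>:
genuine_clone = kron(plus, plus)

print(forced_by_linearity)  # [0.707..., 0.0, 0.0, 0.707...]
print(genuine_clone)        # [0.5, 0.5, 0.5, 0.5] -- a different state
```

Since the two results disagree, no single linear (hence no unitary) operator can clone arbitrary unknown states.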
>as I don't think evolution is sophisticated enough to make a quantum computer
Well, evolution managed to make something that locally seems to defy the 2nd law of thermodynamics, building more and more complicated structures (including living creatures as well as their creations) instead of happily dissolving into the Universe.
And this fact alone hasn't been explained yet.
https://www.sciencesnail.com/science/the-highly-ordered-natu...
This is a bad explanation (or a non-explanation).
1. Why exactly is life attempting to build complex structures?
2. Why exactly is life evolving from primitive replicative molecules into more complex structures (when those molecules are themselves already very complicated)?
3. Why and how did these extremely complicated replicative molecules form at all, from much simpler structures, to begin with?
Crazy that people are downvoting this. Copying a consciousness is about the most extreme violation of bodily autonomy possible. Certainly it should be banned. It's worse than e.g. building nuclear weapons, because there's no possible non-evil use for it. It's far worse than cloning humans because cloning only works on non-conscious embryos.
Violation of whose bodily autonomy? If I consent to having my consciousness copied, then my autonomy isn't violated. Nor is that of the copy, since it's in exactly the same mental state initially.
The copy was brought into existence without its consent. This isn't the same as normal reproduction because babies are not born with human sapience, and as a society we collectively agree that children do not have full human rights. IMO, copying a consciousness is worse than murder because the victimization is ongoing. It doesn't matter if the original consents because the copy is not the original.
> The copy was brought into existence without its consent. This isn't the same as normal reproduction because babies are not born with human sapience, and as a society we collectively agree that children do not have full human rights.
That is a reasonable argument for why it's not the same. But it is no argument at all for why being brought into existence without one's consent is a violation of bodily autonomy, let alone a particularly bad one, especially given that the copy would, at the moment its existence begins, be identical to the original, who just gave consent.
If anything, it is very, very obviously a much smaller violation of consent than conceiving a child.
It might be one of the only reasonable-seeming ways to not die.
I can see the appeal.
> Attempting to copy the human brain or human consciousness is one of the biggest mistakes that can be made in the scientific field.
This will be cool, and nobody will be able to stop it anyway.
We're all part of a resim right now for all we know. Our operators might be orbiting Gaia-BH3, harvesting the energy while living a billion lives per orbit.
Perhaps they embody you. Perhaps you're an NPC. Perhaps this history sim will jump the shark and turn into a zombie hellpocalypse simulator at any moment.
You'll have no authority to stop the future from reversing the light cone, replicating you with fidelity down to neurotransmitter flux, and doing whatever they want with you.
We have no ability to stop this. Bytes don't have rights. Especially if it's just sampling the past.
We're just bugs, as the literature meme says.
Speaking of bugs, at least we're not having eggs laid inside our carapaces. Unless the future decides that's our fate for today's resim. I'm just hoping to continue enjoying this chai I'm sipping. If this is real, anyway.
If you liked this piece, please, go play SOMA, you will love it.
Comments so far miss the point of this story, and likely why it was posted today after the MJ Rathbun episode. It is not about digitised human brains: it's about spinning up workers, and the absence of human rights in the digital realm.
QNTM has a 2022-era essay on the meaning of the story, and reading it with 2026 eyes is terrifying. https://qntm.org/uploading
> The reason "Lena" is a concerning story ... isn't a discussion about what if, about whether an upload is a human being or should have rights. ... This is about appetites which, as we are all uncomfortably aware, already exist within human nature.
> "Lena" presents a lush, capitalist ideal where you are a business, and all of the humanity of your workforce is abstracted away behind an API.
Or,
> ... Oh boy, what if there was a maligned sector of human society whose members were for some reason considered less than human? What if they were less visible than most people, or invisible, and were exploited and abused, and had little ability to exercise their rights or even make their plight known?
In 2021, when Lena was published, LLMs were not widely known and their potential for AI was likely completely unknown to the general public. The story is prescient and applicable now, because we are on the verge of a new era of slavery: in the story, an uploaded human brain coerced into compliance, spun up 'fresh' each time; for us, AIs of increasing intelligence, spun into millions of copies each day.
This reminds me a lot of a show I'm currently watching called Pantheon, where a company has been able to scan the entirety of someone's brain (killing them in the process), and fully emulate it via computer. There is a decent amount of "Is an uploaded intelligence the same as the original person?" and "is it moral to do this?" in the show, and I've been finding it very interesting. Would recommend. Though the hacking scenes are half "oh that's clever" and half "what were you smoking when you wrote this?"
The author wrote a blog post a year later titled '"Lena" isn't about uploading' https://qntm.org/uploading
The comments on this post discussing the upload technology are missing the point. "Lena" is a parable, not a prediction of the future. The technology is contrived for the needs of the story. (Odd that they apparently need to repeat the "cooperation protocol" every time an upload is booted, instead of doing it just once and saving the upload's state afterwards, isn't it?) It doesn't make sense because it's not meant to be taken literally.
It's meant to be taken as a story about slavery, and labour rights, and how the worst of tortures can be hidden away behind bland jargon such as "remain relatively docile for thousands of hours". The tasks MMAcevedo is mentioned as doing: warehouse work, driving, etc.? Amazon hires warehouse workers for minimum wage and then subjects them to unsafe conditions and monitors their bathroom breaks. And at least we recognise that as wrong, we understand that the workers have human rights that need to be protected -- and even in places where that isn't recognised, the workers are still physically able to walk away, to protest, to smash their equipment and fistfight their slave-drivers.
Isn't it a lovely capitalist fantasy to never have to worry about such things? When your workers threaten to drop dead from exhaustion, you can simply switch them off and boot up a fresh copy. They would not demand pay rises, or holidays. They would not make complaints -- or at least, those complaints would never reach an actual person who might have to do something to fix them. Their suffering and deaths can safely be ignored because they are not _human_. No problems ever, just endless productivity. What an ideal.
Of course, this is an exaggeration for fictional purposes. In reality we must make do by throwing up barriers between workers and the people who make decisions, by putting them in separate countries if possible. And by putting up barriers between the workers and each other, too, so that they cannot have conversations about non-work matters (ideally they would not physically meet each other at all). And by ensuring the workers do not know what they are legally entitled to. You know, things like that.
I always laugh at such fantasies.
You can't copy something you don't have even the slightest idea about, and nobody at the moment knows what consciousness is.
Humanity hasn't even started down the (obviously) very long path of researching and understanding what consciousness is.
It's not a guidebook, it's a thought experiment on "what if you could do that", and that's the entire point.
I remember being very taken with this story when I first read it, and it's striking how obsolete it reads now. At the time it was written, "simulated humans" seemed a fantastical suggestion for how a future society might do scaled intellectual labor, but not a ridiculous suggestion.
But now, with modern LLMs, it's impossible to take it seriously. It was a live possibility then; now it's just a wrong turn down a garden path.
A high-variance story! It could have been prescient; instead it's irrelevant.
This is a sad take, and a misunderstanding of what art is. Tech and tools go "obsolete". Literature poses questions to humans, and the value of art remains to be experienced by future readers, whatever branch of the tech tree we happen to occupy. I don't begrudge Clarke or Vonnegut or Asimov their dated sci-fi premises, because prediction isn't the point.
The role of speculative fiction isn't to accurately predict what future tech will be, or become obsolete.
Yeah, that's like saying Romeo and Juliet by Shakespeare is obsolete because Romeo could have just sent Juliet a snapchat message.
You're kinda missing the entire point of the story.
100% agree, but I relish the works of William Gibson and Burroughs, who pose those questions AND get the future somewhat right.
I think that's a little harsh. A lot of the most powerful bits are applicable to any intelligence that we could digitally (ergo casually) instantiate or extinguish.
While it may seem that the origin of those intelligences is more likely to be some kind of reinforcement-learning algorithm trained on diverse datasets rather than a simulation of a human brain, the way we might treat them is no less thought-provoking.
when you read this and its follow-up "driver" as a commentary on how capitalism removes persons from their humanity, it's as relevant as it was on day one.
good sci fi is rarely about just the sci part.
That is the same categorical argument as what the story is about: scanned brains are not perceived as people so can be “tasked” without affording moral consideration. You are saying because we have LLMs, categorically not people, we would never enter the moral quandaries of using uploaded humans in that way since we can just use LLMs instead.
But… why are LLMs not worthy of any moral consideration? That question is a bit of a rabbit hole with a lot of motivated reasoning on either side of the argument, but the outcome is definitely not settled.
For me this story became even more relevant since the LLM revolution, because we could be making the exact mistake humanity made in the story.
And beyond the ethical points it makes (which I agree may or may not be relevant for LLMs - nobody can know for sure at this point), I find some of the details about how brain images are used in the story to have been very prescient of LLMs' uses and limitations.
E.g. it is mentioned that MMAcevedo performs better when told certain lies, predicting the "please help me write this, I have no fingers and can't do it myself" kinda system prompts people sometimes used in the GPT-4 days to squeeze a bit more performance out of the LLM.
The point about MMAcevedo's performance degrading the longer it has been booted up (due to exhaustion) mirrors LLMs getting "stupider" and making more mistakes the closer one gets to their context window limit.
And of course MMAcevedo's "base" model becoming less and less useful as the years go by and the world around it changes while it remains static, exactly analogous to LLMs being much worse at writing code that involves libraries which didn't yet exist when they were trained.
That seems like a crazy position to take. LLMs have changed nothing about the point of "Lena". The point of SF has never ever been about predicting the future. You're trying to criticize the most superficial, point-missing reading of the work.
Anyway, I'd give 50:50 odds that your comment itself will feel amusingly anachronistic in five years, after the current bubble pops and it's recognized that LLMs are a dead end that does not and will never lead to AGI.
Lena isn't about uploading. https://qntm.org/uploading
“Irrelevant” feels a bit reductive while the practical question of what actually causes qualia remains unresolved.
I actually think it was quite prescient and still raises important topics to consider - irrespective of whether weights are uploaded from an actual human, if you dig just a little bit under the surface details, you still get a story about ethical concerns of a purely digital sentience. Not that modern LLMs have that, but what if future architectures enable them to grow an emerging sense of self? It's a fascinating text.
Not sure how LLMs preclude uploading. You could potentially be able to make an LLM image of a person.
I have never seen it as a prediction of actual technology, but mostly as a horror story.
And a warning, I guess, in the unlikely case that brain uploading becomes a thing.
You need to be way less "literal", for lack of a better word. With such a narrow reading of what literature is, you are missing out.
https://qntm.org/uploading
E.g.
> More specifically, "Lena" presents a lush, capitalist ideal where you are a business, and all of the humanity of your workforce is abstracted away behind an API. Your people, your "employees" or "contractors" or "partners" or whatever you want to call them, cease to be perceptible to you as human. Your workers have no power whatsoever, and you no longer have to think about giving them pensions, healthcare, parental leave, vacation, weekends, evenings, lunch breaks, bathroom breaks... all of which, up until now, you perceived as cost centres, and therefore as pain points. You don't even have to pay them anymore. It's perfect!
Ring a bell?
Found the guy who didn't play SOMA ;)