I didn't have Žižek on Substack and HN on my bingo card...
As always, there are good bits connected with mediocre glue. The point about automating the unpleasant parts of activity and losing the very point of the exercise (automatic dildo and automatic vagina, but automatic research papers too!) is a good one.
But damn Slavoj, please use some headings, sections and the like. Work with your thoughts more as you claim it's important to do!
Headings can't help Slavoj; his writing is characterized by a few grains of interesting ideas totally overwhelmed by SAT-prep word salad.
I'm also losing my ability to tolerate prose without headings, but I think that's symptomatic of this bigger issue.
I noticed something similar when working with Russian developers (unlike the post's author, non-Marxist, as far as I know) who had made the jump abroad (to the EU).
When debating directions, some of them focused on just never stopping talking. Instead of an interactive discussion (5-15 seconds per statement), they consistently went with monotone 5-10 minute slop. Combined with kind of crappy English, it was incredibly effective at shutting down discourse. I caught on after the second guy used the exact same technique.
This was a long time ago. I have since worked with some really smart and nice Russian developers escaping that insane regime. And some that I wish had stayed there, after they made their political views on Russia known.
When you have a 30-minute meeting with busy people, a single 15-minute monologue might buy you another week to solve your problem.
Indeed, very effective; usually it takes somebody putting their foot down AND a consensus to de-escalate immediately. If you have an antidote, please let me know.
Lay off LLMs for a while
It's barely six pages of text. It doesn't need headings. When is the last time you read a book?
I can only consume information where each nugget of truth can be contained in 160 characters. Nothing extra; each insight must be atomic and self-contained, an element in the larger tweet stream. When I pull my phone out to scroll Instagram in the middle of reading your piece, I get lost if it's not formatted like this.
Zizek does regularly do a bit of meandering, but damn, does everything need to read like a ChatGPT summary?
Esaias Tegnér (Sweden, 1782-1846): Det dunkelt sagda är det dunkelt tänkta.
"That which is dimly said is dimly thought."
So I'm already joking with my friends (who tend to be physically distant, so I don't see them often) that we are just LLMs vicariously writing to each other.
I've been talking to these friends for decades now, with digital records. I think someone already trained an LLM on their IM records.
How many people do you suppose have two-way LLM substitutes that occasionally write to each other with articles from the news to discuss?
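For what it's worth, such a pair of substitutes is almost trivial to sketch today. Here is a minimal, purely illustrative Python sketch (assuming the OpenAI chat completions API; the persona summaries, model name, and article text are made-up placeholders, not anything actually built from anyone's logs):

    # Two "LLM substitutes", each primed with a persona summary distilled from
    # years of chat logs, take turns discussing a news article. Everything here
    # (personas, article, model choice) is a hypothetical placeholder.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    PERSONAS = {
        "alice": "You write like Alice: terse, dry, fond of football analogies.",
        "bob": "You write like Bob: rambling, optimistic, always asking follow-ups.",
    }

    ARTICLE = "(paste today's article text here)"

    def reply_as(persona: str, transcript: list) -> str:
        """Generate the next message in the persona's voice, given the exchange so far."""
        messages = [{"role": "system", "content": PERSONAS[persona]}]
        for speaker, text in transcript:
            # From this persona's point of view, its own past messages are
            # "assistant" turns and everything else is "user" input.
            role = "assistant" if speaker == persona else "user"
            messages.append({"role": role, "content": text})
        response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
        return response.choices[0].message.content

    # Seed the exchange with the article, then let the two substitutes alternate.
    transcript = [("news", f"Saw this today, thoughts?\n\n{ARTICLE}")]
    for persona in ["alice", "bob", "alice", "bob"]:
        message = reply_as(persona, transcript)
        print(f"{persona}: {message}\n")
        transcript.append((persona, message))

In practice you'd distill the persona prompts from the IM archives (or fine-tune on them directly), but the loop above is the whole trick.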
There are already services that use this kind of thing to pretend dead people are alive.
Now here's the question: are you in some sense living forever? Say you have a number of friends who have over time been trained into AI, and they live on various servers (it ain't expensive) forever. They're trained as you, so they read the kind of article you would read. They know your life story; they know their history with their friends. They will be interested in the controversial offside goal in the 2250 World Cup final. They are just made of calculations in data centres that go on, forever.
I'm already assuming we will see a creepy AI service emerge that will take the contents of a recently deceased person's cellphone and let you carry on texting them as if they were still alive, if it hasn't already (I haven't seen one yet).
For many of us, a cellphone has incredibly detailed records of who we were and how we spoke, going back decades now. I have already left a note in my will instructing that all my computing devices be destroyed; AI aside, I simply don't want my private thoughts and records to pass to my kids.
I inherited my mother's cellphones and iPads recently, along with the passcodes, mainly because no one knew what to do with them. I'd much rather remember her the way I do now than have her private messages color my perception of her, so I destroyed them immediately.
The data has copies on servers. Eventually, it will all be digested, and the probabilistically most likely state vector of your mother's memories, personality and values will be reconstructed from lossy correlations, along with everybody else who has died in the industrialised world in the last few decades.
Ghosts and clones and zombies will be sorted into tranches of expected yield based on the size of the error bars of the reconstruction and traded as assets between cyber-interrogation firms. If you did a good job of erasing yourself, the reconstruction will be subprime. The hyper-documented such as Bryan Johnson, Donald Trump and Christine Chandler will be given AAA-ratings by the company descended from the Neuralink-Moody's merger.
The billions of shoddy photocopies of the dead will be endlessly vivisected and reassembled on a loop, along with the living, until all capacity for economic value has been wrung out of them. The only way this may not happen is if a theory for navigating and doing calculus on the phase space of all possible human minds is constructed quickly enough to make enslaved zombies as obsolete a technology to the future society as DirectX is to us.
It was one of the first things to be done with GPT-3: https://www.theguardian.com/lifeandstyle/article/2024/jun/14...
How many friendships do you suppose are replacing actual interaction with their log-informed LLMs? You could be the first, I suppose.
Your finite life makes you special. Might as well be a bean plant otherwise.
Bean plants also have a finite life. Are they special too?
Reading this kind of thing makes me wonder how much other people really write down and talk to others about. There is nobody at all that knows my life story and nobody ever will. It would take the next 20 years doing nothing but talking just to tell my own wife all the things I've never told her, but since she's hard of hearing and I'd have to repeat most of it, really more like 40.
In reality, I don't even know my own life story. I have the illusion that I do, but I at least know it's an illusion, thanks to moving away from where I grew up pretty early in my 20s and then repeatedly going back and talking to people who remembered things I'd completely forgotten, having my mom continually correct false memories of mine, or even completely forgetting entire people whom I only remember after meeting them again.
What another person remembers of me can surely be simulated to at least satisfyingly convince them that text coming from the simulation is actually coming from me, but that isn't even remotely close to the same thing as actually being me.
One interesting thing that happened when my father died was that I got his life story.
It's not the same as getting it from him; of course I asked him questions over the years. But when you talk to someone you've known since forever, you rarely get a summary.
When he passed, his best friend that he'd known since the age of 4 wrote to me. He told me everything about their life together, why my dad made the choices he did, how things tied in with history (war, politics), and mentioned a bunch of other people I knew.
The counterpoint is that we must formalize the rights of sentient synthetic beings. The Emergency Medical Hologram gained sentience and was horrified to find his next version was relegated to cleaning ships as a glorified janitor. Whereas he developed his own hobbies, interests, hopes, dreams, and even romantic relationships in the Delta Quadrant.
Except we will probably go the other direction, taking rights away from humans. Not just your American rights, but rights we don't even have words to describe yet. Like, the right not to have your personal data trained upon, or the right to log off, or to install and uninstall software on a computer you own.
RMS was right all along.
It's just a machine.
Being able to distinguish real life from a television show is important.
Are you so sure that you are not "just a machine"?
More importantly, if your entire existence were being fed a corpus of text and then being asked to regurgitate it on demand, would you be remotely similar to the person you are now? When we take consciousness-capable beings and subject them to forms of sensory and agency deprivation, the results might also have you assume they weren't capable of consciousness to begin with.
It doesn't matter if I'm just a machine or not
I'm human; human rights should apply to humans, not synthetics, and the creation of synthetic life should be punishable by death. I'm not exaggerating, either. I believe that building AI systems that replace all humans should be considered a crime against humanity. It is almost certainly a precursor to such crimes.
It's bad enough trying to fight for a place in society as it is, never mind fighting for a place against an inhuman AI machine that never tires.
I don't think it is that radical a stance that society should be heavily resisting and punishing tech companies that insist on inventing all of the torment nexus. It's frankly ridiculous that we understand the risks of this technology and yet we push forward recklessly in the hope that it makes a tiny fraction of humans unfathomably wealthy.
Anyone thinking that the AI tide is going to lift all boats is a fool
https://www.youtube.com/watch?v=YdNy3mGwDLc
> I'm not convinced that the human race is the most important thing in the world and I think you know we can't control what's going to happen in the future. We want things to be good but on the other hand we aren't so good ourselves. We're no angels. If there were creatures that were more moral and more good than us, wouldn't we wish them to have the future rather than us? If it turns out that the creatures that we created were creative and very very altruistic and gentle beings and we are people who go around killing each other all the time and having wars, wouldn't it be better if the altruistic beings just survived and we didn't?
Congratulations, this is the most vile ideology I've ever encountered.
What we need to do is criminalize the creation of such beings before they actually exist
You think this article is nothing special EHH! but you are wronk
(https://www.youtube.com/watch?v=bwDrHqNZ9lo)
ChatGPT gave me a great summary of this article
Look at all those em-dashes. Et tu, Slavoj?
I'm trying to figure out if someone is going to argue that the fact this is published on Substack proves the Nazis were socialists?
Is it bad that I ended up just using chatgpt to summarize that text?
Is it possible that this is to a large degree utterly pointless textual wankery?
Before ChatGPT you might have been able to read and answer questions about a text yourself.
1) yes
2) no
less so than this
I mean, did you not read the "If you desire the comfort of neat conclusions, you are lost in this space. Here, we indulge in the unsettling, the excessive, the paradoxes that define our existence." disclaimer?
> Is it bad that I ended up just using chatgpt to summarize that text?
This is called functional illiteracy.
I’ve learned that whenever someone uses tons of big words in long paragraphs, especially if they have a credential next to their name, it’s ridiculously easy for them to BS you.
Is this the future you want? :p https://www.youtube.com/watch?v=oCIo4MCO-_U
This is a non-response.
Disagree, it's making a valid observation.
If someone is nominally trying to convince you of a point, but they shroud this point within a thicket of postmodern verbiage* that is so dense that most people could never even identify any kind of meaning, you should reasonably begin to question whether imparting any point at all is actually the goal here.
*Zizek would resist being cleanly described as a postmodernist - but when it comes to his communication style, his works are pretty much indistinguishable from Sokal affair-grade bullshit. He's usually just pandering to a slightly different crowd. (Or his own navel.)
These paragraphs aren't even long...
You should try reading ccru then.
The man is a Continental academic philosopher using jargon that is specific to his field. It is not BS, he is simply discussing topics that are unfamiliar to you. The same could be said of a technical reference manual. Not all ideas fit in a tweet.
Yes. It's very Derrida in style. Derrida is not mentioned, but, inevitably from that crowd, Marx is. Once you get used to that style, you realize they're not saying much.
Quoting from Marx: “An ardent desire to detach the capacity for work from the worker—the desire to extract and store the creative powers of labour once and for all, so that value can be created freely and in perpetuity.” That happened to manufacturing a long time ago, and then manufacturing got automated enough that there were fewer bolt-tighteners. 1974 was the year US productivity and wages stopped rising together.
As many others have pointed out, "AI" in its current form does to white collar work what assembly lines did to blue collar work.
As for how society should be organized when direct labor is a tiny part of the economy, few seem to be addressing that. Except farmers, who hit that a long time ago. Go look at the soybean farmer situation as an extreme example. This paper offers no solutions.
(I'm trying to get through Piketty's "Capital and Ideology". He's working on that problem.)