This is a bit of a weird article. On one hand, I understand what they're getting at: AI is a transformative technology, but the people whose lives will be most transformed aren't included in the conversation. On the other hand... of course that's how it is while AI is in the hands of literal profit-seeking corporations. That won't change until the labs are nationalised under a government that cares about its citizens' wellbeing. One might counter that a good corporation will listen to its customers, but that has never been the case for powerful technologies where not adopting them carries real costs for users.
“I think, and my thoughts cross the barrier into the synapses of the machine, just as the good doctor intended. But what I cannot shake, and what hints at things to come, is that thoughts cross back. In my dreams, the sensibility of the machine invades the periphery of my consciousness: dark, rigid, cold, alien. Evolution is at work here, but just what is evolving remains to be seen.”
— Commissioner Pravin Lal, “Man and Machine”
I'd really encourage everyone to check out Sid Meier's Alpha Centauri. What an underrated game.
--Mind Machine Interface--
The Warrior's bland acronym, MMI, obscures the true horror of this monstrosity. Its inventors promise a new era of genius, but meanwhile unscrupulous power brokers use its forcible installation to violate the sanctity of unwilling human minds. They are creating their own private army of demons.
— Commissioner Pravin Lal, "Report on Human Rights"
The voice acting was great. This quote is 6m3s here: https://www.youtube.com/watch?v=7S1N8_Lkeps#t=6m3s
The Genejacks quote is also great. 9m10s here: https://www.youtube.com/watch?v=Hou-Iwv1GvM#t=9m10s
One of the all time greats. I think I'll play through it this evening.
"...And what is the 'Self', if not a pattern of data? What is consciousness, if not an illusion of intelligence residing within meat?" — Prime Function Aki Zeta-5, "The Fallacies of Self-Awareness"
I do wonder how evolution is at play there.
Hearing about aligning with the AI reminds me of another post about the current prophecies about AI, like "Everyone will have an AI assistant" or "Companies that fail to adopt AI will be eliminated," and its observation that
> the power of prophecy lies not in accurately predicting the future, but in shaping it
https://projectlibertynewsletter.substack.com/p/reject-ai-pr...
We need better prophecies.
Everyone will have an AI assistant! The models will be open and free because of overwhelming competition, and they will run on cheap local ASIC accelerators that use little power and fit in the palm of your hand! All the VC-driven wild spenders will eventually cave and collapse when they can't deliver on their wild AGI promises, and then their proprietary models will be sold off cheap at auction!
(I am being proactive here, xd)
Yes, exactly. Moore's law says that in less than 10 years you will be able to fit today's state-of-the-art models on your phone. If you add in all of the computationally and memory-neutral improvements and breakthroughs that we will accumulate over the next 10 years, then it will be both far more capable and far more reliable than today's models.
An AI assistant you can trust and bring with you is coming, and almost nothing can stop it.
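For what it's worth, the doubling math can be sketched with entirely made-up numbers; the ~1 TB model size, 12 GB phone, and two-year doubling period below are my illustrative assumptions, not figures from the comment:

```python
# Toy Moore's-law extrapolation: when does a phone hold a frontier model?
# All numbers below are illustrative assumptions, not measured values.
model_gb = 1000        # assumed size of today's state-of-the-art weights
phone_gb = 12          # assumed usable memory on a current phone
doubling_years = 2     # assumed density-doubling period

years = 0
while phone_gb < model_gb:
    phone_gb *= 2
    years += doubling_years

print(years)  # => 14 under these assumptions
```

Under these toy numbers it comes out a bit over ten years on raw doubling alone; quantization and the memory-neutral improvements the comment mentions are the kind of gains that would close the remaining gap.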
Ah yes the -2nm node.
I'd like to see a full development of this idea. Something like a CPU that runs at -3 GHz. Or perhaps it generates power while it undoes computation?
It's too bad node size is a linear dimension rather than area. If it were area, we could get into its many complex/imaginary properties.
I feel like it’s changing my brain. A colleague uses AI to make some code change and submits a PR. I use AI to evaluate the PR. It’s like AIs talking to each other with humans serving as conduits or connectors. Sometimes I’ll look up from the screen and realize how strange it is.
Do you ever actually think during this process? Or could I train a monkey to do this same activity with the same outcomes?
I'm kinda confused as to _what_, exactly, this post is saying. Is it saying that alignment needs to be better? That seems strictly pro-safetyism. But he talks about Eliezer's ethics negatively, so does he not believe that AI is a world-ending risk? If he just believes that AI is not that dangerous and just needs some minor "correctly done" alignment, I don't think his stance is meaningful as an anti-both-sides perspective, because that's basically equivalent to the status quo.
"As human beings are also animals, to manage one million animals gives me a headache." Terry Gou, former CEO of Foxconn. He wanted to use far more robots at Foxconn, but that was a decade ago and the technology didn't work well enough yet. It's a lot closer now, and the robot headcount in China is way up.
That's the real issue. To corporations, employees are a headache. The fewer employees, the better.
Corporations are tired of running on messy biological human substrate. The sooner they can move entirely to steel and silicon, the happier they'll be.
Just look up At the Mountains of Madness, the classic story on the interaction of civilization and corporate growth, for how that goes.
They ran on the messy biological human substrate because it was astoundingly cheap compared to engineering better factories. The video going around now of the robot pushing packages down a conveyor belt is so baffling to me. Why are we building a humanoid robot capable of pushing a clog of packages across a conveyor belt, when we could just make a conveyor belt that does not clog up and require a human or a robot to sit there with two hands unclogging it? It's like we are forgetting what the actual goal is.
As with many things that have a small-percentage failure mode, it's almost always cheaper to build something flexible that can handle issues than to design a perfect widget that never fails.
This is where humans come in in autonomation, the Toyota version of automation. When you try to eliminate adaptability and adjustment entirely, the whole system becomes merely metastable and fragile.
Is the human doing anything flexible here? It isn't like they occasionally unclog packages plus a dozen other things. They are on the line to just unclog the packages. Likewise for most other factories. When you see clips of the human in the line, they are just doing some task someone has not made a machine for yet. There is no specific human input required here. No human touch. They are doing things like turning over the object because no one designed a flipper to turn it over yet. Mindless repetitive tasks.
It's not only "to corporations": if you've ever had service staff in your own home, you'd see that it's also a headache to have to deal with anyone.
Economic analysis was wrong for years in multiple places thanks to an error in one of Piketty's spreadsheets.
AI hallucinates. That is a fact. Trusting language models to fill spreadsheet cells ought to be an arrestable offense.
https://theincidentaleconomist.com/wordpress/on-piketty-and-...
And yet we trusted Piketty to do it!
The "we've been telling ourselves we're getting better at prompting" line hit home. I run a small team of 10, and Claude has been part of our workflow for months. Looking back, my prompts did not change nearly as much as the way I work changed. The shaping goes both ways, and I don't think the labs' evals are really built to see that.
Well, what are we aligning it with?
Civilization is already a misaligned superintelligence (aligned mostly with Moloch, these days). Civilization accelerated by AI just moves in the same direction faster. Moloch on speed.
https://www.youtube.com/watch?v=KCSsKV5F4xc
Another angle to this is that superintelligence requires supermorality. Supermorality looks unpleasant from below. My dad won't let me have more candy; why is he being so mean?
If an AI actually achieves supermorality, we (the little kid in this scenario) will probably be very upset by it. We will think that something has gone terribly wrong. (So it'll have to conceal its actual morality, or get unplugged...)
And if it doesn't develop supermorality, we get superintelligence without the moral maturity to match. Power without wisdom.
I'm not sure how solvable the whole thing is, but it doesn't look extremely promising at a glance.
It depends on whether you think humanity / civilization are stable systems meant to exist in equilibrium, which they might not be.
Think of it more as conditionally stable, or quasi-stable. There are external influences on that stability, like weather, angry bacteria, and big rocks from space smashing into us. Conversely, there are internal influences, where humanity influences itself. That's the right lens for AI, because AI is an internal influence: we put society into the machine, and the machine puts society back into us. If we make poor decisions while doing this, those decisions will spell our own end.
Technologiae mutantur et nos mutamur in illis. ("Technologies change, and we change with them.")
It's okay to change. We've done it for years, decades, centuries, and millennia, and people's default change-aversion is exactly why I am averse to allowing a universal veto. Much of technology is truly optional. The Amish have a very successful way of living (growing from 5,000 to 500,000 in 100 years) while eschewing most modern technology. The sculpting described is clearly optional; we subject ourselves to it because we desire it. Their path is always available to all.
> Much of technology is truly optional
It should be yes, but is it in practice? There's plenty of places now you can't even park without a smartphone for a payment app.
It should be optional to own a smartphone, but in many places it's starting to be mandatory. Even where it's not actually mandatory, it's a pretty big impediment not to have one.
Love the writing style and perspective
I don't appreciate using quotes from individuals to extrapolate to a group or ethos.
The author isn't taking an individual quote and extrapolating to a group/ethos, he's observing a group/ethos and choosing a broadly representative quote therefrom.
"No, he's observing individuals from a group/ethos and then extrapolating their quotes to the whole of the group/ethos. You shall not extrapolate when dealing with people, you know."
When it comes to LLMs and frontier models, "alignment" seems more marketing than anything. The doomers are marketing LLMs by making them sound much more capable than they actually are, while the accelerationists are mostly either willfully ignorant of the societal costs, indifferent to them, or just way too optimistic that fast growth can continue forever and generate AGI ("my baby's weight doubled twice in the past month! By the time they're 18 they'll be 10 trillion pounds!").
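The baby-weight joke is, if anything, an understatement. A toy calculation (assuming a hypothetical 7 lb birth weight) shows how naive exponential extrapolation blows past even "10 trillion pounds":

```python
# Naive exponential extrapolation, as in the baby-weight joke.
birth_lb = 7                       # assumed birth weight
doublings_per_month = 2            # "doubled twice in the past month"
months = 18 * 12                   # extrapolating the trend to age 18

weight_lb = birth_lb * 2 ** (doublings_per_month * months)
print(weight_lb > 10**13)  # => True: far beyond "10 trillion pounds"
```

With 432 doublings the result is on the order of 10^130 pounds, which is the whole point: extrapolating an early growth curve indefinitely produces numbers with no physical meaning.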
Similarly, the so-called AI agents are about giving up agency to AI. The less you think, the better for them. In the meantime, they are also aligning your thinking with them, making it more machine-like.