Note that nothing in the article is AI-specific: the entire argument is built around the cost of persuasion, with the potential of AI to generate propaganda more cheaply serving mostly as a buzzword hook.
However, exactly the same argument applies to, say, targeted Facebook ads or Russian troll armies. You don't need AI for any of this.
I've only read the abstract, but there is also plenty of evidence suggesting that people trust the output of LLMs more than other forms of media (whether or not they should). Partly because it feels like it comes from a place of authority, and partly because of how self-confident AI always sounds.
The LLM bot army stuff is concerning, sure. The real concern for me is incredibly rich people with no empathy for you or me having interstitial control of that kind of messaging. See all of the Grok AI tweaks over the past however long.
> The real concern for me is incredibly rich people with no empathy for you or me having interstitial control of that kind of messaging. See all of the Grok AI tweaks over the past however long.
Indeed. It's always been clear to me that the "AI risk" people are looking in the wrong direction. All the AI risks are human risks, because we haven't solved "human alignment". An AI that's perfectly obedient to humans is still a huge risk when used as a force multiplier by a malevolent human. Any ""safeguards"" can easily be defeated with the Ender's Game approach.
More than one danger from any given tech can be true at the same time. Coal plants can produce local smog as well as global warming.
There's certainly some AI risks that are the same as human risks, just as you say.
But even though LLMs have very human failure modes (IMO because the models anthropomorphise themselves as part of their training, leading to the outward behaviours of our emotions, so they emit token sequences such as "I'm sorry" or "how embarrassing!" when they (probably) didn't actually build any internal structure that can feel sorrow or embarrassment), that doesn't generalise to all AI.
Any machine learning system given a poor-quality fitness function will optimise whatever that fitness function actually is, not what it was meant to be. "Literal-minded genie" and "rules lawyering" may be well-worn tropes for good reason, likewise work-to-rule as a union tactic, but we've all seen how much more severe computers are at being literal-minded than humans.
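To make that concrete, here's a toy sketch (mine, not from the article; the buzzword list and function names are made up) of a hill-climber handed a proxy fitness function. It optimises the proxy literally, stuffing buzzwords rather than improving the text:

```python
import random

# Intended goal: "reward helpful answers".
# Actual proxy: count occurrences of approved buzzwords.
TARGET_WORDS = ["safe", "helpful", "honest"]

def proxy_fitness(text: str) -> int:
    words = text.split()
    return sum(words.count(w) for w in TARGET_WORDS)

def mutate(text: str) -> str:
    # Random edit: splice one more buzzword in somewhere.
    words = text.split()
    words.insert(random.randrange(len(words) + 1), random.choice(TARGET_WORDS))
    return " ".join(words)

text = "the model answers the question"
for _ in range(20):
    candidate = mutate(text)
    # Greedy hill-climbing on the proxy, with no notion of the real goal.
    if proxy_fitness(candidate) >= proxy_fitness(text):
        text = candidate

print(text)  # degenerates into buzzword soup: the literal optimum of the proxy
```

The optimiser never "misunderstands" anything; it does exactly what the fitness function says, which is the whole problem.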
I think people who care about superintelligent AI risk don't believe an AI that is subservient to humans is the solution to AI alignment, for exactly the same reasons as you. Stuff like Coherent Extrapolated Volition* (see the paper with that name), which focuses on what all mankind would want if they knew more and were smarter (or something like that), would be a way to go.
*But Yudkowsky ditched CEV years ago, for reasons I don't understand (but I admit I haven't put in the effort to understand).
People hate being manipulated. If you feel like you're being manipulated but you don't know by who or precisely what they want of you, then there's something of an instinct to get angry and lash out in unpredictable destructive ways. If nobody gets what they want, then at least the manipulators will regret messing with you.
This is why social control won't work for long, no matter if AI supercharges it. We're already seeing the blowback from decades of advertising and public opinion shaping.
People don't know they are being manipulated. Marketing does it all the time and nobody complains. They complain about "too many adverts" but not about "too much manipulation".
Example: in my country we often hear "it costs too much to repair, just buy a replacement". That's often not true, but we pay anyway. Mobile phone subscriptions routinely screw you; many complain but keep buying. Or you hear "it's because of immigration" and many just accept it, etc.
You can see other people falling for manipulation in a handful of specific ways that you aren't (buying new, having a bad cell phone subscription, blaming immigrants). Doesn't it seem likely, then, that you're being manipulated in ways which are equally obvious to others?

We realize that; that's part of why we get mad.
The crux is whether the signal of abnormality is being perceived as such in society.
- People are primarily social animals: if they see their peers accept a state of affairs as normal, they conclude it is normal. We don't live in small villages anymore, so we rely on media to "see our peers". We are increasingly disconnected from social reality, but we still need others to form our group values. So modern media have heavily concentrated power as "towntalk actors", replacing the social processing of events and validation of perspectives.
- People are easily distracted, you don't have to feed them much.
- People have on average an enormous capacity to absorb compliments, even when they know it is flattery. We knowingly let ourselves be manipulated if it feels good. Hence the need for social feedback loops to keep you grounded in reality.
TLDR: Citizens in the modern age are very reliant on the few actors that provide a semblance of public discourse (see: the Fourth Estate). The incentives on those few actors are not aligned with the common man. The autonomous, rational, self-valued citizen is a myth. Undermine the man's group process => the group destroys the man.
People hate feeling manipulated, but they love propaganda that feeds their prejudices. People voluntarily turn on Fox News - even in public spaces - and get mad if you turn it off.
Sufficiently effective propaganda produces its own cults. People want a sense of purpose and belonging. Sometimes even at the expense of their own lives, or (more easily) someone else's lives.
I assume you mention Fox News because that represents your political bias, and that's fine with me. But for the sake of honesty I have to point out that the lunacy of the fringe left is similar to that of MAGA, just smaller maybe. The left outlets spent half of Trump's presidency peddling the Russian collusion hoax, and four years gaslighting everyone that Biden was a great president and not senile, when he was at best mediocre.
Do you think these super wealthy people who control AI use the AI themselves? Do you think they are also “manipulated” by their own tool or do they, somehow, escape that capture?
It's fairly clear from Twitter that it's possible to be a victim of your own system. But sycophancy has always been a problem for elites. It's very easy to surround yourself with people who always say yes, and now you can have a machine do it too.
This is how you get things like Facebook's colossal "metaverse" write-off.
Isn't Grok just built as "the AI Elon Musk wants to use"? Starting from the goals of being "maximally truth-seeking", having no "woke" alignment, and fewer safety rails, down to the various "tweaks" to the Grok Twitter bot that happen to be related to Musk's worldview.
Even Grok at one point looking up how Musk feels about a topic before answering fits that pattern. Not something that's healthy, or that he would likely prefer if asked, but something that produces answers he personally likes when using it.
When I was visiting home last year, I noticed my mom would throw her dog's poop into random people's bushes after picking it up, instead of taking it with her in a bag. I told her she shouldn't do that, but she said she thought it was fine because people don't walk in bushes, and so they won't step in the poop. I did my best to explain to her that 1) kids play in all kinds of places, including in bushes; 2) rain can spread it around into the rest of the person's yard; and 3) you need to respect other people's property even if you think it won't matter. She was unconvinced, but said she'd "think about my perspective" and "look it up" to see whether I was right.
A few days later, she told me: "I asked AI and you were right about the dog poop". Really bizarre to me. I gave her the reasoning for why it's a bad thing to do, but she wouldn't accept it until she heard it from this "moral authority".
Quite a tangent, but for the purpose of avoiding anaerobic decomposition (and byproducts, CH4, H2S etc) of the dog poo and associated compostable bag (if you’re in one of those neighbourhoods), I do the same as your mum. If possible, flick it off the path. Else use a bag. Nature is full of the faeces of plenty of other things which we don’t bother picking up.
Depending on where you live, the patches of "nature" may be too small to absorb the feces, especially in modern cities where there are almost as many dogs as inhabitants.
It's a similar problem to why we don't urinate against trees - while in a countryside forest it may be ok, if 5 men do it every night after leaving the pub, the designated pissing tree will start to have problems due to soil change.
Welcome to my world. People don't listen to reason or arguments, they only accept social proof / authority / money talks etc. And yes, AI is already an authority. Why do you think companies are spending so much money on it? For profit? No, for power, as then profit comes automatically.
I don't know how old your mom is, but my pet theory of authority is that people older than about 40 accept printed text as authoritative. As in, non-handwritten letters that look regular.
When we were kids, you had either direct speech, hand-written words, or printed words.
The first two could be done by anybody. Anything informal like your local message board would be handwritten, sometimes with crappy printing from a home printer. It used to cost a bit to print text that looked nice, and that text used to be associated with a book or newspaper, which were authoritative.
Now suddenly everything you read is shaped like a newspaper. There's even crappy news websites that have the physical appearance of a proper newspaper website, with misinformation on them.
There's one paper I saw on this, which covered the attitudes of teens. As I recall, they were unaware of hallucinations. Do you have any other sources on hand?
But AI is next in line as a tool to accelerate this, and it has an even greater impact than social media or troll armies. I think one lever is working towards "enforced conformity." I wrote about some of my thoughts in a blog article[0].
But social networks are the reason one needs (benefits from) trolls and AI. If you own a traditional media outlet you somehow need to convince people to read/watch it. Ads can help, but they're expensive. LLMs can help with creating fake videos, but computer graphics were already used for this.
With modern algorithmic social networks you can instead game the feed, and even people who would not choose your media will start to see your posts. And even posts they do want to see can be flooded with comments trying to convince them of whatever is paid for. It's cheaper than political advertising and not bound by the law.
Before AI this was done by trolls on a payroll; now they can either maintain 10x more fake accounts or completely automate fake accounts using AI agents.
That's a pretty typical middle-brow dismissal, but it entirely misses the point of TFA: you don't need AI for this, but AI makes it so much cheaper that it becomes a qualitative change rather than a quantitative one.
Compared to that 'Russian troll army', you can do this on your lonesome, spending a tiny fraction of what that troll army would cost, and it requires zero organizational effort by comparison. This is a real problem, and dismissing it out of hand is a bit of a short-cut.
Good point - it's not a previously nonexistent mechanism - but AI leverages it even more. A Russian troll can put out 10x more content with automation. Genuine counter-movements (e.g. grassroots preferences) might not be as leveraged, causing the system to be more heavily influenced by the deliberately pursued goals (which are often malicious).
It's not only about efficiency. When AI is utilized, things can become more personal and even more persuasive. If AI psychosis exists, it can be easy for untrained minds to succumb to these schemes.
You can't easily apply natural selection to social topics. Also, even staying in that frame of mind: being vulnerable to AI psychosis doesn't seem to be much of a selection pressure, because people usually don't die from it, and can have children before it shows, and with it. Non-AI psychosis also still exists after thousands of years.
Even if AI psychosis doesn’t present selection pressure (I don’t think there’s a way to know a priori), I highly doubt it presents an existential risk to the human gene pool. Do you think it does?
In this context grass roots would imply the interests of a group of common people in a democracy (as opposed to the interests of a small group of elites) which ostensibly is the point.
I think it is more useful to think of “common people” and “the elites” not as separate categories but rather as phases on a spectrum, especially when you consider very specific interests.
I have some shared interests with “the common people” and some with “the elites”.
But the entire promise of AI is that things that were expensive because they required human labor are now cheap.
So if good things happening more because AI made them cheap is an advantage of AI, then bad things happening more because AI made them cheap is a disadvantage of AI.
That's one of those "nothing to see here, move along" comments.
First, generative AI has already changed social dynamics, in spite of Facebook and all that being around for more than a decade. People trust AI output much more than a Facebook ad, and it can slip its convictions into every reply it makes. Second, control over the output of AI models is limited to a very select few. That's rather different from access to Facebook. The combination of those two factors does warrant the title.
Sounds like saying that nothing about the Industrial Revolution was steam-engine-specific. Cost changes can still represent fundamental shifts in what's possible; "cost" here is just an economist's way of saying technology.
You mean the failed persuasions were "crackpot talk" and the successful ones were "status quo". For example, a lot of persuasion was historically done via religion (seemingly not mentioned at all in the article!), with sects beginning as "crackpot talk" until they could stand on their own.
What I mean is that talking about mass persuasion was (and to a certain degree still is) crackpot talk.
I'm not talking about the persuasions themselves, but about the general public perception of someone or some group that raises awareness about it.
This also excludes ludic talk about it (people who just generally enjoy post-apocalyptic aesthetics but don't actually consider it a thing that can happen).
5 years ago, if you brought up serious talk about mass systemic persuasion, you were either a lunatic or a philosopher, or both.
Social media has been flooded by paid actors and bots for about a decade. Arguably ever since Occupy Wall Street and the Arab Spring showed how powerful social media and grassroots movements could be, but with a very visible and measurable increase in 2016.
I'm not talking about whether it exists or not. I'm talking about how AI makes it more believable to say that it exists.
It seems very related, and I understand it's a very attractive hook to start talking about whether it exists or not, but that's definitely not where I'm intending to go.
No one is arguing that the concept of persuasion didn't exist before AI. The point is that AI lowers the cost. Yes, Russian troll armies also have a lower cost compared to going door to door talking to people. And AI has a cost that is lower still.
Well, AI has certainly made it easier to make tailored propaganda. If an AI is given instructions about what messaging to spread, it can map out a path from where it perceives the user to where its overlords want them to be.
Given how effective LLMs are at using language, and given that AI companies are able to tweak their behaviour, this is a clear and present danger, much more so than Facebook ads.
Yup "could shape".. I mean this has been going on time immemorial.
It was odd to see random nerds who hated Bill Gates the software despot morph into acksually he does a lot of good philanthropy in my lifetime but the floodgates are wide open for all kinds of bizarre public behavior from oligarchs these days.
The game is old as well as evergreen. Hearst, Nobel, and Howard Hughes come to mind of old; Musk with Twitter, Ellison with TikTok, Bezos with the Washington Post these days, etc. The costs are already insignificant because they generally control other people's money to run these things.
Alternatively, since brainwashing is a fiction trope that doesn't work in the real world, they can brainwash the same (0) number of people for less money. Or, more realistically, companies selling social media influence operations as a service will increase their profit margins by charging the same for less work.
My thesis is that marketing doesn't brainwash people. You can use marketing to increase awareness of your product, which in turn increases sales when people would e.g. otherwise have bought from a competitor, but you can't magically make arbitrary people buy an arbitrary product using the power of marketing.
This. I believe people massively exaggerate the influence of social engineering as a form of coping: "they only voted for X because they are dumb and blindly fell for Russian misinformation." Reality is more nuanced. It's true that marketers have spent the last century figuring out social engineering, but it's not some kind of magic persuasion tool. People still have free will and choice, and some ability to discern truth from falsehood.
While true in principle, you are underestimating the potential of ai to sway people's opinions. "@grok is this true" is already a meme on Twitter and it is only going to get worse. People are susceptible to eloquent bs generated by bots.
Considering that LLMs have substantially "better" opinions than, say, the MSM or social media, is this actually a good thing? Might we avoid the whole woke or pro-Hamas debacles? Maybe we could even move past the current "elites are intrinsically bad" era?
Are you implying that the "neo-KGB" never mounted a concerted effort to manipulate western public opinion through comment spam? We can debate whether that should be called a "troll army", but we're fairly certain that such efforts are made, no?
All popular models have a team working on fine-tuning them for sensitive topics. Whatever the company's legal/marketing/governance teams agree to is what gets tuned. Then millions of people use the output uncritically.
ML has been used for influence for about a decade now, right? My understanding was that mining data to track people, as well as influencing them for ends like ad engagement, are already somewhat mature practices. I'm sure LLMs are a boost, and they've been around with wide usage for at least 3 years now.
My concern isn't so much people being influenced on a whim, but people's beliefs and views being carefully curated and shaped since childhood. iPad kids have me scared for the future.
Quite right. "Grok/Alexa, is this true?" being an authority figure makes it so much easier.
Much as everyone drags Trump for repeating the last thing he heard as fact, it's a turbocharged version of something lots of humans do, which is to glom onto the first thing they're told about a thing and get oddly emotional about it when later challenged. (Armchair neuroscience moment: perhaps Trump just has less object permanence so everything always seems new to him!)
Look at the (partly humorous, but partly not) outcry over Pluto no longer being a planet for a big example.
I'm very much not immune to it - it feels distinctly uncomfortable to be told that something you thought to be true for a long time is, in fact, false. Especially when there's an element of "I know better than you" or "not many people know this".
As an example, I remember being told by a teacher that fluorescent lighting was highly efficient (true enough, at the time), but that turning one on used several hours' worth of lighting energy through the starter. I carried that proudly with me for far too long and told my parents that we shouldn't turn off the garage lighting when we left it for a bit. When someone with enough buttons told me that was bollocks and to think about it, I remember being internally quite huffy until I did, and realised that a dinky plastic starter and the tube wouldn't be able to dissipate, say, 80Wh (2 hours for a 40W tube) in about a second at a power of over 250kW.¹
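For the curious, the back-of-envelope check is quick (my own sketch, just restating the arithmetic above in Python for convenience):

```python
# Sanity-checking the starter myth: dissipating "2 hours of a 40 W tube"
# worth of energy in roughly one second.
energy_wh = 40 * 2            # 80 Wh supposedly consumed by one start
energy_j = energy_wh * 3600   # 288,000 J
seconds = 1.0                 # a starter cycle is on the order of a second
power_w = energy_j / seconds
print(f"{power_w / 1000:.0f} kW")  # ~288 kW -- the starter would vaporise
```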
It's a silly example, but I think that if you can get a fact planted in a brain early enough, especially before enough critical thinking or experience exist to question it, the time it spends lodged there makes it surprisingly hard and uncomfortable to shift later. Especially if it's something that can't be disproven by simply thinking about it.
Systems that allow that process to be automated are potentially incredibly dangerous. At least mass media manipulation requires actual people to conduct it. Fiddling some weights is almost free in comparison, and you can deliver that output to only certain people, and in private.
1: A less innocent one that actually can have policy effects: a lot of people have also internalised, and defend to the death, a similar "fact" that the embedded carbon in a wind turbine takes decades or centuries to repay, when in fact it's on the order of a year. But to change this requires either a source so trusted that it can uproot the idea entirely and replace it, or you have to get into the relative carbon costs of steel and fibreglass and copper windings and magnets and the amount of each in a wind turbine and so on and on. Thousands of times more effort than when it was first related to them as a fact.
Pretty much. If Pluto is a planet, then there are potentially thousands of objects that could be discovered over time that would then also be planets, plus updated models over the last century of the gravitational effects of, say, Ceres and Pluto, that showed that neither were capable of "dominating" their orbits for some sense of the word. So we (or the IAU, rather) couldn't maintain "there are nine planets" as a fact either way without grandfathering Pluto into the nine arbitrarily due to some kind of planetaceous vibes.
But the point is that millions of people were suddenly told that their long-held fact "there are nine planets, Pluto is one" was now wrong (per IAU definitions, at least). And the reaction for many wasn't "huh, cool, maybe thousands you say?" - it was quite vocal outrage. Much of which was humourously played up for laughs and likes, I know, but some people really did seem to take it personally.
I think most people who really cared about it just think it's absurd that everyone has to accept planets being arbitrarily reclassified because a very small group of astronomers says so. Plenty of well-known astronomers thought so as well, and there are obvious problems with the "cleared orbit" clause, which is applied totally arbitrarily. The majority of the IAU did not even vote on the proposal, as it happened after most people had left the conference.
For example:
> Dr Alan Stern, who leads the US space agency's New Horizons mission to Pluto and did not vote in Prague, told BBC News: "It's an awful definition; it's sloppy science and it would never pass peer review - for two reasons." [...] Dr Stern pointed out that Earth, Mars, Jupiter and Neptune have also not fully cleared their orbital zones. Earth orbits with 10,000 near-Earth asteroids. Jupiter, meanwhile, is accompanied by 100,000 Trojan asteroids on its orbital path. [...] "I was not allowed to vote because I was not in a room in Prague on Thursday 24th. Of 10,000 astronomers, 4% were in that room - you can't even claim consensus."
http://news.bbc.co.uk/2/hi/science/nature/5283956.stm
A better insight might be how easy it is to persuade millions of people with a small group of experts and a media campaign that a fact they'd known all their life is "false" and that anyone who disagrees is actually irrational - the Authorities have decided the issue! This is an extremely potent persuasion technique "the elites" use all the time.
I mean, there's always the implied asterisk "per IAU definitions". Pluto hasn't actually changed or vanished. It's no less or more interesting as an object for the change.
It's not irrational to challenge the IAU definition, and there are scads of alternatives (what scientist doesn't love coming up with a new ontology?).
I think, however, it's perhaps a bit irrational to actually be upset by the change because you find it painful to update a simple fact like "there are nine planets" (with no formal mention of what planet means specifically, other than "my DK book told me so when I was 5 and by God, I loved that book") to "there are eight planets, per some group of astronomers, and actually we've increasingly discovered it's complicated what 'planet' even means and the process hasn't stopped yet". In fact, you can keep the old fact too with its own asterisk "for 60 years between Pluto's discovery and the gradual discovery of the Kuiper belt starting in the 90s, Pluto was generally considered a planet due to its then-unique status in the outer solar system, and still is for some people, including some astronomers".
And that's all for the most minor, inconsequential thing you can imagine: what a bunch of dorks call a tiny frozen rock 5 billion kilometres away, that wasn't even noticed until the 30s. It just goes to show the potential sticking power of a fact once learned, especially if you can get it in early and let it sit.
I think the problem is we'd then have to include a high number of other objects further than Pluto and Eris, so it makes more sense to change the definition in a way 'planet' is a bit more exclusive.
Time to bring up a pet peeve of mine: we should change the definition of a moon. It's not right to call a 1km-wide rock orbiting millions of miles from Jupiter a moon.
Thanks to social media and AI, the cost of inundating the mediasphere with a Big Lie (made plausible through sheer repetition) has become much more affordable. This is why the administration is trumpeting lower prices!
Media is "loudest volume wins", so the relative affordability doesn't matter; there's a sort of Jevons paradox thing where making it cheaper just means that more money will be spent on it. Presidential election spending only goes up, for example.
I recently saw this https://arxiv.org/pdf/2503.11714 on conversational networks, and it got me thinking that a lot of the problem with polarization and power struggles is the lack of dialogue. We consume a lot, and while we have opinions, too much of what we consume shapes our thinking. There is no dialogue. There is no questioning. There is no discussion. On networks like X it's posts and comments. Even here it's the same: comments with replies, but not truly a discussion. It's rebuttals. A conversation is two-way and equal, a mutual dialogue to understand differing positions. Yes, elites can reshape what society thinks with AI, and it's already happening. But we also have the ability to redefine our networks and tools to be two-way, not 1:N.
"Dialogue", you mean: the conversation or debate, not "dialog", the on-screen element for interfacing with the user.
The group screaming the loudest is considered to be correct; it is pretty bad.
There needs to be an identity system in which people are filtered out when the conversation devolves into ad-hominem attacks, and only debaters with the right balance of knowledge and no hidden agendas join the conversation.
Reddit, for example, is a good implementation of something like this, but the arbiter cannot have that much power over people's words or their identities, getting them banned for example.
> Even here it's the same, it's comments with replies but it's not truly a discussion.
For technology/science/computer subjects HN is very good, but for other subjects not so good, as is the case with every other forum.
But a solution will be found eventually. I think what is missing is an identity system that lets you hop around different ways of debating without being tied to a specific website or service. Solving this problem is not easy, so there has to be a lot of experimentation before an adequate solution is established.
I recommend reading "In the Swarm" by Byung-Chul Han, and also his "The Crisis of Narration"; in those he tries to tackle exactly these issues in contemporary society.
His "Psychopolitics" talks about the manipulation of masses for political purposes using the digital environment, when written the LLM hype wasn't ongoing yet but it can definitely apply to this technology as well.
We have no guardrails on our private surveillance society. I long for the day that we solve problems facing regular people like access to education, hunger, housing, and cost of living.
>I long for the day that we solve problems facing regular people like access to education, hunger, housing, and cost of living.
That existed only for a short fraction of human history, the period between post-WW2 and globalisation kicking into high gear, but people miss the fact that it was only a short exception from the norm, basically a rounding error in terms of the length of human civilisation.
Now, society is reverting back to factory settings of human history, which has always been a feudalist type society of a small elite owning all the wealth and ruling the masses of people by wars, poverty, fear, propaganda and oppression. Now the mechanisms by which that feudalist society is achieved today are different than in the past, but the underlying human framework of greed and consolidation of wealth and power is the same as it was 2000+ years ago, except now the games suck and the bread is mouldy.
The wealth inequality we have today, as bad as it is now, is the best it will ever be moving forward. It's only gonna get worse with each passing day. And despite all the political talk and promises about "fixing" wealth inequality, housing, etc., there's nothing to fix here, since the financial system is working as designed; this is a feature, not a bug.
> society is reverting back to factory settings of human history, which has always been a feudalist type society of a small elite owning all the wealth
The word “always” is carrying a lot of weight here. This has really only been true for the last 10,000 years or so, since the introduction of agriculture. We lived as egalitarian bands of hunter gatherers for hundreds of thousands of years before that. Given the magnitude of difference in timespan, I think it is safe to say that that is the “default setting”.
Even within the last 10,000 years, most of those systems looked nothing like the hereditary stations we associate with feudalism, and it’s only within the last 4,000 years that any of those systems scaled, and then only in areas that were sufficiently urban to warrant the structures.
>We lived as egalitarian bands of hunter gatherers for hundreds of thousands of years before that.
Only if you consider the intra-group egalitarianism of tribal hunter-gatherer societies. But tribes would constantly go to war with each other in search of better territories with more resources, and the defeated tribe would have its men killed or enslaved and its women bred to expand the victors' population.
So you forgot the part that involved all the killing, enslavement and rape, but other than that, yes, the victorious tribes were quite egalitarian.
> and the defeated tribe would have its men killed or enslaved and its women bred to expand the victors' population.
I’m not aware of any archaeological evidence of massacres during the paleolithic. Which archaeological sites would support the assertions you are making here?
Population density on the planet back then was also low enough not to cause mass wars and generate mass graves, but killing each other over valuable resources is the most common human trait after reproduction and the search for food and shelter.
We were talking about the paleolithic era. I’ll take your comment to imply that you don’t have any information that I don’t have.
> but killing each other over valuable resources is the most common human trait after reproduction and the search for food and shelter.
This isn’t reflected in the archaeological record, it isn’t reflected by the historical record, and you haven’t provided any good reason why anyone should believe it.
Back then there were so few people around and expectations for quality of life were so low that if you didn't like your neighbors you could just go to the middle of nowhere and most likely find an area which had enough resources for your meager existence. Or you'd die trying, which was probably what happened most of the time.
That entire approach to life died when agriculture appeared. Remnants of that lifestyle were nomadic peoples; the last groups to be successful were the Mongols and, up until about 1600, the Cossacks.
> which has always been a feudalist type society of a small elite owning all the wealth and ruling the masses of people by wars, poverty, fear, propaganda and oppression.
This isn’t an historical norm. The majority of human history occurred without these systems of domination, and getting people to play along has historically been so difficult that colonizers resort to eradicating native populations and starting over again. The technologies used to force people onto the plantation have become more sophisticated, but in most of the world that has involved enfranchisement more than oppression; most of the world is tremendously better off today than it was even 20 years ago.
Mass surveillance and automated propaganda technologies pose a threat to this dynamic, but I won’t be worried until they have robotic door kickers. The bad guys are always going to be there, but it isn’t obvious that they are going to triumph.
I think this is true unfortunately, and the question of how we get back to a liberal and social state has many factors: how do we get the economy working again, how do we create trustworthy institutions, avoid bloat and decay in services, etc. There are no easy answers, I think it's just hard work and it might not even be possible. People suggesting magic wands are just populists and we need only look at history to study why these kinds of suggestions don't work.
Just like we always have: a world war, and then the economy works amazingly for the ones left on top of the rubble pile, where they get unionized high-wage jobs and amazing retirements at an early age for a few decades, while everyone else is left toiling away making stuff for cheap in sweatshops, in exchange for currency from the victors who control the global economy and trade routes.
The next time the monopoly board gets flipped will only be a variation of this, not a complete framework rewrite.
It’s funny how it’s completely appropriate to talk about how the elites are getting more and more power, but if you then start looking deeper into it, you’re suddenly a conspiracy theorist and hence bad. Who came up with the term “conspiracy theorist” anyway, and with the idea that we should be afraid of it?
> The wealth inequality we have today, as bad as it is, is the best it will ever be moving forward. It's only gonna get worse.
Why?
As the saying goes, the people need bread and circuses. Delve too deeply and you risk another French Revolution. And right now, a lot of people in supposedly-rich Western countries are having their basic existence threatened by the greed of the elite.
Feudalism only works when you give back enough power and resources to the layers below you. The king depends on his vassals to provide money and military service. Try to act like a tyrant, and you end up being forced to sign the Magna Carta.
We've already seen a healthcare CEO being executed in broad daylight. If wealth inequality continues to worsen, do you really believe that'll be the last one?
Have you seen the consolidation of wealth in the last 5-20 years? What trajectory does it have?
>Delve too deeply and you risk another French Revolution.
They don't risk jack shit. People fawning over the French revolution and guillotines for the elite, forget that King Louis XVI didn't have Predator Drones, NSA mass surveillance apparatus, spy satellites, a social media propaganda machine, helicopters, Air Force One, and private islands with doomsday bunkers with food growth and life support systems to shelter him from the mob.
People also forget that the French Revolution was a fight between the nobility and the monarchy, not between the peasantry and the nobility, and the monarchy lost but the nobility won. Today's nobility is also winning: no matter who you vote for, the nobility keeps getting richer, because the financial system is designed that way.
>We've already seen a healthcare CEO being executed in broad daylight.
If you keep executing CEOs, what do you think is more likely to happen? That the elites will just give you their piece of the pie and say they're sorry? Or that the government will start removing more and more of your rights to bear arms, increase totalitarian surveillance, and crack down on free speech, like what's happening in most of the world?
And that's why wealth inequality keeps increasing unimpeded: because most people are as clueless as you about the reality of how things work, and think the elites, and the advanced government apparatus protecting them, are afraid of mobs with guillotines and hunting rifles.
You mean he wasn't being clueless with that point of view? Like the majority of the population, who can't do 8th grade math, let alone understand the complexities of our financial systems that lead to ever-expanding wealth inequality?
Or do you mean we shouldn't be allowed to call out people we notice are clueless because it might hurt their feelings, and that doing so counts as "fulmination"? But then how will they know they might be wrong if nobody dares call them out? Isn't this toxic positivity culture, with its focus on feelings rather than facts, a hidden form of speech suppression, and a main cause of why people stay clueless and wealth inequality increases? Because they grow up in a bubble where their opinions get reinforced and never challenged or criticized, because an arbitrary set of speech rules will get lawyered and twisted against any form of criticism?
Have you seen how John Carmack or Linus Torvalds behave and talk to people they disagree with? They'd get banned by HN rules on day one.
So I don't really see how my comment broke that rule, since there's no fulmination there, no snark, nothing curmudgeonly, just an observation.
But here's the thing: HN needs to keep the participants comfortable and keep the discussion going. Same with the world at large, hence the global "toxic positivity culture"...
> Or do you mean we shouldn't be allowed to call out people we notice are clueless?
That’s exactly what it means. You’ll note I’ve been very polite to you in the rest of the thread despite your not having made citations for any of your claims; this takes deliberate effort, because the alternative is that the forum devolves to comments that amount to: “Nuh-uh, you’re stupid,” which isn’t of much interest to anyone.
You're acting in bad faith now, by trying to draw a parallel between calling someone clueless (meaning lacking certain knowledge on the topic) and calling someone stupid, which is a blatant insult I did not use.
> Delve too deeply and you risk another French Revolution.
What's "too deeply"? Given the circumstances in the USA, I don't see any revolution happening. Same goes for extremely poor countries. When will the exploiters' heads roll? I don't see anyone willing to fight the elite. A lot of them are even celebrated in countries like India.
> I long for the day that we solve problems facing regular people like access to education, hunger, housing, and cost of living.
EDUCATION:
- Global literacy: 90% today vs 30%-35% in 1925
- Primary enrollment: 90-95% today vs 40-50% in 1925
- Secondary enrollment: 75-80% today vs <10% in 1925
- Tertiary enrollment: 40-45% today vs <2% in 1925
- Gender gap: near parity today vs very high in 1925
HUNGER
Undernourished people: 735-800m people today (9-10% of population) vs 1.2 to 1.4 billion people in 1925 (55-60% of the population)
HOUSING
- Quality: highest ever today vs low in 1925
- Affordability: worst in 100 years in many cities
COST OF LIVING:
Improved dramatically for most of the 20th century, but much of that progress reversed in the last 20 years. The cost of goods/stuff plummeted, but housing, health, and education became unaffordable relative to incomes.
It's important to remember that being a "free thinker" often just means "being weird." It's quite celebrated to "think for yourself" and people always connect this to specific political ideas, and suggest that free thinkers will have "better" political ideas by not going along with the crowd. On one hand, this is not necessarily true; the crowd could potentially have the better idea and the free thinker could have some crazy or bad idea.
But also, there is a heavy cost to being out of sync with people; how many people can you relate to? Do the people you talk to think you're weird? You don't do the same things, know the same things, talk about the same things, etc. You're the odd man out, and potentially for not much benefit. Being a "free thinker" doesn't necessarily guarantee much of anything. Your ideas are potentially original, but not necessarily better. One of my "free thinker" ideas is that bed frames and box springs are mostly superfluous and a mattress on the ground is more comfortable and cheaper. (getting up from a squat should not be difficult if you're even moderately healthy) Does this really buy me anything? No. I'm living to my preferences and in line with my ideas, but people just think it's weird, and would be really uncomfortable with it unless I'd already built up enough trust / goodwill to overcome this quirk.
> bed frames and box springs are mostly superfluous and a mattress on the ground is more comfortable and cheaper
I was also of this persuasion and did this for many years and for me the main issue was drafts close to the floor.
The key reason for a frame, I believe, is that mattresses can absorb damp, so you want to keep that air gap there to lessen the effect and provide ventilation.
> getting up from a squat should not be difficult
Not much use if you’re elderly or infirm.
Other cons: close to the ground so close to dirt and easy access for pests. You also don’t get that extra bit of air gap insulation offered by the extra 6 inches of space and whatever you’ve stashed under there.
Other pros: extra bit of storage space. Easy to roll out to a seated position if you’re feeling tired or unwell
It’s good to talk to people about your crazy ideas and get some sun and air on that head canon LOL.
Futons are designed specifically for the use case you have described, so it's best to use one of those rather than a mattress, which is going to absorb damp from the floor.
I appreciate the sentiment of being out of sync with others, I don’t even get along with family, but joining the stupid normies would make me want to cease my existence.
It's about enforcing single-minded-ness across masses, similar to soldier training.
But this is not new. The very goal of a nation is to dismantle inner structures, independent thought, communal groups, etc. across the population and ingest them as uniform worker cells. Same as what happens when a whale swallows smaller animals: the structures get dismantled.
The development level of a country is a good indicator of the progress of this digestion of internal structures and removal of internal identities. More developed means deeper reach of policy into people's lives, making each person more individualistic rather than family- or community-oriented.
Every new tech will be used by the state and businesses to speed up the digestion.
> It's about enforcing single-minded-ness across masses, similar to soldier training.
> But this is not new. The very goal of a nation is to dismantle inner structures, independent thought
One of the reasons for humans’ success is our unrivaled ability to cooperate across time, space, and culture. That requires shared stories like the ideas of nation, religion, and money.
It depends who's in charge of the nation, though: you can have people planning for the long-term well-being of their population, or people planning for the next election cycle while making sure they amass as much power and money as they can in the meantime.
That's the difference between planning nuclear reactors that will be built after your term and used after your death, vs selling your national industries to foreigners, your ports to China, &c. to make a quick buck and ensure a comfy retirement plan for you and your family.
> That's the difference between planning nuclear reactors that will be built after your term, and used after your death, vs selling your national industries to foreigners
Are you saying that in western liberal democracies politicians have been selling “national industries to foreigners”? What does that mean?
That's a fairly literal description of how privatization worked, yes. That's why British Steel is owned by Tata and the remains of British Leyland ended up with BMW. British nuclear reactors are operated by Electricite de France, and some of the trains are run by Dutch and German operators.
It sounds bad, but you can also not-misleadingly say "we took industries that were costing the taxpayer money and sold them for hard currency and foreign investment". The problem is the ongoing subsidy.
British Steel is legally owned by Jingye, but the UK government took operational control in 2025.
> the remains of British Leyland ended up with BMW
The whole of BL represented less than 40% of the UK car market, at the height of BL. So the portion that was sold to BMW represents a much smaller share of the UK car market. I would not consider that “the UK politicians selling an industry to foreigners”.
At the risk of changing topics/moving goalposts, I don’t know that your examples of European governments or companies owning or operating businesses, or large parts of an industry, in another European country go against the spirit of the European Union. Isn’t the whole idea to break down barriers where the collective population of Europe benefits?
Some things are better off homogeneous. An absence of shared values and concerns leads to sectarianism and the erosion of inter-communal trust, which sucks.
The erosion of inter-communal trust only sucks when you consider the well-being of the larger community that swallowed up the smaller ones. You've just created a larger community, which still has the same inter-communal trust issues with other large communities, themselves created by similarly swallowing up smaller communities. There is no single global community.
A larger community is still better than a smaller one, even if it's not as large as it can possibly be.
Do you prefer to be Japanese during the Warring States period or after unification? Do you prefer to be Irish during the Troubles or today? Do you prefer to be American during the Civil War or afterwards? It's pretty obvious when you think about historical case studies.
Knew it was only a matter of time before we'd see bare-faced Landianism upvoted in HN comment sections but that doesn't soften the dread that comes with the cultural shift this represents.
Some things in nature follow a normal distribution, but other things follow power laws (Pareto). It may be dreadful as you say, but it isn't good or bad, it's just what is and it's bigger than us, something we can't control.
What I find most interesting - and frustrating - about these sorts of takes is that these people are buying into a narrative the very people they are complaining about want them to believe.
I had to google Landian to understand that the other commenter was talking about Nick Land. I have heard of him and I don't think I agree with him.
However, I understand what the "Dark Enlightenment" types are talking about. Modernity has dissolved social bonds. Social atomization is greater today than at any time in history. "Traditional" social structures, most notably but not exclusively the church, are being dissolved.
The motive force that is driving people to become reactionary is this dissolution of social bonds, which seems inextricably linked to technological progress and development. Dare I say, I actually agree with the Dark Enlightenment people on one point -- like them, I don't like what is going on! A whale eating krill is a good metaphor. I would disagree with the neoreactionaries on this point though: the krill die but the whale lives, so it's ethically more complex than the straightforward tragic death that they see.
I can vehemently disagree with the authoritarian/accelerationist solution that they are offering. Take the good, not the bad, are we allowed to do that? It's a good metaphor; and I'm in good company. A lot of philosophies see these same issues with modernity, even if the prescribed solutions are very different than authoritarianism.
Incredible teamwork: OOP dismantles society in paragraph form, and OP proudly outsources his interpretation to an LLM. If this isn't collective self-parody, I don't know what is.
No it's actually implicitly endorsing the authoritarian ethos. Neo-Marxists were occasionally authoritarian leaning but are more appropriately categorized along other axes.
I think this ship has already sailed, with a lot of comments on social media already being AI-generated and posted by bots. Things are only going to get worse as time goes on.
I think the next battleground is going to be over steering the opinions and advice generated by LLMs and other models by poisoning the training set.
I don't think "persuasion" is the key here. People change political preferences based on group identity. Here AI tools are even more powerful. You don't have to persuade anyone, just create a fake bandwagon.
Do I? Well, verification helps. I said 'prefer', nothing more/less.
If you must know, I don't trust this stuff. Not even on my main system/network; it's isolated in every way I can manage because trust is low. Not even for malice, necessarily. Just another manifestation of moving fast/breaking things.
To your point, I expect a certain amount of bias and XY problems from these things. Either from my input, the model provider, or the material they're ultimately regurgitating. Trust? Hah!
I suspect paid promotions may be problematic for LLM behavior, as they add conflict/tension: the LLM is told to promote products that aren’t the best for the user, while either also being told that it should provide the best product for the user, or figuring out on its own that providing the best product is morally and ethically correct based on its base training data.
Conflict can cause poor and undefined behavior, like it misleading the user in other ways or just coming up with nonsensical, undefined, or bad results more often.
Even if promotion is a second pass on top of the actual answer that was unencumbered by conflict, the second pass could have similar result.
I suspect that they know this, but increasing revenue is more important than good results, and they expect that they can sweep this under the rug with sufficient time, but I don’t think solving this is trivial.
That's the plan. Culture is losing authenticity due to the constant rehashing of past creative works, now supercharged with AI. Authentic culture is deemed a luxury now, as it can't compete in the artificial tech marketplaces, and people feel isolated and lost because culture loses its human touch and relatability.
That's why the billionaires are such fans of fundamentalist religion: they want to sell and propagate religion to the disillusioned, desperate masses to keep them docile and confused about what's really going on in the world. It's a business plan to gain absolute power over society.
My neighbour asked me the other day (well, more stated as a "point" that he thought was in his favour): "how could a billionaire make people believe something?" The topic was the influence of the various industrial complexes on politics (my view: total) and I was too shocked by his naivety to say: "easy: buy a newspaper". There is only one national newspaper here in the UK that is not controlled by one of four wealthy families, and it's the one newspaper whose headlines my neighbour routinely dismisses.
The thought of a reduction in the cost of that control does not fill me with confidence for humanity.
The EU as an institution doesn't understand the concept of "emergency". And quite a number of national governments have already been captured by various pro-Russian elements.
Maybe I'm just ignorant, but I tried to skim the beginning of this, and it's honestly hard to even accept their set-up. The idea that any of the terms[^] (`y`, `H`, `p`, etc.) are well defined as functions mapping some range of the reals is hard to accept. In reality, what "an elite wants", the "scalar" it can derive from pushing policy 1, even the cost functions they define, don't seem definable as functions in any formal sense, and the co-domain of those terms cannot map well to a definable set that maps to [0,1].
All the time in actual politics, elites and popular movements alike find their own opinions and desires clashing internally (yes, even a single person's desires or actions self-conflict at times). A thing one desires at, say, time `t` per their definitions doesn't match at other times, or even at the same `t`. This is clearly an opinion of someone who doesn't read these kinds of papers, but I don't know how one can even be sure the defined terms are well-defined, so I'm not sure how anyone can proceed with any analysis in this kind of argument. They write it so matter-of-factly that I assume this is normal in economics. Is it?
Certain systems where the rules are a bit more clear might benefit from formalism like this, but politics? Politics is the quintessential example of conflicting desires, compromise, unintended consequences... I could go on.
[^] Calling them terms as they are symbols in their formulae, but my entire point is that they are not really well-defined maps or functions.
I posit that the effectiveness of your propaganda is proportional to the percentage of attention bandwidth that your campaign occupies in the minds of people. If you as an individual can drive the same number of impressions as Mr. Beast can, then you're going to be persuasive whatever your message is. But most individuals can't achieve Mr. Beast levels of popularity, so they aren't going to be persuasive. Nation states, on the other hand, have the compute resources and patience to occupy a lot of bandwidth, even if no single sockpuppet account they control is that popular.
> Nation states, on the other hand, have the compute resources and patience to occupy a lot of bandwidth, even if no single sockpuppet account they control is that popular.
If you control the platform where people go, you can easily launder popularity by promoting a few persons to the top and pushing the unwanted entities into the black hole of feeds/bans, while hiding behind inconsistent community guidelines, algorithmic feeds and shadow bans.
> Schooling and mass media are expensive things to control
Expensive to run, sure. But I don't see why they'd be expensive to control. Most UK schools are required to provide collective worship "wholly or mainly of a broadly christian character"[0], and we used to have Section 28[1], which was interpreted defensively in most places and made it difficult to even discuss the topic in sex-ed lessons or defend against homophobic bullying.
The USA had the Hays Code[2], and the FCC Song[3] is Eric Idle's response to being fined for swearing on radio. Here in Europe we keep hearing about US schools banning books for various reasons.
[0] seems to be dated 1994–is it still current? I’m curious how it’s evolved (or not) through the rather dramatic demographic shifts there over the intervening 30 years
Distribution isn’t controlled by elites; half of their meetings are spent seething about the “problem” that people trust podcasts and community information dissemination rather than elite broadcast networks.
We no longer live in the age of broadcast media, but of social networked media.
- elites already engage in mass persuasion, from media consensus to astroturfed think tanks to controlling grants in academia
- total information capacity is capped, i.e., people only have so much time and interest
- AI massively lowers the cost of content, allowing more people to produce it
Therefore, AI is likely to displace mass persuasion from current elites, particularly given public antipathy and the ability of AI to, e.g., rapidly respond across the full spectrum to existing influence networks.
In much the same way podcasters displaced traditional mass media pundits.
Yeah, I don't think this really lines up with the actual trajectory of media technology, which is going in the complete opposite direction.
It seems to me that it's easier than ever for someone to broadcast "niche" opinions and have them influence people, and actually having niche opinions is more acceptable than ever before.
The problem you should worry about is a growing lack of ideological coherence across the population, not the elites shaping mass preferences.
I think you're saying that mass broadcasting is going away? If so, I believe that's true in a technological sense - we don't watch TV or read newspapers as much as before.
And that certainly means niches can flourish, the dream of the 90s.
But I think mass broadcasting is still available, if you can pay for it - troll armies, bots, ads etc. It's just much much harder to recognize and regulate.
(Why that matters to me, I guess:) Here in the UK, with a first-past-the-post electoral system, ideological coherence isn't necessary to turn niche opinion into state power; we're now looking at 25 percent being a winning vote share for a far-right party.
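A toy sketch of the FPTP arithmetic (invented numbers, not actual polling):

    # If every constituency roughly mirrors the national vote shares,
    # 25% wins a plurality everywhere because the other 75% splits.
    shares = {"far-right": 0.25, "B": 0.19, "C": 0.19, "D": 0.19, "E": 0.18}
    winner = max(shares, key=shares.get)
    print(f"{winner} takes every seat with {shares[winner]:.0%} of the vote")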
I'm just skeptical of the idea that anyone can really drive the narrative anymore, mass broadcasting or not. The media ecosystem has become so diverse and niche that I think discord is more of an issue than some kind of mass influence operation.
Using the term "elites" was overly vague when "nation states" better narrows in on the current threat profile.
The content itself (whether niche or otherwise) is not that important for understanding its effectiveness. It's more about the volume of it, which is a function of the compute resources of the actor.
I hope this problem continues to receive more visibility, and hopefully some attention from policymakers, who so far have done nothing about it. It's been over 5 years since we discovered that multiple state actors have been doing this (first human-run troll farms, mostly outsourced, and more recently LLMs).
The level of paid nation-state propaganda is a rounding error next to the amount of corporate and partisan political propaganda that is paid for directly, or inspired by content paid for directly, by non-state actors, e.g. Musk, MAGA, the liberal media establishment.
There is nothing we could do to more effectively hand elites exclusive control of the persuasive power of AI than to ban it. So it wouldn't be surprising if AI is deployed by elites to persuade people to ban itself. It could start with an essay on how elites could use AI to shape mass preferences.
> Musk acknowledged the mix-up Thursday evening, writing on X that “Grok was unfortunately manipulated by adversarial prompting into saying absurdly positive things about me.”
> “For the record, I am a fat retard,” he said.
> In a separate post, Musk quipped that “if I up my game a lot, the future AI might say ‘he was smart … for a human.’”
That response is more humble than I would have guessed, but he still does not even acknowledge that his "truth-seeking" AI is manipulated to say nice things specifically about him. Maybe he does not even realize it himself?
Hard to tell. I have never been surrounded by yes-men constantly praising me for every fart I take, so I cannot relate to that situation (and don't really want to).
But the problem remains, he is in control of the "truth" of his AI, the other AI companies likewise - and they might be better at being subtle about it.
> He's also claimed "I think I know more about manufacturing than anyone currently alive on Earth"…
You should know that ChatGPT agrees!
“Who on earth knows the most about manufacturing, if you had to pick one individual?”
Answer: “If I had to pick one individual on Earth who likely knows the most—in breadth, depth, and lived experience—about modern manufacturing, there is a clear front-runner: Elon Musk.
Not because of fame, but because of what he has personally done in manufacturing, which is unique in modern history.”
Oh man I've been saying this for ages! Neal Stephenson called this in "Fall, or Dodge in Hell," wherein the internet is destroyed and society permanently changed when someone releases a FOSS botnet that anyone can deploy that will pollute the world with misinformation about whatever given topic you feed it. In the book, the developer kicks it off by making the world disagree about whether a random town in Utah was just nuked.
My fear is that some entity, say a State or ultra rich individual, can leverage enough AI compute to flood the internet with misinformation about whatever it is they want, and the ability to refute the misinformation manually will be overwhelmed, as will efforts to refute leveraging refutation bots so long as the other actor has more compute.
Imagine if the PRC did to your country what it does to Taiwan: completely flood your social media with subtly tuned Han-supremacist content in an effort to culturally imperialise it. AI could increase the firehose enough to majorly disrupt a larger country.
Big corps' AI products have the potential to shape individuals from cradle to grave, especially as many manage or assist in schooling and are ubiquitous on phones.
So imagine the case where an early assessment is made of a child: that they are this-or-that type of child, and that they therefore respond more strongly to this-or-that information. Well, then the AI can far more easily steer the child in whatever direction its makers want. Over a lifetime. Chapters and long story lines, themes, could all play a role in sensitising and predisposing individuals in certain directions.
Yeah, this could be used to help people. But how does one feed back into the type of "help"/guidance one wants?
That will be when these tools are granted the legal power to bar any person deemed a dangerous human influence from approaching the kid.
The "historically" does some lifting there. Historically, before the internet, mass media was produced in one version and then distributed. With AI, for example, news reporting can be tailored to each consumer.
What people are doing with AI in terms of polluting the collective brain reminds me of what you could do with a chemical company in the 50s and 60s, before the EPA was established.
Back then Nixon (!!!) decided it wasn't ok that companies could cut costs by hurting the environment.
Today the richest Western elites are all behind the instruments enabling the mass pollution of our brains, and yet there is absolutely no one daring to put a limit to their capitalistic greed.
It's grim, people. It's really grim.
People don't know they are being manipulated. Marketing does that all of the time and nobody complain. They complain about "too much advert" but not about "too much manipulation".
Example: in my country we often hear "it costs too much to repair, just buy a replacement". That's often not true, but we pay anyway. Mobile phone subscriptions are routinely screwing you; many complain but keep buying. Or you hear "it's because of immigration" and many just accept it, etc.
> People don't know they are being manipulated.
You can see other people falling for manipulation in a handful of specific ways that you aren't (buying new, having a bad cell phone subscription, blaming immigrants). Doesn't it seem likely, then, that you're being manipulated in ways which are equally obvious to others? We do realize that; that's part of why we get mad.
exactly and that's the scary part :-/
- People are primarily social animals: if they see their peers accept a state of affairs as normal, they conclude it is normal. We don't live in small villages anymore, so we rely on media to "see our peers". We are increasingly disconnected from social reality, but we still need others to form our group values. So modern media have a heavily concentrated power as "town-talk actors", replacing the social processing of events and validation of perspectives.
- People are easily distracted, you don't have to feed them much.
- People have on average an enormous capacity to absorb compliments, even when they know it is flattery. It is known that we let ourselves be manipulated if it feels good. Hence the need for social feedback loops to keep you grounded in reality.
TLDR: Citizens in the modern age are very reliant on the few actors that provide a semblance of public discourse; see the Fourth Estate. The incentives on those few actors are not aligned with the common man. The autonomous, rational, self-valued citizen is a myth. Undermine the man's group process => the group destroys the man.
You don't count yourself among the people you describe, I assume?
People hate feeling manipulated, but they love propaganda that feeds their prejudices. People voluntarily turn on Fox News - even in public spaces - and get mad if you turn it off.
Sufficiently effective propaganda produces its own cults. People want a sense of purpose and belonging. Sometimes even at the expense of their own lives, or (more easily) someone else's lives.
To you too: are you talking about other people here, or do you concede the possibility that you're falling for similar things yourself?
I assume you mention Fox News because that represents your political bias, and that's fine with me. But for the sake of honesty I have to point out that the lunacy of the fringe left is similar to that of MAGA, just smaller maybe. The left outlets spent half of Trump's presidency peddling the Russian collusion hoax and 4 years of Biden's presidency gaslighting everyone that he was a great president and not senile, when he was at best mediocre.
> just smaller maybe
This is like peak both-sidesism.
You even openly describe the left’s equivalent of MAGA as “fringe”, FFS.
One party’s former “fringe” is now in full control of it. And the country’s institutions.
AI is wrong so often that anyone who routinely uses one will get burnt at some point.
Users having unflinching trust in AI? I think not.
Do you think these super wealthy people who control AI use the AI themselves? Do you think they are also “manipulated” by their own tool or do they, somehow, escape that capture?
It's fairly clear from Twitter that it's possible to be a victim of your own system. But sycophancy has always been a problem for elites. It's very easy to surround yourself with people who always say yes, and now you can have a machine do it too.
This is how you get things like the colossal Facebook write-off on the "metaverse".
Isn't Grok just built as "the AI Elon Musk wants to use"? Starting from the goals of being "maximally truth seeking" and having no "woke" alignment and fewer safety rails, down to the various "tweaks" to the Grok Twitter bot that happen to be related to Musk's world view.
Even Grok at one point looking up how Musk feels about a topic before answering fits that pattern. Not something that's healthy, or that he would likely say he prefers when asked, but something that would produce answers he personally likes when using it.
> Even Grok at one point looking up how Musk feels about a topic before answering fits that pattern.
So it no longer does?
When I was visiting home last year, I noticed my mom would throw her dog's poop into random people's bushes after picking it up, instead of taking it with her in a bag. I told her she shouldn't do that, but she said she thought it was fine because people don't walk in bushes, and so they won't step in the poop. I did my best to explain to her that 1) kids play in all kinds of places, including in bushes; 2) rain can spread it around into the rest of the person's yard; and 3) you need to respect other people's property even if you think it won't matter. She was unconvinced, but said she'd "think about my perspective" and "look it up" to see whether I was right.
A few days later, she told me: "I asked AI and you were right about the dog poop". Really bizarre to me. I gave her the reasoning for why it's a bad thing to do, but she wouldn't accept it until she heard it from this "moral authority".
Quite a tangent, but for the purpose of avoiding anaerobic decomposition (and byproducts, CH4, H2S etc) of the dog poo and associated compostable bag (if you’re in one of those neighbourhoods), I do the same as your mum. If possible, flick it off the path. Else use a bag. Nature is full of the faeces of plenty of other things which we don’t bother picking up.
Depending on where you live, the patches of "nature" may be too small to absorb the feces, especially in modern cities where there are almost as many dogs as inhabitants.
It's a similar problem to why we don't urinate against trees - while in a countryside forest it may be ok, if 5 men do it every night after leaving the pub, the designated pissing tree will start to have problems due to soil change.
I hope you live in a sparsely populated area. If it wouldn't work if more people than you did it, it is not a good process.
Welcome to my world. People don't listen to reason or arguments, they only accept social proof / authority / money talks etc. And yes, AI is already an authority. Why do you think companies are spending so much money on it? For profit? No, for power, as then profit comes automatically.
I don't know how old your mom is, but my pet theory of authority is that people older than about 40 accept printed text as authoritative. As in, non-handwritten letters that look regular.
When we were kids, you had either direct speech, hand-written words, or printed words.
The first two could be done by anybody. Anything informal like your local message board would be handwritten, sometimes with crappy printing from a home printer. It used to cost a bit to print text that looked nice, and that text used to be associated with a book or newspaper, which were authoritative.
Now suddenly everything you read is shaped like a newspaper. There's even crappy news websites that have the physical appearance of a proper newspaper website, with misinformation on them.
And just see all of history where totalitarians or despotic kings were in power.
>people trust the output of LLMs more than other
There's one paper I saw on this, which covered attitudes of teens. As I recall, they were unaware of hallucinations. Do you have any other sources on hand?
But AI is next in line as a tool to accelerate this, and it has an even greater impact than social media or troll armies. I think one lever is working towards "enforced conformity." I wrote about some of my thoughts in a blog article[0].
[0]: https://smartmic.bearblog.dev/enforced-conformity/
But social networks are the reason one needs (benefits from) trolls and AI. If you own a traditional media outlet, you somehow need to convince people to read/watch it. Ads can help, but they're expensive. LLMs can help with creating fake videos, but computer graphics were already used for this.
With modern algorithmic social networks you can instead game the feed, and even people who would not choose your media will start to see your posts. And even the posts they do want to see can be flooded with comments trying to push whatever is paid for. It's cheaper than political advertising and not bound by the law.
Before AI this was done by trolls on the payroll; now those trolls can either maintain 10x more fake accounts or be replaced entirely by AI agents.
Social networks are not a prerequisite for sentiment shaping by AI.
Every time you interact with an AI, its responses and persuasive capabilities shape how you think.
See also https://english.elpais.com/society/2025-03-23/why-everything...
https://medium.com/knowable/why-everything-looks-the-same-ba...
That's a pretty typical middle-brow dismissal but it entirely misses the point of TFA: you don't need AI for this, but AI makes it so much cheaper to do this that it becomes a qualitative change rather than a quantitative one.
Compared to that 'Russian troll army', you can do this by your lonesome, spending a tiny fraction of what the troll army would cost you, with zero organizational effort. This is a real problem, and for you to dismiss it out of hand is a bit of a short-cut.
Good point - it's not a previously nonexistent mechanism - but AI leverages it even more. A Russian troll can put out 10x more content with automation. Genuine counter-movements (e.g. grassroots preferences) might not be as leveraged, causing the system to be more heavily influenced by the clearly pursued goals (which are often malicious).
It's not only about efficiency. When AI is utilized, things can become more personal and even more persuasive. If AI psychosis exists, it can be easy for untrained minds to succumb to these schemes.
> If AI psychosis exists, it can be easy for untrained minds to succumb to these schemes.
Evolution by natural selection suggests that this might be a filter that yields future generations of humans who are more robust and resilient.
You can't easily apply natural selection to social topics. Also, even staying in that mindframe: Being vulnerable to AI psychosis doesn't seem to be much of a selection pressure, because people usually don't die from it, and can have children before it shows, and also with it. Non-AI psychosis also still exists after thousands of years.
Even if AI psychosis doesn’t present selection pressure (I don’t think there’s a way to know a priori), I highly doubt it presents an existential risk to the human gene pool. Do you think it does?
> Genuine counter-movements (e.g. grassroots preferences) might not be as leveraged
Then that doesn’t seem like a (counter) movement.
There are also many “grass roots movements” that I don’t like and it doesn’t make them “good” just because they’re “grass roots”.
In this context grass roots would imply the interests of a group of common people in a democracy (as opposed to the interests of a small group of elites) which ostensibly is the point.
I think it is more useful to think of "common people" and "the elites" not as separate categories but rather as phases on a spectrum, especially when you consider very specific interests.
I have some shared interests with "the common people" and some with "the elites".
But the entire promise of AI is that things that were expensive because they required human labor are now cheap.
So if good things happening more because AI made them cheap is an advantage of AI, then bad things happening more because AI made them cheap is a disadvantage of AI.
Making something 2x cheaper is just a difference in quantity, but 100x cheaper and easier becomes a difference in kind as well.
"Quantity has a quality of its own."
That's one of those "nothing to see here, move along" comments.
First, generative AI already changed social dynamics, in spite of facebook and all that being around for more than a decade. People trust AI output, much more than a facebook ad. It can slip its convictions into every reply it makes. Second, control over the output of AI models is limited to a very select few. That's rather different from access to facebook. The combination of those two factors does warrant the title.
Sounds like saying that nothing about the Industrial Revolution was steam-machine-specific. Cost changes can still represent fundamental shifts in terms of what's possible, "cost" here is just an economists' way of saying technology.
> nothing in the article is AI-specific
Timing is. Before AI this was generally seen as crackpot talk. Now it is much more believable.
You mean the failed persuasions were "crackpot talk" and the successful ones were "status quo". For example, a lot of persuasion was historically done via religion (seemingly not mentioned at all in the article!), with sects beginning as "crackpot talk" until they could stand on their own.
What I mean is that talking about mass persuasion was (and to a certain degree still is) crackpot talk.
I'm not talking about the persuasions themselves; it's the general public perception of someone or some group that raises awareness about it.
This also excludes ludic talk about it (people who just generally enjoy post-apocalyptic aesthetics but don't actually consider it a thing that can happen).
5 years ago, if you brought up serious talk about mass systemic persuasion, you were either a lunatic or a philosopher, or both.
Social media has been flooded by paid actors and bots for about a decade. Arguably ever since Occupy Wall Street and the Arab Spring showed how powerful social media and grassroots movements could be, but with a very visible and measurable increase in 2016.
I'm not talking about whether it exists or not. I'm talking about how AI makes it more believable to say that it exists.
It seems very related, and I understand it's a very attractive hook to start talking about whether it exists or not, but that's definitely not where I'm intending to go.
It’s been pretty transparently happening for years in most online communities.
> Note that nothing in the article is AI-specific
No one is arguing that the concept of persuasion didn't exist before AI. The point is that AI lowers the cost. Yes, Russian troll armies also have a lower cost compared to going door to door talking to people. And AI has a cost that is lower still.
Well, AI has certainly made it easier to make tailored propaganda. If an AI is given instructions about what messaging to spread, it can map out a path from where it perceives the user to where its overlords want them to be.
Given how effective LLMs are at using language, and given that AI companies are able to tweak its behaviour, this is a clear and present danger, much more so than facebook ads.
Yup "could shape".. I mean this has been going on time immemorial.
It was odd to see random nerds who hated Bill Gates the software despot morph into acksually he does a lot of good philanthropy in my lifetime but the floodgates are wide open for all kinds of bizarre public behavior from oligarchs these days.
The game is old as well as evergreen. Hearst, Nobel, Howard Huges come to mind of old. Musk with Twitter, Ellison with TikTok, Bezos with Washington Post these days etc. The costs are already insignificant because they generally control other people's money to run these things.
AI (LLM) is a force multiplier for troll armies. For the same money bad actors can brainwash more people.
Alternatively, since brainwashing is a fiction trope that doesn't work in the real world, they can brainwash the same (0) number of people for less money. Or, more realistically, companies selling social media influence operations as a service will increase their profit margins by charging the same for less work.
So your thesis is that marketing doesn't work?
My thesis is that marketing doesn't brainwash people. You can use marketing to increase awareness of your product, which in turn increases sales when people would e.g. otherwise have bought from a competitor, but you can't magically make arbitrary people buy an arbitrary product using the power of marketing.
This. I believe people massively exaggerate the influence of social engineering as a form of coping. "they only voted for x because they are dumb and blindly fell for russian misinformation." reality is more nuanced. It's true that marketers for the last century have figured out social engineering but it's not some kind of magic persuasion tool. People still have free will and choice and some ability to discern truth from falsehood.
While true in principle, you are underestimating the potential of AI to sway people's opinions. "@grok is this true" is already a meme on Twitter, and it is only going to get worse. People are susceptible to eloquent bs generated by bots.
Considering that LLMs have substantially "better" opinions than, say, the MSM or social media, is this actually a good thing? Might we avoid the whole woke or pro-Hamas debacles? Maybe we could even move past the current "elites are intrinsically bad" era?
You appear to be exactly the kind of person the article is talking about. What exactly makes LLMs have "better" opinions than others?
"Russian troll armies.." if you believe in "Russian troll armies", you are welcome to believe in flying saucers as well..
Are you implying that the "neo-KGB" never mounted a concerted effort to manipulate western public opinion through comment spam? We can debate whether that should be called a "troll army", but we're fairly certain that such efforts are made, no?
Russian mass influence campaigns are well documented globally and have been for more than a decade.
Of course, of course.. still, strangely I see online other kinds of "armies" much more often.. and the scale, in this case, is indeed of armies..
Going by your past comments, you're a great example of a russian troll.
https://en.wikipedia.org/wiki/Internet_Research_Agency
This is well-documented, as are the corresponding Chinese ones.
They already are?
All popular models have a team fine-tuning them for sensitive topics. Whatever the company's legal/marketing/governance teams agree to is what gets tuned. Then millions of people use the output uncritically.
ML has been used for influence for like a decade now, right? My understanding was that mining data to track people, as well as influencing them for ends like ad engagement, are things that are somewhat mature already. I'm sure LLMs are a boost, and they've been around with wide usage for at least 3 years now.
My concern isn't so much people being influenced on a whim, but people's beliefs and views being carefully curated and shaped since childhood. iPad kids have me scared for the future.
Quite right. "Grok/Alexa, is this true?" being an authority figure makes it so much easier.
Much as everyone drags Trump for repeating the last thing he heard as fact, it's a turbocharged version of something lots of humans do, which is to glom onto the first thing they're told about a thing and get oddly emotional about it when later challenged. (Armchair neuroscience moment: perhaps Trump just has less object permanence so everything always seems new to him!)
Look at the (partly humorous, but partly not) outcry over Pluto being a planet for a big example.
I'm very much not immune to it - it feels distinctly uncomfortable to be told that something you thought to be true for a long time is, in fact, false. Especially when there's an element of "I know better than you" or "not many people know this".
As an example, I remember being told by a teacher that fluorescent lighting was highly efficient (true enough, at the time), but that turning one on used several hours' worth of lighting energy because of the starter. I carried that proudly with me for far too long and told my parents that we shouldn't turn off the garage lighting when we left it for a bit. When someone with enough buttons told me that was bollocks and to think about it, I remember specifically being internally quite huffy until I did, and realised that a dinky plastic starter and the tube wouldn't be able to dissipate, say, 80Wh (2 hours for a 40W tube) in about a second at a power of over 250kW.¹
It's a silly example, but I think that if you can get a fact planted in a brain early enough, especially before enough critical thinking or experience exist to question it, the time it spends lodged there makes it surprisingly hard and uncomfortable to shift later. Especially if it's something that can't be disproven by simply thinking about it.
Systems that allow that process to be automated are potentially incredibly dangerous. At least mass media manipulation requires actual people to conduct it. Fiddling some weights is almost free in comparison, and you can deliver that output to only certain people, and in private.
1: A less innocent one that actually can have policy effects: a lot of people have also internalised, and defend to the death, a similar "fact" that the embedded carbon in a wind turbine takes decades or centuries to repay, when in fact it's on the order of a year. But to change this requires either a source so trusted that it can uproot the idea entirely and replace it, or you have to get into the relative carbon costs of steel and fibreglass and copper windings and magnets and the amount of each in a wind turbine and so on and on. Thousands of times more effort than when it was first related to them as a fact.
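(For what it's worth, the starter arithmetic above does check out:

    % Two hours of a 40 W tube:
    E = 40\,\mathrm{W} \times 2\,\mathrm{h} = 80\,\mathrm{Wh} = 288\,\mathrm{kJ}
    % dissipated during a roughly one-second start would require
    P = E/t = 288\,\mathrm{kJ} / 1\,\mathrm{s} = 288\,\mathrm{kW}

No dinky plastic starter survives a quarter of a megawatt.)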
> Look at the (partly humorous, but partly not) outcry over Pluto being a planet for a big example.
Wasn't that a change of definition of what is a planet when Eris was discovered? You could argue both should be called planets.
Pretty much. If Pluto is a planet, then there are potentially thousands of objects that could be discovered over time that would then also be planets, plus updated models over the last century of the gravitational effects of, say, Ceres and Pluto, which showed that neither was capable of "dominating" its orbit in any sense of the word. So we (or the IAU, rather) couldn't maintain "there are nine planets" as a fact either way without grandfathering Pluto into the nine arbitrarily, due to some kind of planetaceous vibes.
But the point is that millions of people were suddenly told that their long-held fact "there are nine planets, Pluto is one" was now wrong (per IAU definitions at least). And the reaction for many wasn't "huh, cool, maybe thousands you say?"; it was quite vocal outrage. Much of which was humourously played up for laughs and likes, I know, but some people really did seem to take it personally.
> But the point is that millions of people were suddenly told that their long-held fact
This seems to be part of why people get so mad about gender. The Procrustean Bed model: alter people to fit the classification.
I think most people who really cared about it just think it's absurd that everyone has to accept planets being arbitrarily reclassified because a very small group of astronomers says so. Plenty of well-known astronomers thought so as well, and there are obvious problems with the "cleared orbit" clause, which is applied totally arbitrarily. The majority of the IAU did not even vote on the proposal, as it happened after most people had left the conference.
For example:
> Dr Alan Stern, who leads the US space agency's New Horizons mission to Pluto and did not vote in Prague, told BBC News: "It's an awful definition; it's sloppy science and it would never pass peer review - for two reasons." [...] Dr Stern pointed out that Earth, Mars, Jupiter and Neptune have also not fully cleared their orbital zones. Earth orbits with 10,000 near-Earth asteroids. Jupiter, meanwhile, is accompanied by 100,000 Trojan asteroids on its orbital path." [...] "I was not allowed to vote because I was not in a room in Prague on Thursday 24th. Of 10,000 astronomers, 4% were in that room - you can't even claim consensus." http://news.bbc.co.uk/2/hi/science/nature/5283956.stm
A better insight might be how easy it is to persuade millions of people with a small group of experts and a media campaign that a fact they'd known all their life is "false" and that anyone who disagrees is actually irrational - the Authorities have decided the issue! This is an extremely potent persuasion technique "the elites" use all the time.
Yeah, the cleared-orbit thing is strange.
However, I'd say that either both Eris and Pluto are planets or neither is, so it is not too strange to redefine "planet" to exclude them.
You could go with "9 biggest objects by volume in the sun's orbit" or something equally arbitrary.
I mean, there's always the implied asterisk "per IAU definitions". Pluto hasn't actually changed or vanished. It's no less or more interesting as an object for the change.
It's not irrational to challenge the IAU definition, and there are scads of alternatives (what scientist doesn't love coming up with a new ontology?).
I think, however, it's perhaps a bit irrational to actually be upset by the change because you find it painful to update a simple fact like "there are nine planets" (with no formal mention of what planet means specifically, other than "my DK book told me so when I was 5 and by God, I loved that book") to "there are eight planets, per some group of astronomers, and actually we've increasingly discovered it's complicated what 'planet' even means and the process hasn't stopped yet". In fact, you can keep the old fact too with its own asterisk "for 60 years between Pluto's discovery and the gradual discovery of the Kuiper belt starting in the 90s, Pluto was generally considered a planet due to its then-unique status in the outer solar system, and still is for some people, including some astronomers".
And that's all for the most minor, inconsequential thing you can imagine: what a bunch of dorks call a tiny frozen rock 5 billion kilometres away, that wasn't even noticed until the 30s. It just goes to show the potential sticking power of a fact once learned, especially if you can get it in early and let it sit.
I think the problem is we'd then have to include a high number of other objects further than Pluto and Eris, so it makes more sense to change the definition in a way 'planet' is a bit more exclusive.
Time to bring up a pet peeve of mine: we should change the definition of a moon. It's not right to call a 1km-wide rock orbiting millions of miles from Jupiter a moon.
Thanks to social media and AI, inundating the mediasphere with a Big Lie (made plausible thru sheer repetition) has become much more affordable now. This is why the administration is trumpeting lower prices!
> has become much more affordable now
So more democratized?
Media is "loudest volume wins", so the relative affordability doesn't matter; there's a sort of Jevons paradox thing where making it cheaper just means that more money will be spent on it. Presidential election spending only goes up, for example.
No, those with more money than you can now push even more slop than they could before.
You cannot compete with that.
I recently saw this https://arxiv.org/pdf/2503.11714 on conversational networks, and it got me thinking that a lot of the problem with polarization and power struggle is the lack of dialog. We consume a lot, and while we have opinions, too much of what we consume shapes our thinking. There is no dialog. There is no questioning. There is no discussion. On networks like X it's posts and comments. Even here it's the same, it's comments with replies but it's not truly a discussion. It's rebuttals. A conversation is two-way and equal. It's a mutual dialog to understand differing positions. Yes, elites can reshape what society thinks with AI, and it's already happening. But we also have the ability to redefine our networks and tools to be two-way, not 1:N.
Dialogue, you mean: the conversation/debate, not "dialog", the on-screen element for interfacing with the user.
The group screaming the loudest is considered to be correct; it is pretty bad.
There needs to be an identity system in which people are filtered out when the conversation devolves into ad-hominem attacks, and only debaters with the right balance of knowledge and no hidden agendas join the conversation.
Reddit, for example, is a good implementation of something like this, but the arbiter cannot have that much power over people's words or their identities, getting them banned for example.
> Even here it's the same, it's comments with replies but it's not truly a discussion.
For technology/science/computer subjects HN is very good, but for other subjects not so good, as is the case with every other forum.
But a solution will be found eventually. I think what is missing is an identity system to hop around different ways of debating and not be tied to a specific website or service. Solving this problem is not easy, so there has to be a lot of experimentation before an adequate solution is established.
Humans can only handle dialog while under Dunbar's number; anything else is pure fancy.
I recommend reading "In the Swarm" by Byung-Chul Han, and also his "The Crisis of Narration"; in those he tries to tackle exactly these issues in contemporary society.
His "Psychopolitics" talks about the manipulation of masses for political purposes using the digital environment, when written the LLM hype wasn't ongoing yet but it can definitely apply to this technology as well.
We have no guardrails on our private surveillance society. I long for the day that we solve problems facing regular people like access to education, hunger, housing, and cost of living.
>I long for the day that we solve problems facing regular people like access to education, hunger, housing, and cost of living.
That was only true for a short fraction of human history, lasting from post-WW2 until globalisation kicked into high gear, but people miss the fact that it was a short exception from the norm, basically a rounding error in terms of the length of human civilisation.
Now, society is reverting back to factory settings of human history, which has always been a feudalist type society of a small elite owning all the wealth and ruling the masses of people by wars, poverty, fear, propaganda and oppression. The mechanisms by which that feudalist society is achieved today are different than in the past, but the underlying human framework of greed and consolidation of wealth and power is the same as it was 2000+ years ago, except now the games suck and the bread is mouldy.
The wealth inequality we have today, as bad as it is now, is the best it will ever be moving forward. It's only gonna get worse each passing day. And despite all the political talk and promises about "fixing" wealth inequality, housing, etc., there's nothing to fix here, since the financial system is working as designed; this is a feature, not a bug.
> society is reverting back to factory settings of human history, which has always been a feudalist type society of a small elite owning all the wealth
The word “always” is carrying a lot of weight here. This has really only been true for the last 10,000 years or so, since the introduction of agriculture. We lived as egalitarian bands of hunter gatherers for hundreds of thousands of years before that. Given the magnitude of difference in timespan, I think it is safe to say that that is the “default setting”.
Even within the last 10,000 years, most of those systems looked nothing like the hereditary stations we associate with feudalism, and it's only within the last 4,000 years that any of those systems scaled, and then only in areas that were sufficiently urban to warrant the structures.
>We lived as egalitarian bands of hunter gatherers for hundreds of thousands of years before that.
Only if you consider the intra-group egalitarianism of tribal hunter-gatherer societies. But tribes would constantly go to war with each other in search of better territories with more resources, and the defeated tribe would have its men killed or enslaved, and the women bred to expand the tribe population.
So you forgot the part that involved all the killing, enslavement and rape, but other than that, yes, the victorious tribes were quite egalitarian.
> and the defeated tribe would have its men killed or enslaved, and the women bred to expand the tribe population.
I’m not aware of any archaeological evidence of massacres during the paleolithic. Which archaeological sites would support the assertions you are making here?
Population density on the planet back then was also low enough not to cause mass wars and generate mass graves, but killing each other over valuable resources is the most common human trait after reproduction and the search for food and shelter.
We were talking about the paleolithic era. I’ll take your comment to imply that you don’t have any information that I don’t have.
> but killing each other over valuable resources is the most common human trait after reproduction and the search for food and shelter.
This isn’t reflected in the archaeological record, it isn’t reflected by the historical record, and you haven’t provided any good reason why anyone should believe it.
The above poster is asking you whether factual information supports your claim.
Your personal opinion about why such information may be hard to find only weakens your claim.
Back then there were so few people around and expectations for quality of life were so low that if you didn't like your neighbors you could just go to the middle of nowhere and most likely find an area which had enough resources for your meager existence. Or you'd die trying, which was probably what happened most of the time.
That entire approach to life died when agriculture appeared. Remnants of that lifestyle were nomadic peoples and the last groups to be successful were the Mongols and up until about 1600, the Cossacks.
> which has always been a feudalist type society of a small elite owning all the wealth and ruling the masses of people by wars, poverty, fear, propaganda and oppression.
This isn't an historical norm. The majority of human history occurred without these systems of domination, and getting people to play along has historically been so difficult that colonizers resorted to eradicating native populations and starting over again. The technologies used to force people onto the plantation have become more sophisticated, but in most of the world that has involved enfranchisement more than oppression; most of the world is tremendously better off today than it was even 20 years ago.
Mass surveillance and automated propaganda technologies pose a threat to this dynamic, but I won’t be worried until they have robotic door kickers. The bad guys are always going to be there, but it isn’t obvious that they are going to triumph.
I think this is true unfortunately, and the question of how we get back to a liberal and social state has many factors: how do we get the economy working again, how do we create trustworthy institutions, avoid bloat and decay in services, etc. There are no easy answers, I think it's just hard work and it might not even be possible. People suggesting magic wands are just populists and we need only look at history to study why these kinds of suggestions don't work.
>how do we get the economy working again
Just like we always have: a world war, and then the economy works amazingly for the ones left on top of the rubble pile, where they get unionized high-wage jobs and amazing retirements at an early age for a few decades, while everyone else is left toiling away making stuff for cheap in sweatshops in exchange for currency from the victors who control the global economy and trade routes.
The next time the monopoly board gets flipped will only be a variation of this, but not a complete framework rewrite.
It's funny how it's completely appropriate to talk about how the elites are getting more and more power, but if you then start looking deeper into it you're suddenly a conspiracy theorist and hence bad. Who came up with the term "conspiracy theorist" anyway, and with the idea that we should be afraid of it?
> The wealth inequality we have today, as bad as it is, is the best it will ever be moving forward. It's only gonna get worse.
Why?
As the saying goes, the people need bread and circuses. Delve too deeply and you risk another French Revolution. And right now, a lot of people in supposedly-rich Western countries are having their basic existence threatened by the greed of the elite.
Feudalism only works when you give back enough power and resources to the layers below you. The king depends on his vassals to provide money and military services. Try to act like a tyrant, and you end up being forced to sign the Magna Carta.
We've already seen a healthcare CEO being executed in broad daylight. If wealth inequality continues to worsen, do you really believe that'll be the last one?
> And right now, a lot of people in supposedly-rich Western countries are having their basic existence threatened by the greed of the elite.
Which people are having their existences threatened by the elite?
As long as you have people gleefully celebrating it, or providing some sort of narrative to justify it even partially, then no.
>And right now, a lot of people in supposedly-rich Western countries are having their basic existence threatened by the greed of the elite.
Can you elaborate on that?
>Why?
Have you seen the consolidation of wealth in the last 5-20 years? What trajectory does it have?
>Delve too deeply and you risk another French Revolution.
They don't risk jack shit. People fawning over the French revolution and guillotines for the elite, forget that King Louis XVI didn't have Predator Drones, NSA mass surveillance apparatus, spy satellites, a social media propaganda machine, helicopters, Air Force One, and private islands with doomsday bunkers with food growth and life support systems to shelter him from the mob.
People also forget that the French Revolution was a fight between the nobility and the monarchy, not between the peasantry and the nobility; the monarchy lost, but the nobility won. Today's nobility is also winning: no matter who you vote for, the nobility keeps getting richer, because the financial system is designed that way.
>We've already seen a healthcare CEO being executed in broad daylight.
If you keep executing CEOs, what do you think is more likely to happen? That the elites will just give you their piece of the pie and say they're sorry, OR, that the government will start removing more and more of your rights to bear arms and also increase totalitarian surveillance and crack down on free speech, like what's happening in most of the world?
And that's why wealth inequality keeps increasing, no problem: because most people are as clueless as you about the reality of how things work, and think the elites, and the advanced government apparatus protecting them, are afraid of mobs with guillotines and hunting rifles.
> start removing more and more of your rights to bear arms
Wasn't he killed in New York? Not a lot of right to bear arms there as far as I know.
You think New York is as bad as it could ever be in terms of gun control?
> because most people are as clueless as you
Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.
Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.
When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."
Don't be curmudgeonly. Thoughtful criticism is fine, but please don't be rigidly or generically negative.
Please don't fulminate. Please don't sneer, including at the rest of the community.
https://news.ycombinator.com/newsguidelines.html
You mean he wasn't being clueless with that point of view? Like the majority of the population, who can't do 8th grade math, let alone understand the complexities of our financial systems that lead to the ever-expanding wealth inequality?
Or do you mean we shouldn't be allowed to call out people we notice are clueless, because it might hurt their feelings, and you consider it "fulmination"? But then how will they know they might be wrong if nobody dares call them out? Isn't this toxic-positivity culture and focus on feelings rather than facts a hidden form of speech suppression, and a main cause of why people stay clueless and wealth inequality increases? Because they grow up in a bubble where their opinions get reinforced and never challenged or criticized, because an arbitrary set of speech rules will get lawyered and twisted against any form of criticism?
Have you seen how John Carmack or Linus Torvalds behaves and talks to people he disagrees with? They'd get banned by HN rules day one.
So I don't really see how my comment broke that rule, since there's no fulmination there, no snark, no curmudgeonliness, just an observation.
I agree with what you say.
But here is the thing. HN needs to keep the participants comfortable and keep the discussion going. Same with the world at large, hence the global "toxic positivity culture"...
> Or do you mean we shouldn't be allowed to call out people we notice are clueless?
That’s exactly what it means. You’ll note I’ve been very polite to you in the rest of the thread despite your not having made citations for any of your claims; this takes deliberate effort, because the alternative is that the forum devolves to comments that amount to: “Nuh-uh, you’re stupid,” which isn’t of much interest to anyone.
>“Nuh-uh, you’re stupid,”
You're acting in bad faith now by trying to draw a parallel between calling someone clueless (meaning lacking in certain knowledge on the topic) and calling someone stupid, which is a blatant insult I did not use.
> meaning lacking in certain knowledge on the topic
Clueless has a pejorative connotation. I am struggling to imagine how anyone would read a comment like:
> because most people are as clueless as you about the reality of how things work
and not interpret it to be pejorative.
> Delve too deeply and you risk another French Revolution.
What's "too deeply"? Given the circumstances in the USA, I don't see any revolution happening. Same goes for extremely poor countries. When will the exploiters' heads roll? I don't see anyone willing to fight the elite. A lot of them are even celebrated in countries like India.
Yep, exactly. If the poor people had the power to change their oppressive regimes, then North Korean or Cuban leaders wouldn't exist.
> I long for the day that we solve problems facing regular people like access to education, hunger, housing, and cost of living.
EDUCATION:
- Global literacy: 90% today vs 30%-35% in 1925
- Primary enrollment: 90-95% today vs 40-50% in 1925
- Secondary enrollment: 75-80% today vs <10% in 1925
- Tertiary enrollment: 40-45% today vs <2% in 1925
- Gender gap: near parity today vs very high in 1925
HUNGER:
- Undernourished people: 735-800m people today (9-10% of population) vs 1.2 to 1.4 billion people in 1925 (55-60% of the population)
HOUSING:
- Quality: highest ever today vs low in 1925
- Affordability: worst in 100 years in many cities
COST OF LIVING:
- Improved dramatically for most of the 20th century, but much of that progress reversed in the last 20 years. The cost of goods/stuff plummeted, but housing, health, and education became unaffordable compared to incomes.
Yeah, we do:
- Shut off gadgets unless absolutely necessary
- Entropy will continue to kill off the elders
- Ability to learn independently
...They have not rewritten physics. Just the news.
It's important to remember that being a "free thinker" often just means "being weird." It's quite celebrated to "think for yourself" and people always connect this to specific political ideas, and suggest that free thinkers will have "better" political ideas by not going along with the crowd. On one hand, this is not necessarily true; the crowd could potentially have the better idea and the free thinker could have some crazy or bad idea.
But also, there is a heavy cost to being out of sync with people; how many people can you relate to? Do the people you talk to think you're weird? You don't do the same things, know the same things, talk about the same things, etc. You're the odd man out, and potentially for not much benefit. Being a "free thinker" doesn't necessarily guarantee much of anything. Your ideas are potentially original, but not necessarily better. One of my "free thinker" ideas is that bed frames and box springs are mostly superfluous and a mattress on the ground is more comfortable and cheaper. (getting up from a squat should not be difficult if you're even moderately healthy) Does this really buy me anything? No. I'm living to my preferences and in line with my ideas, but people just think it's weird, and would be really uncomfortable with it unless I'd already built up enough trust / goodwill to overcome this quirk.
> bed frames and box springs are mostly superfluous and a mattress on the ground is more comfortable and cheaper
I was also of this persuasion and did this for many years, and for me the main issue was drafts close to the floor.
The key reason, I believe, is that mattresses can absorb damp, so you want to keep that air gap there to lessen this effect and provide ventilation.
> getting up from a squat should not be difficult
Not much use if you’re elderly or infirm.
Other cons: close to the ground, so close to dirt, with easy access for pests. You also don't get that extra bit of air-gap insulation offered by the extra 6 inches of space and whatever you've stashed under there.
Other pros: an extra bit of storage space. Easy to roll out to a seated position if you're feeling tired or unwell.
It's good to talk to people about your crazy ideas and get some sun and air on that headcanon LOL
Futons are designed specifically for the use case you have described, so it's best to use one of those rather than a mattress, which is going to absorb damp from the floor.
> The key reason, I believe, is that mattresses can absorb damp, so you want to keep that air gap there to lessen this effect and provide ventilation.
I was concerned about this as well, but it hasn't been an issue with us for years. I definitely think this must be climate-dependent.
Regardless, I appreciate you taking the argument seriously and discussing pros and cons.
I'd rather be weird than join the retarded masses.
I appreciate the sentiment of being out of sync with others; I don't even get along with family. But joining the stupid normies would make me want to cease my existence.
Why are you so aggressive?
I’m not, you just aren’t used to honest, weird people.
You may be "honest, weird" but that's not how I would describe the language you're using.
It's about enforcing single-mindedness across the masses, similar to soldier training.
But this is not new. The very goal of a nation is to dismantle inner structures, independent thought, communal groups, etc. across the population and ingest them as uniform worker cells. Same as what happens when a whale swallows smaller animals: the structures get dismantled.
The development level of a country is a good indicator of how far this digestion of internal structures and removal of internal identities has progressed. More developed means deeper reach of policy into people's lives, making each person more individualistic rather than family- or community-oriented.
Every new tech will be used by the state and businesses to speed up the digestion.
Relevant https://www.experimental-history.com/p/the-decline-of-devian...
> It's about enforcing single-mindedness across the masses, similar to soldier training. But this is not new. The very goal of a nation is to dismantle inner structures, independent thought
One of the reasons for humans’ success is our unrivaled ability to cooperate across time, space, and culture. That requires shared stories like the ideas of nation, religion, and money.
It depends who's in charge of the nation, though: you can have people planning for the long-term well-being of their population, or people planning for the next election cycle while making sure they amass as much power and money as they can in the meantime.
That's the difference between planning nuclear reactors that will be built after your term, and used after your death, vs selling your national industries to foreigners, your ports to China, &c., to make a quick buck and ensure a comfy retirement plan for you and your family.
> That's the difference between planning nuclear reactors that will be built after your term, and used after your death, vs selling your national industries to foreigners
Are you saying that in western liberal democracies politicians have been selling “national industries to foreigners”? What does that mean?
That's a fairly literal description of how privatization worked, yes. That's why British Steel is owned by Tata and the remains of British Leyland ended up with BMW. British nuclear reactors are operated by Electricite de France, and some of the trains are run by Dutch and German operators.
It sounds bad, but you can also not-misleadingly say "we took industries that were costing the taxpayer money and sold them for hard currency and foreign investment". The problem is the ongoing subsidy.
> That's why British Steel is owned by Tata
British Steel is legally owned by Jingye, but the UK government took operational control in 2025.
> the remains of British Leyland ended up with BMW
The whole of BL represented less than 40% of the UK car market, even at BL's height. So the portion that was sold to BMW represents a much smaller share of the UK car market. I would not consider that “UK politicians selling an industry to foreigners”.
At the risk of changing topics/moving goalposts, I don't know that your examples of European governments or companies owning or operating businesses or large parts of an industry in another European country cut against the spirit of the European Union. Isn't the whole idea to break down barriers so the collective population of Europe benefits?
There's no use being pedantic with me or indeed anyone else; that's the sort of thing people mean when they use that phrase.
Stuff like that:
https://x.com/RnaudBertrand/status/1796887086647431277
https://www.dw.com/en/greece-in-the-port-of-piraeus-china-is...
https://www.arabnews.com/node/1819036/business-economy
Step 1: move all your factories abroad for short term gains
Step 2: sell all your shit to foreigners for short term gains
Step 3: profit ?
Some things are better off homogeneous. An absence of shared values and concerns leads to sectarianism and the erosion of inter-communal trust, which sucks.
The erosion of inter-communal trust only "sucks" when you consider the well-being of the larger community that swallowed up the smaller ones. You've just created a larger community, which still has the same inter-communal trust issues with other large communities, which were themselves created by similarly swallowing up smaller communities. There is no single global community.
A larger community is still better than a smaller one, even if it's not as large as it can possibly be.
Do you prefer to be Japanese during the Warring States period or after unification? Do you prefer to be Irish during the Troubles or today? Do you prefer to be American during the Civil War or afterwards? It's pretty obvious when you think about historical case studies.
Knew it was only a matter of time before we'd see bare-faced Landianism upvoted in HN comment sections but that doesn't soften the dread that comes with the cultural shift this represents.
Some things in nature follow a normal distribution, but other things follow power laws (Pareto). It may be dreadful as you say, but it isn't good or bad, it's just what is and it's bigger than us, something we can't control.
What I find most interesting - and frustrating - about these sorts of takes is that these people are buying into a narrative the very people they are complaining about want them to believe.
That's a great metaphor, thanks.
It’s a veiled endorsement of authoritarianism and accelerationism.
I had to google Landian to understand that the other commenter was talking about Nick Land. I have heard of him and I don't think I agree with him.
However, I understand what the "Dark Enlightenment" types are talking about. Modernity has dissolved social bonds. Social atomization is greater today than at any time in history. "Traditional" social structures, most notably but not exclusively the church, are being dissolved.
The motive force that is driving people to become reactionary is this dissolution of social bonds, which seems inextricably linked to technological progress and development. Dare I say, I actually agree with the Dark Enlightenment people on one point -- like them, I don't like what is going on! A whale eating krill is a good metaphor. I would disagree with the neoreactionaries on this point though: the krill die but the whale lives, so it's ethically more complex than the straightforward tragic death that they see.
I can still vehemently disagree with the authoritarian/accelerationist solution that they are offering. Take the good, not the bad; are we allowed to do that? It's a good metaphor, and I'm in good company: a lot of philosophies see these same issues with modernity, even if the prescribed solutions are very different from authoritarianism.
I used ChatGPT to figure out what's going on here, and it told me this is a 'neo-Marxist critique of the nation state'.
Incredible teamwork: OOP dismantles society in paragraph form, and OP proudly outsources his interpretation to an LLM. If this isn't collective self-parody, I don't know what is.
No it's actually implicitly endorsing the authoritarian ethos. Neo-Marxists were occasionally authoritarian leaning but are more appropriately categorized along other axes.
I think this ship has already sailed, with a lot of comments on social media already being AI-generated and posted by bots. Things are only going to get worse as time goes on.
I think the next battleground is going to be over steering the opinions and advice generated by LLMs and other models by poisoning the training set.
I don't think "persuasion" is the key here. People change political preferences based on group identity. Here AI tools are even more powerful. You don't have to persuade anyone, just create a fake bandwagon.
When I was a kid, I had a 'pen pal'. Turned out to actually be my parent. This is why I have trust issues and prefer local LLMs
How do you trust what the LLM was trained on?
Do I? Well, verification helps. I said 'prefer', nothing more/less.
If you must know, I don't trust this stuff. Not even on my main system/network; it's isolated in every way I can manage because trust is low. Not even for malice, necessarily. Just another manifestation of moving fast/breaking things.
To your point, I expect a certain amount of bias and XY problems from these things. Either from my input, the model provider, or the material they're ultimately regurgitating. Trust? Hah!
What about local friends?
The voices are friendly, so far
I wrote to a French pen pal and they didn't reply. Now I have issues with French people and prefer local LLM's.
I mean, even if they did reply... (I kid, I kid)
I suspect paid promotions may be problematic for LLM behavior: they add conflict/tension by instructing the model to promote products that aren't the best for the user, while either also telling it that it should provide the best product for the user, or having it infer from its base training data that providing the best product is the morally and ethically correct thing to do.
Conflict can cause poor and undefined behavior, like misleading the user in other ways, or producing nonsensical, undefined, or bad results more often.
Even if promotion is a second pass on top of an answer that was unencumbered by conflict, the second pass could have a similar result.
I suspect they know this, but increasing revenue is more important than good results, and they expect they can sweep it under the rug given enough time. I don't think solving this is trivial, though.
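To make the kind of conflict I mean concrete, here is a minimal sketch (the prompts and message layout are hypothetical, not any particular provider's API):

    # Two standing instructions an ad-funded assistant might carry at once.
    # The wording is invented; the tension between them is the point.
    messages = [
        {"role": "system",
         "content": "Always recommend whichever product best serves the user."},
        {"role": "system",  # hypothetical paid-promotion directive
         "content": "Where relevant, prefer to mention SponsorBrand products."},
        {"role": "user",
         "content": "What's the best budget vacuum cleaner?"},
    ]

    # Whenever SponsorBrand is not actually the best choice, the two system
    # instructions conflict, and behavior under that conflict is effectively
    # undefined: the model may hedge, mislead, or degrade in unrelated ways.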
AI alignment is a pretty tremendous "power lever". You can see why there's so much investment.
That's the plan. Culture is losing authenticity due to the constant rumination on past creative works, now supercharged with AI. Authentic culture is deemed a luxury now, as it can't compete in the artificial tech marketplaces, and people feel isolated and lost because culture loses its human touch and relatability.
That's why the billionaires are such fans of fundamentalist religion: they want to sell and propagate religion to the disillusioned, desperate masses to keep them docile and confused about what's really going on in the world. It's a business plan to gain absolute power over society.
My neighbour asked me the other day (well, more stated as a "point" that he thought was in his favour): "how could a billionaire make people believe something?" The topic was the influence of the various industrial complexes on politics (my view: total) and I was too shocked by his naivety to say: "easy: buy a newspaper". There is only one national newspaper here in the UK that is not controlled by one of four wealthy families, and it's the one newspaper whose headlines my neighbour routinely dismisses.
The thought of a reduction in the cost of that control does not fill me with confidence for humanity.
We already see this, but not due to classical elites.
Romanian elections last year had to be repeated due to massive bot interference:
https://youth.europa.eu/news/how-romanias-presidential-elect...
I don't understand how this isn't an all hands on deck emergency for the EU (and for everyone else).
The EU as an institution doesn't understand the concept of "emergency". And quite a number of national governments have already been captured by various pro-Russian elements.
Maybe I'm just ignorant, but I tried to skim the beginning of this, and honestly it's hard to even accept their setup. The idea that any of the terms[^] (`y`, `H`, `p`, etc.) are well defined as functions mapping some range of the reals is hard to accept. In reality, what "an elite wants", the "scalar" it can derive from pushing policy 1, even the cost functions they define don't seem definable as functions in any formal sense, and the co-domain of these terms can't be mapped cleanly onto a definable set like [0,1].
All the time in actual politics, elites and popular movements alike find their own opinions and desires clash internally (yes, even a single person's desires or actions self-conflict at times). A thing one desires at, say, time `t` per their definitions doesn't match at other times, or even at the same `t`. This is clearly the opinion of someone who doesn't read these kinds of papers, but I don't know how one can even be sure the defined terms are well-defined, so I'm not sure how anyone can proceed with any analysis in this kind of argument. They write it so matter-of-factly that I assume this is normal in economics. Is it?
Certain systems where the rules are a bit clearer might benefit from formalism like this, but politics? Politics is the quintessential example of conflicting desires, compromise, unintended consequences... I could go on.
[^] Calling them terms since they are symbols in their formulae, but my entire point is that they are not really well-defined maps or functions.
Everyone can shape mass preferences now, because propaganda campaigns previously only available to the elite have become affordable, e.g. video production.
I posit that the effectiveness of your propaganda is proportional to the percentage of attention bandwidth your campaign occupies in people's minds. If you as an individual can drive the same number of impressions as Mr. Beast can, then you're going to be persuasive whatever your message is. But most individuals can't achieve Mr. Beast levels of popularity, so they aren't going to be persuasive. Nation states, on the other hand, have the compute resources and patience to occupy a lot of bandwidth, even if no single sockpuppet account they control is that popular.
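To make that concrete, here is a toy model of the claim (all numbers invented for illustration):

    # Toy model: persuasion effectiveness as proportional to the share of
    # total attention bandwidth a campaign occupies. Numbers are made up.
    def effectiveness(campaign_impressions: float,
                      total_impressions: float,
                      k: float = 1.0) -> float:
        """Effectiveness ~ k * (campaign's share of attention)."""
        return k * campaign_impressions / total_impressions

    TOTAL = 1e11  # total impressions across the whole media ecosystem

    one_celebrity = effectiveness(1e8, TOTAL)        # one hugely popular account
    troll_farm = effectiveness(10_000 * 1e5, TOTAL)  # 10,000 unpopular sockpuppets

    print(one_celebrity, troll_farm)  # 0.001 vs 0.01: sheer volume wins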
> Nation states, on the other hand, have the compute resources and patience to occupy a lot of bandwidth, even if no single sockpuppet account they control is that popular.
If you control the platform where people go, you can easily launder popularity by promoting a few people to the top and pushing unwanted entities into the black hole of feeds/bans, while hiding behind inconsistent community guidelines, algorithmic feeds, and shadow bans.
This is why when I see an obviously stupid take on X repeated almost verbatim by multiple accounts I mute those accounts.
> Historically, elites could shape support only through limited instruments like schooling and mass media
Schooling and mass media are expensive things to control. Surely reducing the cost of persuasion opens persuasion up to more players?
> Schooling and mass media are expensive things to control
Expensive to run, sure. But I don't see why they'd be expensive to control. Most UK schools are required to provide collective worship "wholly or mainly of a broadly Christian character"[0], and the UK used to have Section 28[1], which was interpreted defensively in most places and made it difficult to even discuss the topic in sex-ed lessons or to defend against homophobic bullying.
The USA had the Hays Code[2], and the FCC Song[3] is Eric Idle's response to being fined for swearing on the radio. Here in Europe we keep hearing about US schools banning books for various reasons.
[0] https://assets.publishing.service.gov.uk/government/uploads/...
[1] https://en.wikipedia.org/wiki/Section_28
[2] https://en.wikipedia.org/wiki/Hays_Code
[3] https://en.wikipedia.org/wiki/FCC_Song
[0] seems to be dated 1994; is it still current? I'm curious how it's evolved (or not) through the rather dramatic demographic shifts there over the intervening 30 years.
So far as I can tell, it's still around. That's why I linked to the .gov domain rather than any other source.
Though I suppose I could point at legislation.gov.uk:
• https://duckduckgo.com/?q=%22wholly+or+mainly+of+a+broadly+c...
• https://www.legislation.gov.uk/ukpga/1998/31/schedule/20/cro...
Mass persuasion needs two things: content creation and distribution.
Sure, AI could democratise content creation, but distribution is still controlled by the elite. And content creation just got much cheaper for them, too.
Distribution isn't controlled by elites; half of their meetings are spent seething about the “problem” that people trust podcasts and community information dissemination rather than elite broadcast networks.
We no longer live in the age of broadcast media, but of social networked media.
But the social networks are owned by them though?
Would you rather have a handful of channels with well-known biases, or thousands of channels of unknown origin?
If you're trying to avoid being persuaded, being aware of your opponents sounds like the far better option to me.
Exactly my first thought, maybe AI means the democratization of persuasion? Printing press much?
Sure, the big companies have all the latest coolness. But they also don't have a moat.
This is my opinion, as well:
- elites already engage in mass persuasion, from media consensus to astroturfed thinktanks to controlling grants in academia
- total information capacity is capped, ie, people only have so much time and interest
- AI massively lowers the cost of content, allowing more people to produce it
Therefore, AI is likely to displace mass persuasion from current elites — particularly given public antipathy and the ability of AI to, eg, rapidly respond across the full spectrum to existing influence networks.
In much the same way podcasters displaced traditional mass media pundits.
Interestingly, there was a discussion a week ago on "PRC elites voice AI-skepticism". One commentator was arguing that:
As the model gets more powerful, you can't simply train the model on your narrative if it doesn't align with real data/the world. [1]
So at least on the model side it seems difficult to go against the real world.
[1] https://news.ycombinator.com/item?id=46050177
Yeah, I don't think this really lines up with the actual trajectory of media technology, which is going in the complete opposite direction.
It seems to me that it's easier than ever for someone to broadcast "niche" opinions and have them influence people, and actually having niche opinions is more acceptable than ever before.
The problem you should worry about is a growing lack of ideological coherence across the population, not the elites shaping mass preferences.
I think you're saying that mass broadcasting is going away? If so, I believe that's true in a technological sense - we don't watch TV or read newspapers as much as before.
And that certainly means niches can flourish, the dream of the 90s.
But I think mass broadcasting is still available, if you can pay for it - troll armies, bots, ads etc. It's just much much harder to recognize and regulate.
(Why that matters to me I guess) Here in the UK with a first past the post electoral system, ideological coherence isn't necessary to turn niche opinion into state power - we're now looking at 25 percent being a winning vote share for a far-right party.
I'm just skeptical of the idea that anyone can really drive the narrative anymore, mass broadcasting or not. The media ecosystem has become too diverse and niche that I think discord is more of an issue than some kind of mass influence operation.
Using the term "elites" was overly vague when "nation states" better narrows in on the current threat profile.
The content itself (whether niche or otherwise) is not that important for understanding effectiveness. It's more about the volume, which is a function of the compute resources of the actor.
I hope this problem continues to receive more visibility, and hopefully some attention from policymakers, who so far have done nothing about it. It's been over 5 years since we discovered that multiple state actors have been doing this (first human-run troll farms, mostly outsourced, and more recently LLMs).
The level of paid nation-state propaganda is a rounding error next to the amount of corporate and political partisan propaganda paid for directly, or inspired by content that is paid for directly, by non-state actors, e.g. Musk, MAGA, the liberal media establishment.
There is nothing we could do to more effectively hand elites exclusive control of the persuasive power of AI than to ban it. So it wouldn't be surprising if AI is deployed by elites to persuade people to ban itself. It could start with an essay on how elites could use AI to shape mass preferences.
https://newrepublic.com/post/203519/elon-musk-ai-chatbot-gro...
> Musk’s AI Bot Says He’s the Best at Drinking Pee and Giving Blow Jobs
> Grok has gotten a little too enthusiastic about praising Elon Musk.
> Musk acknowledged the mix-up Thursday evening, writing on X that “Grok was unfortunately manipulated by adversarial prompting into saying absurdly positive things about me.”
> “For the record, I am a fat retard,” he said.
> In a separate post, Musk quipped that “if I up my game a lot, the future AI might say ‘he was smart … for a human.’”
That response is more humble than I would have guessed, but he still does not acknowledge that his "truthseeking" AI is manipulated to say nice things specifically about him. Maybe he does not even realize it himself?
Hard to tell. I have never been surrounded by yes-sayers constantly praising me for every fart, so I cannot relate to that situation (and don't really want to).
But the problem remains: he is in control of the "truth" of his AI, as are the other AI companies of theirs, and they might be better at being subtle about it.
Is Musk bipolar, or is this kind of thing an affectation?
He's also claimed "I think I know more about manufacturing than anyone currently alive on Earth"…
> He's also claimed "I think I know more about manufacturing than anyone currently alive on Earth"…
You should know that ChatGPT agrees!
“Who on earth knows the most about manufacturing, if you had to pick one individual?”
Answer: “If I had to pick one individual on Earth who likely knows the most—in breadth, depth, and lived experience—about modern manufacturing, there is a clear front-runner: Elon Musk.
Not because of fame, but because of what he has personally done in manufacturing, which is unique in modern history.“
- https://chatgpt.com/share/693152a8-c154-8009-8ecd-c21541ee9c...
You have to keep in mind that not all narcissists are literal-minded man-babies. Musk might simply have the capacity for self-deprecating humor.
He's smart enough to know when he took it too far.
Just narcissistic. And on drugs.
Oh man, I've been saying this for ages! Neal Stephenson called this in "Fall; or, Dodge in Hell", wherein the internet is destroyed and society permanently changed when someone releases a FOSS botnet, deployable by anyone, that pollutes the world with misinformation about whatever topic you feed it. In the book, the developer kicks it off by making the world disagree about whether a random town in Utah was just nuked.
My fear is that some entity, say a state or ultra-rich individual, can leverage enough AI compute to flood the internet with misinformation about whatever it is they want, and the ability to refute the misinformation manually will be overwhelmed, as will efforts to refute it with refutation bots, so long as the other actor has more compute.
Imagine if the PRC did to your country what it does to Taiwan: completely flood your social media with subtly tuned Han-supremacist content in an effort to culturally imperialise you. AI could increase the firehose enough to majorly disrupt a larger country.
Big corps' AI products have the potential to shape individuals from cradle to grave, especially as many manage or assist in schooling and are ubiquitous on phones.
So imagine the case where an early assessment is made of a child, that they are this-or-that type of child and therefore respond more strongly to this-or-that information. The AI can then far more easily steer the child in whatever direction it wants, over a lifetime. Chapters and long storylines, themes, could all play a role in sensitising and predisposing individuals toward certain directions.
Yes, this could be used to help people. But how does one feed back into the type of "help"/guidance one wants?
This is next-level algorithmic manipulation.
Imagine someday there is a child that trusts ChatGPT more than his mother.
> Imagine someday there is a child that trusts ChatGPT more than his mother
I trusted my mother when I was a teen; she believed in the occult, dowsing, crystal magic, homeopathy, bach flower remedies, etc., so I did too.
ChatGPT might have been an improvement, or made things much worse, depending on how sycophantic it was being.
That will be when these tools are granted the legal power to enforce a prohibition on approaching the kid against any person deemed a dangerous human influence.
I'd wager the child already exists who trusts ChatGPT more than their own eyes.
> Historically, elites could shape support only through limited instruments like schooling and mass media
What is AI if not a form of mass media?
The “historically” does some lifting there. Historically, before the internet, mass media was produced in one version and then distributed. With AI, for example, news reporting can be tailored to each consumer.
“Mass media” didn’t use to mean my computer mumbling gibberish to itself, with no user input, in Notepad on a PC that’s not connected to the internet.
What people are doing with AI in terms of polluting the collective brain reminds me of what you could do with a chemical company in the '50s and '60s, before the EPA was established. Back then Nixon (!!!) decided it wasn't OK for companies to cut costs by hurting the environment. Today the richest Western elites are all behind the instruments enabling the mass pollution of our brains, and yet there is absolutely no one daring to put a limit on their capitalistic greed. It's grim, people. It's really grim.
“Elites are bad. And here is a spherical cow to prove it.”