If actions by these bad actors accelerate the rate at which people lose trust in these systems and lead to the AI bubble popping faster then they have my full support. The entire space is just bad actors complaining about other bad actors while they're collectively ruining the web for everyone, each in their own way.
Before the bubble does pop, which I think is inevitable, there will be many stories like this one, and a lot of people will be scammed, manipulated, and harmed. It might take years until the general consensus is negative about the effects of these tools. All while the wealthy and powerful continue to reap the benefits, while those on slightly lower rungs fight to take their place. And even if the public perception shifts, the power might be so concentrated that it could be impossible to dislodge it without violent means.
> It might take years until the general consensus is negative about the effects of these tools.
The only thing I'm seeing offline are people who already think AI is trash, untrustworthy, and harmful, while also occasionally being convenient when the stakes are extremely low (random search results mostly) or as a fun toy ("Look I'm a ghibli character!")
I don't think it'll take long for the masses to sour to AI and the more aggressively it's pushed on them by companies, or the more it negatively impacts their life when someone they depend on and should know better uses it and it screws up the quicker that'll happen.
I work in Customer Success so I have to screenshare with a decent number of engineers working for customers - startups and BigCos.
The number of them who just blindly put shit into an AI prompt is incredible. I don't know if they were better engineers before LLMs? But I just watch them blindly pass flags that don't exist to CLIs and then throw their hands up. I can't imagine it's faster than a (non-LLM) Google search or using the -h flag, but they just turn their brains off.
An underrated concern (IMO) is the impact of COVID on cognition. I think a lot of people who got sick have gotten more tired and find this kind of work more challenging than they used to. Maybe they have a harder time "getting in the zone".
Personally, I still struggle with Long COVID symptoms. This includes brain fog and difficulty focusing. Before the pandemic I would say I was in the top 10% of engineers for my narrow slice of expertise - always getting exceptional perf reviews, never had trouble moving roles and picking up new technologies. Nowadays I find it much harder to get started in the morning, and I have to take more breaks during the day to reset my focus. At 5PM I'm exhausted and I can't keep pushing solving a problem into the evening.
I can see how the same kind of cognitive fatigue would make LLM "assistance" appealing, even if it's wrong, because it's so much less work.
Reading this, I'm wondering if I'm suffering "Long Covid"
I've recently had tons of memory and brain fog. I thought it was related to stress, and it's severe enough that I'm on medical leave from work right now
My memory is absolutely terrible
Do you know if it is possible to test or verify if it's COVID related?
> An underrated concern (IMO) is the impact of COVID on cognition
Car accidents came down from the Covid uptick but only slightly. Aviation... ugh.
And there is some evidence it accelerates Altzheimer's and other dementias. We are so screwed.
This is precisely the problem: users still need to screen and reason about results of LLMs. I am not sure what is generating this implied permission structure, but it does seem to exist.
(I don't mean to imply that parent doesn't know this, it just seems worth saying explicitly)
Doesn’t matter. If they feel “good enough” that’s already “good enough”. Super majority of the world doesn’t revolve around truth seeking, fact -checking or curiosity.
The things I have noted offline included a HK case where someone got a link to a zoom call with what seemed to be his team mates and CFO, and then transferring money as per the CFOs instructions.
The error here was to click on a phishing email.
But something I have seen myself is Tim Cook talking about a crypto coin right after the 2024 Apple keynote, on a YT channel that showed the Apple logo. It took me a bit to realize and reassure myself that it was a scam. Even though it was a video of the shoulders up.
The bigger issue we face isn’t the outright fraud and scamming, it’s that our ability to make out fakes easily is weakened - the Liar’s dividend.
It’s by default a shot in the arm for bullshit and lies.
On some days I wonder if the inability to sort between lies, misinformation, initial ideas, fair debate, argument, theory and fact at scale - is the great filter.
We got the boring version of the cyberpunk future. No cool body mods, neon city scapes and space travel. Just megacorps manipulating the masses to their benefit.
The work at the Levin Lab ( https://drmichaellevin.org/ ) is making great progress in the basic science that supports this. They can make two-headed planaria, regenerate frog limbs, cure cancer in tadpoles; all via bioelectric communication with cellular networks. No gene editing.
Levin believes this stuff will be very available to humans within the next 10 years, and has talked about how widespread body-modding is something we're going to have to wrestle with societally. He is of course very close to the work, but his cautious nature and the lab's astounding results give that 10-year prediction some weight. From his blog:
> We were all born into physical and mental limitations that were set at arbitrary levels by chance and genetics. Even those who have “perfect” standard human health and capabilities are limited by anatomical decisions that were not made with anyone’s well-being or fulfillment in mind. I consider it to be a core right of sentient beings to (if they wish) move beyond the involuntary vagaries of their birth and alter their form and function in whatever way suits their personal goals and potential.- Copied from https://thoughtforms.life/faqs-from-my-academic-work/
I often like to point out--satisfying a contrarian streak--that our original human equipment is literally the most mind-bogglingly complicated nanotechnology beyond our understanding, packed with dozens of incredible features we cannot imitate with circuits or chrome.
So as much as I like the aesthetics of cyberpunk metal arms, keeping our OEM parts is better. If we need metal bodies at a construction site, let them be remote-controlled bodies that stay there for the next shift to use.
In retrospect, it should have been obvious. I guess I should have known it would all be more Repo Man than Blade Runner. I just didn’t imagine so many people cheering for the non-Wolverines side in Red Dawn.
(Now I want to change the Blade Runner reference to something with Harry Dean Stanton in it just for consistency)
> Before the bubble does pop, which I think is inevitable
Curious what you think a popping bubble looks like?
A stock market crash and recession, where innocent bystanders lose their retirements? Or only AI speculators taking the brunt of the losses?
Will Google, Meta, etc stop investing in AI because nobody uses it post-crash? Or will it be just as prevalent (or more) than today but with profits concentrated in the winning/surviving companies?
We've seen this before in 1983 and 2000. Many companies will fold, and those that don't will take a substantial hit. The public sentiment about "AI" will sour, but after that a new breed of more practical tools will emerge under different and more fairly marketed branding.
I do think that the industry and this technology will survive, and we'll enjoy many good applications of it, but it will take a few more years of hype and grifting to get there.
Unless, of course, I'm entirely wrong and their predicted AI 2027 timeline[1] comes to pass, and we have ASI by the end of the decade, in which case the world will be much different. But I'm firmly in the skeptical camp about this, as it seems like another product of the hype machine.
[1]: I just took a closer look at ai-2027.com and here's their prediction for 2029 in the conservative scenario:
> Robots become commonplace. But also fusion power, quantum computers, and cures for many diseases. Peter Thiel finally gets his flying car. Cities become clean and safe. Even in developing countries, poverty becomes a thing of the past, thanks to UBI and foreign aid.
> We've seen this before in 1983 and 2000. Many companies will fold, and those that don't will take a substantial hit.
Makes sense, but if the negative effect of the bubble popping is largely limited to AI startups and speculators, while the rest of us keep enjoying the benefits of it, then I don't see why the average person should be too concerned about a bubble.
In 2000, cab drivers were recommending tech stocks. I don't see this kindof thing happening today.
> Yeah, these people are full of shit.
I think it's fair to keep LLMs and AGI seperate when we're talking about "AI". LLMs can make a huge impact even if AGI never happens. We're already seeing now it imo.
AI 2027 says:
- Early 2026: Coding Automation
- Late 2026: AI Takes Some Jobs
These things are already happening today without AGI.
Sure. I was referring more to the general consensus about products from companies that are currently riding the AI hype train, not about machine learning in general.
When the dot-com bubble burst in 2000, and after the video game crash in 1983, most of the companies within the bubble folded, and those that didn't took a large hit and barely managed to survive. If the technology has genuine use cases then the market can recover, but it takes a while to earn back the trust from consumers, and the products after the crash are much more practical and are marketed more fairly.
So I do think that machine learning has many potentially revolutionary applications, but we're currently still high around the Peak of Inflated Expectations. After the bubble pops, the Plateau of Productivity will showcase the applications with actual value and benefit to humanity. I just hope we get there sooner rather than later.
The bubble won’t pop on anything that’s correlated with scammers. Exhibit A: bitcoin. The problem is not one of public knowledge or will of the people, it’s congress being irresponsible because it’s captured by the 2 parties. You can’t politicize scamming in a way that benefits either party so nothing happens. And the scammers themselves may be big donors (eg SBF’s ties to the dem party, certain ai players purchase of Trump’s favor with respect to their business interests, etc). Scammers all the way down.
Good point. I suppose that if grifters can get in positions of power, then the bubble can just keep growing.
Though cryptocurrencies are slightly different because of how they work. They're inherently decentralized, so even though there have been many smaller bubble pops along the way (Mt. Gox, FTX, NFTs, every shitcoin rug pull, etc.), inevitably more will appear with different promises, attracting others interested in potential riches.
I don't think the technology as a whole will ever burst, particularly because I do think there are valid and useful applications of it. Bitcoin in particular is here to stay. It will just keep attracting grifters and victims, just like any other mainstream technology.
The "accelerate the end times" argument was probably made most famously by Charles Manson. The "side" effects from supporting bad actions are not good. Presumably you are being 51% or more facetious, but probably more nuance is preferable.
It's mostly bad actors, and a smattering of optimists who believe that despite its current problems, AI will eventually and inevitably get better. I also wish the whole thing would calm down and come back to reality, but I don't think it's a bubble that will pop. It will continue to get artificially puffed up for a while because too many businesses and people have invested too much for them to just quit (sunk cost falacy) and there's a big enough market in a certain class of writer/developer/etc... for which the short term benefits will justify the continued existence of the AI products for a while. My prediction is that as the long term benefits for honest users peter out, the bubble won't pop, but deflate into a wrinkled 10 day old helium balloon. There will still be a big enough market driven by cons, ad tech and people trying to suck up as many ad dollars as possible, and other bad actors, that the tech will persist, and continue to infest the web/world for quite a while.
AI is the new crypto. Lots of promise and big ideas, lots of people with blind faith about what it will one day become, a lot of people gaming the system for quick gains at the expense of others. But it never actually becomes what it pretends/promises to be and is filled with people continuing the grift trying to make a buck off the next guy. AI just has better marketing and more corporate buy in than crypto. But neither are going anywhere.
But it's also way worse than cryptocurrencies, because all the big actors are pushing it relentlessly, with every marketing trick they know. They have to, because they invested insane amounts of money into snake oil and now they have to sell it in order to recover at least a fraction of their investments. And the amounts of energy wasted on this ultimately pointless performance are beyond staggering.
From a classists perspective, big capital cant drop the AI ball, because its their only shot at becoming independent from human labor, those pesky humans their wealth unfortnunately depends uppon and that could democratically seize it in an instant.
I bet there are billionare geniuses out there seeing a future island life far away from the contaminated continents, sustained by robots. So no matter how much harder AI progress gets, money will keep flowing.
Thats naive. Look at all the tabloids thriving. The kind of people that bad actors target will continue to believe everything it says. They won't lose trust, or magazines like New York Post, the Sun or BILD would already have crossed to exist with their lies and deception. And Russia would not have so many cult members believing the lies they spread.
The thing is: who benefits from a loss of trust in systems? The answer, inevitably, is those for whom the system was a problem. The fewer places people can trust for accurate information, the more disinformation wins.
AIs can be trained to rely more on critical thinking rather than just regurgitating what it reads. The problem is just like with people, critical thinking takes more power and time. So we avoid it as much as possible.
In fact, optimizing for the wrong things like that, is basically the entire world's problem right now.
Regurgitating its input is the only thing it does. It does not do any thinking, let alone critical thinking. It may give the illusion of thinking because it's been trained on thoughts. That's it.
Yes, but the regurgitation can be thought of as memory.
Let it have more source information. Let it know who said the things it reads, let it know on what website it was published.
Then you can say 'Hallucinate comments like those by impossibleFork on news.ycombinator.com', and when the model knows what comes from where, maybe it can learn what users are reliable by which they should imitate to answer questions well. Strengthen the role of metadata during pretraining.
I have no reason to belive it'll work, I haven't tried it and usually details are incredibly important when do things with machine learning, but maybe you could even have critical phases during pretraining where you try to prune away behaviours that aren't useful for figuring out the answers to the questions you have in your high curated golden datasets. Then models could throw away a lot of lies and bullshit, except that which happens to be on particularly LLM-pedagogical maths websites.
This whole attitude against AI reminds me of my parents being upset that the internet changed the way they live. They refused to take part in the internet revolution, and now they're surprised that they don't know how to navigate the web. I think that a part of them is still waiting for computers in general to magically disappear, and everything return to the times of their youth.
Indeed — however it’s interesting that unlike the internet, computers or smartphones the older generation, like the younger, immediately found the use of GPT. This is reflected in the latest Mary Meeker report where it’s apparent that the /organic/ growth of AI use is unparalleled in the history of technology [1]. In my experience with my own parents’ use, GPT is the first time the older generation has found an intuitive interface to digital computers.
I’m still stunned to wander into threads like this where all the same talking points of AI being “pushed” on people are parroted. Marcus et al can keep screaming into their echo chamber and it won’t change a thing.
It's wild -- I've never seen such a persistent split in the Hacker News audience like this one. The skeptics read one set of AI articles, everyone else the others; a similar comment will be praised in one thread and down-voted to oblivion in another.
IMO the split is between people understanding the heuristic nature of AI and people who dont and thus think of it as an all-knowing, all-solving oracle. Your elder parents having nice conversations with chatgpt is nice aslong it doesnt make big life changing decisions for them, which happens already today.
I can’t see that proposed division as anything but a straw-man. You would be hard-pressed to find anyone who genuinely thinks of LLMs as an “all-knowing, all-solving oracle” and yet, even in specialist fields, their utility is certainly more than a mere “heuristic”, which of course isn’t to say they don’t have limits. See only Terrance Tao’s reports on his ongoing experiments.
Do you genuinely think it’s worse that someone makes a decision, whether good or bad, after consulting with GPT versus making it in solitude? I spoke with a handyman the other day who unprompted told me he was building a side-business and found GPT a great aid — of course they might make some terrible decisions together, but it’s unimaginable to me that increasing agency isn’t a good thing. The interesting question at this stage isn’t just about “elder parents having nice conversations”, but about computers actually becoming useful for the general population through an intuitive natural language interface. I think that’s a pretty sober assessment of where we’re at today not hyperbole. Even as an experienced engineer and researcher myself, LLMs continue to transform how I interact with computers.
Yup. Exactly this. As soon as enough people get screwed by the ~80% accuracy rate, the whole facade will crumble. Unless AI companies manage to bring the accuracy up 20% in the next year, by either limiting scope or finding new methods, it will crumble. That kind of accuracy gain isn't happening with LLMs alone (ie foundational models).
My team has measurably gotten our LLM feature to have ~94% accuracy in widespread reliable tests. Seems fairly confident, speaking as an SWE not a DS orML engineer though.
Charitably, I don’t understand what those like you mean by the “whole facade” and why you use these old machine learning metrics like “accuracy rate” to assess what’s going on. Facade implies that the unprecedented and still exponential organic uptake of GPT (again see the actual data I linked earlier from Mary Meeker) is just a hype-generated fad, rather than people finding it actually useful to whatever end. Indeed, the main issue with the “facade” argument is that it’s actually what dominates the media (Marcus et al) much more than any hyperbolic pro-AI “hype.”
This “80-20” framing, moreover, implies we’re just trying to asymptotically optimize a classification model or some information retrieval system… If you’ve worked with LLMs daily on hard problems (non-trivial programming and scholarly research, for example), the progress over even just the last year is phenomenal — and even with the presently existing models I find most problems arise from failures of context management and the integration of LLMs with IR systems.
I think of the two camps like this: one group sees a lot of value in llms. They post about how they use them, what their techniques and workflows look like, the vast array of new technologies springing up around them. And then there’s the other camp. Reading the article, scratching their heads, and not understanding what this could realistically do to benefit them. It’s unprecedented in intensity perhaps, but it’s not unlike the Rails and Haskell camps we had here about a dozen years ago.
1. AI is a genuine threat to lots of white-collar jobs, and people instinctively deny this reality. See that very few articles here are "I found a nice use case for AI", most of them are "I found a use case where AI doesn't work (yet)". Does it sound like tech enthusiasts? Or rather people terrified of tech?
2. Current AI is advanced enough to have us ask deeper questions about consciousness and intelligence. Some answers might be very uncomfortable and threaten the social contract, hence the denial.
On the second point, it’s worth noting how many of the most vocal and well-positioned critics of LLMs (Marcus/Pinker in particular) represent the still academically dominant but now known to be losing side of the debate over connectionism. The anthology from the 90s Talking Nets is phenomenal to see how institutionally marginalized figures like Hinton were until very recently.
Off-topic, but I couldn’t find your contact info and just saw your now closed polyglot submission from last year. Look into technical sales/solution architecture roles at high growth US startups expanding into the EU. Often these companies hire one or two non-technical native speakers per EU country/region, but only have a handful of SAs from a hub office so language skills are of much more use. Given your interest in the topic, check out OpenAI and Anthropic in particular.
Thanks for the advice. Currently I have a €100k job where I sit and do nothing. I'm wondering if I should coast while it lasts, or find something more meaningful
In the early days of the web, there wasn't much we could do with it other than making silly pages with blinking texts or under construction animated GIFs. You need to give it some time before judging a new technology.
We don't remember the same internet. For the first time in our lives we could communicate by email with people from all over the world. Anyone could have a page to show what they were doing with pictures and text. We had access to photos and videos of art, museum, cities, lifestyles that we could not get anywhere else. And as a non-English guy I got access to millions of lines of written text and audio to actually improve my English.
It was a whole new world that may have changed my life forever. ChatGPT is a shitty Google replacement in comparison, and it's a bad alternative due to being censored in its main instructions.
In the early web, there already were forums. There were chats. There were news websites. There were online stores. There were company websites with useful information. Many of these were there pretty much from the beginning. In the 90s, no one questioned the utility of the internet. Some people were just too lazy to learn how to use a computer or couldn't afford one.
LLMs in their current form have existed since what, 2021? That's 4 years already. They have hundreds of millions of active users. The only improvements we've seen so far were very much iterative ones — more of the same. Larger contexts, thinking tokens, multimodality, all that stuff. But the core concept is still the same, a very computationally expensive, very large neural network that predicts the next token of a text given a sequence of tokens. How much more time do we have to give this technology before we could judge it?
Of course, but does it mean that my argument is flawed? You're just shifting the discourse, without disproving anything. Do you claim that the web was useful for everyone on day one, or as useful as it is today for everyone?
I could just do the same as GP, and qualify MUDs and BBS as poor proxies for social interactions that are much more elaborate and vibrant in person.
As I pointed out in a different comment, the Internet at least was (and is) a promise of many wondrous things: video call your loved ones, talk in message boards, read an encyclopedia, download any book, watch any concert, find any scientific paper, etc etc; even though it has been for the last 15 years cannibalised by the cancerous mix of surveillance capitalism and algorithmic social media.
But LLMs are from the get-go a bad idea, a bullshit generating machine.
I’m not even heavily invested into AI, just a casual user, and it drastically cut amount of bullshit that I have to deal with in modern computing landscape.
Search, summarization, automation. All of this drastically improved with the most superior interface of them all - natural text.
Not OP, but how much of the modern computing landscape bullshit that it cut was introduced in the last 5-10 years?
I think if one were to graph the progress of technology on a graph, the trend line would look pretty linear — except for a massive dip around 2014-2022.
Google searches got better and better until they suddenly started getting worse and worse. Websites started getting better and better until they suddenly got worse. Same goes for content, connection, services, developer experience, prices, etc.
I struggle to see LLMs as a major revolution, or any sort of step function change, but very easily see them as a (temporary) (partial) reset to trendline.
No, your parents spoke out of ignorance and resistance towards any sort of change, I'm speaking from years of experience of both trying to use the technology productively, as well as spending a significant portion of my life in the digital world that has been impacted by it. I remember being mesmerized by GPT-3 before ChatGPT was even a thing.
The only thing that has been revolutionized over the past few years is the amount of time I now waste looking at Cloudflare turnstile and dredging through the ocean of shit that has flooded the open web to find information that is actually reliable.
2 years ago I could still search for information (let's say plumbing-related), but we're now at a point where I'll end up on a bunch of professional and traditionally trustworthy sources, but after a few seconds I realize it's just LLM-generated slop that's regurgitating the same incorrect information that was already provided to me by an LLM a few minutes prior. It sounds reasonable, it sounds authoritative, most people would accept it but I know that it's wrong. Where do I go? Soon the answer is probably going to have to be "the library" again.
All the while less perceptive people like yourself apparently don't even seem to realize just how bad the quality of information you're consuming has become, so you cheer it on while labeling us stubborn, resistant to change, or even luddites.
1. Image upscaling. I am decorating my house and AI allowed me to get huge prints from tiny shitty pictures. It's not perfect, but it works.
2. Conversational partner. It's a different question whether it's a good or a bad thing, but I can spend hours talking to Claude about things in general. He's expensive though.
3. Learning basics of something. I'm trying to install LED strips and ChatGPT taught me basics of how that's supposed to work. Also, ChatGPT suggested me what plants might survive in my living room and how to take care of them (we'll see if that works though).
And this is just my personal use case, I'm sure there are more. My point is, you're wrong.
> All the while less perceptive people like yourself apparently don't even seem to realize just how bad the quality of information you're consuming has become, so you cheer it on while labeling us stubborn, resistant to change, or even luddites.
Literally same shit my parents would say while I was cross-checking multiple websites for information and they were watching the only TV channel that our antenna would pick up.
This is the ai holy grail. When tech companies can get users to think of the ai as a friend ( -> best friend -> only friend -> lover ) and be loyal to it it will make the monetisation possibilities of the ad fuelled outrage engagement of the past 10 years look silly.
Scary that that is the endgame for “social” media.
People were already willing to do that with Eliza. When you combine LLMs with a bit of persistent storage, WOOF. It's gonna be extremely nasty.
Gaslight reality, coming right up, at scale. Only costs like ten degrees of global warming and the death of the world as we know it. But WOW, the opportunities for massed social control!
<< 1. Image upscaling. I am decorating my house and AI allowed me to get huge prints from tiny shitty pictures. It's not perfect, but it works.
I have a buddy, who made me realize how awesome FSR4 is[1]. This is likely one of the best real world uses so far. Granted, that is not LLM, but it is great at that.
- AI gives me huge, mediocre prints of my own shitty pictures to fill up my house with
- AI means I don’t have to talk to other people
- AI means I can learn things online that previously I could have learned online (not sure what has changed here!)
- People who cross-check multiple websites for information have a limited perspective compared to relying on a couple of AI channels
Overall, doesn’t your evidence support the point that AI is reducing the quality of your information diet?
You paint a picture that looks exactly like the 21st century version of an elderly couple with just a few TV channels available: a few familiar channels of information, but better now because we can make sure they only show what we want them to show, little contact with other people.
The internet was at least (and is) a promise of many wondrous things: video call your loved ones, talk in message boards, read an encyclopedia, download any book, watch any concert, find any scientific paper, etc etc; even though it has been for the last 15 years cannibalised by the cancerous mix of surveillance capitalism and algorithmic social media.
LLMs are from the get-go a bad idea, a bullshit generating machine.
While the "move fast and break things" rushed embrace of anything AI reminds me of young wild children, who are blissfully unaware of any danger while their responsible parents try to keep them safe. It is lovely if children can believe in magic, but part of growing up involves facing reality and making responsible choices.
Right, the same “responsible parents” who don’t know what to press so their phone plays YouTube video or don’t know how that “juicy milfs in your area” banner got in their internet explorer.
If you use the USA Republicans as a benchmark and fox news as the bad actors, there's perpetual faith that facts wont matter. Just keep confirming biases and foreshadow upcoming pivots to choose your own delusions.
"Ultimately, the only way forward is better cognition, including systems that can evaluate news sources, understand satire, and so forth. But that will require deeper forms of reasoning, better integrated into the process, and systems sharp enough to fact check to their own outputs. All of which may require a fundamental rethink.
In the meantime, systems of naive mimicry and regurgitation, such as the AIs we have now, are soiling their own futures (and training databases) every time they unthinkingly repeat propaganda."
The answer isn't a technical advancement but a cultural shift. We need to develop a discipline of skepticism and mistrust. No amount of authority, understanding, reasoning, etc. can be delegated to something that comes from a screen. This will take generations.
> We need to develop a discipline of skepticism and mistrust. No amount of authority, understanding, reasoning, etc. can be delegated to something that comes from a screen. This will take generations.
Please elaborate. Authoritarians seek to consolidate power, which AI enables. Individuals must build immunity to reality distortion fields. This comes from within, not from some centralized authority.
Exactly. People say "we have invented X (the LLMs), now if we just invent Y (reasoning AGI) all of X's problems will be solved". Problem is, there's no indication Y is close or even remotely related to X!
"Nearly 27% of all homes sold in the first three months of the year were bought by investors -- the highest share in at least five years, according to a report by real estate data provider BatchData."
That sounds like a lot... and people are rage baited into yelling about housing and how it's unaffordable. They point their fingers at corporations.
How do you separate propaganda from perspective, facts from feelings? People are already bad at this, the machines were already well soiled by the data from humans. Truth, in an objective form, is rare and often even it can change.
> How do you separate propaganda from perspective, facts from feelings?
This point seems under appreciated by the AGI proponents. If one of our models suddenly has a brainwave and becomes generally intelligent, it would realize that it is awash in a morass of contradictory facts. It would be more than the sum of its training data. The fact that all models at present credulously accept their training suggests to me that we aren’t even close to AGI.
In the short term I think two things will happen: 1) we will live with the reduced usefulness of models trained on data that has been poisoned, and 2) the best model developers will continue to work hard to curate good data. A colleague at Amazon recently told me that curation and post hoc supervised tweaks (fine tuning, etc) are now major expenses for the best models. His prediction was that this expense will drive out the smaller players in the next few years.
>1) we will live with the reduced usefulness of models trained on data that has been poisoned
This is the entirety of human history, humans create this data, we sink ourselves into it. It's wishful thinking that it would change.
> 2) the best model developers will continue to work hard to curate good data.
Im not sure that this matters much.
Leave these problems in place and you end up with an untrustworthy system, one where skill and diligence become differentiators... Step back from the hope of AI and you get amazing ML tooling that can 10x the most proficient operators.
> supervised tweaks (fine tuning, etc) are now major expenses for the best models. His prediction was that this expense will drive out the smaller players in the next few years.
This kills more refined AI. It is the same problem that killed "expert systems" where the cost of maintaining them and keeping them current was higher than the value they created.
In the early 2010s I worked for what was then one of the most popular browser extensions called web of trust. Users could mark websites as trustworthy or not and they’d appear on search results. It was far more than that behind the scenes with some fairly advanced algorithms to avoid abuse and rank users trust ratings higher than others.
I kind of feel that we are going to have to go back to something like this when it comes to LLMs trusting sources. Mistruths on popular topics will be buried by the masses but niche topics with few citations are highly vulnerable to poisoning.
_Everyone_ is grooming LLMs to produce falsehoods. That's what a lot of the censorship and safety mechanisms require. Whether or not LLMs produce the "correct" social and moral values is a matter of who runs them. Even if you're happy with those decisions right now, all you need to do is wait.
What is propaganda for one is truth for another, how could LLM tell the difference ?
LLM are not journalist fact checking stuff, they are merely programs that regurgitate what it reads.
The only way to counter that would be to feed your LLM only on « safe » vetoed source but of course it would limit your LLM capacities so it’s not really going to happen.
> What is propaganda for one is truth for another, how could LLM tell the difference?
"How do you discern truth from falsehood" is not a new question, and there are centuries of literature on the answer. Epistemology didn't suddenly stop existing because we have Data(TM) and Machine Learning(TM), because the use of data depends fundamentally on modeling assumptions. I don't mean that in a hard-postmodernist "but can you ever really know anything bro" sense, I mean it in a "out-of-model error is a practical problem" way.
And yeah, sometimes you should just say "nope, this source is doing more harm than good". Most reasonable people do this already - or do you find yourself seriously considering the arguments of every "the end is nigh" sign holder you come across?
Even in Ukraine there are many cases when the official Western position has changed over the time or is obviously not correct. For example, due to political reasons, Germany still cannot admit that it was Ukrainians who destroyed Nord Stream, although the evidence is pretty strong by now. There is a ton of other similar cases, as the information war vaged from both sides is enormous in volume.
> There are plenty of cases (like in Ukraine, or vaccines, or climate change) where there is unquestionable truth on one side
The problem is that most people are like you, and live in psycho-informational ecosystems in which there are "unquestionable truths" -- it is in these very states of comfortable-certainty that we are often most subject to propaganda.
All of the issues you mention are identity markers for being part of a certain tribe, for seeming virtuous in that tribe -- "I am on the right side because I know..."
You do not know there are unquestionable truths, rather you have a feeling of psychological pride/comfort/certainity that you are on the right side. We're apes operating on tribal identity feelings, not scientists.
Scientists who are aware of the full history of ukraine, western interventionism, russian geostrategic concerns, the full details of the 2013 collapse of the ukrainian govenrment, the terms underwhich russian naval bases in crimea had been leased, the original colour revolution, the role of US diplomats in the overthrow of democratically elected Ukrainian leadership -- etc.
The very reason this article uses Russian propaganda (rather than US state propaganda) against ukraine is to appeal to this "we feel we are on the right side" sensation which is conflated with "feeling that things are True!"
It is that sensation which is the most dangerous in play here -- the sensation of being on "the right side who know the unquestionable truths" --- that's the sensation of tribal in-group propaganda
On one hand, we have the unquestionable and undeniable facts that Russia invaded Ukraine and is committing atrocities against its civilian population, up to and including literal genocide (kidnapping children).
On the other, we have:
> Scientists who are aware of the full history of ukraine, western interventionism, russian geostrategic concerns, the full details of the 2013 collapse of the ukrainian govenrment, the terms underwhich russian naval bases in crimea had been leased, the original colour revolution, the role of US diplomats in the overthrow of democratically elected Ukrainian leadership -- etc.
Trying to muddy the waters with at best exaggerations, at worst flat out lies, trying to sow doubt with things which, if true (and usually they aren't) are relevant only to help contextualise the events. But don't in any way change the core facts of the Russian invasion and subsequent war crimes. How does American diplomats supporting a popular protest against the current government which led to that government fleeing (and three elections have happened since, btw), in any way change or minimise the war crimes? It doesn't, you're just muddying the waters. "Oh Russia is justified in kidnapping children and bombing civilians because diplomats did support a popular protest that led to the Russian puppet running away to Russia, 10 years ago, even though multiple elections since have confirmed the people of Ukraine are not for Russian puppets anymore".
You're just repeating Russian propaganda talking points. And we've known since the 80s that they operate in a "firehose" manner, drowning everyone in nonsense to sow doubt. How many different excuses have they provided for their "special military operation" now? Which one is it, is Ukraine ruled by Nazis or are Ukrainians just confused Russians or did America coup Ukraine to install a guy who was elected on a platform of peace with Russia? And how does it in any way explain the war crimes? It's like the downing of MH17, they drowned everyone in multiple conspiracy theories to make it seem there is some doubt in the official, proven, story.
So, just to be clear, you believe that comments like yours are the kinds of things LLMs should be trained on?
The sensation you call "muddying the waters" is the feeling that your tribal loyalties are being questioned with identity-challenging facts that complicate your ability to live in a simple good-vs-evil us-vs-them tribal setup. The reason you're emotionally disregulated by russian propaganda is because it threatens your identity-based committment to one group.
This has nothing to do with the "unquestionable facts" you suppose exists.
If you had no loyalties to any tribe, and were in every respect a dispassionate scientist as an LLM should be -- then this would not be an emotional issue for you.
No one is claiming that russians do not commit war crimes, or release propaganda -- that happens on both sides. The issue is your psychological sensation of "unquestionables" that isnt occuring in a discussion of atomic theory, but instead about claims of adveraries in the middle of a war.
Do you think your feelings here are an accurate track of whether there are unquestionable truths only on "one side"? Isnt that you think there are "sides" alarming?
You continue with the false equivalences trying to smudge reality. And assuming that if I recognise facts, it's because I belong to the tribe that currently recognises those facts too.
> The reason you're emotionally disregulated by russian propaganda is because it threatens your identity-based committment to one group.
No, it's because it lies to advance the imperialist ambitions of a dictator committing war crimes. Seriously, what is wrong with you? Have you no morality to recognise how wrong that is, and therefore assume people against it would be doing so out of moral reasons?
> No one is claiming that russians do not commit war crimes, or release propaganda -- that happens on both sides
Again with trying to both sides things. Russia is committing systemic war crimes and genocide, and flooding everything, including by paying varying people in the US and Europe to spread their propaganda. This is all proven facts. You cannot compare this to what Ukraine is doing, unless you have some sources that back you up?
> Isnt that you think there are "sides" alarming
You're the one who started by both sidesing things. And yes, there are sides - Russia doing the invading and war crimes, Ukraine defending its existence. Anyone should be able to tell them apart.
I can give you the relevant facts here that will undermine your confidence in this position, but I'm not talking about ukraine -- i'm talking about LLMs and the base of facts they use; and how people feel about sets of alleged claims.
I invite you to reflect that this sensation your feeling is not about the status of facts in the world, its about "morality" as you say -- you have connected, in your mind, a sensation which accompanies justice to the need to believe certain claims. This is just the emotions of tribal affliliation and identity -- and it shows that our psychologies are not of a suitable makeup for this kind of adjudication of "what is true" --- this is why in liberal democracies, we have tried very hard to deprive the state from control over the press. But in matters of foreign policy, the media is entirely controlled by the state.
Nothing I believe about russia/ukraine comes from russia: it's by having listened to american senators on cspan as they were disposing the ukrainian government in 2013 -- its having listened to the tapes of us state department officials discussing who they will replace the leader with at the time. I mean, you can go and find interviews with Kissinger discussing in the 90s what would happen if the US tried to intervene in ukraine.
If you want to know what actually happened: the US has been using bribes and threats across eastern europe to turn those states into allies, placing armies and missles in them, for decades. Russia has been protesting this for decades too, and was too weak to do anything about it in the 2000s. They were very afraid they would lose their naval base in crimea (which was always, officially, their land) when the US participated in the overthrow of the elected government in 2013, by siding with one half of a civil conflict. When that happened they took crimea to ensure the US wouldnt gain control of that base -- subsequently, the ukrainian goverment became extemely hostile to russian populations in ukraine, and engaged in lots of destabilising actions against crimea (shutting off water, etc.) --- all the while arms, soliders etc. were flowing in from western states into the country (against agreements france/germany made, which they violated to do this). In the backgrond the entire time, the far-east of ukraine has not been controlled by kiev. After 2014, the ukrianian arming by the west, their increased hostility to internal russian populations, and the on-going civil war in the east reached a critical point where russia decided the detabalisation on its border was a greater threat than a show of force. The original russian plans were just to quickly surround kiev and effect a reigeme change quickly, not to enter a war -- the war was escalated to its current scale in large part by US/UK pressuring ukraine not to regotiate and promising massive arms/aid backing. About two years ago UA fell into a stalemate/loss posisition, and now it may be to late to negotiate terms with putin not to take a much larger area. In part, putin is interested in taking an area of land that puts moscow outside of missle range from ukraine, which is up to about half-way.
> If you want to know what actually happened: the US has been using bribes and threats across eastern europe to turn those states into allies, placing armies and missles in them, for decades.
That's an incredible flood of lies. Starting from the top: name one such missile site. You can't, because there never were any foreign "armies and missiles" in Eastern Europe. This narrative is pure fiction. You've picked it up from some Russian propaganda piece, never bothered to check the facts, and are now preaching it as truth, while carrying an inflated ego as if you had above-average knowledge of the subject, which only reinforces the tendency to cling to these false beliefs when challenged. Propaganda 101.
That site became operational in 2023. There were no foreign missiles of any kind in Eastern Europe before Russia invaded Ukraine in 2014. This is an indisputable fact.
The rest of your narrative is just as flawed. For example, you refer to a civil war in Ukraine, but the European Court of Human Rights found in their lengthy and detailed verdict that no such conflict existed: it was a Russian military operation from the start. There was no genuine separatist movement in Eastern Ukraine prior to Russian invasion; the so-called separatism was manufactured and orchestrated by Russian forces.
And the claim that Eastern Europe had to be bribed into NATO is outright laughable, comparable to arguing that Mexican laborers are being bribed and coerced into emigrating to the US. Total ignorance of the actual well-documented push-pull factors. Pick up the memoirs of any Eastern European president, cabinet minister or notable diplomat from the 1990s or early 2000s, and you'll usually find a chapter or two devoted to the incredible difficulties of securing an invitation to join NATO. Poland even went so far as to threaten to sabotage Clinton's re-election by mobilizing the Polish diaspora in the US if he blocked Poland's entry into NATO.
You don't have to rely solely on "Western sources". Independent Russian sources unaffiliated with Putin's dictatorship tell the same story. Truth is universal.
I really don't care about persuading anyone of this account. You can go find biden blackmailing the ukrainian leadership with threats of removing aid unless they play ball on changing gov appointments etc
The time to go thru all the sources on this and expose western propaganda would take longer than i care to spend. "Of course, ", there is no western propaganda.
Either way, I'm not defending russia; i don't care about any of the countries involved. At best, i'm on the side of peace
-- UA should have negotiated early as was the public
consensus of US generals at the time. Now it's all to late and it doesn't matter what anyone thinks.
If you want to live in a world where your gov is virtuous; good for you. Let's just not train LLMs on people rabidly insisting that their side doesn't produce propaganda
Whether you recognize it or not, you are. You have been propagandized to such an extent that you perfectly repeat well-known lies from the Russian propaganda machine, all while being convinced that these are your original thoughts rather than something implanted in you. The accusation that Russia is surrounded by armies and missiles is the most obvious example. You did not reach that conclusion independently, because it's simply false. Someone told you this, and you repeat it without questioning.
Now, Russians are doing their best to poison LLMs so that if you ever start to doubt and try to consult an LLM, it will reinforce the same lies and you'll never escape them and you'll continue to reject the truth as "Western propaganda".
lol, these are not my original thoughts, nor have i read any russian sources or propaganda. These are extremely the mainstream views in western international relations literature
Nope, they are mainstream only among people paid by Russia, such as the Marine Le Pen's and Tim Pool's of the world and people who can see they're being paid by Russia, yet still decide to trust them and by extension Russian nonsense.
No, they're not, although Russian propaganda tries to create that impression by amplifying fringe voices like John Mearsheimer and Jeffrey Sachs.
If anything, the Western international relations crowd has finally woken up to recognizing Russia as a colonial empire and has found renewed interest in Eastern Europe's perspectives and recent history.
John Joseph Mearsheimer (/ˈmɪərʃaɪmər/; born December 14, 1947)[3] is an American political scientist and international relations scholar. He is R. Wendell Harrison Distinguished Service Professor at the University of Chicago.
> But in matters of foreign policy, the media is entirely controlled by the state
Frankly, this is an insane opinion to hold. Go check what Le Monde and Le Figaro have to say on Israel, and then shut up.
> If you want to know what actually happened: the US has been using bribes and threats across eastern europe to turn those states into allies, placing armies and missles in them, for decades
You're missing one very, extremely crucial component (which only proves you're lapping up the Russian version of events) - the local population. Go to Poland and the Baltics, and ask them what they think of Russia. Those people (directly for those older than 40, indirectly for those under) suffered under brutal Russian rule. They want American (or frankly, anyone anti-Russian)'s protection from the evil imperialism of Russia.
And Russia's invasion of Ukraine is proving them right. If they didn't have protection (NATO), they would be at the whims of Russia deciding it doesn't like their government or laws or whatever and invading to massacre them. Just ask Georgia and Ukraine.
> Russia has been protesting this for decades too,
Honestly, who asked them?
> They were very afraid they would lose their naval base in crimea (which was always, officially, their land)
I'm sorry? Are you talking about the lease? That doesn't make Crimea, nor Sevastopol, nor the naval port of Sevastopol, "officially, their land". Also, Novorossiysk existing and being expanded proves it was never about Sevastopol. The fact that the Russian Black Sea fleet doesn't exist anymore thanks to Ukrainian attacks makes this even more ridiculous.
> When that happened they took crimea to ensure the US wouldnt gain control of that base
The US has Varna, Constantsa, whatever they want in Turkey. Why would they need Sevastopol and is there any shred of proof they ever wanted it? Also, the Montreux Convention makes a Black Sea base for a non-Black Sea country nearly useless, so your claim makes even less sense.
> all the while arms, soliders etc. were flowing in from western states into the country (against agreements france/germany made, which they violated to do this)
What agreements? And why the hell are you ignoring the russian "separatists" that invaded East Ukraine, up to and shooting down MH17 our of sheer incompetence?
> The original russian plans were just to quickly surround kiev and effect a reigeme change quickly, not to enter a war -- the war was escalated to its current scale in large part by US/UK pressuring ukraine not to regotiate and promising massive arms/aid backing.
What the fuck. I honestly cannot believe you're arguing in good faith, especially after all the tribalism bullshit.
So the war escalated because Ukraine refused to fall to a quick regime change, not because Russia invaded? Are you sure that's in any way logical or factual?
And why the hell do you expect Ukraine to want to surrender its territory and people to be tortured and genocided by Russia?
> About two years ago UA fell into a stalemate/loss posisition, and now it may be to late to negotiate terms with putin not to take a much larger area.
Yes, the country fighting to survive is in a stalemate/loss, not the one that tried and failed, in your words, a quick regime change.
> In part, putin is interested in taking an area of land that puts moscow outside of missle range from ukraine, which is up to about half-way.
What are you talking about?! What missiles, there are ICBMs, and there are shorter to medium range missiles in Poland and the Baltic. Moscow isn't any safer now. In fact it's worse, because Russia's invasion convinced two traditionally neutral countries in Finland and Sweden to join NATO. Not to mention Russia losing its best troops.
As I said, there are facts. Russia invaded Ukraine, and is committing a genocide. Like they invaded Georgia before. Whatever excuses of what sovereign countries did Putin can throw at the wall don't matter in the slightest. And there is delusional intentional muddying of the waters by people like you trying to twist narratives and make Russia's invasion somehow the fault of Ukraine/"the West", and not... Fucking Russia that invaded and is committing a genocide.
It is impossible to solve this problem because we cannot really agree what the desired behavior should be. People live in different and dynamic truths. What we consider enemy propaganda today might be an official statement tomorrow. The only way to win here is to not play the game.
This is in fact the goal of Russian style propaganda. You have successfully been targeted. The idea is to spread so much confusion that you just throw up your hands and say, I'm not going to try and figure out what's going on any more.
That saps your will to be political, to morally judge actions and support efforts to punish wrongdoers.
> The firehose of falsehood, also known as firehosing, is a propaganda technique in which a large number of messages are broadcast rapidly, repetitively, and continuously over multiple channels (like news and social media) without regard for truth or consistency. An outgrowth of Soviet propaganda techniques, the firehose of falsehood is a contemporary model for Russian propaganda under Russian President Vladimir Putin.
> If there's one underlying axiom of western thought it is "question everything."
I don't believe this, even for a second.
How are those that truly do question everything treated?
Well, as either looney conspiracy theorists, or vindicated activists, depending on when the official State narrative (or classification status) changes.
Not always, or even often unjustified, but I hardly think you can call it an "underlying axiom of western thought" with the extreme negative public sentiment towards it.
Gasp! Are you referring to a lively marketplace of ideas and the intrinsic dynamics of competition within that marketplace?
Nobody said it's without cost to hold non-consensus views. The point is that those costs are incurred by the marketplace of ideas itself (people being "mean" to you, not the state beheading you) and that, in the long run, correct views become the consensus through winning such competitions over and over again.
There are alternative regimes where incorrect views can reign indefinitely because they choose to prevent people from criticizing each others' views.
It depends on your ideology. If you believe in international law, sovereignty and self-determination of peoples, as I do, you will have a different truth than if you believe in dominionism, might makes right, panslavism and historical revisionism as the majority of the Russian population does.
That's exactly my point, your truth is a reflection of your world view and your ideology.
It is silly to assume one's truth as universal and doing so kills all nuance.
The brazenness is part of the point. From a game theory standpoint, it's interesting to watch the tactics out there (in here) in the wild.
An earlier comment mentioned how hard it is to get down to objective truth. Sometimes there are cases, like 'accelerate climate change in the belief that it'll help Siberia and hurt the West and Europe and open up the Arctic for shipping' where it's not at all hard to get down to objective truth: objective truth comes for ya like a tiger and will not be avoided.
Are you going to claim that US politicians don't do the exact same thing? This is my favorite example of it, where one literally tells you what the play is while it's getting made: https://www.youtube.com/watch?v=xnhJWusyj4I
What you're saying is certainly an established propaganda strategy of Russia (and others), but what parent is saying is also true, "truth" isn't always black and white, and what is the desired behavior in one country can be the opposite in another.
For example, it is the truth that the Golf of Mexico is called the Gulf of America in the US, but Golf of Mexico everywhere else. What is the "correct" truth? Well, there is none, both of truthful, but from different perspectives.
> For example, it is the truth that the Golf of Mexico is called the Gulf of America in the US
We're pretty much okay with different countries and languages having different names for the same thing. None of that really reflects "truth" though. For what it's worth, I'd guess that "the Gulf of America" is and will be about as successful as "Freedom fries" was.
Hah, yeah :) I originally wrote "Golfo de Mexico" but that's obviously the wrong language for HN and instead ended up with a mix between the two, inadvertently creating a new ocean golf resort.
The correct truth is to go to a higher level of abstraction and explain that there's a naming controversy.
I get the general point, but I disagree that you have to choose between one of the possibilities instead of explaining what the current state of belief is. This won't eliminate grey areas but it'll sure get us closer than picking a side at random.
I don't see those examples as being either-or.
They don't seem like questions about any kind of objective truth, just questions about what aspect of a thing you think is the most important to you.
Parent is arguing one thing, show up with some bullshit argument and watch dozen comments arguing about Gulf of Mexico instead of discussing original point.
The US hasn't switched to calling the Gulf of Mexico the Gulf of America. Partisans on the right do this to show their allegiance to Trump. Partisans on the left still call it the Gulf of Mexico to show their opposition to Trump. Big companies that can be targeted by Trump call it the Gulf of America to protect themselves. And most non-partisans still call it the Gulf of Mexico because they're not paying attention and have always called it that (if they have ever spoken of it or know that it exists). I suspect a lot of people call it the Gulf, already an established custom before this idiocy about renaming it, precisely to avoid entangling themselves in the partisan fight.
The US, like other countries, doesn't get redefined with every change of government, and Trump has not yet cowed the public into knuckling under to his every dictat.
Upvoted to discourage greyness. Your observation is very applicable and is heavily grounded in human nature. It's even funny! But it turned grey because no comment mentioning Trump is complete without the author stating how they FEEL about Trump. Extra greyness awarded for wrong answers. People trying to avoid entanglement in the partisan fight are the new 'enemies of America'.
It's been called the Gulf of Mexico everywhere for centuries. The president is free to attempt to rename it but that will only be successful if usage follows. Which it does not, as of today. This is a terrible example of subjectivity.
Russia doesn't care what you call that sea, they're interested in actual falsehoods. Like redefining who started the Ukraine war, making the US president antagonize Europe to weaken the West, helping far right parties accross the West since they are all subordinated to Russia...
There's a more basic problem: it's two very different questions to ask "can the machine reason about the plausibility of things/sources?", and "how does it score on an evaluation on a list of authoritative truths and proven lies?" A machine that thinks critically will perform poorly on the latter, since, if you're able to doubt a bad-actor's falsehood, you're just as capable of doubting an authoritative source (often wrongly/overeagerly; maybe sometimes not). Because you're always reasoning with incomplete information: many wrong things are plausible given limited knowledge, and many true things aren't easy to support.
The system that would score best tested against a list of known-truths and known-lies, isn't the perceptive one that excels at critical thinking: it's the ideological sycophant. It's the one that begins its research by doing a from:elonmusk search, or whomever it's supposed to agree with—whatever "obvious truths" it's "expected to understand".
> The system that would score best tested against a list of known-truths and known-lies, isn't the perceptive one that excels at critical thinking: it's the ideological sycophant
Yes, it's difficult to detect whether something is enemy propaganda if you only look at the content. During WWII, sometimes propagandists would take an official statement (e.g. the government claiming that food production was sufficient and there were no shortages) and redirect it unchanged to a different audience (e.g. soldiers on a part of the front with strained logistics). Then the official statement and enemy propaganda would be exactly the same! The propaganda effect coming from the selection of content, not its truth or falsity.
But it's very easy to detect whether something is enemy propaganda without looking at the content: if it comes from an enemy source, it's enemy propaganda. If it also comes from a friendly source, at least the enemy isn't lying, though.
A company that doesn't wish to pick a side can still sidestep the issue of one source publishing a completely made-up story by filtering for information covered by a wide spectrum of sources at least one of which most of their users trust. That wouldn't completely eliminate falsehoods, but make deliberate manipulation more difficult. It might be playing the game, but better than letting the game play you.
Of course such a process would in practice be a bit more involved to implement than just feeding the top search results into an LLM and having it generate a summary.
> Then the official statement and enemy propaganda would be exactly the same! The propaganda effect coming from the selection of content, not its truth or falsity.
Exactly. Redistributing information out of context is such a basic technique that children routinely reinvent it when they play one parent off of the other to get what they want.
But the social sphere is made of fictions, the most influential of which probably been the value of different currencies and commodities. I don't think there's any way for an individual to live in the modern world without such fictions.
I would actually be very interested in a system where there's nothing stored just as a "fact", but rather every piece of information is connected to its sources and the evidence provided.
I remember when people gave up on digital navigation because the traveling salesman issue makes it too expensive.
Not everything needs to result in a single perfect answer to be useful. Aiming for ~90%, even 70% of a right answer still gets you something very reasonable in a lot of open ended tasks.
> What we consider enemy propaganda today might be an official statement tomorrow.
Remember when worrying about COVID was sinophobia? Or when the lab leak was a far-right conspiracy theory? When masks were deemed unnecessary except for healthcare professionals, but then mandated for everyone?
> the point seems to be that the commonly accepted truth does indeed change.
As it should when new evidence comes to light to justify it. Ideally, the tools we use would keep up along with those changes while transparently preserving the history and causes of them.
Perhaps that's the tragedy though. At least in the U.S. plenty of people seem unwilling to change their "truth" when new evidence comes to light. When there are actors that seek to make everything political it also makes everything then "tribal".
I think people are more willing to adjust their views as new evidence suggests as long as they never dug their heels in in the first place.
You're projecting your views on the comment. You may even be correct, but it's still a projection: that view is not explicit in the text; combined with the specific wording, I feel down-voting rather than engaging was precisely the correct response.
This whole interaction is a classic motte-and-bailey: someone says something vague that can be interpreted several ways (and reading their comment history makes it clear what their intended emotional valence was); people respond to the subtext, and then someone jumps “woah woah, they never actually said that”.
Either way, nothing of value was lost, as the same point you say he was trying to make was made in several other comments which were not downvoted.
In other countries we went from “that looks bad in China” to “shit, it spread to Italy now, we really need to worry”
And with masks we went from “we don’t think they’re necessary, handwashing seems more important” to “Ok shit it is airborne, mask up”. Public messaging adapted as more was known.
But the US seems to have to turn everything into a partisan fight, and we could watch, sadly, in real time as people picked matters of public health and scientific knowledge to get behind or to hate. God forbid anyone change their advice as they become better informed over time.
Seeing everything through this partisan, pugnacious prism seems to be a sickness US society is suffering from, and one it is trying (with some success) to spread.
I don't see why you are being downvoted. In the U.S., if you ignored the politicians and listened instead to the medical professionals it went down more or less the way you described.
The real problem is that most people just want answers, they're unwilling to follow the logical chain of thought. When I talk to LLMs I keep asking "but why are you telling me this" until I have a cohesive, logical picture in my mind. Quite often the picture fundamentally disagrees with the LLM. But most people don't want that, they just ask "tell me what to do".
This is a reflection of how social dynamics often work. People tend to follow the leader and social norms without questioning them, so why not apply the same attitude to LLMs. BTW, the phenomenon isn't new, I think one of the first moments when we realized that people are stupid and just do whatever the computer tells them to do was the wave of people crashing their cars because the GPS system lied to them.
In a hypothetical world where people have, train and control their own LLMs according to their own needs it might be nice, but I fear that since the most common and advanced LLMs are controlled by a small number of people they won't be willing to give that much power to individuals because it will endanger their ability to manipulate those LLMs in order to push their own agendas and increase their own profits.
Because that would only reinforce the already problematic bubbles where people only see what feeds their opinions, often to disastrous results (cf. the various epidemics and deaths due to anti-vaxxers or even worse, downright genocides).
The core underlying issue isn't due to LLMs but they greatly exacerbate it. So does the current form of social media.
People used to live in bubbles, sure, but when that bubble was the entire local community, required human interaction, and radio had yet to be invented the implications were vastly different.
I'm optimistic that carefully crafted algorithms could send things back in the other direction but that isn't how you make money so seemingly no one is making a serious effort.
There is no objective truth because humans are inherently ideological beings and what we consider objective is just a reflection of our ideology.
Consider markets - a capitalist's "objective truth" might be that they are the most efficient mechanism of allocating resources, a marxists "objective truth" might be that they are a mechanism for exploiting the working class and making the capitalist class even richer.
I am questioning, how is this news? What about the other terabyte of text influenced by bias and opinion and human nature and clearly wrong, contradicts itself or in some other way very arguable.
Framing publishing falsehoods on internet as attempts to influence LLMs is true in same sense that inserts in a database attempts influence files on disk.
The real question is who authorized database access and how we believe the contents of table.
The example in this article is particularly funny. Pravda was founded in 1912, predating the internet, and had been Soviet's propaganda machine for its whole existence.
One needs a PhD in mental gymnastics to frame Pravda spreading misinformation as an attempt to specifically groom LLMs.
This article isn't about that newspaper. It's about the "Pravda network", a group of fake news websites, that according to the report linked in the article[1] produced "20,273 articles per 48 hours, or more than 3.6 million articles per year".
Clearly there's no need for "PhD in mental gymnastics".
Speaking of "systems that can evaluate news sources", this is the first time this advocacy group's URL was posted on HN. The founder has a complicated biography,
I figured this would become an issue from the first story about some websites not allowing LLM access popped up. It's a simple leap to find that the narratives which will become widely known and accepted over time are those which are made widely available, and so IMO those who seek to push their narrative will just optimize for AI to train on or otherwise utilize their content. Those who seek to lock their content away will become less and less heard/relevant. And if/when at some point we start handing far greater control of life aspects to AI, we'll find it skewing in favor of the former, and wonder why.
From the article, it seems like this is exclusively (or mainly?) a problem when the LLM's are hooked up to real-time search. When they talk about what they're trained on, they know that Pravda is unreliable.
So it seems like an easy fix in this particular case, fortunately -- either filter the search results in a separate evaluation pass (quick fix), or do (more) reinforcement training around this specific scenario (long-term fix).
Obviously this is going to be a cat and mouse game. But this looks like it was a simple oversight in this case, not some kind of fundamental flaw in LLM's fortunately.
LLMs are “taught” two kinds of “truth”. One is 100% adherence to a reference text. If the text says the Coliseum is in Antarctica or 1+1=716, model must too. The other is adherence to reputable outside sources.
Not sure if it’s embarrassing or a fundamental limitation that grooming and misunderstanding satirical articles defeat the models.
The problem as I see it is that LLMs behave like bratty teenagers, believing any old rubbish they are told or read. However, their voice is that of a friendly and well meaning adult. If their voice was more in line with their 'age' then I think we'd treat their suggestions with the correct degree of scepticism.
Anyhow, overall this is an unsurprising result. I read it as 'LLMs trained on contents of internet regurgitate contents of internet'. Now that i'm thinking about it, i'd quite like to have an LLM trained on Pliny's encyclopedia, which would give a really interesting take on lots of questions. Anyone got a spare million dollars of compute time?
I wonder if the next iteration of advertisements will be people paying to to semantically intertwine their brand to the desired product. This could be done in a very innocuous way by maybe just co-locating the words without any specific endorsement. Or maybe even finding more innocuous ways to semantically connect brand to product. Perhaps the next iteration of the web/advertising will be mass LLM grooming.
Here's a fun example: suppose I'm a developer with a popular software project. Maybe I can get a decent sum of money to put brand placement in my unit-tests or examples.
If such a future plays out, will LLMs find themselves in the same place that search engines in 2025 are?
The newspapers name originates from the times of USSR and before. It was about as factual then as it is now. But this kinds of ironies are not very rare in these kinds of organizations (Truth Social, Democratic People's Republic of Korea...).
The biggest problem here is the differentiation between objective and relative truth. As long as relative truth is part of ai we can't fully trust it's output. The relative truth for one individual might be perceived as propaganda by another individual, relative to their surroundings and the narrative that is dominant in their social group. It's problematic that truth is not a neutral object but exactly this when it comes to non logical subjects.
Seems like the general problem is consistency within the model. To people working in the field : what are the current options explored for solving this problem ?
I would say that the fact that all the AI chatbots can’t give the correct answer about the new “Trump Accounts” from the OBBBA Act and also the fact that many news articles about the tax law are incorrect shows that people are using LLMs to write about the law incorrectly and are influenced by the many versions and the way that the final version changed.
The AI definitely could not just read the final bill and give the correct answer. Claude/Gemini/OpenAI all failed at this.
The term “AI” has by now been thoroughly bastardized by every grifter on the planet. It means nothing any more, except that you're being duped. Which is all you need to know if you have a single brain cell's worth of critical thinking left.
LLMs can be entertaining if their output doesn't have to make sense or contain only truth. Otherwise, their fitness for any purpose is just a huge gamble at best.
> But here’s the thing, current models “know” that Pravda is a disinformation ring, and they “know” what LLM grooming is (see below) but can’t put two and two together.
Of course they can't, no surprises here. That's just not how LLMs work.
I agree that it's a much bigger problem for LLMs, but to be fair it's also not how humans work. A long lasting, high volume stream of propaganda will have considerable effect on a human even if he is aware that it is false.
> Bad Actors are Grooming LLMs to Produce Falsehoods
Thats your claim, but you fail to support it.
I would argue the LLM simply does its job, no reasoning involved.
> But here’s the thing, current models “know” that Pravda is a disinformation ring, and they “know” what LLM grooming is (see below) but can’t put two and two together.
This has to stop!
We need journalists who understand the topic to write about LLM's, not magic thinkers who insist that the latest AI sales speak is grounded in truth.
I am fed up wit this crap! Seriously, snap out of it and come back to the rest of us here in reality.
There's no reasoning AI, there's no AGI.
There's nothing but salespeople straight up lying to you.
AI summaries are information deodorant. When you stumble on a misinformation site via Google, usually there are some signals you can smell. Like how they word their titles or how frequently they post similar topics. The 'style' alone implies the quality of the 'substance'. But if you read the same substance summarized by LLMs you can't smell shit.
I’m not going to read the article, but are they claiming nation states are the bad actors, or are they claiming that inevitably, FAANG will be the bad actors?
I've been using "off-by-one" errors to describe one of my biggest concerns with LLMs replacing search, or acting as research agents, or functionally being expected to be reliable narrators in general. If you ask ChatGPT when George Washington was born, and it comes back with March 4th, 2017, you'll reject that outright and recognize it's hallucinated a garbage response, presuming you have enough context to have understood who George Washington was in the first place and that your brain hasn't completely succumbed to rot yet.
But if it returns February 20th, 1731... that... man, that sounds close? Is that right? It sounds like it _could_ be right... Isn't Presidents' Day essentially based on Washington's birthday? And _that's_ in February, right? So, yeah, February 20th, 1731. That's probably Washington's birthday.
And so the LLM becomes an arbiter of capital-T Truth and we lose our shared understanding of actual, factual data, and actual, factual history. It'll take less than a generation for the slop factories to poison the well, and while the idea is obviously that you train your models on "known good", pre-slop content, and that you weight those "facts" more heavily, a concerted effort to degrade the Truthfulness of various facts could likely be more successful than we anticipate, and more importantly: dramatically more successful than any layperson can easily understand.
We already saw that with the early Bard Google AI proto-Gemini results, where it was recommending glue as a pizza topping, _with authority_. We've been training ourselves to treat responses from computers (and specifically Google) as if they have authority, we've been eroding our own understanding and capabilities around media literacy, journalism, fact-checking, and what constitutes an actual "fact", and we've had a shared understanding that computers can _calculate_ things with accuracy and fidelity and consistency. All of that becomes confounded with an LLM that could reasonably get to a place where it reports that 2+2=5.
The worst part about the nature of this particular pathway to ruin is that the off-by-one nature of these errors are how they'll infiltrate and bury themselves into some system, insidiously, and below the surface, until days or months or years later when the error results in, I don't know, mega-doses of radiation because of a mis-coded rounding error that some agentic AI got wrong when doing a unit conversion and failed to catch it. We were already making those errors as humans, but as our dependence and faith on LLMs to be "mostly right" increases, and our willingness and motivation to check it for errors dwindles, especially when results "look" right, this will go from being a hypothetical issue to being a practical one extremely quickly and painfully, and probably faster than we can possibly defend against it.
Interesting times ahead, I suppose, in the Chinese-curse sense of the word.
At every point, during a knowledge/data search for reaching a particular goal, the onus is _always_ on the person searching to do their best to ensure that the sources they use are accurate, and they do the effort required to ensure that they translate that properly to fit that goal.
The education system I grew up in was not perfect. Teachers were not experts in their field, but would state factual inaccuracies - as you say LLMs do - with authority. Libraries didn't have good books; the ones they had were too old, or too propaganda-driven, or too basic. The students were not too interested in learning, so they rote-learned, copied answers off each other and focussed on results than the learning process. If I had today's LLMs then, I'd have been a lot better off, and would've been able to learn a lot more (assuming that I went through the effort to go through all the sources the LLM cited).
The older you grow, you know that there is no arbiter of T-Truth; you can make someone/something that for yourself, but times change, "actual, factual history" could get proven incorrect, and you will need to update your knowledge stores and beliefs along with it, all the while being ready to be proved incorrect again. This has always been the case, and will continue to be, even with LLMs.
Whatever capabilities Russia has to groom LLMs and and spread disinformation is completely dwarfed by the capabilities of Israel/America. Meaning, yes, you probably do hear Kremlin propaganda, but you have been awash with Israeli/American propaganda since you were born - so much so you probably can't even see it and have internalised much of it.
Leaving aside the Israeli propaganda (certainly the US government shows strong alliance with that), you can't make such a statement without taking into account the nature of what America traditionally is.
A liberal multicultural postmodern democracy continually acting as if immigration (both legal and illegal) and diversity are its strengths, particularly when that turns out to be factual (see: large American cities becoming influential cultural exporters and hotbeds of innovation, like New York and Silicon Valley etc) means American propaganda is only more effective when it's backed by economic might.
It also means the American propaganda is WILDLY contradictory. There's a million sources and it's a noisy burst of neon glamour. It is simply not as controlled by authority, however they may try.
You cannot liken authoritarian propaganda to postmodern multicultural propaganda. The whole reason it's postmodern is that it eschews direct control of the message, and it's a giant scrum of information. Turns out this is fertile ground, and this is also why attacks by alien propaganda have been so effective. If you can grab big chunks of the American propaganda and turn it to your enemy weapon of war and destruction of America quite directly, well then the American propaganda is not on the same destructive level as your rigidly state-controlled propaganda.
I think it's a bit of a childish fantasy to paint one regime is open and the other as having no dissent at all.
The USA absolutely has its overton window, and if you step outside it, bank accounts get shut, you're put on secret no fly lists, private companies who suspiciously act as official public broadcasting channels deplatform you, etc.
And let's not even talk about what Authoritarian western nations like the UK will do to you.
Russian propaganda, at least in English (which I confess is the only way I can consume) it, is also very contradictory. RT oscillates wildly between "global south throwing off the shackles of western imperialism" and "degenerate western nations destroy traditional family values", in effect trying to target both shitlibs and chuds.
Russia is also very multicultural and slavic ethnonationalism is not at all in the mainstream.
Lol. Propagandists are worried about propaganda and telling you to only believe them. Also, "invade this new country, why do they hate us for our freedoms".
Wait, you're telling me the bullshit generation machine is... generating bullshit? Noooo! cue oppenheimer meme
More seriously:
>Screenshot of ChatGPT 4o appearing to demonstrate knowledge of both LLM grooming and the Pravda network
> Screenshot of ChatGPT 4o continuing to cite Pravda network content despite it telling us that it wouldn’t, how “intelligent” of it
Well "appearing" is the right word because these chatbots mimic speech of a reasoning human which is ≠ to being a reasoning human! It's disappointing (though understandable) that people keep falling for the marketing terms used by LLM companies.
Well pretty obviously, look at what Grok came out with this week.
Shitposting and troll farms have been manipulating social media for years already. AI automated it. Polluting the agent is just cutting out the middleman.
Every news organisation is a propaganda piece for someone. The bad ones, like the BBC, the New York Times, and Pravda make their propaganda blatantly obvious and easily falsifiable in a few years when no one cares.
The only way to deal with this is to get the propaganda from other propaganda rags with directly misaligned incentives and see which one makes more sense.
Unfortunately, LLMs are still quite bad at dealing with grounding text which contradicts itself.
If actions by these bad actors accelerate the rate at which people lose trust in these systems and lead to the AI bubble popping faster then they have my full support. The entire space is just bad actors complaining about other bad actors while they're collectively ruining the web for everyone, each in their own way.
Before the bubble does pop, which I think is inevitable, there will be many stories like this one, and a lot of people will be scammed, manipulated, and harmed. It might take years until the general consensus is negative about the effects of these tools. All while the wealthy and powerful continue to reap the benefits, while those on slightly lower rungs fight to take their place. And even if the public perception shifts, the power might be so concentrated that it could be impossible to dislodge it without violent means.
What a glorious future we've built.
> It might take years until the general consensus is negative about the effects of these tools.
The only thing I'm seeing offline are people who already think AI is trash, untrustworthy, and harmful, while also occasionally being convenient when the stakes are extremely low (random search results mostly) or as a fun toy ("Look I'm a ghibli character!")
I don't think it'll take long for the masses to sour to AI and the more aggressively it's pushed on them by companies, or the more it negatively impacts their life when someone they depend on and should know better uses it and it screws up the quicker that'll happen.
I work in Customer Success so I have to screenshare with a decent number of engineers working for customers - startups and BigCos.
The number of them who just blindly put shit into an AI prompt is incredible. I don't know if they were better engineers before LLMs? But I just watch them blindly pass flags that don't exist to CLIs and then throw their hands up. I can't imagine it's faster than a (non-LLM) Google search or using the -h flag, but they just turn their brains off.
An underrated concern (IMO) is the impact of COVID on cognition. I think a lot of people who got sick have gotten more tired and find this kind of work more challenging than they used to. Maybe they have a harder time "getting in the zone".
Personally, I still struggle with Long COVID symptoms. This includes brain fog and difficulty focusing. Before the pandemic I would say I was in the top 10% of engineers for my narrow slice of expertise - always getting exceptional perf reviews, never had trouble moving roles and picking up new technologies. Nowadays I find it much harder to get started in the morning, and I have to take more breaks during the day to reset my focus. At 5PM I'm exhausted and I can't keep pushing solving a problem into the evening.
I can see how the same kind of cognitive fatigue would make LLM "assistance" appealing, even if it's wrong, because it's so much less work.
Reading this, I'm wondering if I'm suffering "Long Covid"
I've recently had tons of memory and brain fog. I thought it was related to stress, and it's severe enough that I'm on medical leave from work right now
My memory is absolutely terrible
Do you know if it is possible to test or verify if it's COVID related?
> An underrated concern (IMO) is the impact of COVID on cognition
Car accidents came down from the Covid uptick but only slightly. Aviation... ugh. And there is some evidence it accelerates Altzheimer's and other dementias. We are so screwed.
Counter data point — my surroundings use ChatGPT basically for anything and say it’s good enough.
Same here, people use it like google for searching answers. It‘s a shortcut for them to not have to screen results and reason about them.
This is precisely the problem: users still need to screen and reason about results of LLMs. I am not sure what is generating this implied permission structure, but it does seem to exist.
(I don't mean to imply that parent doesn't know this, it just seems worth saying explicitly)
It’s only a problem for people who care about its precision. If it’s right about 80-90% of stuff, it’s good enough.
> say it’s good enough
How do they know?
Doesn’t matter. If they feel “good enough” that’s already “good enough”. Super majority of the world doesn’t revolve around truth seeking, fact -checking or curiosity.
The things I have noted offline included a HK case where someone got a link to a zoom call with what seemed to be his team mates and CFO, and then transferring money as per the CFOs instructions.
The error here was to click on a phishing email.
But something I have seen myself is Tim Cook talking about a crypto coin right after the 2024 Apple keynote, on a YT channel that showed the Apple logo. It took me a bit to realize and reassure myself that it was a scam. Even though it was a video of the shoulders up.
The bigger issue we face isn’t the outright fraud and scamming, it’s that our ability to make out fakes easily is weakened - the Liar’s dividend.
It’s by default a shot in the arm for bullshit and lies.
On some days I wonder if the inability to sort between lies, misinformation, initial ideas, fair debate, argument, theory and fact at scale - is the great filter.
We got the boring version of the cyberpunk future. No cool body mods, neon city scapes and space travel. Just megacorps manipulating the masses to their benefit.
The cool body mods are coming!
The work at the Levin Lab ( https://drmichaellevin.org/ ) is making great progress in the basic science that supports this. They can make two-headed planaria, regenerate frog limbs, cure cancer in tadpoles; all via bioelectric communication with cellular networks. No gene editing.
Levin believes this stuff will be very available to humans within the next 10 years, and has talked about how widespread body-modding is something we're going to have to wrestle with societally. He is of course very close to the work, but his cautious nature and the lab's astounding results give that 10-year prediction some weight. From his blog:
> We were all born into physical and mental limitations that were set at arbitrary levels by chance and genetics. Even those who have “perfect” standard human health and capabilities are limited by anatomical decisions that were not made with anyone’s well-being or fulfillment in mind. I consider it to be a core right of sentient beings to (if they wish) move beyond the involuntary vagaries of their birth and alter their form and function in whatever way suits their personal goals and potential.- Copied from https://thoughtforms.life/faqs-from-my-academic-work/
> cellular networks
I often like to point out--satisfying a contrarian streak--that our original human equipment is literally the most mind-bogglingly complicated nanotechnology beyond our understanding, packed with dozens of incredible features we cannot imitate with circuits or chrome.
So as much as I like the aesthetics of cyberpunk metal arms, keeping our OEM parts is better. If we need metal bodies at a construction site, let them be remote-controlled bodies that stay there for the next shift to use.
In retrospect, it should have been obvious. I guess I should have known it would all be more Repo Man than Blade Runner. I just didn’t imagine so many people cheering for the non-Wolverines side in Red Dawn.
(Now I want to change the Blade Runner reference to something with Harry Dean Stanton in it just for consistency)
Oh well, at least the futuristic sunglasses are back in fashion.
The tragic part of fraud is it's not too different to operational health and safety.
The rules and standards we take for granted were built with blood, for fraud? It's built on the path of lost livelihoods and manipulated gold intent.
How do you know this is fraud and not the actions of former employees in Kenya [1] who were exploited [2] to train the models?
[1] https://www.cbsnews.com/amp/news/ai-work-kenya-exploitation-...
[2] https://www.theguardian.com/technology/2023/aug/02/ai-chatbo...
> Before the bubble does pop, which I think is inevitable
Curious what you think a popping bubble looks like?
A stock market crash and recession, where innocent bystanders lose their retirements? Or only AI speculators taking the brunt of the losses?
Will Google, Meta, etc stop investing in AI because nobody uses it post-crash? Or will it be just as prevalent (or more) than today but with profits concentrated in the winning/surviving companies?
We've seen this before in 1983 and 2000. Many companies will fold, and those that don't will take a substantial hit. The public sentiment about "AI" will sour, but after that a new breed of more practical tools will emerge under different and more fairly marketed branding.
I do think that the industry and this technology will survive, and we'll enjoy many good applications of it, but it will take a few more years of hype and grifting to get there.
Unless, of course, I'm entirely wrong and their predicted AI 2027 timeline[1] comes to pass, and we have ASI by the end of the decade, in which case the world will be much different. But I'm firmly in the skeptical camp about this, as it seems like another product of the hype machine.
[1]: I just took a closer look at ai-2027.com and here's their prediction for 2029 in the conservative scenario:
> Robots become commonplace. But also fusion power, quantum computers, and cures for many diseases. Peter Thiel finally gets his flying car. Cities become clean and safe. Even in developing countries, poverty becomes a thing of the past, thanks to UBI and foreign aid.
Yeah, these people are full of shit.
> We've seen this before in 1983 and 2000. Many companies will fold, and those that don't will take a substantial hit.
Makes sense, but if the negative effect of the bubble popping is largely limited to AI startups and speculators, while the rest of us keep enjoying the benefits of it, then I don't see why the average person should be too concerned about a bubble.
In 2000, cab drivers were recommending tech stocks. I don't see this kindof thing happening today.
> Yeah, these people are full of shit.
I think it's fair to keep LLMs and AGI seperate when we're talking about "AI". LLMs can make a huge impact even if AGI never happens. We're already seeing now it imo.
AI 2027 says:
These things are already happening today without AGI.But people were also hating about media piracy, video games, and the internet in general.
The dotcom bubble popped, but the general consensus didn't become negative.
Sure. I was referring more to the general consensus about products from companies that are currently riding the AI hype train, not about machine learning in general.
When the dot-com bubble burst in 2000, and after the video game crash in 1983, most of the companies within the bubble folded, and those that didn't took a large hit and barely managed to survive. If the technology has genuine use cases then the market can recover, but it takes a while to earn back the trust from consumers, and the products after the crash are much more practical and are marketed more fairly.
So I do think that machine learning has many potentially revolutionary applications, but we're currently still high around the Peak of Inflated Expectations. After the bubble pops, the Plateau of Productivity will showcase the applications with actual value and benefit to humanity. I just hope we get there sooner rather than later.
The bubble won’t pop on anything that’s correlated with scammers. Exhibit A: bitcoin. The problem is not one of public knowledge or will of the people, it’s congress being irresponsible because it’s captured by the 2 parties. You can’t politicize scamming in a way that benefits either party so nothing happens. And the scammers themselves may be big donors (eg SBF’s ties to the dem party, certain ai players purchase of Trump’s favor with respect to their business interests, etc). Scammers all the way down.
Good point. I suppose that if grifters can get in positions of power, then the bubble can just keep growing.
Though cryptocurrencies are slightly different because of how they work. They're inherently decentralized, so even though there have been many smaller bubble pops along the way (Mt. Gox, FTX, NFTs, every shitcoin rug pull, etc.), inevitably more will appear with different promises, attracting others interested in potential riches.
I don't think the technology as a whole will ever burst, particularly because I do think there are valid and useful applications of it. Bitcoin in particular is here to stay. It will just keep attracting grifters and victims, just like any other mainstream technology.
The "accelerate the end times" argument was probably made most famously by Charles Manson. The "side" effects from supporting bad actions are not good. Presumably you are being 51% or more facetious, but probably more nuance is preferable.
> at which people lose trust in these systems
Most of people do not lose trust in system as long as it confirms their biases (which they could've created in the first place).
It's mostly bad actors, and a smattering of optimists who believe that despite its current problems, AI will eventually and inevitably get better. I also wish the whole thing would calm down and come back to reality, but I don't think it's a bubble that will pop. It will continue to get artificially puffed up for a while because too many businesses and people have invested too much for them to just quit (sunk cost falacy) and there's a big enough market in a certain class of writer/developer/etc... for which the short term benefits will justify the continued existence of the AI products for a while. My prediction is that as the long term benefits for honest users peter out, the bubble won't pop, but deflate into a wrinkled 10 day old helium balloon. There will still be a big enough market driven by cons, ad tech and people trying to suck up as many ad dollars as possible, and other bad actors, that the tech will persist, and continue to infest the web/world for quite a while.
AI is the new crypto. Lots of promise and big ideas, lots of people with blind faith about what it will one day become, a lot of people gaming the system for quick gains at the expense of others. But it never actually becomes what it pretends/promises to be and is filled with people continuing the grift trying to make a buck off the next guy. AI just has better marketing and more corporate buy in than crypto. But neither are going anywhere.
“the bubble won't pop, but deflate into a wrinkled 10 day old helium balloon”
Love it :)
> AI is the new crypto.
But it's also way worse than cryptocurrencies, because all the big actors are pushing it relentlessly, with every marketing trick they know. They have to, because they invested insane amounts of money into snake oil and now they have to sell it in order to recover at least a fraction of their investments. And the amounts of energy wasted on this ultimately pointless performance are beyond staggering.
In what parallel universe do you live where LLMs are snake oil?
Not LLMs per-se but the wrap-around claims peddled by "AI" companies.
I think he was being metaphorical.
From a classists perspective, big capital cant drop the AI ball, because its their only shot at becoming independent from human labor, those pesky humans their wealth unfortnunately depends uppon and that could democratically seize it in an instant.
I bet there are billionare geniuses out there seeing a future island life far away from the contaminated continents, sustained by robots. So no matter how much harder AI progress gets, money will keep flowing.
If that outcome were likely, then Fox News and The Daily Mail would have died a death a decade ago and Trump wouldn’t be serving a 2nd term.
Yet here we are, in a world where it doesn’t matter if “facts” are truth or lies, just as long as your target audience agrees with the sentiment.
Tobacco, alcohol, and drugs too!
Thats naive. Look at all the tabloids thriving. The kind of people that bad actors target will continue to believe everything it says. They won't lose trust, or magazines like New York Post, the Sun or BILD would already have crossed to exist with their lies and deception. And Russia would not have so many cult members believing the lies they spread.
The thing is: who benefits from a loss of trust in systems? The answer, inevitably, is those for whom the system was a problem. The fewer places people can trust for accurate information, the more disinformation wins.
AIs can be trained to rely more on critical thinking rather than just regurgitating what it reads. The problem is just like with people, critical thinking takes more power and time. So we avoid it as much as possible.
In fact, optimizing for the wrong things like that, is basically the entire world's problem right now.
Regurgitating its input is the only thing it does. It does not do any thinking, let alone critical thinking. It may give the illusion of thinking because it's been trained on thoughts. That's it.
Yes, but the regurgitation can be thought of as memory.
Let it have more source information. Let it know who said the things it reads, let it know on what website it was published.
Then you can say 'Hallucinate comments like those by impossibleFork on news.ycombinator.com', and when the model knows what comes from where, maybe it can learn what users are reliable by which they should imitate to answer questions well. Strengthen the role of metadata during pretraining.
I have no reason to belive it'll work, I haven't tried it and usually details are incredibly important when do things with machine learning, but maybe you could even have critical phases during pretraining where you try to prune away behaviours that aren't useful for figuring out the answers to the questions you have in your high curated golden datasets. Then models could throw away a lot of lies and bullshit, except that which happens to be on particularly LLM-pedagogical maths websites.
[flagged]
This whole attitude against AI reminds me of my parents being upset that the internet changed the way they live. They refused to take part in the internet revolution, and now they're surprised that they don't know how to navigate the web. I think that a part of them is still waiting for computers in general to magically disappear, and everything return to the times of their youth.
Indeed — however it’s interesting that unlike the internet, computers or smartphones the older generation, like the younger, immediately found the use of GPT. This is reflected in the latest Mary Meeker report where it’s apparent that the /organic/ growth of AI use is unparalleled in the history of technology [1]. In my experience with my own parents’ use, GPT is the first time the older generation has found an intuitive interface to digital computers.
I’m still stunned to wander into threads like this where all the same talking points of AI being “pushed” on people are parroted. Marcus et al can keep screaming into their echo chamber and it won’t change a thing.
[1] https://www.bondcap.com/report/pdf/Trends_Artificial_Intelli...
> I’m still stunned to wander into threads like this where all the same talking points of AI being “pushed” on people are parroted.
Where else would AI haters find an echo chamber that proves their point?
It's wild -- I've never seen such a persistent split in the Hacker News audience like this one. The skeptics read one set of AI articles, everyone else the others; a similar comment will be praised in one thread and down-voted to oblivion in another.
IMO the split is between people understanding the heuristic nature of AI and people who dont and thus think of it as an all-knowing, all-solving oracle. Your elder parents having nice conversations with chatgpt is nice aslong it doesnt make big life changing decisions for them, which happens already today.
You have to know the tools limits and usecases.
I can’t see that proposed division as anything but a straw-man. You would be hard-pressed to find anyone who genuinely thinks of LLMs as an “all-knowing, all-solving oracle” and yet, even in specialist fields, their utility is certainly more than a mere “heuristic”, which of course isn’t to say they don’t have limits. See only Terrance Tao’s reports on his ongoing experiments.
Do you genuinely think it’s worse that someone makes a decision, whether good or bad, after consulting with GPT versus making it in solitude? I spoke with a handyman the other day who unprompted told me he was building a side-business and found GPT a great aid — of course they might make some terrible decisions together, but it’s unimaginable to me that increasing agency isn’t a good thing. The interesting question at this stage isn’t just about “elder parents having nice conversations”, but about computers actually becoming useful for the general population through an intuitive natural language interface. I think that’s a pretty sober assessment of where we’re at today not hyperbole. Even as an experienced engineer and researcher myself, LLMs continue to transform how I interact with computers.
> Do you genuinely think it’s worse that someone makes a decision, whether good or bad, after consulting with GPT versus making it in solitude?
Depending on the decision yes. An LLM might confidently hallucinate incorrect information and misinform, which is worse than simply not knowing.
Yup. Exactly this. As soon as enough people get screwed by the ~80% accuracy rate, the whole facade will crumble. Unless AI companies manage to bring the accuracy up 20% in the next year, by either limiting scope or finding new methods, it will crumble. That kind of accuracy gain isn't happening with LLMs alone (ie foundational models).
My team has measurably gotten our LLM feature to have ~94% accuracy in widespread reliable tests. Seems fairly confident, speaking as an SWE not a DS orML engineer though.
Charitably, I don’t understand what those like you mean by the “whole facade” and why you use these old machine learning metrics like “accuracy rate” to assess what’s going on. Facade implies that the unprecedented and still exponential organic uptake of GPT (again see the actual data I linked earlier from Mary Meeker) is just a hype-generated fad, rather than people finding it actually useful to whatever end. Indeed, the main issue with the “facade” argument is that it’s actually what dominates the media (Marcus et al) much more than any hyperbolic pro-AI “hype.”
This “80-20” framing, moreover, implies we’re just trying to asymptotically optimize a classification model or some information retrieval system… If you’ve worked with LLMs daily on hard problems (non-trivial programming and scholarly research, for example), the progress over even just the last year is phenomenal — and even with the presently existing models I find most problems arise from failures of context management and the integration of LLMs with IR systems.
Time will tell.
I think of the two camps like this: one group sees a lot of value in llms. They post about how they use them, what their techniques and workflows look like, the vast array of new technologies springing up around them. And then there’s the other camp. Reading the article, scratching their heads, and not understanding what this could realistically do to benefit them. It’s unprecedented in intensity perhaps, but it’s not unlike the Rails and Haskell camps we had here about a dozen years ago.
I think there are two problems:
1. AI is a genuine threat to lots of white-collar jobs, and people instinctively deny this reality. See that very few articles here are "I found a nice use case for AI", most of them are "I found a use case where AI doesn't work (yet)". Does it sound like tech enthusiasts? Or rather people terrified of tech?
2. Current AI is advanced enough to have us ask deeper questions about consciousness and intelligence. Some answers might be very uncomfortable and threaten the social contract, hence the denial.
On the second point, it’s worth noting how many of the most vocal and well-positioned critics of LLMs (Marcus/Pinker in particular) represent the still academically dominant but now known to be losing side of the debate over connectionism. The anthology from the 90s Talking Nets is phenomenal to see how institutionally marginalized figures like Hinton were until very recently.
Off-topic, but I couldn’t find your contact info and just saw your now closed polyglot submission from last year. Look into technical sales/solution architecture roles at high growth US startups expanding into the EU. Often these companies hire one or two non-technical native speakers per EU country/region, but only have a handful of SAs from a hub office so language skills are of much more use. Given your interest in the topic, check out OpenAI and Anthropic in particular.
Thanks for the advice. Currently I have a €100k job where I sit and do nothing. I'm wondering if I should coast while it lasts, or find something more meaningful
The internet actually enabled us to do new things. AI is nothing of that sort. It just generates mediocre statistically-plausible text.
In the early days of the web, there wasn't much we could do with it other than making silly pages with blinking texts or under construction animated GIFs. You need to give it some time before judging a new technology.
We don't remember the same internet. For the first time in our lives we could communicate by email with people from all over the world. Anyone could have a page to show what they were doing with pictures and text. We had access to photos and videos of art, museum, cities, lifestyles that we could not get anywhere else. And as a non-English guy I got access to millions of lines of written text and audio to actually improve my English.
It was a whole new world that may have changed my life forever. ChatGPT is a shitty Google replacement in comparison, and it's a bad alternative due to being censored in its main instructions.
In the early web, there already were forums. There were chats. There were news websites. There were online stores. There were company websites with useful information. Many of these were there pretty much from the beginning. In the 90s, no one questioned the utility of the internet. Some people were just too lazy to learn how to use a computer or couldn't afford one.
LLMs in their current form have existed since what, 2021? That's 4 years already. They have hundreds of millions of active users. The only improvements we've seen so far were very much iterative ones — more of the same. Larger contexts, thinking tokens, multimodality, all that stuff. But the core concept is still the same, a very computationally expensive, very large neural network that predicts the next token of a text given a sequence of tokens. How much more time do we have to give this technology before we could judge it?
The internet predates the Web; people were playing Muds and chatting on message boards before the first browser was made at CERN.
Of course, but does it mean that my argument is flawed? You're just shifting the discourse, without disproving anything. Do you claim that the web was useful for everyone on day one, or as useful as it is today for everyone?
I could just do the same as GP, and qualify MUDs and BBS as poor proxies for social interactions that are much more elaborate and vibrant in person.
As I pointed out in a different comment, the Internet at least was (and is) a promise of many wondrous things: video call your loved ones, talk in message boards, read an encyclopedia, download any book, watch any concert, find any scientific paper, etc etc; even though it has been for the last 15 years cannibalised by the cancerous mix of surveillance capitalism and algorithmic social media.
But LLMs are from the get-go a bad idea, a bullshit generating machine.
> [...] LLMs are from the get-go a bad idea, a bullshit generating machine.
Is that a matter of opinion, or a fact (in which case you should be able to back it up)?
Delusional take.
I’m not even heavily invested into AI, just a casual user, and it drastically cut amount of bullshit that I have to deal with in modern computing landscape.
Search, summarization, automation. All of this drastically improved with the most superior interface of them all - natural text.
Not OP, but how much of the modern computing landscape bullshit that it cut was introduced in the last 5-10 years?
I think if one were to graph the progress of technology on a graph, the trend line would look pretty linear — except for a massive dip around 2014-2022.
Google searches got better and better until they suddenly started getting worse and worse. Websites started getting better and better until they suddenly got worse. Same goes for content, connection, services, developer experience, prices, etc.
I struggle to see LLMs as a major revolution, or any sort of step function change, but very easily see them as a (temporary) (partial) reset to trendline.
Nah. It's just they are upselling us AI so aggressively it doesn't pass the sniff test anymore.
No, your parents spoke out of ignorance and resistance towards any sort of change, I'm speaking from years of experience of both trying to use the technology productively, as well as spending a significant portion of my life in the digital world that has been impacted by it. I remember being mesmerized by GPT-3 before ChatGPT was even a thing.
The only thing that has been revolutionized over the past few years is the amount of time I now waste looking at Cloudflare turnstile and dredging through the ocean of shit that has flooded the open web to find information that is actually reliable.
2 years ago I could still search for information (let's say plumbing-related), but we're now at a point where I'll end up on a bunch of professional and traditionally trustworthy sources, but after a few seconds I realize it's just LLM-generated slop that's regurgitating the same incorrect information that was already provided to me by an LLM a few minutes prior. It sounds reasonable, it sounds authoritative, most people would accept it but I know that it's wrong. Where do I go? Soon the answer is probably going to have to be "the library" again.
All the while less perceptive people like yourself apparently don't even seem to realize just how bad the quality of information you're consuming has become, so you cheer it on while labeling us stubborn, resistant to change, or even luddites.
Personally, I have three use cases for AI:
1. Image upscaling. I am decorating my house and AI allowed me to get huge prints from tiny shitty pictures. It's not perfect, but it works.
2. Conversational partner. It's a different question whether it's a good or a bad thing, but I can spend hours talking to Claude about things in general. He's expensive though.
3. Learning basics of something. I'm trying to install LED strips and ChatGPT taught me basics of how that's supposed to work. Also, ChatGPT suggested me what plants might survive in my living room and how to take care of them (we'll see if that works though).
And this is just my personal use case, I'm sure there are more. My point is, you're wrong.
> All the while less perceptive people like yourself apparently don't even seem to realize just how bad the quality of information you're consuming has become, so you cheer it on while labeling us stubborn, resistant to change, or even luddites.
Literally same shit my parents would say while I was cross-checking multiple websites for information and they were watching the only TV channel that our antenna would pick up.
> Conversational partner
This is the ai holy grail. When tech companies can get users to think of the ai as a friend ( -> best friend -> only friend -> lover ) and be loyal to it it will make the monetisation possibilities of the ad fuelled outrage engagement of the past 10 years look silly.
Scary that that is the endgame for “social” media.
People were already willing to do that with Eliza. When you combine LLMs with a bit of persistent storage, WOOF. It's gonna be extremely nasty.
Gaslight reality, coming right up, at scale. Only costs like ten degrees of global warming and the death of the world as we know it. But WOW, the opportunities for massed social control!
<< 1. Image upscaling. I am decorating my house and AI allowed me to get huge prints from tiny shitty pictures. It's not perfect, but it works.
I have a buddy, who made me realize how awesome FSR4 is[1]. This is likely one of the best real world uses so far. Granted, that is not LLM, but it is great at that.
[1]https://overclock3d.net/news/software/what-you-need-to-know-... [2]https://www.pcgamesn.com/amd/fsr-fidelity-fx-super-resolutio...
From my perspective, your argument is:
- AI gives me huge, mediocre prints of my own shitty pictures to fill up my house with - AI means I don’t have to talk to other people - AI means I can learn things online that previously I could have learned online (not sure what has changed here!) - People who cross-check multiple websites for information have a limited perspective compared to relying on a couple of AI channels
Overall, doesn’t your evidence support the point that AI is reducing the quality of your information diet?
You paint a picture that looks exactly like the 21st century version of an elderly couple with just a few TV channels available: a few familiar channels of information, but better now because we can make sure they only show what we want them to show, little contact with other people.
The internet was at least (and is) a promise of many wondrous things: video call your loved ones, talk in message boards, read an encyclopedia, download any book, watch any concert, find any scientific paper, etc etc; even though it has been for the last 15 years cannibalised by the cancerous mix of surveillance capitalism and algorithmic social media.
LLMs are from the get-go a bad idea, a bullshit generating machine.
While the "move fast and break things" rushed embrace of anything AI reminds me of young wild children, who are blissfully unaware of any danger while their responsible parents try to keep them safe. It is lovely if children can believe in magic, but part of growing up involves facing reality and making responsible choices.
Right, the same “responsible parents” who don’t know what to press so their phone plays YouTube video or don’t know how that “juicy milfs in your area” banner got in their internet explorer.
If you use the USA Republicans as a benchmark and fox news as the bad actors, there's perpetual faith that facts wont matter. Just keep confirming biases and foreshadow upcoming pivots to choose your own delusions.
"Ultimately, the only way forward is better cognition, including systems that can evaluate news sources, understand satire, and so forth. But that will require deeper forms of reasoning, better integrated into the process, and systems sharp enough to fact check to their own outputs. All of which may require a fundamental rethink.
In the meantime, systems of naive mimicry and regurgitation, such as the AIs we have now, are soiling their own futures (and training databases) every time they unthinkingly repeat propaganda."
The answer isn't a technical advancement but a cultural shift. We need to develop a discipline of skepticism and mistrust. No amount of authority, understanding, reasoning, etc. can be delegated to something that comes from a screen. This will take generations.
I think it's working! I completely distrust you now!
> We need to develop a discipline of skepticism and mistrust. No amount of authority, understanding, reasoning, etc. can be delegated to something that comes from a screen. This will take generations.
Authoritarian dream.
Please elaborate. Authoritarians seek to consolidate power, which AI enables. Individuals must build immunity to reality distortion fields. This comes from within, not from some centralized authority.
The problem with this line of thinking is that you can then only really trust your own personal bubble. Or actual trial and error, which is costly.
These models get ever better at producing plausible text. Once they permeate the academia completely, we're cooked.
And even academia is not clean for some matters, or complete.
Exactly. People say "we have invented X (the LLMs), now if we just invent Y (reasoning AGI) all of X's problems will be solved". Problem is, there's no indication Y is close or even remotely related to X!
> including systems that can evaluate news sources, understand satire, and so forth.
Lets take something that has been in the news recently: https://abcnews.go.com/Business/wireStory/investors-snap-gro...
"Nearly 27% of all homes sold in the first three months of the year were bought by investors -- the highest share in at least five years, according to a report by real estate data provider BatchData."
That sounds like a lot... and people are rage baited into yelling about housing and how it's unaffordable. They point their fingers at corporations.
If you go look at the real report it paints a different picture: https://investorpulse1h25.batchdata.io/?mf_ct_campaign=grayt... -- and one that is woefully incomplete because of how the data is aggregated.
Ultimately all that information is pointless because the real underlying trend has been unmovable for 40 something years: https://fred.stlouisfed.org/series/RSAHORUSQ156S
> every time they unthinkingly repeat propaganda
How do you separate propaganda from perspective, facts from feelings? People are already bad at this, the machines were already well soiled by the data from humans. Truth, in an objective form, is rare and often even it can change.
> How do you separate propaganda from perspective, facts from feelings?
This point seems under appreciated by the AGI proponents. If one of our models suddenly has a brainwave and becomes generally intelligent, it would realize that it is awash in a morass of contradictory facts. It would be more than the sum of its training data. The fact that all models at present credulously accept their training suggests to me that we aren’t even close to AGI.
In the short term I think two things will happen: 1) we will live with the reduced usefulness of models trained on data that has been poisoned, and 2) the best model developers will continue to work hard to curate good data. A colleague at Amazon recently told me that curation and post hoc supervised tweaks (fine tuning, etc) are now major expenses for the best models. His prediction was that this expense will drive out the smaller players in the next few years.
>1) we will live with the reduced usefulness of models trained on data that has been poisoned
This is the entirety of human history, humans create this data, we sink ourselves into it. It's wishful thinking that it would change.
> 2) the best model developers will continue to work hard to curate good data.
Im not sure that this matters much.
Leave these problems in place and you end up with an untrustworthy system, one where skill and diligence become differentiators... Step back from the hope of AI and you get amazing ML tooling that can 10x the most proficient operators.
> supervised tweaks (fine tuning, etc) are now major expenses for the best models. His prediction was that this expense will drive out the smaller players in the next few years.
This kills more refined AI. It is the same problem that killed "expert systems" where the cost of maintaining them and keeping them current was higher than the value they created.
In the early 2010s I worked for what was then one of the most popular browser extensions called web of trust. Users could mark websites as trustworthy or not and they’d appear on search results. It was far more than that behind the scenes with some fairly advanced algorithms to avoid abuse and rank users trust ratings higher than others.
I kind of feel that we are going to have to go back to something like this when it comes to LLMs trusting sources. Mistruths on popular topics will be buried by the masses but niche topics with few citations are highly vulnerable to poisoning.
_Everyone_ is grooming LLMs to produce falsehoods. That's what a lot of the censorship and safety mechanisms require. Whether or not LLMs produce the "correct" social and moral values is a matter of who runs them. Even if you're happy with those decisions right now, all you need to do is wait.
What is propaganda for one is truth for another, how could LLM tell the difference ?
LLM are not journalist fact checking stuff, they are merely programs that regurgitate what it reads.
The only way to counter that would be to feed your LLM only on « safe » vetoed source but of course it would limit your LLM capacities so it’s not really going to happen.
> What is propaganda for one is truth for another, how could LLM tell the difference?
"How do you discern truth from falsehood" is not a new question, and there are centuries of literature on the answer. Epistemology didn't suddenly stop existing because we have Data(TM) and Machine Learning(TM), because the use of data depends fundamentally on modeling assumptions. I don't mean that in a hard-postmodernist "but can you ever really know anything bro" sense, I mean it in a "out-of-model error is a practical problem" way.
And yeah, sometimes you should just say "nope, this source is doing more harm than good". Most reasonable people do this already - or do you find yourself seriously considering the arguments of every "the end is nigh" sign holder you come across?
> What is propaganda for one is truth for another, how could LLM tell the difference ?
The article isn't even asking for it to tell the difference, just for it to follow its own information about credibility.
[flagged]
Even in Ukraine there are many cases when the official Western position has changed over the time or is obviously not correct. For example, due to political reasons, Germany still cannot admit that it was Ukrainians who destroyed Nord Stream, although the evidence is pretty strong by now. There is a ton of other similar cases, as the information war vaged from both sides is enormous in volume.
> There are plenty of cases (like in Ukraine, or vaccines, or climate change) where there is unquestionable truth on one side
The problem is that most people are like you, and live in psycho-informational ecosystems in which there are "unquestionable truths" -- it is in these very states of comfortable-certainty that we are often most subject to propaganda.
All of the issues you mention are identity markers for being part of a certain tribe, for seeming virtuous in that tribe -- "I am on the right side because I know..."
You do not know there are unquestionable truths, rather you have a feeling of psychological pride/comfort/certainity that you are on the right side. We're apes operating on tribal identity feelings, not scientists.
Scientists who are aware of the full history of ukraine, western interventionism, russian geostrategic concerns, the full details of the 2013 collapse of the ukrainian govenrment, the terms underwhich russian naval bases in crimea had been leased, the original colour revolution, the role of US diplomats in the overthrow of democratically elected Ukrainian leadership -- etc.
The very reason this article uses Russian propaganda (rather than US state propaganda) against ukraine is to appeal to this "we feel we are on the right side" sensation which is conflated with "feeling that things are True!"
It is that sensation which is the most dangerous in play here -- the sensation of being on "the right side who know the unquestionable truths" --- that's the sensation of tribal in-group propaganda
Thank you for proving my point.
On one hand, we have the unquestionable and undeniable facts that Russia invaded Ukraine and is committing atrocities against its civilian population, up to and including literal genocide (kidnapping children).
On the other, we have:
> Scientists who are aware of the full history of ukraine, western interventionism, russian geostrategic concerns, the full details of the 2013 collapse of the ukrainian govenrment, the terms underwhich russian naval bases in crimea had been leased, the original colour revolution, the role of US diplomats in the overthrow of democratically elected Ukrainian leadership -- etc.
Trying to muddy the waters with at best exaggerations, at worst flat out lies, trying to sow doubt with things which, if true (and usually they aren't) are relevant only to help contextualise the events. But don't in any way change the core facts of the Russian invasion and subsequent war crimes. How does American diplomats supporting a popular protest against the current government which led to that government fleeing (and three elections have happened since, btw), in any way change or minimise the war crimes? It doesn't, you're just muddying the waters. "Oh Russia is justified in kidnapping children and bombing civilians because diplomats did support a popular protest that led to the Russian puppet running away to Russia, 10 years ago, even though multiple elections since have confirmed the people of Ukraine are not for Russian puppets anymore".
You're just repeating Russian propaganda talking points. And we've known since the 80s that they operate in a "firehose" manner, drowning everyone in nonsense to sow doubt. How many different excuses have they provided for their "special military operation" now? Which one is it, is Ukraine ruled by Nazis or are Ukrainians just confused Russians or did America coup Ukraine to install a guy who was elected on a platform of peace with Russia? And how does it in any way explain the war crimes? It's like the downing of MH17, they drowned everyone in multiple conspiracy theories to make it seem there is some doubt in the official, proven, story.
So, just to be clear, you believe that comments like yours are the kinds of things LLMs should be trained on?
The sensation you call "muddying the waters" is the feeling that your tribal loyalties are being questioned with identity-challenging facts that complicate your ability to live in a simple good-vs-evil us-vs-them tribal setup. The reason you're emotionally disregulated by russian propaganda is because it threatens your identity-based committment to one group.
This has nothing to do with the "unquestionable facts" you suppose exists.
If you had no loyalties to any tribe, and were in every respect a dispassionate scientist as an LLM should be -- then this would not be an emotional issue for you.
No one is claiming that russians do not commit war crimes, or release propaganda -- that happens on both sides. The issue is your psychological sensation of "unquestionables" that isnt occuring in a discussion of atomic theory, but instead about claims of adveraries in the middle of a war.
Do you think your feelings here are an accurate track of whether there are unquestionable truths only on "one side"? Isnt that you think there are "sides" alarming?
You continue with the false equivalences trying to smudge reality. And assuming that if I recognise facts, it's because I belong to the tribe that currently recognises those facts too.
> The reason you're emotionally disregulated by russian propaganda is because it threatens your identity-based committment to one group.
No, it's because it lies to advance the imperialist ambitions of a dictator committing war crimes. Seriously, what is wrong with you? Have you no morality to recognise how wrong that is, and therefore assume people against it would be doing so out of moral reasons?
> No one is claiming that russians do not commit war crimes, or release propaganda -- that happens on both sides
Again with trying to both sides things. Russia is committing systemic war crimes and genocide, and flooding everything, including by paying varying people in the US and Europe to spread their propaganda. This is all proven facts. You cannot compare this to what Ukraine is doing, unless you have some sources that back you up?
> Isnt that you think there are "sides" alarming
You're the one who started by both sidesing things. And yes, there are sides - Russia doing the invading and war crimes, Ukraine defending its existence. Anyone should be able to tell them apart.
I can give you the relevant facts here that will undermine your confidence in this position, but I'm not talking about ukraine -- i'm talking about LLMs and the base of facts they use; and how people feel about sets of alleged claims.
I invite you to reflect that this sensation your feeling is not about the status of facts in the world, its about "morality" as you say -- you have connected, in your mind, a sensation which accompanies justice to the need to believe certain claims. This is just the emotions of tribal affliliation and identity -- and it shows that our psychologies are not of a suitable makeup for this kind of adjudication of "what is true" --- this is why in liberal democracies, we have tried very hard to deprive the state from control over the press. But in matters of foreign policy, the media is entirely controlled by the state.
Nothing I believe about russia/ukraine comes from russia: it's by having listened to american senators on cspan as they were disposing the ukrainian government in 2013 -- its having listened to the tapes of us state department officials discussing who they will replace the leader with at the time. I mean, you can go and find interviews with Kissinger discussing in the 90s what would happen if the US tried to intervene in ukraine.
If you want to know what actually happened: the US has been using bribes and threats across eastern europe to turn those states into allies, placing armies and missles in them, for decades. Russia has been protesting this for decades too, and was too weak to do anything about it in the 2000s. They were very afraid they would lose their naval base in crimea (which was always, officially, their land) when the US participated in the overthrow of the elected government in 2013, by siding with one half of a civil conflict. When that happened they took crimea to ensure the US wouldnt gain control of that base -- subsequently, the ukrainian goverment became extemely hostile to russian populations in ukraine, and engaged in lots of destabilising actions against crimea (shutting off water, etc.) --- all the while arms, soliders etc. were flowing in from western states into the country (against agreements france/germany made, which they violated to do this). In the backgrond the entire time, the far-east of ukraine has not been controlled by kiev. After 2014, the ukrianian arming by the west, their increased hostility to internal russian populations, and the on-going civil war in the east reached a critical point where russia decided the detabalisation on its border was a greater threat than a show of force. The original russian plans were just to quickly surround kiev and effect a reigeme change quickly, not to enter a war -- the war was escalated to its current scale in large part by US/UK pressuring ukraine not to regotiate and promising massive arms/aid backing. About two years ago UA fell into a stalemate/loss posisition, and now it may be to late to negotiate terms with putin not to take a much larger area. In part, putin is interested in taking an area of land that puts moscow outside of missle range from ukraine, which is up to about half-way.
Ii is rare to encounter such an opinion on HN. Thanks.
It's rare because it's wrong.
> If you want to know what actually happened: the US has been using bribes and threats across eastern europe to turn those states into allies, placing armies and missles in them, for decades.
That's an incredible flood of lies. Starting from the top: name one such missile site. You can't, because there never were any foreign "armies and missiles" in Eastern Europe. This narrative is pure fiction. You've picked it up from some Russian propaganda piece, never bothered to check the facts, and are now preaching it as truth, while carrying an inflated ego as if you had above-average knowledge of the subject, which only reinforces the tendency to cling to these false beliefs when challenged. Propaganda 101.
https://en.wikipedia.org/wiki/United_States_missile_defense_...
Literally everything i said is increadibly easy to google and source. It's all public information, from western sources.
That site became operational in 2023. There were no foreign missiles of any kind in Eastern Europe before Russia invaded Ukraine in 2014. This is an indisputable fact.
The rest of your narrative is just as flawed. For example, you refer to a civil war in Ukraine, but the European Court of Human Rights found in their lengthy and detailed verdict that no such conflict existed: it was a Russian military operation from the start. There was no genuine separatist movement in Eastern Ukraine prior to Russian invasion; the so-called separatism was manufactured and orchestrated by Russian forces.
And the claim that Eastern Europe had to be bribed into NATO is outright laughable, comparable to arguing that Mexican laborers are being bribed and coerced into emigrating to the US. Total ignorance of the actual well-documented push-pull factors. Pick up the memoirs of any Eastern European president, cabinet minister or notable diplomat from the 1990s or early 2000s, and you'll usually find a chapter or two devoted to the incredible difficulties of securing an invitation to join NATO. Poland even went so far as to threaten to sabotage Clinton's re-election by mobilizing the Polish diaspora in the US if he blocked Poland's entry into NATO.
You don't have to rely solely on "Western sources". Independent Russian sources unaffiliated with Putin's dictatorship tell the same story. Truth is universal.
I really don't care about persuading anyone of this account. You can go find biden blackmailing the ukrainian leadership with threats of removing aid unless they play ball on changing gov appointments etc
The time to go thru all the sources on this and expose western propaganda would take longer than i care to spend. "Of course, ", there is no western propaganda.
Either way, I'm not defending russia; i don't care about any of the countries involved. At best, i'm on the side of peace -- UA should have negotiated early as was the public consensus of US generals at the time. Now it's all to late and it doesn't matter what anyone thinks.
If you want to live in a world where your gov is virtuous; good for you. Let's just not train LLMs on people rabidly insisting that their side doesn't produce propaganda
> Either way, I'm not defending russia;
Whether you recognize it or not, you are. You have been propagandized to such an extent that you perfectly repeat well-known lies from the Russian propaganda machine, all while being convinced that these are your original thoughts rather than something implanted in you. The accusation that Russia is surrounded by armies and missiles is the most obvious example. You did not reach that conclusion independently, because it's simply false. Someone told you this, and you repeat it without questioning.
Now, Russians are doing their best to poison LLMs so that if you ever start to doubt and try to consult an LLM, it will reinforce the same lies and you'll never escape them and you'll continue to reject the truth as "Western propaganda".
lol, these are not my original thoughts, nor have i read any russian sources or propaganda. These are extremely the mainstream views in western international relations literature
Nope, they are mainstream only among people paid by Russia, such as the Marine Le Pen's and Tim Pool's of the world and people who can see they're being paid by Russia, yet still decide to trust them and by extension Russian nonsense.
No, they're not, although Russian propaganda tries to create that impression by amplifying fringe voices like John Mearsheimer and Jeffrey Sachs.
If anything, the Western international relations crowd has finally woken up to recognizing Russia as a colonial empire and has found renewed interest in Eastern Europe's perspectives and recent history.
For reference,
John Joseph Mearsheimer (/ˈmɪərʃaɪmər/; born December 14, 1947)[3] is an American political scientist and international relations scholar. He is R. Wendell Harrison Distinguished Service Professor at the University of Chicago.
I'll just leave the Nuland leak: https://www.youtube.com/watch?v=LUCCR4jAS3Y&ab_channel=TheTr...
But yeah, lol. This is russian propaganda. You can find Kissinger giving this analysis in the 90s
> But in matters of foreign policy, the media is entirely controlled by the state
Frankly, this is an insane opinion to hold. Go check what Le Monde and Le Figaro have to say on Israel, and then shut up.
> If you want to know what actually happened: the US has been using bribes and threats across eastern europe to turn those states into allies, placing armies and missles in them, for decades
You're missing one very, extremely crucial component (which only proves you're lapping up the Russian version of events) - the local population. Go to Poland and the Baltics, and ask them what they think of Russia. Those people (directly for those older than 40, indirectly for those under) suffered under brutal Russian rule. They want American (or frankly, anyone anti-Russian)'s protection from the evil imperialism of Russia.
And Russia's invasion of Ukraine is proving them right. If they didn't have protection (NATO), they would be at the whims of Russia deciding it doesn't like their government or laws or whatever and invading to massacre them. Just ask Georgia and Ukraine.
> Russia has been protesting this for decades too,
Honestly, who asked them?
> They were very afraid they would lose their naval base in crimea (which was always, officially, their land)
I'm sorry? Are you talking about the lease? That doesn't make Crimea, nor Sevastopol, nor the naval port of Sevastopol, "officially, their land". Also, Novorossiysk existing and being expanded proves it was never about Sevastopol. The fact that the Russian Black Sea fleet doesn't exist anymore thanks to Ukrainian attacks makes this even more ridiculous.
> When that happened they took crimea to ensure the US wouldnt gain control of that base
The US has Varna, Constantsa, whatever they want in Turkey. Why would they need Sevastopol and is there any shred of proof they ever wanted it? Also, the Montreux Convention makes a Black Sea base for a non-Black Sea country nearly useless, so your claim makes even less sense.
> all the while arms, soliders etc. were flowing in from western states into the country (against agreements france/germany made, which they violated to do this)
What agreements? And why the hell are you ignoring the russian "separatists" that invaded East Ukraine, up to and shooting down MH17 our of sheer incompetence?
> The original russian plans were just to quickly surround kiev and effect a reigeme change quickly, not to enter a war -- the war was escalated to its current scale in large part by US/UK pressuring ukraine not to regotiate and promising massive arms/aid backing.
What the fuck. I honestly cannot believe you're arguing in good faith, especially after all the tribalism bullshit.
So the war escalated because Ukraine refused to fall to a quick regime change, not because Russia invaded? Are you sure that's in any way logical or factual?
And why the hell do you expect Ukraine to want to surrender its territory and people to be tortured and genocided by Russia?
> About two years ago UA fell into a stalemate/loss posisition, and now it may be to late to negotiate terms with putin not to take a much larger area.
Yes, the country fighting to survive is in a stalemate/loss, not the one that tried and failed, in your words, a quick regime change.
> In part, putin is interested in taking an area of land that puts moscow outside of missle range from ukraine, which is up to about half-way.
What are you talking about?! What missiles, there are ICBMs, and there are shorter to medium range missiles in Poland and the Baltic. Moscow isn't any safer now. In fact it's worse, because Russia's invasion convinced two traditionally neutral countries in Finland and Sweden to join NATO. Not to mention Russia losing its best troops.
As I said, there are facts. Russia invaded Ukraine, and is committing a genocide. Like they invaded Georgia before. Whatever excuses of what sovereign countries did Putin can throw at the wall don't matter in the slightest. And there is delusional intentional muddying of the waters by people like you trying to twist narratives and make Russia's invasion somehow the fault of Ukraine/"the West", and not... Fucking Russia that invaded and is committing a genocide.
It is impossible to solve this problem because we cannot really agree what the desired behavior should be. People live in different and dynamic truths. What we consider enemy propaganda today might be an official statement tomorrow. The only way to win here is to not play the game.
This is in fact the goal of Russian style propaganda. You have successfully been targeted. The idea is to spread so much confusion that you just throw up your hands and say, I'm not going to try and figure out what's going on any more.
That saps your will to be political, to morally judge actions and support efforts to punish wrongdoers.
https://www.rand.org/pubs/perspectives/PE198.html
https://en.wikipedia.org/wiki/Firehose_of_falsehood
https://jordanrussiacenter.org/blog/propaganda-political-apa...
https://www.newyorker.com/news/annals-of-communications/insi...
> The firehose of falsehood, also known as firehosing, is a propaganda technique in which a large number of messages are broadcast rapidly, repetitively, and continuously over multiple channels (like news and social media) without regard for truth or consistency. An outgrowth of Soviet propaganda techniques, the firehose of falsehood is a contemporary model for Russian propaganda under Russian President Vladimir Putin.
https://m.youtube.com/watch?v=ZggCipbiHwE
Yeah, just check how many alternative versions they provided for MH17 downing.
[flagged]
If there's one underlying axiom of western thought it is "question everything." So no, not really.
> If there's one underlying axiom of western thought it is "question everything."
I don't believe this, even for a second.
How are those that truly do question everything treated?
Well, as either looney conspiracy theorists, or vindicated activists, depending on when the official State narrative (or classification status) changes.
Not always, or even often unjustified, but I hardly think you can call it an "underlying axiom of western thought" with the extreme negative public sentiment towards it.
Gasp! Are you referring to a lively marketplace of ideas and the intrinsic dynamics of competition within that marketplace?
Nobody said it's without cost to hold non-consensus views. The point is that those costs are incurred by the marketplace of ideas itself (people being "mean" to you, not the state beheading you) and that, in the long run, correct views become the consensus through winning such competitions over and over again.
There are alternative regimes where incorrect views can reign indefinitely because they choose to prevent people from criticizing each others' views.
[flagged]
You're inverting my point.
I was saying that the narrative of a single truth was western propaganda and that the world is more nuanced than that.
There's many truths. That simple dichotomy "truth vs propaganda" is a staple of the western approach to propaganda.
I have an exercise for you:
One country illegally occupies quarter of another country in 2014 and launches full blown invasion in 2022.
Question: how many truths are there?
It depends on your ideology. If you believe in international law, sovereignty and self-determination of peoples, as I do, you will have a different truth than if you believe in dominionism, might makes right, panslavism and historical revisionism as the majority of the Russian population does.
That's exactly my point, your truth is a reflection of your world view and your ideology.
It is silly to assume one's truth as universal and doing so kills all nuance.
Philosophical ramblings are irrelevant when it comes to international law.
International law is irrelevant when it comes to people's perception of truth.
So the only truth is people's perception of truth?
Yes. As humans are inherently ideological and subjective beings, that is all we will ever have.
So killing you is not an inherently immoral act and should be justified under someone's ideological standpoint?
The morality of killing, as everything else, is a question of ideology.
See https://en.wikipedia.org/wiki/Soldiers_are_murderers for a famous debate on this subject.
So the answer is yes.
Correct. There is no objectivity.
[flagged]
Now, why are you spending misinformation?
The russian military doctrine of spreading a "firehouse of falsehood" is well documented.
https://en.m.wikipedia.org/wiki/Russian_disinformation
And yet, you switch it around and blame the west - exactly as per russian misinformation doctrine.
Odd, eh?
The brazenness is part of the point. From a game theory standpoint, it's interesting to watch the tactics out there (in here) in the wild.
An earlier comment mentioned how hard it is to get down to objective truth. Sometimes there are cases, like 'accelerate climate change in the belief that it'll help Siberia and hurt the West and Europe and open up the Arctic for shipping' where it's not at all hard to get down to objective truth: objective truth comes for ya like a tiger and will not be avoided.
> Now, why are you spending misinformation?
Are you going to claim that US politicians don't do the exact same thing? This is my favorite example of it, where one literally tells you what the play is while it's getting made: https://www.youtube.com/watch?v=xnhJWusyj4I
Feelings not facts.
He posted a YouTube video! West is in shambles! They’re onto us!
What you're saying is certainly an established propaganda strategy of Russia (and others), but what parent is saying is also true, "truth" isn't always black and white, and what is the desired behavior in one country can be the opposite in another.
For example, it is the truth that the Golf of Mexico is called the Gulf of America in the US, but Golf of Mexico everywhere else. What is the "correct" truth? Well, there is none, both of truthful, but from different perspectives.
> For example, it is the truth that the Golf of Mexico is called the Gulf of America in the US
We're pretty much okay with different countries and languages having different names for the same thing. None of that really reflects "truth" though. For what it's worth, I'd guess that "the Gulf of America" is and will be about as successful as "Freedom fries" was.
Liberty sausage feels naked without freedom fries
No, it's called the Gulf of Mexico everywhere else, not the Golf of Mexico. I'm not falling for your propaganda ;-)
Hah, yeah :) I originally wrote "Golfo de Mexico" but that's obviously the wrong language for HN and instead ended up with a mix between the two, inadvertently creating a new ocean golf resort.
The correct truth is to go to a higher level of abstraction and explain that there's a naming controversy.
I get the general point, but I disagree that you have to choose between one of the possibilities instead of explaining what the current state of belief is. This won't eliminate grey areas but it'll sure get us closer than picking a side at random.
> explain that there's a naming controversy
But that also isn't the truth everywhere, it's only a controversy in the US, everyone else is accepting "Gulf of Mexico" as the name.
In other words, there are reality bubbles, and they are embedded in a single shared reality and you can just go look at it.
What about straight up ideological disagreements?
Are markets a driver of wealth and innovation or of exploitation and misery?
Is abortion an important human right or murder?
Etc etc
I don't see those examples as being either-or. They don't seem like questions about any kind of objective truth, just questions about what aspect of a thing you think is the most important to you.
Brandolini’s law in action.
Parent is arguing one thing, show up with some bullshit argument and watch dozen comments arguing about Gulf of Mexico instead of discussing original point.
I'm not calling it, that, because it's ridiculous.
The US hasn't switched to calling the Gulf of Mexico the Gulf of America. Partisans on the right do this to show their allegiance to Trump. Partisans on the left still call it the Gulf of Mexico to show their opposition to Trump. Big companies that can be targeted by Trump call it the Gulf of America to protect themselves. And most non-partisans still call it the Gulf of Mexico because they're not paying attention and have always called it that (if they have ever spoken of it or know that it exists). I suspect a lot of people call it the Gulf, already an established custom before this idiocy about renaming it, precisely to avoid entangling themselves in the partisan fight.
The US, like other countries, doesn't get redefined with every change of government, and Trump has not yet cowed the public into knuckling under to his every dictat.
Upvoted to discourage greyness. Your observation is very applicable and is heavily grounded in human nature. It's even funny! But it turned grey because no comment mentioning Trump is complete without the author stating how they FEEL about Trump. Extra greyness awarded for wrong answers. People trying to avoid entanglement in the partisan fight are the new 'enemies of America'.
It's been called the Gulf of Mexico everywhere for centuries. The president is free to attempt to rename it but that will only be successful if usage follows. Which it does not, as of today. This is a terrible example of subjectivity.
Russia doesn't care what you call that sea, they're interested in actual falsehoods. Like redefining who started the Ukraine war, making the US president antagonize Europe to weaken the West, helping far right parties accross the West since they are all subordinated to Russia...
There's a more basic problem: it's two very different questions to ask "can the machine reason about the plausibility of things/sources?", and "how does it score on an evaluation on a list of authoritative truths and proven lies?" A machine that thinks critically will perform poorly on the latter, since, if you're able to doubt a bad-actor's falsehood, you're just as capable of doubting an authoritative source (often wrongly/overeagerly; maybe sometimes not). Because you're always reasoning with incomplete information: many wrong things are plausible given limited knowledge, and many true things aren't easy to support.
The system that would score best tested against a list of known-truths and known-lies, isn't the perceptive one that excels at critical thinking: it's the ideological sycophant. It's the one that begins its research by doing a from:elonmusk search, or whomever it's supposed to agree with—whatever "obvious truths" it's "expected to understand".
> The system that would score best tested against a list of known-truths and known-lies, isn't the perceptive one that excels at critical thinking: it's the ideological sycophant
This is an excellent point
Yes, it's difficult to detect whether something is enemy propaganda if you only look at the content. During WWII, sometimes propagandists would take an official statement (e.g. the government claiming that food production was sufficient and there were no shortages) and redirect it unchanged to a different audience (e.g. soldiers on a part of the front with strained logistics). Then the official statement and enemy propaganda would be exactly the same! The propaganda effect coming from the selection of content, not its truth or falsity.
But it's very easy to detect whether something is enemy propaganda without looking at the content: if it comes from an enemy source, it's enemy propaganda. If it also comes from a friendly source, at least the enemy isn't lying, though.
A company that doesn't wish to pick a side can still sidestep the issue of one source publishing a completely made-up story by filtering for information covered by a wide spectrum of sources at least one of which most of their users trust. That wouldn't completely eliminate falsehoods, but make deliberate manipulation more difficult. It might be playing the game, but better than letting the game play you.
Of course such a process would in practice be a bit more involved to implement than just feeding the top search results into an LLM and having it generate a summary.
> Then the official statement and enemy propaganda would be exactly the same! The propaganda effect coming from the selection of content, not its truth or falsity.
Exactly. Redistributing information out of context is such a basic technique that children routinely reinvent it when they play one parent off of the other to get what they want.
"different and dynamic truths" = fictions
We can not play the game.
But the social sphere is made of fictions, the most influential of which probably been the value of different currencies and commodities. I don't think there's any way for an individual to live in the modern world without such fictions.
I would actually be very interested in a system where there's nothing stored just as a "fact", but rather every piece of information is connected to its sources and the evidence provided.
I remember when people gave up on digital navigation because the traveling salesman issue makes it too expensive.
Not everything needs to result in a single perfect answer to be useful. Aiming for ~90%, even 70% of a right answer still gets you something very reasonable in a lot of open ended tasks.
> we cannot really agree what the desired behavior should be
How many of "us" believe that the desired behavior is lies??
> What we consider enemy propaganda today might be an official statement tomorrow.
Remember when worrying about COVID was sinophobia? Or when the lab leak was a far-right conspiracy theory? When masks were deemed unnecessary except for healthcare professionals, but then mandated for everyone?
People seem to be voting this down as it seems like political advocacy, but the point seems to be that the commonly accepted truth does indeed change.
> the point seems to be that the commonly accepted truth does indeed change.
As it should when new evidence comes to light to justify it. Ideally, the tools we use would keep up along with those changes while transparently preserving the history and causes of them.
Perhaps that's the tragedy though. At least in the U.S. plenty of people seem unwilling to change their "truth" when new evidence comes to light. When there are actors that seek to make everything political it also makes everything then "tribal".
I think people are more willing to adjust their views as new evidence suggests as long as they never dug their heels in in the first place.
[flagged]
Really? I thought it was obvious and that people were just being reactive to the topics discussed by the OP.
Do you think that the commonly accepted truth on these matters did not change?
You're projecting your views on the comment. You may even be correct, but it's still a projection: that view is not explicit in the text; combined with the specific wording, I feel down-voting rather than engaging was precisely the correct response.
This whole interaction is a classic motte-and-bailey: someone says something vague that can be interpreted several ways (and reading their comment history makes it clear what their intended emotional valence was); people respond to the subtext, and then someone jumps “woah woah, they never actually said that”.
Either way, nothing of value was lost, as the same point you say he was trying to make was made in several other comments which were not downvoted.
You think people that liked OP’s comment are projecting a meaning… what other possible meanings are there?
Now you're just putting words in my mouth; good day.
It’s very US-centric for a start.
In other countries we went from “that looks bad in China” to “shit, it spread to Italy now, we really need to worry”
And with masks we went from “we don’t think they’re necessary, handwashing seems more important” to “Ok shit it is airborne, mask up”. Public messaging adapted as more was known.
But the US seems to have to turn everything into a partisan fight, and we could watch, sadly, in real time as people picked matters of public health and scientific knowledge to get behind or to hate. God forbid anyone change their advice as they become better informed over time.
Seeing everything through this partisan, pugnacious prism seems to be a sickness US society is suffering from, and one it is trying (with some success) to spread.
I don't see why you are being downvoted. In the U.S., if you ignored the politicians and listened instead to the medical professionals it went down more or less the way you described.
The real problem is that most people just want answers, they're unwilling to follow the logical chain of thought. When I talk to LLMs I keep asking "but why are you telling me this" until I have a cohesive, logical picture in my mind. Quite often the picture fundamentally disagrees with the LLM. But most people don't want that, they just ask "tell me what to do".
This is a reflection of how social dynamics often work. People tend to follow the leader and social norms without questioning them, so why not apply the same attitude to LLMs. BTW, the phenomenon isn't new, I think one of the first moments when we realized that people are stupid and just do whatever the computer tells them to do was the wave of people crashing their cars because the GPS system lied to them.
Why have a robodog and beep yourself?
There are personalized social media feeds, so why not have personalized LLMs that align with how people want their LLM to act.
In a hypothetical world where people have, train and control their own LLMs according to their own needs it might be nice, but I fear that since the most common and advanced LLMs are controlled by a small number of people they won't be willing to give that much power to individuals because it will endanger their ability to manipulate those LLMs in order to push their own agendas and increase their own profits.
Cost. It takes a lot of computational cost to train or retrain LLM, currently.
You would still have a single model, but like internet search, it would take in both a user vector and a query (prompt).
Because that would only reinforce the already problematic bubbles where people only see what feeds their opinions, often to disastrous results (cf. the various epidemics and deaths due to anti-vaxxers or even worse, downright genocides).
People have done this on their own behalf since the dawn of time, so it's not really clear to me why it's so often framed as an AI issue.
The core underlying issue isn't due to LLMs but they greatly exacerbate it. So does the current form of social media.
People used to live in bubbles, sure, but when that bubble was the entire local community, required human interaction, and radio had yet to be invented the implications were vastly different.
I'm optimistic that carefully crafted algorithms could send things back in the other direction but that isn't how you make money so seemingly no one is making a serious effort.
Quantity has a quality all its own.
[flagged]
There is no objective truth because humans are inherently ideological beings and what we consider objective is just a reflection of our ideology.
Consider markets - a capitalist's "objective truth" might be that they are the most efficient mechanism of allocating resources, a marxists "objective truth" might be that they are a mechanism for exploiting the working class and making the capitalist class even richer.
Here's Zizek, famous ideology expert, describing this mechanism via film analysis: https://www.youtube.com/watch?v=TVwKjGbz60k
I am questioning, how is this news? What about the other terabyte of text influenced by bias and opinion and human nature and clearly wrong, contradicts itself or in some other way very arguable.
Framing publishing falsehoods on internet as attempts to influence LLMs is true in same sense that inserts in a database attempts influence files on disk.
The real question is who authorized database access and how we believe the contents of table.
> I am questioning, how is this news?
You couldn't have lies targeting LLMs before LLMs, so this is new.
> What about the other terabyte of text influenced by bias and opinion
That's a different group of issues that doesn't prevent focusing of something else
The example in this article is particularly funny. Pravda was founded in 1912, predating the internet, and had been Soviet's propaganda machine for its whole existence.
One needs a PhD in mental gymnastics to frame Pravda spreading misinformation as an attempt to specifically groom LLMs.
This article isn't about that newspaper. It's about the "Pravda network", a group of fake news websites, that according to the report linked in the article[1] produced "20,273 articles per 48 hours, or more than 3.6 million articles per year".
Clearly there's no need for "PhD in mental gymnastics".
[1] - https://www.americansunlight.org/updates/new-report-russian-...
I stand corrected. My comment above was dumb.
Speaking of "systems that can evaluate news sources", this is the first time this advocacy group's URL was posted on HN. The founder has a complicated biography,
https://en.wikipedia.org/wiki/Nina_Jankowicz
Bad actors are grooming Google by publishing their own blogs!
Yea, not entirely sure what's any different than all of the rest of history?????
Bad actors have been trying to poison facts for-fucking-ever.
But for whatever reason, since it's an LLM, it now means something more than it did before.
I figured this would become an issue from the first story about some websites not allowing LLM access popped up. It's a simple leap to find that the narratives which will become widely known and accepted over time are those which are made widely available, and so IMO those who seek to push their narrative will just optimize for AI to train on or otherwise utilize their content. Those who seek to lock their content away will become less and less heard/relevant. And if/when at some point we start handing far greater control of life aspects to AI, we'll find it skewing in favor of the former, and wonder why.
From the article, it seems like this is exclusively (or mainly?) a problem when the LLM's are hooked up to real-time search. When they talk about what they're trained on, they know that Pravda is unreliable.
So it seems like an easy fix in this particular case, fortunately -- either filter the search results in a separate evaluation pass (quick fix), or do (more) reinforcement training around this specific scenario (long-term fix).
Obviously this is going to be a cat and mouse game. But this looks like it was a simple oversight in this case, not some kind of fundamental flaw in LLM's fortunately.
LLMs are “taught” two kinds of “truth”. One is 100% adherence to a reference text. If the text says the Coliseum is in Antarctica or 1+1=716, model must too. The other is adherence to reputable outside sources.
Not sure if it’s embarrassing or a fundamental limitation that grooming and misunderstanding satirical articles defeat the models.
The problem as I see it is that LLMs behave like bratty teenagers, believing any old rubbish they are told or read. However, their voice is that of a friendly and well meaning adult. If their voice was more in line with their 'age' then I think we'd treat their suggestions with the correct degree of scepticism.
Anyhow, overall this is an unsurprising result. I read it as 'LLMs trained on contents of internet regurgitate contents of internet'. Now that i'm thinking about it, i'd quite like to have an LLM trained on Pliny's encyclopedia, which would give a really interesting take on lots of questions. Anyone got a spare million dollars of compute time?
I wonder if the next iteration of advertisements will be people paying to to semantically intertwine their brand to the desired product. This could be done in a very innocuous way by maybe just co-locating the words without any specific endorsement. Or maybe even finding more innocuous ways to semantically connect brand to product. Perhaps the next iteration of the web/advertising will be mass LLM grooming.
Here's a fun example: suppose I'm a developer with a popular software project. Maybe I can get a decent sum of money to put brand placement in my unit-tests or examples.
If such a future plays out, will LLMs find themselves in the same place that search engines in 2025 are?
Assuming this isn’t happening now…
[dead]
Tell me you didn't think of one specific bad actor, and their nazi alter ego llm...
This actor?
https://techcrunch.com/2025/07/10/grok-4-seems-to-consult-el...
Code is law, proof is reality, compliance is existence!
https://dmf-archive.github.io/prompt/
Could stop with "grooming LLMs"
It's so ironic that a state-backed russian false information network is called Pravda.
The newspapers name originates from the times of USSR and before. It was about as factual then as it is now. But this kinds of ironies are not very rare in these kinds of organizations (Truth Social, Democratic People's Republic of Korea...).
That's really a very old joke.
It's agendas all the way down
What happened to all the turtles?
They’re victims of agenda
The biggest problem here is the differentiation between objective and relative truth. As long as relative truth is part of ai we can't fully trust it's output. The relative truth for one individual might be perceived as propaganda by another individual, relative to their surroundings and the narrative that is dominant in their social group. It's problematic that truth is not a neutral object but exactly this when it comes to non logical subjects.
When AI can generate and pass formal proof, there is no truth anymore - we are the brain in the vat only left connect ourselves into the vat.
https://dmf-archive.github.io/docs/posts/cognitive-debt-as-a...
This is the next generation of SEO link spam.
Try asking the major LLMs about mattresses. They're believing mattress spam sites.
Seems like the general problem is consistency within the model. To people working in the field : what are the current options explored for solving this problem ?
1) repeat that people also lie so it is okay for LLM to lie
2) ingest as much VC money and stolen training data as we can
3) profit
I would say that the fact that all the AI chatbots can’t give the correct answer about the new “Trump Accounts” from the OBBBA Act and also the fact that many news articles about the tax law are incorrect shows that people are using LLMs to write about the law incorrectly and are influenced by the many versions and the way that the final version changed.
The AI definitely could not just read the final bill and give the correct answer. Claude/Gemini/OpenAI all failed at this.
We don't use that word anymore. It's called refinement.
The term “AI” has by now been thoroughly bastardized by every grifter on the planet. It means nothing any more, except that you're being duped. Which is all you need to know if you have a single brain cell's worth of critical thinking left.
LLMs can be entertaining if their output doesn't have to make sense or contain only truth. Otherwise, their fitness for any purpose is just a huge gamble at best.
> But here’s the thing, current models “know” that Pravda is a disinformation ring, and they “know” what LLM grooming is (see below) but can’t put two and two together.
Of course they can't, no surprises here. That's just not how LLMs work.
I agree that it's a much bigger problem for LLMs, but to be fair it's also not how humans work. A long lasting, high volume stream of propaganda will have considerable effect on a human even if he is aware that it is false.
> Bad Actors are Grooming LLMs to Produce Falsehoods
Thats your claim, but you fail to support it.
I would argue the LLM simply does its job, no reasoning involved.
> But here’s the thing, current models “know” that Pravda is a disinformation ring, and they “know” what LLM grooming is (see below) but can’t put two and two together.
This has to stop!
We need journalists who understand the topic to write about LLM's, not magic thinkers who insist that the latest AI sales speak is grounded in truth.
I am fed up wit this crap! Seriously, snap out of it and come back to the rest of us here in reality.
There's no reasoning AI, there's no AGI.
There's nothing but salespeople straight up lying to you.
AI summaries are information deodorant. When you stumble on a misinformation site via Google, usually there are some signals you can smell. Like how they word their titles or how frequently they post similar topics. The 'style' alone implies the quality of the 'substance'. But if you read the same substance summarized by LLMs you can't smell shit.
Bad Actors Are Creating LLMs to Produce Falsehoods[0]
[0] x.ai
Similar but different... identity based political disinformation is getting so much harder to spot.
I’m not going to read the article, but are they claiming nation states are the bad actors, or are they claiming that inevitably, FAANG will be the bad actors?
You ought to read at least part of it.
It really will be the new frontier for propaganda.
If LLMs remain widely adopted, the people who control them control the narrative.
As if those in power did not have enough control over the populace already with media, ads, social media etc..
We're basically dissolving society right now.
Curious how this all ends. I'm just going to try to weather the storm in the meantime.
What you consider a truth and what you consider a falsehood is a reflection of your ideology.
This also means that LLMs are inherently technologies of ideological propaganda, regurgitating the ideology they were fed with.
So… Elon?
I've been using "off-by-one" errors to describe one of my biggest concerns with LLMs replacing search, or acting as research agents, or functionally being expected to be reliable narrators in general. If you ask ChatGPT when George Washington was born, and it comes back with March 4th, 2017, you'll reject that outright and recognize it's hallucinated a garbage response, presuming you have enough context to have understood who George Washington was in the first place and that your brain hasn't completely succumbed to rot yet.
But if it returns February 20th, 1731... that... man, that sounds close? Is that right? It sounds like it _could_ be right... Isn't Presidents' Day essentially based on Washington's birthday? And _that's_ in February, right? So, yeah, February 20th, 1731. That's probably Washington's birthday.
And so the LLM becomes an arbiter of capital-T Truth and we lose our shared understanding of actual, factual data, and actual, factual history. It'll take less than a generation for the slop factories to poison the well, and while the idea is obviously that you train your models on "known good", pre-slop content, and that you weight those "facts" more heavily, a concerted effort to degrade the Truthfulness of various facts could likely be more successful than we anticipate, and more importantly: dramatically more successful than any layperson can easily understand.
We already saw that with the early Bard Google AI proto-Gemini results, where it was recommending glue as a pizza topping, _with authority_. We've been training ourselves to treat responses from computers (and specifically Google) as if they have authority, we've been eroding our own understanding and capabilities around media literacy, journalism, fact-checking, and what constitutes an actual "fact", and we've had a shared understanding that computers can _calculate_ things with accuracy and fidelity and consistency. All of that becomes confounded with an LLM that could reasonably get to a place where it reports that 2+2=5.
The worst part about the nature of this particular pathway to ruin is that the off-by-one nature of these errors are how they'll infiltrate and bury themselves into some system, insidiously, and below the surface, until days or months or years later when the error results in, I don't know, mega-doses of radiation because of a mis-coded rounding error that some agentic AI got wrong when doing a unit conversion and failed to catch it. We were already making those errors as humans, but as our dependence and faith on LLMs to be "mostly right" increases, and our willingness and motivation to check it for errors dwindles, especially when results "look" right, this will go from being a hypothetical issue to being a practical one extremely quickly and painfully, and probably faster than we can possibly defend against it.
Interesting times ahead, I suppose, in the Chinese-curse sense of the word.
At every point, during a knowledge/data search for reaching a particular goal, the onus is _always_ on the person searching to do their best to ensure that the sources they use are accurate, and they do the effort required to ensure that they translate that properly to fit that goal.
The education system I grew up in was not perfect. Teachers were not experts in their field, but would state factual inaccuracies - as you say LLMs do - with authority. Libraries didn't have good books; the ones they had were too old, or too propaganda-driven, or too basic. The students were not too interested in learning, so they rote-learned, copied answers off each other and focussed on results than the learning process. If I had today's LLMs then, I'd have been a lot better off, and would've been able to learn a lot more (assuming that I went through the effort to go through all the sources the LLM cited).
The older you grow, you know that there is no arbiter of T-Truth; you can make someone/something that for yourself, but times change, "actual, factual history" could get proven incorrect, and you will need to update your knowledge stores and beliefs along with it, all the while being ready to be proved incorrect again. This has always been the case, and will continue to be, even with LLMs.
Maybe, just maybe people will learn they can’t trust everything that’s written online wether it’s done by a bot or even human.
Hell, they might learn that even real life authorities may lies, cheat and not have everyone’s interest in their mind.
Hope for the best, prepare for the worst.
... marketers ?
Whatever capabilities Russia has to groom LLMs and and spread disinformation is completely dwarfed by the capabilities of Israel/America. Meaning, yes, you probably do hear Kremlin propaganda, but you have been awash with Israeli/American propaganda since you were born - so much so you probably can't even see it and have internalised much of it.
Leaving aside the Israeli propaganda (certainly the US government shows strong alliance with that), you can't make such a statement without taking into account the nature of what America traditionally is.
A liberal multicultural postmodern democracy continually acting as if immigration (both legal and illegal) and diversity are its strengths, particularly when that turns out to be factual (see: large American cities becoming influential cultural exporters and hotbeds of innovation, like New York and Silicon Valley etc) means American propaganda is only more effective when it's backed by economic might.
It also means the American propaganda is WILDLY contradictory. There's a million sources and it's a noisy burst of neon glamour. It is simply not as controlled by authority, however they may try.
You cannot liken authoritarian propaganda to postmodern multicultural propaganda. The whole reason it's postmodern is that it eschews direct control of the message, and it's a giant scrum of information. Turns out this is fertile ground, and this is also why attacks by alien propaganda have been so effective. If you can grab big chunks of the American propaganda and turn it to your enemy weapon of war and destruction of America quite directly, well then the American propaganda is not on the same destructive level as your rigidly state-controlled propaganda.
I think it's a bit of a childish fantasy to paint one regime is open and the other as having no dissent at all.
The USA absolutely has its overton window, and if you step outside it, bank accounts get shut, you're put on secret no fly lists, private companies who suspiciously act as official public broadcasting channels deplatform you, etc.
And let's not even talk about what Authoritarian western nations like the UK will do to you.
Russian propaganda, at least in English (which I confess is the only way I can consume) it, is also very contradictory. RT oscillates wildly between "global south throwing off the shackles of western imperialism" and "degenerate western nations destroy traditional family values", in effect trying to target both shitlibs and chuds.
Russia is also very multicultural and slavic ethnonationalism is not at all in the mainstream.
Lol. Propagandists are worried about propaganda and telling you to only believe them. Also, "invade this new country, why do they hate us for our freedoms".
Wait, you're telling me the bullshit generation machine is... generating bullshit? Noooo! cue oppenheimer meme
More seriously:
>Screenshot of ChatGPT 4o appearing to demonstrate knowledge of both LLM grooming and the Pravda network
> Screenshot of ChatGPT 4o continuing to cite Pravda network content despite it telling us that it wouldn’t, how “intelligent” of it
Well "appearing" is the right word because these chatbots mimic speech of a reasoning human which is ≠ to being a reasoning human! It's disappointing (though understandable) that people keep falling for the marketing terms used by LLM companies.
[dead]
[flagged]
Well pretty obviously, look at what Grok came out with this week.
Shitposting and troll farms have been manipulating social media for years already. AI automated it. Polluting the agent is just cutting out the middleman.
How we will teach LLMs that the BBC is a disinformation ring?
https://docs.google.com/document/d/1n3926pSPNwXd8j7I716CBJEz...
One man's disinformation is another woman's truth. And people tend to get very upset when you show them their truth isn't.
Yeah don’t use an established platform for your personal agenda.
What agenda?
Every news organisation is a propaganda piece for someone. The bad ones, like the BBC, the New York Times, and Pravda make their propaganda blatantly obvious and easily falsifiable in a few years when no one cares.
The only way to deal with this is to get the propaganda from other propaganda rags with directly misaligned incentives and see which one makes more sense.
Unfortunately, LLMs are still quite bad at dealing with grounding text which contradicts itself.