The human has unique context. They may work in a niche domain, or they've talked to people and observed an unsolved problem. Then they express a potential solution via OSS. It's like product sense. Then they share that with others who find it interesting. The code is a great way to encapsulate the idea. It is usually the result of research and back-and-forth, not a single prompt. Even with that context, it would be way harder to think through or build a solution without AI.
Who is going to verify that an AI-driven project is a unique idea? How do you distinguish between a genuinely unique project, a grifter who is shilling their "unique" project, and a new enthusiast who is convinced their project is unique, but is not? This is an impossible moderation task. The only options I see for a community are to either totally ban AI-generated content, or be totally consumed by it.
I don't really know. Certainly we need a higher bar. The Kafka example in the post may be hyperbolic, but I agree it pollutes the space. But we also can't swing the other way and rely completely on out-of-date proxies: if you ban AI code, there will be very little code to see in a year. It'll take time, but we'll arrive at new norms. We built semi-successful ways to filter content farms in the earlier internet days. The signal has to shift to "did they think hard about this problem", which has some observable properties, like how they articulate the problem, or why it became important to them.
I want my future community apps and sites to build in a bot flagger. I don't care how hard it is; the community that gets this right is the one I'll jump ship to.
I'll remove the particulars to avoid anything partisan, but:
I failed to truly appreciate how cooked reddit was with bots until I accidentally clicked Popular and stumbled upon a national subreddit post with a 'chad meme', starring a particular political leader, whose unpopularity is hard to adequately convey to foreigners.
It was not just that this post had been so severely upvoted; the comment section itself was more or less a mantra, with very little actual conversation, just echoing the same sentiment, and all those comments in turn upvoted to the point of drowning out the lone comments at the bottom (not downvoted, just not upvoted) expressing "???". I don't know if I'd ever even written the word 'astroturfing' before expressing my bafflement to a friend, so I don't think I'm very tinfoil hat about these things.
It was just utterly bizarre to see someone who can barely get a single win in public discourse being heralded -- monotonously -- like he was the second coming.
When LLMs were new on the scene, I thought trust would fade in the written (text) medium. I saw it happening on Substack, Medium, and Reddit. But then VCs pumped in so much money, and AI has gotten into every other modality (audio, video). The only things I really interact with these days are the human beings sitting in front of me, phone calls with people I know, and Hacker News. Life seems sorted, but something feels missing as well.
Edit - I am not anti AI but it is slowly killing the digital human interaction.
I made this point elsewhere, but people are learning a lot of what some of us had to learn the old way: for the most part, no one cares about your stuff, and now the value provided has to go way up to get people to care. That is, as the author says, the novelty has worn off, and since we know it's AI, the perceived value is also way down.
We're all recalibrating.
I do really think this is just a brief period before most people realize that slop posting doesn't get them anything personally. Most will give up, and we'll go back to roughly the old ratio of cool things with real value to see, just on a bigger scale, because AI helps one person do more.
I don't know... I might have said the same thing about email/text/phone spam but it has only proliferated to the point where it's a constant stream of garbage. Email, text, and phone calls are almost completely useless at this point. Sifting the signal from the noise is a non-stop effort.
I think people who want to push a certain narrative might just set up a quick bot and tell that bot to start posting on Reddit or whatever and just let it run. Why not? Little effort on their part, and they might actually have influence. It's the same reason spammers apparently think it's worth sending me 10 text messages per day about a loan I've been approved for. It probably does work 0.0001% of the time, but that's okay if it's all automated.
I mean, I think the dynamics are a bit different in actual online communities, as opposed to drive-by subs like r/technology or whatever.
Especially here on HN, with Show HN and such, the forcing factor is "I get no votes or community recognition".
But I don't entirely disagree with you. I don't think things will totally go back, but they'll settle down a lot from where we are now, especially where things are a little more niche.
Online communities that allow upvoting / downvoting have been effectively dead for a long time because it's easy to manipulate conversations by elevating and punishing comments to fit a narrative. This is especially true on HN.
There's a lot of focus on tech projects here, but it's not just vibe written projects that are ruining communities now.
No, it's a problem with art, text and videos too. Reddit was already becoming a creative writing exercise in many ways, with infamous subs like 'Am I the Asshole?' seemingly being about 80% fiction labelled as fact. But now you don't even need to know how to write to flood the site with useless 'content'.
YouTube is arguably even worse, since AI-led content farms are not just spamming the hell out of every topic under the sun, but giving outright dangerous advice and misinformation on top of that. I saw a video earlier about medical misinformation from these 'creators', and it genuinely made me want to see platforms crack down on this junk.
And there's just this feeling of distrust everywhere too. Is anyone on Hacker News human anymore? Is that Reddit poster I'm responding to human? Are the folks on Twitter, Threads or Bluesky human?
The scary part is that you basically can't tell anymore. Any project you find could be AI generated slop, any account could be a bot using stolen images or deepfakes, any article or video could be blatant misinformation put together as a cash grab...
If something doesn't improve, pretty much every platform under the sun is going to be completely useless, as is a lot of the internet as a whole.
I think people like the blog author need to realize that this problem can't be dealt with by content moderation or by users trying their best to be honest. You just get a firehose with an on/off switch; you don't get free filtering or moderation with it.
This synthetic participation (LLM or otherwise) has catalyzed weakspots in HN's high-trust environment.
The weight we give to the average HN comment is orders of magnitude higher than the average Reddit (& co.) comment, and this relationship probably goes both ways (much higher ROI on ads/propaganda). Due to the low volume & high trust, it seems to be a very different (easier) environment in which to achieve pervasive propaganda/advertising/etc with a disproportionate impact.
I remember when some new LLM version came out (maybe from Meta?), something like 3 of the top 10 posts on the front page were all variations of "Foobar 2.1 New Model". Perhaps not explicit, deliberate manipulation, but the result was the same, and apparently allowed.
How many of those generic LLM websites (https://letsbuyspiritair.com/ comes to mind) show up on the front page per day? Zero effort static front-ends for some unremarkable data. I'm not going to touch the politics minefield, but that is a weakspot too.
All of this, and yet I think HN has handled it relatively well. I really appreciate not seeing comments of the form "I asked Claude/Gemini/etc., here's 5 paragraphs". Places like Reddit do not have the agility or control, and have degraded accordingly.
It makes me sad to think that a short time ago, every forum was ~100% humans, and now it is some fraction of that. I wonder if I will ever see that again.
For every argument against AI slop, you will get a variation of "it's the future", or "I'm 10x more productive now", or "I've shipped 3 applications in 2 days", etc.
They won't stop talking about it and defending it. But I can't get anyone to share their amazing work with me.
There is a reason the mostly-vibecoded Show HN projects don't get much response: they aren't any good. Comments that are AI-generated are hollow. Videos that are AI-generated are a shell of their sources.
Human slop is realistically just as bad. In a strange twist, human commentary on the Internet is asymptotically approaching an older LLM. Trite cliches, repetitive tropes, and tribal affiliation signals dominate conversation.
I have turned to blunt instruments: blocking individuals on their first cliche banner-wave. It has substantially improved comment quality but I still suffer from the problem that I don’t block stories entirely.
It sucks that the narrative framing device of 'human slop' has vanished in the last year. Some subreddits, like all location subreddits, lifestyle subreddits like malefashionadvice and redscarepod and entry-level academic subreddits like math and criticaltheory were already just hives of human slop before AI came around because of a structural design to the site that had the side effect of normalising a total absence of quality control.
Upvotes are not a good mechanism for quality control in any way because they force good content to have the same metadata as the content that is technically well-constructed but is irrelevant, meaningless, just a platitude, too obvious to be obvious or pablum. Upvotes turn everything into a shock-value dominated 101 space.
HN is in peril and I don’t think it is a bad thing. Or rather, I’d like to bring back the old chestnut: it’s a good thing.
While the site has moved to using /showlim, the AI garbage just bypasses that and goes straight to the home page. Almost every project being shown is vibe coded and looks exactly the same, generated by Claude or the like. This is an excellent test for the site: will it be able to adapt, or do we simply end up with a husk of what HN was, with AI posts driving the majority of engagement, the Overton window, and upvotes/downvotes?
I look forward to this, I think it is an exciting development.
This is a good thing. Social media was already slop before AI. If this gets more intellectuals off these same websites daily and they instead spend their time on better things, then I love AI slop's purpose. There's more to the internet than Reddit, TikTok, and YouTube. Really, there is. If your circle of friends is small or nonexistent without going to the same dotcoms, you have an issue that is worse than any AI slop, tbh.
I'm not a crypto person, but I was intrigued by Chia. They generate their coins based on allocating disk space. So if you have a bit of free space, you can fill it with plots and play the lotto.
The intriguing part is that I think it works against scaling. The incremental cost for me to use the 500GB of free space on my disk is $0, but someone scaling a bot farm has to buy all their space.
Real people tend to have a lot more idle capacity than optimized, scaled businesses, so any kind of proof of idle capacity seems like it would disadvantage bot farms.
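For a rough sense of that asymmetry, here's a back-of-the-envelope sketch. The plot size and block rate are Chia's commonly cited figures, but the total network space is an invented assumption, so treat the outputs as illustrative only:

```python
# Why idle capacity favors individuals: a farmer's spare disk has ~zero
# marginal cost, while a scaled bot farm pays full price per terabyte.
# NETWORK_EB is an assumed figure, not current network data.

PLOT_GB = 101.4        # approximate size of one k=32 Chia plot
BLOCKS_PER_DAY = 4608  # Chia's target block rate
NETWORK_EB = 30        # assumed total network space, in exabytes

def expected_wins_per_day(free_gb: float) -> float:
    plotted = (free_gb // PLOT_GB) * PLOT_GB   # space actually filled with plots
    share = plotted / (NETWORK_EB * 1e9)       # fraction of total network space
    return share * BLOCKS_PER_DAY

print(expected_wins_per_day(500))   # hobbyist's idle 500 GB: tiny odds, but free
print(expected_wins_per_day(5e7))   # 50 PB bot farm: frequent wins, full hardware cost
```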
I’ve also thought that proof of collateral spending would be a good system. For example, you buy groceries and the store gives you a token saying you spent $X of real world money. Those tokens help show you're not a bot. Keeping that system honest and equitable would be extremely difficult though.
Maybe schools could give kids tokens for attendance. It sounds kind of dumb, but who knows.
There are "nice", "polite" slop enthusiasts. The ones who insist they have taste and tact. They would never post bad slop, recklessly, only the very highest-quality human-refined, curated slop. Not really slop at all, they would argue, because they gave it a careful review before posting it. They insist there's a very important difference between this premium slop and the nasty kind, and that low-quality human-authored media is actually slop, too, when you think about it. They talk about how important it is for people to use slop thoughtfully, efficiently, correctly, and that we all need to learn about and discuss slop constantly because it's the inevitable future and highly relevant for everyone.
They muddy the waters. They wheedle, rules-lawyer, carve out exceptions, and talk about how important it is to have nuance in separating virtuous applications for slop from bad ones, and that focusing on the bad ones is actually very tedious and rude. We should have polite discourse about the good things about slop and stop being so mean about bad slop, which isn't even really a problem. The bad kinds of slop will be solved soon, probably, and the harms are overstated. They colonize spaces.
If moderators don't swiftly throw these slop enthusiasts out on their ass, slightly less polite ones will post slop slightly less politely. More and more of the people participating in the space will have favorable opinions toward slop, and shout down people who object to slop. In no time at all, your community is a slop bar. Who could have imagined?
I usually type 5000 words researching for a 500 word output. It's not "write me an article on X", it's 99% my own ideas, but worded and structured and polished a bit. But I don't post them here. They are on my blog.
AI slop is hurting my community in a different way. We have an internal Viva Engage community at work for quick development how-to questions. More and more, instead of asking "how to" questions to the crowd to crowdsource answers, people are reaching out to me directly to ask why the solution AI suggested doesn't work.
That people trust AI over organizational knowledge is bad enough. I fear that AI is turning people generally antisocial.
This is happening at my workplace and it's incredibly annoying. We get support tickets asking us to troubleshoot AI written scripts. The funny thing is that most of the time, it would be faster for the customer to tell us what they want to do in plain english and have us make it for them. Hell, if they make an honest attempt, we can point them in the right direction and teach them.
It's frustrating because we're bundling this shitty AI with our product so we're just making more work for ourselves. Then there's the push from leadership to use more AI...
I don't think it's making people antisocial though, people just like easy solutions to their problems. We're giving them what seems like an easy solution. But it's easy for them, not easy for the reviewers.
Sigh. First the article states that "coding by LLM is the way things are done right now" in 10 different ways, but then message boards and articles need to be protected.
We get it, the current narrative is that coding is the big thing, promoted by billionaires and scabs alike.
So, the coding narrative must be protected until the IPO of Juniper^H^H^H Anthropic happens and the whole thing implodes.
You already could have code for free and faster by using "git clone" without a company of thieves selling your own output back to you.
I have largely written Reddit off and no longer visit it after an experiment I did where I had an agent karma farm for me and do some covert advertising. As I went through the posts it wrote, I realized that as a reader I would have NO idea that these were just written by a computer. Many, many people (or other bots) had full-on conversations with it, and it scared me a bit.
I am not quite there with Hacker News but I do know for a fact that many "users" here are LLMs.
Online communities are definitely dying. I guess I hope that maybe IRL communities have a resurgence in their wake.
I think it's going to effectively kill public chat communities without either proof of identity or attestation through a web of trust. Or rather turn them into little better than comment sections on news sites; thriving but worthless.
I'm active in a number of online communities that are doing just fine, but the difference is those all involve ongoing relationships, built over time and with engagement across multiple platforms. I've no doubt this clock is ticking too, but it's still harder to fake a user across a mix of text chat, voice and video calls, playing an online game, etc., especially when much of the web of relationships extends back into real-life activity.
But I agree the golden age of easy anonymous connections online has ended.
> I think it's going to effectively kill public chat communities without either proof of identity or attestation through a web of trust.
This seems self evident to me too.
It's another factor in why I think the tech community needs to get ahead of governments on the whole "prove your ID on the Internet" thing by having some sort of standard way to do it that doesn't necessarily involve madness in the loop.
I'd be interested in working on a problem like that.
I have a strong preference for remaining anonymous, or at least making it a reasonably high bar to tie my online identity to my personal identity.
I would love to be involved in helping to design a sort of "human verified" badge that doesn't make it possible, or at least not easy, for everyone to find your real identity. (One possible shape of that is sketched below.)
I've been thinking about it a bunch and it seems like a really interesting problem. Difficult though.
I suspect there is too much political and corporate will that wants to force everyone online to use their real identity in the open, though.
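A minimal sketch of one possible shape for such a badge, using RSA blind signatures: the issuer verifies you're human and signs a blinded commitment, so it can attest personhood without ever being able to link the badge back to your pseudonym. Everything below is a toy illustration with an insecurely small key, not a real design:

```python
import secrets
from math import gcd

# Tiny demo RSA key for the attestation issuer (INSECURE toy size).
p, q = 1000003, 1000033
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))

def blind(msg: int):
    """User blinds a commitment to their pseudonym before sending it in."""
    while True:
        r = secrets.randbelow(n - 2) + 2
        if gcd(r, n) == 1:
            break
    return (msg * pow(r, e, n)) % n, r

def issuer_sign(blinded: int) -> int:
    """Issuer signs after checking humanity; it only ever sees the blinded value."""
    return pow(blinded, d, n)

def unblind(blind_sig: int, r: int) -> int:
    return (blind_sig * pow(r, -1, n)) % n

def verify(msg: int, sig: int) -> bool:
    return pow(sig, e, n) == msg % n

pseudonym = 123456789            # stand-in for a hash of the user's pseudonym key
blinded, r = blind(pseudonym)
badge = unblind(issuer_sign(blinded), r)
assert verify(pseudonym, badge)  # a forum can check the badge; the issuer can't link it
```

A real system would also need per-person rate limits, revocation, and a trustworthy issuer, which is exactly where the political and corporate will problem bites.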
"I think it's going to effectively kill public chat communities without either proof of identity or attestation through a web of trust."
Those sorts of places were always the only places with reliably good communities.
To the contrary, platforms like Facebook and X demonstrate that even personal verification won't save you from identity politics.
Did you ever introspect about who ruined Reddit?
It’s a tragedy of the commons: many have done it, but no one user did it.
Serious question: if there are so many LLMs on online forums, who is doing it? Is it just thousands of research students, or something more nefarious? Is it AI businesses building up evidence that their output is scored as highly as humans', therefore "buy our software"?
Established accounts are worth money, often for scamming/propaganda.
Not too dissimilar to people bot-leveling in MMOs to sell the accounts.
We're in the middle of an active cold war where countries are trying to manipulate the citizens of rival countries to destroy their civilization without having to fire a single bullet. Anonymous, over-the-internet mass manipulation, all for some minimal electricity cost.
That's definitely the most insidious use, but I think the larger portion is advertisers and karma farmers (who later sell to advertisers).
Lots of marketing. Not even AI businesses, just regular consumer crap. They realized that blatantly spamming their product looks bad, so they orchestrate multiple accounts to look more organic. And people actually engage with it.
HN has historically been gamed for visibility. The stakes for doing this can be quite high if you can pull it off.
My impression is that they're sometimes unemployed people or students hoping to create a popular open source project, and use it to find a job.
They aren't going to care about any of the advice in the article about not posting slop -- finding a job is (of course?) more important to them.
Can't really say they are doing anything wrong; maybe I too would have? ... It's just that, at large scale, it doesn't work.
People like the above poster who are "just running an experiment" or "trying something for fun" who then wonder why online communities are full of AI now.
If you farm a fleet of good accounts, you control the discourse. On HN, you could boost whatever you're trying to push, and downvote or flagkill whoever objects.
There are obvious benefits to controlling public discourse, right? Even if it's just to support some project you're working on.
Public* online communities are dying. Discord is thriving
This. Everything important has moved to discord. Which is sad because of how undiscoverable and unsearchable it is.
I'm more sad about how the UI of it all is just clunky. Even though it resembles ye olde IRC clients like mIRC, it's nowhere near as readable, for some reason.
are those attributes now assets?
Sort of, except if no one can ever discover a community it is always dying by default
Personally I'd love to find a decent online community these days, my social circle has shrunk considerably, but idk. It seems difficult to start fresh with new people nowadays
we were made to socialize in person. you can mimic it online and nourish existing connections over it but nothing helps build friendship more than being in the same place at the same time a few different times and talking to each other
This shit will come to Discord too.
On the public servers, yeah. But the ones I'm in with real people who know each other will be fine.
I think the problem is not keeping agents out of private real-people spaces, but how people who don't have any pre-existing or 'real world' connections to these communities can prove they are a real person over the internet alone and get an invite.
On a related note, I think this is going to be the biggest challenge for most folks when it comes to resisting the use of government ID online. It will be the apple offered to normal circles as easy proof you're not a bot.
It's already there.
There's this old meme where someone asks what will happen when AI bots post helpful, curious and thoughtful messages!? That's mission accomplished :D They can't be better than the average human though, because of training data, so I don't worry about AI comments getting upvoted by real humans. I am, however, worried about fake upvotes.
Reddit is more or less dead to me, as the popular subs are botfests and the niche subs are empty. I'm lucky to get a single reply on gaming subs.
How do we know now that this comment wasn't written by LLM?
You don't and that's the problem :)
Reddit was already on its way out well before this LLM craze; hopefully the recent tech-related changes will only accelerate that process.
I find it amusing that this is the top comment. Reddit is so awful you finally wrote it off, but not before you used it to try to “karma farm and do some covert advertising”. It’s on-brand for HN hypocritical bullshit. But, since we are slamming on Reddit anyways without realizing how fucked HN is by the same petard, have an upboat fellow traveler.
Unless their account is <1 year I wouldn't assume they are a bot.
Reddit astroturfing firms and bot farms learned to buy/use “seasoned” accounts over a decade ago. I’d venture there have been countless bots just in a holding pattern harmlessly building up reputation and a human-like history of posts across different subs etc just to eventually be either activated or sold to someone else to “burn”
I recently spotted one unmistakable example of this[0]. It’s been a trick for many years now that duplicating a human post and its comments is a good way to appear human but this was quite the example.
0: https://wiki.roshangeorge.dev/w/Blog/2026-01-06/Is_The_Inter...
So what is the comment frequency of these bots? There must be some signal in the activity, even if the comments themselves pass the Turing test.
If you find one account you can find a few dozen spam accounts by building a graph of what posts they reply to
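As a sketch of that idea, assuming a hypothetical fetch_thread_participants() data source and an arbitrary overlap threshold:

```python
from collections import Counter

def likely_ring_members(seed, fetch_thread_participants, min_shared_threads=5):
    """Accounts that co-occur with a known spam account in suspiciously many threads."""
    co_occurrence = Counter()
    for participants in fetch_thread_participants(seed):  # one set of accounts per thread
        for account in participants:
            if account != seed:
                co_occurrence[account] += 1
    return sorted(a for a, n in co_occurrence.items() if n >= min_shared_threads)
```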
Even if there was, I doubt Reddit cares enough to go after them when it’s boosting their valuation
Does it matter? With enough you can just have them upvote each other.
It's so easy to purchase online accounts nowadays; neither karma nor the age of the account means anything anymore.
I feel you. Especially in the larger subreddits. I participate in, and mod, a few small ones, and the community there is pretty strong; folks shut down AI slop pretty quickly.
I'm not saying being a mod means it's bulletproof, but I do notice smaller communities tend to self-police better and know what's real.
That said, your experiment scares me as well.
I will say that I believe you probably have absolutely no idea because it's not "slop". It looks like every other reddit comment you see.
My experiment was focused on niche subreddits as well due to the nature of the product I was trying to market.
Dead Internet theory?
> where I had an agent karma farm for me
Was this a browser using agent? What did you use?
It used the browser agent to grab user cookies after signing in, then made API calls iirc.
Using just a browser is way too token intensive and slow. It would look for 401 errors then run the browser automation to login with the credentials and grab the token.
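Roughly this pattern, sketched with a hypothetical login_via_browser() standing in for the automation (illustrative only, not the poster's actual code):

```python
import requests

_token = None  # cached API token

def login_via_browser() -> str:
    """Hypothetical: drive the full browser automation, sign in, return a fresh token."""
    raise NotImplementedError

def api_get(url: str) -> requests.Response:
    global _token
    resp = requests.get(url, headers={"Authorization": f"Bearer {_token}"})
    if resp.status_code == 401:           # token missing or expired
        _token = login_via_browser()      # only now pay the slow browser cost
        resp = requests.get(url, headers={"Authorization": f"Bearer {_token}"})
    return resp
```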
I'm surprised these platforms don't have advanced heuristics to detect API calls and inauthentic traffic.
Did you clone the Reddit API from browser traffic and then turn it into a 100% API driven thing?
I'd imagine they'd be sniffing browser agents, plugins, cookies, etc. to fingerprint. Using JavaScript scroll position, browsing rate and patterns, etc.
Maybe their protections just aren't that sophisticated.
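Something like the toy scoring below is what such heuristics could look like; every feature and weight here is invented for the sketch, not any platform's actual rules:

```python
def inauthenticity_score(session: dict) -> float:
    """Toy heuristic combining invented fingerprint features."""
    score = 0.0
    if session.get("headless_user_agent"):          score += 0.4  # automation-flavored UA
    if session.get("scroll_events", 0) == 0:        score += 0.2  # never rendered a page
    if session.get("requests_per_minute", 0) > 60:  score += 0.3  # inhuman request rate
    if session.get("cookie_age_days", 0) < 1:       score += 0.1  # freshly minted session
    return min(score, 1.0)

# e.g. queue for human review when inauthenticity_score(session) > 0.7
```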
> I am not quite there with Hacker News but I do know for a fact that many here are LLMs.
Please don’t do this here.
Don't do it anywhere. He's a jerk for doing it on reddit.
Reddit is the sewer of the internet. Good place for LLMs.
People are definitely trying to make HN bots because I have seen several get flagged. No idea to what end though.
The suits, or suit-minded people, have realised that HN is good for advertising to the kind of demographic that'll give them free labour and is easily swayed by whatever the latest trend is.
Possibly to test reactions to a bot they plan to build a startup around.
I've seen some claim they do it to avoid stylometry or being fingerprinted, or because of social anxiety problems.
Some people just have a compulsive need to optimize everything, and HN's guidelines and tone policing are more easily followed by a bot than a human.
> HN's guidelines and tone policing are more easily followed by a bot than a human.
HN's guidelines aren't that strict and the mod hammer is a plushie. It's not difficult to get by here. It's also kind of useful for critical reflection/self-regulation to hear the occasional "you came in too hot" or "don't be boring" from a moderator.
Seems better to me to just try to be sort of reasonable and let the mods nudge you if they need to and let your comments be downvoted from time to time. What is the goal of these people, to never experience correction in their lives? To never write an unpopular comment?
> What is the goal of these people, to never experience correction in their lives?
Look at all the people who complain about cancel culture. There's a huge swath of people who don't ever want to hear "that was mean/bad/shitty".
> What is the goal of these people, to never experience correction in their lives? To never write an unpopular comment?
Yes?
This site is CLEARLY astroturfed to hell and back and infested with bots. Any attempted discussion of this fact gets killed REALLY fast.
This part of the guidelines is a 15 year out-of-date bad joke:
> Please don't post insinuations about astroturfing, shilling, brigading, foreign agents, and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data.
"We'll look at the data". Sure buddy. You'll do what you always do, which is apply to banhammer to anyone that's not following your talking points, and tone police the actual humans.
Enjoy "conversing curiously" with bots while the mods tone-police non-bots out of existence.
For what it's worth the admins here have let the tone of conversation slip a little when it comes to AI, as in there are many people who now openly mock (and worse) the AI zealots and there's no admin coming in and "saving" the metaphorical day anymore. In the not so distant past that kind of behaviour was almost instantly reprimanded, kindergarten-style.
He's stating a fact. Turn on showdead in your options and scroll to the bottom of the comments on any popular story. There are so many agentic users here.
I generally disagree, because the level of discourse here has always been very high, curious and intellectual.
Maybe 1:100 comments match any one of those attributes.
Most comments are just grammatically "correct". Not a high bar.
It has, and the well prompted agents still give that. It's very weird.
On the other hand, I’ve been accused of being an AI/bot, and if I say things the mod doesn’t like or that aren't their favorite thing to hear, I’m “flamebaiting” or engaging in personal attacks when pointing out specific things.
Frankly, online communities have been dying for many years now, ever since the censorship, the anti-free-speech, tone-policing mods and mobs started dominating online, and America really did not have the self-respect or confidence anymore to enforce the Constitution online.
I've been on the Internet for decades at this point, and one thing I've noticed is that communities that, for example, ban political topics actually mean "positions I don't like" by "political". This is somewhat related to the Overton window, but really a bunch of (mostly conservative) ideas get normalized, so they aren't deemed "political".
I see the same thing with "AI Slop". Yes, there is AI Slop but (IME) it's pretty easy to spot. But what's more annoying is how often people are willing to throw that accusation whenever someone takes a position they don't like, much like the "political" label. It's lazy and honestly just as bad as the slop itself because it unintentionally launders the slop in a "boy who cried wolf" kind of way.
I also have a theory that some AI slop isn't inherently successful. It's just heavily botted by people who are interested in promoting certain positions. I bet you could make a pro-administration LLM bot and another one promoting a communist revolution and no amount of model tuning would make the second as popular as the first because the first would hit third-party botting as well as platform content biases (eg Twitter).
I've personally been accused of being a bot. This is particularly true in recent times, as I've tried to share facts and fact-based analysis of, say, what's going on with crude oil markets, the military operation in the Gulf, and the politics and economics around it. I even saw one hilarious comment saying (paraphrased) "the bots are getting clever and posting about unrelated topics". This was funny because it never occurred to this person that no, it was just a real person posting something they disagreed with.
Unless you've discovered the secret sauce, LLM comments are very obvious. Even Altman revealed that they focused on coding at the expense of writing.
With the current batch of SOTA models, it is not hard to prompt a model to pass the sniff test on social media forums. If you don't believe me, try it.
All you really need to do is give it some guidelines of a style to follow and styles to avoid. There's also a bunch of skills people have already written to accomplish this.
The obvious ones are the ones you notice
LLMs are not good at writing. If they were we would have entire libraries of new, amazing literature.
Exactly, they aren't good at creating new material. But many discussions in comment section are simply regurgitations of existing material, which they are good at rearranging. New novel discussions in places like this are actually a very rare thing, as many comment sections are simply people who already know informing those who don't. I'm doing that right now, funnily enough.
Neither are most humans
Agreed, some humans are good writers, and no LLMs are good writers.
This is rather moving the goalposts from "plausibly human comment" to "meaningful literature", I think
No. I'm drawing it out to its logical conclusion.
I have worked with LLMs for a couple of years at a very non-technical level, and it was not that difficult to give them proper prompting and reference material.
You are reading LLM content just about everywhere and have no idea. Obviously there are easy-to-spot things, but the stuff you don't spot is the stuff you don't spot.
People who like to fancy themselves good LLM content detectors just end up accusing everything they don't like of being LLM content.
The only thing worse than a slop comment is the people who bitch about it incessantly. I'm convinced it's become a new expression of a mental illness.
The main thing I suspect of being LLM written is the sort of LinkedIn style: very short sentences, overly focused on sort of… making an impact on the user. But that’s also how a certain type of bad human writer writes. So in the end, I’m not sure I know if anything in particular was written by an LLM.
I guess… “that’s not just an AI red flag, it’s generally shit prose” would be how ChatGPT would describe most things nowadays.
It’s the distilled mediocrity of the statements. Never venturing beyond a 10% margin of what you would get if you sampled the opinions of 1,000 people who underwent jury selection by west coast liberals.
A mere opinion is not mental illness.
I wasn't suggesting you have a mental illness for having an opinion.
More that just as bad as the generated content, if not worse, is every thread where the top comment is an accusation and the ensuing witch hunt.
So, no, having an opinion is not a mental illness. Feeling compelled to call it out and discuss it on everything one reads may just be.
The threads that have the top comment saying "this is AI slop" are nearly always about an article that is obvious AI slop.
Threads that aren't - like this one - don't.
If you need to tell yourself that in order to cope that's fine with me.
So you ran an "experiment" where you deliberately made someone else's community worse to see what would happen? Cool project.
I kind of feel this might be good. Bot-written comments and AI media that can no longer be distinguished from the real thing will make us humans leave the social networks, which helped to separate us humans. Going back to the real world, where you can truly believe what you see, and enjoy the tone, look and scent of our fellow human beings.
A few tech companies managed to get massive numbers of people addicted to toxic social media content that was terrible for mental health but made a small group very wealthy. I don't think those same businesses and execs are just going to pack up and go home with an even more powerful content tool available now. LLMs are going to be used to create Skinner boxes that make Facebook and Twitter seem like wholesome communities.
But I have a lot of friends online, both ones I made online and ones that have moved away from me and vice versa.
I don't want to be limited to only the friends I can make who live near me
This seems naive. As long as people are "enjoying" the AI-infested social networks, or at least not annoyed enough to leave, they will stay on them, and become further disconnected from reality. We have half of EU teenagers talking to chatbots regularly. Alienated people flock to them.
One of the paradoxical things that makes me hopeful is that there's going to be such an incredible amount of low-effort AI slop content that it's going to drown out the low-effort human-made content and generate a large amount of distaste for it. So much will be so bad that good taste and high quality will be rewarded with more status, as the people who will say and believe anything are led astray and left behind.
Maybe it's hard to get across what I mean, so here's a more concrete example: there will be SO MUCH clickbait out there that serious outfits, instead of being forced to do it, will be able to successfully differentiate themselves by NOT doing it. (And many similar things in different arenas.)
I'm trying to say that LLMs raising the noise floor will drown out a lot of the toxic noise that's been plaguing us.
I can hope.
> So much will be so bad that good taste and high quality will be rewarded with more status, as the people who will say and believe anything are led astray and left behind.
I really want to believe this will be true. However, I also suspect there's some external driving force, that I cannot readily name, which is making people incapable of consuming anything except this low-effort content. I mean, obviously it's working to some extent. Perhaps AI will be the thing that accelerates its death, but part of me thinks something else needs to happen beyond just an increase in useless content.
In my opinion there isn't an external _nefarious_ force causing all of this. Certainly those forces exist but without them much the same would be happening.
It's the economy of everything being free but supported by advertising. That mechanic is what leads to the race-to-the-bottom, lowest-common-denominator, motivation-hacking attention toxicity. (Yes, that's a bit of a ramble.)
If people weren't getting paid for the smallest increment of attention they could grab, it wouldn't be promoted the way it is. I don't have a high opinion of the things which grab my attention, but they still manage to do it sometimes. I think many people are in that boat. If there were other mechanisms with which we rewarded people for doing things, something different would be optimized.
And people just wouldn't reward the 10-second gratification anywhere near the same way if it weren't for the advertising.
Have you considered that (further) lowering the signal-to-noise ratio will make it much more difficult to find and distinguish a signal?
Yes, but I'm hopeful for a survival of the fittest instead of an extinction.
Now there's more pressure to have a stronger signal and hopefully rewards to match.
What do you think happens to the least prolific organisms that lose the survival of the fittest?
I feel that a lot in my side projects: maybe one should keep the half-baked AI repo to oneself and instead share the experiment, the thesis, and what was learned from the building. No one cares much about the (un)finished product, as in most cases it can be replicated, better, with a couple of hours of Claude coding.
For instance, I really liked how Karpathy shared a high-level idea on the LLM-based wiki. It was sadly followed by a long tail of no-one-cares-about "Here is my LLM wiki product" posts pointing to the generic LLM-generated landing page.
I run a niche creative community, and we outlawed AI-generated content in 2022 as it was easy to see how corrosive it would be to the community.
It hasn't been easy. We ban fake AI accounts daily and shrug off around 600 AI content creator accounts monthly.
It's a lot of work, extra work that wasn't needed before AI content came around, and of course, that is an extra cost.
I fear losing the battle.
Unlike a lot of communities, yours at least started on the correct side. Better to ban outright, than to slowly realize that you should have banned it.
Indeed. Take a soft approach, or "wait and see", and you'll just allow your community to get infested with slop enthusiast crybullies that loudly protest any pushback against "genai content". The communities that draw a firm line and hold it will be the only ones that endure.
There has to be room for an AI-driven project that expresses a unique idea, even if there's no community around it yet. Someone has to express it, and from now on that idea will largely be implemented with AI.
> A good use of AI is when it enables people to do something they couldn’t do before, to contribute to a community when they couldn’t before.
I agree 100% with the novel contribution aspect. But there's some nuance there.
For example a project might have no active contributors. It might not be something you can drop directly into your codebase. Neither of those is inherently bad.
As AI becomes more responsible for higher-level planning decisions, the value of an OSS project becomes less tied to visible community activity like PRs and issues.
I notice this in my own work a lot. I might not use that project's code directly. But I think about a problem differently as a result. I often point my agent to existing OSS projects as inspiration on how to solve a problem. The project provides indirect value by supporting architectural decisions, deployment approaches etc. Unfortunately OSS activity doesn't capture this.
> A good use of AI is when it enables people to do something they couldn’t do before, to contribute to a community when they couldn’t before.
There are two separate things here that are getting silently conflated.
> A good use of AI is when it enables people to do something they couldn’t do before
This could be good on an individual level, if say, a doctor wants to vibe code an app of some sort for his individual practice.
>to contribute to a community when they couldn’t before.
This is where it goes off the rails. If they couldn't meaningfully contribute before, they aren't going to suddenly be able to discern that whatever slop they want to contribute is of value to the community. That's just another way of saying: if I wanted an AI opinion on something, why wouldn't I get it directly from the source and write the prompt myself, instead of having some intermediate human prompt the AI for me?
The human has unique context. They may work in a niche domain, or they talked to people and observed an unsolved problem. Then they express a potential solution via OSS. It's like product sense. Then they share that with others who find it interesting. The code is a great way to encapsulate the idea. It is usually the result of research and back-and-forth, not a single prompt. It would be way harder to think through or build a solution without AI, even if they had the context.
Who is going to verify that an AI-driven project is a unique idea? How do you distinguish between a genuinely unique project, a grifter who is shilling their "unique" project, and a new enthusiast who is convinced their project is unique, but is not? This is an impossible moderation task. The only options I see for a community are to either totally ban AI-generated content, or be totally consumed by it.
I don't really know. Certainly we need a higher bar. The Kafka example in the post may be hyperbolic, but I agree it pollutes the space. We also can't swing the other way and rely completely on out-of-date proxies, though: if you ban AI code, there will be very little code to see in a year. It'll take time, but we'll arrive at new norms. We built semi-successful ways to filter content farms in the earlier internet days. The signal has to shift to "did they think hard about this problem", which has some observable properties, like how they articulate the problem, or why it became important to them.
I want my future community apps and sites to build in a bot flagger. I don't care how hard it is; the community that gets this right is the one I'll jump ship to.
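For what it's worth, a first pass doesn't have to be fancy. Here's a toy sketch of the kind of heuristic flagger I imagine; every signal and threshold in it (account age, posting cadence, near-duplicate text) is my own guess, not a proven detector, and determined bots will adapt to anything this simple:

    # Toy first-pass bot flagger: scores accounts on a few cheap
    # heuristics and surfaces the worst offenders for human review.
    # All signals and thresholds are illustrative guesses.
    from dataclasses import dataclass, field
    from difflib import SequenceMatcher

    @dataclass
    class Account:
        name: str
        age_days: int
        comments_per_hour: float
        comments: list[str] = field(default_factory=list)

    def near_duplicate_ratio(comments: list[str]) -> float:
        """Fraction of comment pairs that are >90% similar text."""
        pairs = dupes = 0
        for i in range(len(comments)):
            for j in range(i + 1, len(comments)):
                pairs += 1
                if SequenceMatcher(None, comments[i], comments[j]).ratio() > 0.9:
                    dupes += 1
        return dupes / pairs if pairs else 0.0

    def suspicion_score(a: Account) -> float:
        score = 0.0
        if a.age_days < 7:             # brand-new account
            score += 1.0
        if a.comments_per_hour > 10:   # inhuman posting cadence
            score += 2.0
        score += 3.0 * near_duplicate_ratio(a.comments)  # copy-paste text
        return score

    def flag_for_review(accounts: list[Account], threshold: float = 2.5):
        """Return suspicious accounts, worst first, for a human moderator."""
        return sorted((a for a in accounts if suspicion_score(a) >= threshold),
                      key=suspicion_score, reverse=True)

Cheap heuristics like these won't catch a careful operator, but they would at least surface the laziest bot fleets for a human moderator instead of leaving it all to chance.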
You're absolutely right!
I've found the smoking gun ⸻ it's not your work, it's your prompt.
I've seen en dashes. I've seen em dashes. What kind of dash is that?!
It's been a personal favourite of mine to sprinkle into replies to clearly LLM generated textual diarrhea, it scores a laugh like, 1/10 times haha.
A three-em dash. TIL.
I'll remove the particulars to avoid anything partisan, but:
I failed to truly appreciate how cooked reddit was with bots until I accidentally clicked Popular and stumbled upon a national subreddit post with a 'chad meme', starring a particular political leader, whose unpopularity is hard to adequately convey to foreigners.
It was not just that the post had been so heavily upvoted; the comment section itself was more or less a mantra, with very little actual conversation, just the same sentiment echoed over and over, and all those comments in turn upvoted to the point of drowning out the lone comments at the bottom (not downvoted, just not upvoted) expressing "???". I don't think I had ever even written the word 'astroturfing' before expressing my bafflement to a friend, so I don't consider myself very tinfoil-hat about these things.
It was just utterly bizarre to see someone who can barely get a single win in public discourse being heralded -- monotonously -- like he was the second coming.
When LLMs were new on the scene, I thought trust would fade in the written (text) medium. I saw it happening on Substack, Medium, and Reddit. But then VCs pumped in so much money that AI has gotten into every other modality too (audio, video). The only things I really interact with these days are the human beings sitting in front of me, phone calls with people I know, and Hacker News. Life seems sorted, but something feels missing as well.
Edit - I am not anti-AI, but it is slowly killing digital human interaction.
Question for web devs - are captchas effective anymore? If Reddit required a captcha on every comment, would it actually decrease bot comments?
I made this point elsewhere, but people are learning a lot of what the rest of us had to learn the old way: for the most part, no one cares about your stuff, and now the value provided has to go way up to get people to care. That is, as the author says, the novelty has worn off, and since we know it's AI, the perceived value is also way down.
We're all recalibrating.
I really do think this is just a brief period before most people realize that slop posting doesn't personally get them anything, at which point most will give up and we'll go back to roughly the same ratio of cool things with real value, just on a bigger scale, because AI helps one person do more.
I don't know... I might have said the same thing about email/text/phone spam but it has only proliferated to the point where it's a constant stream of garbage. Email, text, and phone calls are almost completely useless at this point. Sifting the signal from the noise is a non-stop effort.
I think people who want to push a certain narrative might just set up a quick bot, tell it to start posting on Reddit or wherever, and let it run. Why not? It's little effort on their part and they might actually have influence. It's the same reason spammers apparently think it's worth sending me 10 text messages per day about a loan I've been approved for: it probably works 0.0001% of the time, but that's fine if it's all automated.
I mean, I think the dynamics are a bit different in online communities, at least actual communities rather than drive-by subs like r/technology or whatever.
Especially here on HN, with Show HN and the like, the forcing factor is "I get no votes or community recognition."
But I don't entirely disagree with you. Things won't totally go back; I think they'll settle a lot more than now, though, especially where things are a little more niche.
Online communities that allow upvoting / downvoting have been effectively dead for a long time because it's easy to manipulate conversations by elevating and punishing comments to fit a narrative. This is especially true on HN.
There's a lot of focus on tech projects here, but it's not just vibe-written projects that are ruining communities now.
No, it's a problem with art, text and videos too. Reddit was already becoming a creative writing exercise in many ways, with infamous subs like 'Am I the Asshole?' seemingly being about 80% fiction labelled as fact. But now you don't even need to know how to write to flood the site with useless 'content'.
YouTube is arguably even worse, since AI-led content farms are not just spamming the hell out of every topic under the sun, but giving outright dangerous advice and misinformation on top of that. I saw this video about medical misinformation from these 'creators' earlier, and it genuinely made me want to see YouTube crack down on this junk:
https://www.youtube.com/watch?v=UEfCTCBDKIU
And there's just this feeling of distrust everywhere too. Is anyone on Hacker News human anymore? Is that Reddit poster I'm responding to human? Are the folks on Twitter, Threads or Bluesky human?
The scary part is that you basically can't tell anymore. Any project you find could be AI generated slop, any account could be a bot using stolen images or deepfakes, any article or video could be blatant misinformation put together as a cash grab...
If something doesn't improve, pretty much every platform under the sun is going to be completely useless, as is a lot of the internet as a whole.
I think people like the blog author need to realize that this problem can't be dealt with by content moderation or by users trying their best to be honest. You just get a firehose with an on/off switch; you don't get free filtering or moderation with it.
I have been reading HN near-daily for years.
This synthetic participation (LLM or otherwise) has exposed weak spots in HN's high-trust environment. The weight we give to the average HN comment is orders of magnitude higher than the average Reddit (& co.) comment, and this relationship probably cuts both ways (much higher ROI on ads/propaganda). Due to the low volume and high trust, it seems to be a very different (easier) environment in which to achieve pervasive propaganda/advertising/etc. with a disproportionate impact.
I remember when some new LLM version came out (maybe from Meta?), something like 3 of the top 10 posts on the front page were all variations of "Foobar 2.1 New Model". Perhaps not explicit, deliberate manipulation, but the result was the same, and apparently allowed. How many of those generic LLM websites (https://letsbuyspiritair.com/ comes to mind) show up on the front page per day? Zero-effort static front-ends for some unremarkable data. I'm not going to touch the politics minefield, but that is a weak spot too.
All of this, and yet I think HN has handled it relatively well. I really appreciate not seeing comments of the form "I asked Clog/Gemini/etc. here's 5 paragraphs". Places like Reddit do not have the agility or control, and have degraded accordingly.
It makes me sad to think that a short time ago, every forum was ~100% humans, and now it is some fraction of that. I wonder if I will ever see that again.
For every argument against AI slop, you will get a variation of "it's the future", or "I'm 10x more productive now", or "I've shipped 3 applications in 2 days", etc.
They won't stop talking about it and defending it. But I can't get anyone to share their amazing work with me.
There is a reason the Show HN projects that are mostly vibe-coded don't get much response: they aren't any good. Comments that are AI-generated are hollow. Videos that are AI-generated are a shell of their sources.
The importance of good search engines and good discovery engines will grow even more.
Can such a thing even exist now? Any search engine algorithm can be gamed by AI.
Human slop is realistically just as bad. In a strange twist, human commentary on the Internet is asymptotically approaching an older LLM. Trite cliches, repetitive tropes, and tribal affiliation signals dominate conversation.
I have turned to blunt instruments: blocking individuals on their first cliche banner-wave. It has substantially improved comment quality but I still suffer from the problem that I don’t block stories entirely.
It sucks that the narrative framing device of 'human slop' has vanished in the last year. Some subreddits, like all the location subreddits, lifestyle subreddits like malefashionadvice and redscarepod, and entry-level academic subreddits like math and criticaltheory, were already just hives of human slop before AI came around, because of a structural design of the site that had the side effect of normalising a total absence of quality control.
Upvotes are not a good mechanism for quality control in any way, because they force good content to carry the same metadata as content that is technically well-constructed but irrelevant, meaningless, just a platitude, too obvious to be worth saying, or pablum. Upvotes turn everything into a shock-value-dominated 101 space.
HN is in peril and I don’t think it is a bad thing. Or rather, I’d like to bring back the old chestnut: it’s a good thing.
While the site has moved to using /showlim, the AI garbage just bypasses that and goes straight to the home page. Almost every project being shown is vibe-coded and looks exactly the same, generated by Claude or the like. This is an excellent test for the site: will it be able to adapt, or do we simply end up with a husk of what HN was, with AI posts driving the majority of engagement, the Overton window, and the upvotes/downvotes?
I look forward to this, I think it is an exciting development.
This is a good thing. Social media was already slop before AI. If this gets more intellectuals off these websites and spending their time on better things, then I love AI slop's purpose. There's more to the internet than Reddit, TikTok, and YouTube. Really, there is. If your circle of friends is small or nonexistent without visiting the same dotcoms, you have an issue worse than any AI slop, tbh.
Getting people off the internet is antithetical to the business goals of the AI companies. They won't let that happen without a fight.
How would one build an online community free of LLM agent commenters and links to "slop" content?
Strict invitation trees? Small signup fees? No SEO incentives?
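Of those, the invitation tree at least comes with a workable data structure: every account records who invited it, so when a bot ring surfaces you can prune the whole invite subtree at once, the way lobste.rs famously does. A toy sketch (the class and method names here are mine, not lobste.rs's actual code):

    # Toy invitation tree: each member records their inviter, so a
    # bot ring can be banned as a whole subtree instead of one
    # account at a time. Illustrative sketch only.
    class Member:
        def __init__(self, name: str, invited_by: "Member | None" = None):
            self.name = name
            self.invited_by = invited_by
            self.invitees: list["Member"] = []
            self.banned = False
            if invited_by is not None:
                invited_by.invitees.append(self)

        def ban_subtree(self) -> list[str]:
            """Ban this member and everyone they (transitively) invited."""
            banned, stack = [], [self]
            while stack:
                m = stack.pop()
                m.banned = True
                banned.append(m.name)
                stack.extend(m.invitees)
            return banned

    # If 'mallory' turns out to run a bot farm, every account that
    # descends from her invite goes with her.
    root = Member("admin")
    mallory = Member("mallory", invited_by=root)
    bot_a = Member("bot_a", invited_by=mallory)
    bot_b = Member("bot_b", invited_by=bot_a)
    print(mallory.ban_subtree())  # ['mallory', 'bot_a', 'bot_b']

It also makes inviters think twice, since vouching for a spammer puts their own standing (and all their invitees') on the line.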
I'm not a crypto person, but I was intrigued by Chia. They generate their coins based on allocating disk space. So if you have a bit of free space, you can fill it with plots and play the lotto.
The intriguing part is that I think it works against scaling. The incremental cost for me to use the 500GB of free space on my disk is $0, but someone scaling a bot farm has to buy all their space.
Real people tend to have a lot more idle capacity than optimized, scaled businesses, so any kind of proof of idle capacity seems like it would disadvantage bot farms (a toy sketch of the mechanic is below).
I’ve also thought that proof of collateral spending would be a good system. For example, you buy groceries and the store gives you a token saying you spent $X of real world money. Those tokens help show you're not a bot. Keeping that system honest and equitable would be extremely difficult though.
Maybe schools could give kids tokens for attendance. It sounds kind of dumb, but who knows.
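To make the proof-of-idle-capacity mechanic concrete, here's a toy challenge-response sketch, loosely inspired by Chia-style proof of space. It's an illustrative simplification with my own parameter choices throughout, not Chia's actual protocol:

    # Toy proof of idle capacity: the prover fills spare disk with
    # precomputed hashes ("plots"); the verifier issues a random
    # challenge and checks how close the prover's best stored hash
    # lands. More real storage => closer matches on average, and a
    # bot farm has to buy that storage for every identity it runs.
    import hashlib
    import secrets

    def h(data: bytes) -> int:
        return int.from_bytes(hashlib.sha256(data).digest(), "big")

    def make_plot(plot_key: bytes, n_entries: int) -> dict[int, int]:
        """Precompute n_entries hashes; this dict stands in for disk."""
        return {h(plot_key + i.to_bytes(8, "big")): i for i in range(n_entries)}

    def respond(plot: dict[int, int], challenge: int) -> tuple[int, int]:
        """Return the stored (hash, nonce) closest to the challenge (XOR metric)."""
        best = min(plot, key=lambda stored: stored ^ challenge)
        return best, plot[best]

    def verify(plot_key: bytes, challenge: int, hash_val: int, nonce: int,
               max_distance: int) -> bool:
        """Check the hash is genuine and close enough to the challenge."""
        recomputed = h(plot_key + nonce.to_bytes(8, "big"))
        return recomputed == hash_val and (hash_val ^ challenge) < max_distance

    # A modest plot passes the distance bound most of the time; a bot
    # farm would have to pay for one such plot per fake account.
    key = secrets.token_bytes(16)
    plot = make_plot(key, 50_000)
    challenge = secrets.randbits(256)
    hash_val, nonce = respond(plot, challenge)
    print(verify(key, challenge, hash_val, nonce, max_distance=2**256 // 10_000))

The point isn't the specific crypto; it's that answering challenges well requires actually holding the plot, which is nearly free for a person with spare disk and a real, linear cost at bot-farm scale.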
My guess is that sooner or later we're going to end up with one or the other of these:
* dead online communities
* highly-invasive, government-mandated "prove you are a human" requirements in order to participate in online communities
Charge $10 for an account, like Something Awful.
It'd be interesting to see how lobste.rs fares with all this.
The writing here is good. Quote of the day "Any fool can feed coins into a fruit machine and pull the arm."
There are "nice", "polite" slop enthusiasts. The ones who insist they have taste and tact. They would never post bad slop, recklessly, only the very highest-quality human-refined, curated slop. Not really slop at all, they would argue, because they gave it a careful review before posting it. They insist there's a very important difference between this premium slop and the nasty kind, and that low-quality human-authored media is actually slop, too, when you think about it. They talk about how important it is for people to use slop thoughtfully, efficiently, correctly, and that we all need to learn about and discuss slop constantly because it's the inevitable future and highly relevant for everyone.
They muddy the waters. They wheedle, rules-lawyer, carve out exceptions, and talk about how important it is to have nuance in separating virtuous applications for slop from bad ones, and that focusing on the bad ones is actually very tedious and rude. We should have polite discourse about the good things about slop and stop being so mean about bad slop, which isn't even really a problem. The bad kinds of slop will be solved soon, probably, and the harms are overstated. They colonize spaces.
If moderators don't swiftly throw these slop enthusiasts out on their ass, slightly less polite ones will post slop slightly less politely. More and more of the people participating in the space will have favorable opinions toward slop, and shout down people who object to slop. In no time at all, your community is a slop bar. Who could have imagined?
I usually type 5000 words of research for a 500-word output. It's not "write me an article on X"; it's 99% my own ideas, just worded, structured, and polished a bit. But I don't post them here. They're on my blog.
I'm not the arbiter on all things Godwin's Law, but either way the analogy doesn't work.
gonna start calling this effect The Slop Vanguard
AI slop is hurting my community in a different way. We have an internal Viva Engage community at work for quick development how-to questions. More and more often, instead of asking "how to" questions to the crowd to crowdsource answers, people are reaching out to me directly to ask why the solution the AI suggested doesn't work.
That people trust AI over organizational knowledge is bad enough. I fear that AI is turning people generally antisocial.
This is happening at my workplace and it's incredibly annoying. We get support tickets asking us to troubleshoot AI written scripts. The funny thing is that most of the time, it would be faster for the customer to tell us what they want to do in plain english and have us make it for them. Hell, if they make an honest attempt, we can point them in the right direction and teach them.
It's frustrating because we're bundling this shitty AI with our product so we're just making more work for ourselves. Then there's the push from leadership to use more AI...
I don't think it's making people antisocial though, people just like easy solutions to their problems. We're giving them what seems like an easy solution. But it's easy for them, not easy for the reviewers.
Sigh. The article states that "coding by LLM is the way things are done right now" in 10 different ways, but then insists that message boards and articles need to be protected.
We get it, the current narrative is that coding is the big thing, promoted by billionaires and scabs alike.
So, the coding narrative must be protected until the IPO of Juniper^H^H^H Anthropic happens and the whole thing implodes.
You could already get code for free, and faster, by using "git clone", without a company of thieves selling your own output back to you.
Related, from a couple of days ago: Knitting Bullshit https://katedaviesdesigns.com/2026/04/29/knitting-bullshit/
> AI slop is driving up the noise, and making the signal more and more difficult to discern in communities.
Thank you OP, this puts into words why I no longer look at Show HNs.