Sneaky spam in conversational replies to blog posts

(shkspr.mobi)

59 points | by ColinWright 2 hours ago

35 comments

  • hrunt an hour ago

    I subscribe to a handful of investment-related YouTube channels. This pattern has been common for years. A bot will reply with a comment loosely related to the video, describing how something worked for them. Another bot will reply asking how they did that. Another bot (not the original commenter) will reply that they worked with so-and-so or invested in such-and-such, and then there will be maybe four or five more comments responding to that. All obvious bot accounts.

    It's obvious on these channels because comments rarely get many replies otherwise; when they do, it's almost always from the channel owner. It's so obvious, in fact, that I'm surprised YouTube hasn't done something to address it.

    • pinkmuffinere an hour ago

      Oh I love these comment threads! I like to add another reply saying something like “oh my goodness, I used Elizabeth Ferguson for my investing too!! She went to my college, so I thought I could trust her. But then I found out she was cheating on me with my wife! We got a divorce and i lost half my assets in the separation. Elizabeth Ferguson probably is enjoying them now :(. Just one experience, but buyer beware!”

      • basilikum an hour ago

        I'd be careful with that. Sounds like you could be mistaken for a bot that is part of the scheme and get your Google account banned.

        Then again, you should live under the assumption that your Google account could be banned at any time with no recourse. You do have local backups of all your Google account data and don't need your Gmail account to access anything important, right?

        • bombcar 28 minutes ago

          That makes me realize that banning is a punishment only usable on people who care about their account. Scammers don’t, a new bot account is a click away. But basilikum would be sad to lose his account.

          • johnmaguire 7 minutes ago

            For something like YouTube, there is a small monetary cost in order to verify a phone number.

    • sebakubisz 7 minutes ago

      Have you seen the same chain pattern outside finance yet? Wonder whether investment scams are the most conspicuous because the payout per convert is high or whether it's seeded the widest on YouTube specifically.

      • nibbleyou a minute ago

        I saw something like this for a book. It was under an Instagram reel where the person was describing ways to improve your self-esteem. In the comments section someone mentioned a book that worked for them and it had a few replies saying how it worked for them too. I searched for the book and it was a very new book from an unknown author and zero reviews everywhere.

    • lopis an hour ago

      It's been well known to happen on reddit too for many years. Whole posts and comment threads get copied verbatim by new accounts. Nowadays, with AI, you can make it way more dynamic.

      • jerf 8 minutes ago

        AI has been awful on Reddit.

        I've acquired a sense for at least some of the bots. There's a set of bots that each post one high-engagement post about once a day, across an implausibly large range of subreddits and with implausible regularity. I can tell, from the fact that I remove them while most other subs don't, that most subs haven't figured this out yet.

        There is an obvious counter to that, which I haven't wanted to put out there, but I've become increasingly suspicious that it's already been figured out anyhow: limit each bot account to a specific "persona" with plausible interests and posting rates.

        And that's where I think the race may well end: victory to the spammers. If there's a winning move against that in general, I haven't figured it out.

        I know reddit is concerned about this at the corporate level, but I'm not sure they realize it is possibly their #1 threat, towering above all others. Not that I have any specific suggestions about what to do about it either. It will be years before the masses realize this and stop visiting, and by the time that happens all the social media companies are going to be in trouble for the same reason. You can see the leading edge here on HN, but it's still an almost negligible fraction of the total userbase of something like Reddit today. That will change.
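The "implausible regularity" described above is itself a detectable signal. As a minimal, illustrative sketch (no platform is known to use exactly this; the function name and thresholds are assumptions), one can score an account by the coefficient of variation of its inter-post gaps: humans post in bursts, schedulers tick like clocks.

```python
from statistics import mean, pstdev

def regularity_score(post_times):
    """Coefficient of variation of the gaps between posts.

    Values near 0 mean the account posts with clockwork regularity
    (bot-like); bursty human activity scores much higher.
    Returns None when there are too few posts to judge.
    """
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    if len(gaps) < 2 or mean(gaps) == 0:
        return None
    return pstdev(gaps) / mean(gaps)

# Timestamps in hours: a bot posting almost exactly once a day
# versus a human posting in bursts with long silences.
bot_like = [0, 24.0, 48.1, 71.9, 96.0, 120.2]
human_like = [0, 1, 2.5, 50, 50.5, 120]

print(regularity_score(bot_like))    # near 0
print(regularity_score(human_like))  # well above 0
```

A real moderation pipeline would need a tuned cutoff and allowlists for legitimately regular posters (daily threads, newsletter bots), so treat this purely as the shape of the heuristic, not a ready-made filter.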

      • embedding-shape 11 minutes ago

        > It's been well known to happen on reddit too for many years

        "For many years" being around 20 years at this point. Not sure reddit is a great example, given the founders admitted to using sockpuppets almost since day 1 in order to generate fake activity on the platform.

    • weird-eye-issue 23 minutes ago

      Yes, and what they do is use actual registered investment advisors' names and set up scam websites for them. That makes it look more legitimate, because if you research the person you'll find they really are registered in official databases.

    • Ralfp an hour ago

      I’ve been seeing this kind of spam on forums all the way back in 2004. I wonder if it was a feature in Xrumer or whatever they used to post spam back then.

      • bombcar 27 minutes ago

        If you have a forum and haven’t found a thread that is just one guy arguing with himself on twelve sock accounts, then you haven’t been looking, or you only have one user.

    • Forgeties79 an hour ago

      They also talk like people in a national ad.

      “Wow! Seems like it’s so easy to change over with savings like that!”

      • sixhobbits an hour ago

        The bad ones seem like this; the scary part is not knowing if there are good ones.

        • Forgeties79 20 minutes ago

          Generally when people start having a back and forth about a product I assume it’s astroturfing unless it makes sense in context and/or it’s just one of those brands people genuinely get excited about (they tend to be obvious ones you’ve seen a lot already).

          Doesn’t mean I don’t ever get duped, but idk. You learn to spot the signs. I imagine most of us on HN catch most instances. Genuine-seeming referrals aren’t as easy to fake as one would think.

  • alansaber 13 minutes ago

    The post timing is the main giveaway; surely it wouldn't be that hard to space out these spam posts. The volume of automated comments being spammed across all social platforms is not quite at a tipping point, but it has increased significantly.

  • keiferski an hour ago

    This has been a thing since blogs became widespread 25+ years ago, especially with the advent of Wordpress. It was even a “commonly accepted” SEO tactic for a while.

  • rozumem 2 hours ago

    Nice. I run a site that depends on user submitted content, and it's really interesting to observe how some people try to get around the guardrails. Not sure if your tool does this, but I would perform some additional checks for comments that have links in them.

  • throwaway667555 an hour ago

    This has also been absolutely rampant on reddit over the past few months.

    • Aurornis an hour ago

      I’m not a heavy Reddit user but I’ve noticed a sharp increase in comment spam disguised as real discussion.

      I think the turning point was when they allowed accounts to hide their comment history. Before, when you could click on an account and read all of their other comments it was easy to tell when an account only existed for fake conversations about a product they were spamming.

      Now the spam accounts hide their comment history, so they can do nothing but spam similar comments all over Reddit and walk the line where it’s not obvious whether any single comment is spam or a one-off comment from someone trying to be helpful.

      Users are using Google and other services to find their other posts and post warnings, but it takes so much more effort now.

      • chownie 11 minutes ago

        I have noticed the same uptick in bot-like behaviour there. The part I struggle to square is why so much of it is so useless.

        Maybe it's account laundering, but on any popular post at least half of the comments are tangential at best. They don't express anything a real person would express: replying with just skull emojis to a random news post, or saying "he really said" followed by a word-for-word recreation of a throwaway quote from the video. No one ever replies to these comments, they get maybe 2 upvotes (if that), and the platform doesn't reward them at all, yet they constantly appear in a very artificial-looking way.

      • walthamstow 24 minutes ago

        It's interesting that people are concerned about seeing ads in ChatGPT when it will happily regurgitate astroturf from Reddit right now

      • throwaway667555 30 minutes ago

        I agree, anecdotally I noticed a big uptick coincident with the comment hiding feature and with the Q4 2025 leap forward in LLM quality.

      • AussieWog93 an hour ago

        Just a thought, but I wonder if Reddit are hiding this information deliberately to prevent anyone from publishing a study estimating what percentage of their traffic is driven by bots (anecdotally, it's a lot - and they used to be mostly organic even half a decade ago).

    • armchairhacker an hour ago

      • alansaber 12 minutes ago

        There must be some element of reddit turning a blind eye to this/trying to push it into their sales funnel for the paid reddit marketing features.

    • 4chandaily an hour ago

      This has been rampant on reddit for years.

  • xyzal 18 minutes ago

    Text generation is now cheap, so I expect this problem to worsen. I hate to write it, but for platforms that aspire to be a modern agora, I don't see any solution other than identity verification ...

    • a2128 10 minutes ago

      Why would identity verification solve this? The spammer can just verify himself. And if he doesn't want to, or he's operating at a bigger scale than an individual, there will be services selling identity verifications on the cheap. They'll work either by paying people in a poor country to verify themselves all day or, even more cheaply, by having sketchy age-verification services on sketchy porn sites proxy or replay people's verifications to another service of your choice.

    • alansaber 11 minutes ago

      All roads lead to authoritarianism eh

  • sublinear 30 minutes ago

    I also see a ton of this here on HN as the political topics have ramped up.

    Not enough people are flagging those comments when the content aligns with their own bias. It's even less likely to get flagged when it's a double whammy of politics and AI. Being loosely about AI should not give a comment a free pass.

    • Permit 26 minutes ago

      I haven't seen this. Can you give some examples?

    • bombcar 25 minutes ago

      I rarely downvote anything, but I’ll unholster the downvote for obvious political spam even when it agrees with me.

      If we don’t police our side nobody will.