Stop Sloppypasta

(stopsloppypasta.ai)

72 points | by namnnumbr 7 hours ago ago

33 comments

  • madrox 2 hours ago

    I find that I don't have a lot of sympathy for people angry at this type of behavior, even though I share the disdain for someone else's AI output. The people doing this kind of thing are not the kind of people to be reading this manifesto. We've been creating bait content for a long time, and humans have never been given the tools to manage this in any sophisticated fashion. The internet was not a bastion of high quality content or discourse pre-AI. We need better tools as content consumers to filter content. Ironically, AI is what may actually make this possible.

    I do find it interesting that people don't mind AI content, as long as it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral...like they're being hoodwinked somehow.

    I suspect the endgame of this is probably the fulfillment of Dead Internet Theory, where it's just AI creating content and AI browsing the internet for content, and users will never engage with it directly. That person who spent 10 seconds getting AI to write something will be consumed by AI as well, only to be surfaced to you when you ask the AI to summon and summarize.

    And if that fills people with horror at the inefficiency of it all, well, like I said, it isn't like the internet was a bastion of efficiency before. We smiled and laughed for years that all of this technology and power is just being used to share cat videos.

    • valicord an hour ago

      > I do find it interesting that people don't mind AI content, as long it's "their AI." The moment someone thinks it's someone else's AI output, the reaction is visceral.

      Isn't it obvious? If I'd wanted to see an AI response to my question, I'd have asked it myself (maybe I already did). If I'm asking humans, I want to see human responses. I eat fast food sometimes, but if I were served a Big Mac at a sit-down restaurant I'd be properly upset.

      • madrox an hour ago

        > If I'm asking humans, I want to see human responses

        I find this fascinating, honestly. It shouldn't matter as long as it addresses your ask, yet it does. I also wish I could filter social media on "it's not X. It's Y"

        Because it's probably not actually about the content but the sense of connection. People want to feel like they're connecting to people, that they're worthy of someone else's time and attention.

        And if that's what people are seeking, slack and social media are probably not the platforms for it (and, arguably, never were).

        • valicord an hour ago

          > It shouldn't matter as long as it addresses your ask, yet it does.

          But it doesn't? I'm more than capable of using Google and ChatGPT myself. If I were looking for a machine-generated answer to my question I would have already found it myself and never made the post in the first place. If I went to the effort of posting the question, it means either that the slop answer is insufficient for some reason or that I want to hear from actual humans with subjective experiences that an LLM cannot offer.

          Posting an AI response verbatim basically says "I think you're too stupid to click a couple of buttons, so let me show you how it's done". I think it's very reasonable to get upset at the implication.

    • namnnumbr an hour ago

      I acknowledge that those likely to copypaste slop aren't likely to find this article themselves, but I built the page to be shared and to guide discussions around etiquette, like nohello.net or dontasktoask.com. IMO a common understanding of AI etiquette would provide social pressure to halt some of these behaviors.

      I honestly don't mind someone else's AI as long as I can trust it/them. One problem I have with sloppypasta specifically is that it reads as raw LLM output and the user isn't transparent about how they worked with the AI or what they verified. "ChatGPT says" isn't enough; for me to avoid inheriting a verification burden, I'd also need to understand what they were prompting for, if they iterated with the AI, and if/what/how they validated.

      (the other problem is that dumping a multi-paragraph response in the midst of a chat thread is just obnoxious, but that's true even if it's artisanal, human-written text)

      • lovemenot 27 minutes ago

        Couple of expressions from pre-AI culture: "RTFM", "Google is your friend". These were well used because they were directed, pithy, abrasive.

        (n)amow(?): All my own work

      • Aeolun an hour ago

        Yes, I can replace the link to nohello in my automated responses now :)

    • mcphage an hour ago

      > We smiled and laughed for years that all of this technology and power is just being used to share cat videos.

      Well, cat videos make people happy.

    • waterTanuki 25 minutes ago

      I find your comment disingenuous at best.

      > The internet was not a bastion of high quality content or discourse pre-AI.

      I have read thousands upon thousands of pages of AI-related discourse and watched hundreds of videos since 2022, maybe even a thousand now. NEVER at any point in time did people pine for the "high quality" internet of before. They pined for the imperfect HUMAN internet of before. We are now seeing once-pristine, curated corners of the internet being infected with sloppypasta.

      This is quite a broad brush to paint the internet with. It's like saying the Earth is not a bastion of war zones/peaceful places to live. That is HIGHLY dependent on location.

  • OptionOfT an hour ago

    It's very weird how many people take the output of ChatGPT/Gemini/Claude as gospel, and don't question it at all.

    It's also very impolite to dump 5 pages of text on someone, because now you're asking _them_ to validate it.

      When I ask a question in Slack I want people's input. Part of my work is also consulting the GPTs and seeing if the information makes sense.

      And it shows up the most with people who answer questions in domains they're not 100% familiar with.

    • Aeolun 44 minutes ago

      I don’t mind this so much if they don’t know anything about the subject themselves. What bothers me is when they then paste it at domain experts as if it makes them qualified to talk.

  • 0xbadcafebee 12 minutes ago

    If I was a bot I would probably write some perfectly punctuated garbage about how your site is a crucial testament to the ever evolving digital landscape or use big words to delve into the multifaceted tapestry of internet ethics. But honestly your website about stopping sloppy pasta is just so dumb and a complete waste of time. Your acting like somebody writing a fake story with ai is the end of the world or something. Literaly nobody cares if some random article was written by a computer so maybe stop pretending your the heroic saviors of the web. Get a real hobby and stop whining about people using chat bots because its really not that deep bro.

    - now the fun part: which AI did I use to write the above?

  • simianwords an hour ago

      I've been thinking about this: what if AI ran autonomously and flagged claims that are factually incorrect?

    It is easy to do in social media because the context is global but in enterprises it is a bit harder.

    Something like "flagged as very likely untrue by AI" is something I would really appreciate.

    I see many posts and comments throughout the internet that can easily be dispelled by a single LLM prompt. But this should only be used when the confidence is really high.

  • uniq7 2 hours ago

    This article's proposal for stopping sloppypasta is to convince the people who do it to stop, but I am more interested in what someone who receives sloppypasta can do.

    How do I tell my colleagues to stop contributing unverified AI output without creating tension between us?

    I've never done that so far because I feel like I am either exposing their serious lack of professionalism or, if I wrongly assumed it was AI, plainly telling them that their work looks like bad AI slop.

    • userbinator an hour ago

      > How do I tell my colleagues to stop contributing unverified AI output without creating tension between us?

      Make them realise they're replacing themselves if they continue down that path. "What value do you have if you're just acting as a pipe to the AI?"

      • uniq7 27 minutes ago

        If I tell someone literally "What value do you have if you're just acting as a pipe to the AI?", I'm pretty sure my manager will schedule a quick 1:1 to ask me why I'm telling peers that they have no value.

    • kace91 an hour ago

      >How do I tell my colleagues to stop contributing unverified AI output without creating tension between us?

      Pattern rather than person? General team reviews or the like. As long as it's not tech leadership pressing for it...

    • verdverm 2 hours ago

      I've had some luck pointing out where the AI is wrong in their sloppypasta, as delicately as one can. Avoiding shame or embarrassment can be a powerful motivator.

      The most interesting incident for me was having someone take our Discourse thread, paste it into AI to validate their hurt feelings (it took a follow-up prompt to go full sycophancy), and then post the response, which lambasted me, back into the thread. The mods handled that one before I was aware, but I then did the same thing, using different prompts and never sharing the output. It was an intriguing experience and exploration. I've since been even more mindful of my writing, sometimes using similar prompts to adjust my tone or call me out. I still write the first pass myself, rarely relying on AI for editing.

    • namnnumbr 2 hours ago

      I wrote this intending it to be directly sharable and/or to provide a framework for how to have that discussion, kind of like a nohello.net or dontasktoask.com.

      I've found success having sidebar conversations with the colleague (e.g., not in the main public thread where they pasted slop), explaining why it was disruptive and suggesting how they might change their behavior. It may also be useful to see if you can propose or contribute to a broader policy on appropriate AI use, and then leverage that policy as justification for the conversation.

  • stabbles 2 hours ago

    I wouldn't call "ChatGPT says" an equivalent of LMGTFY. The former is people in awe of the oracle; the latter is people tired of having to look something up for others.

    • verdverm 2 hours ago

      I would say LMAAFY is like LMGTFY, whereas the sloppypasta is more like pasting a list of search results without vetting them. That is, there are two phases to this phenomenon: query and results.

  • chewbacha an hour ago

    When you must remind someone to “think” when using a technology because the least resistant path is to not think… it feels like the technology isn’t really helping.

    They are stealing our work, turning it into a model, and then renting our decisions to less intelligent people.

    They (tech companies) don’t want us to be smart any more. They are commodifying intelligence.

  • incognito124 2 hours ago
    • namnnumbr an hour ago

      100% - this was inspired by, and quotes, "It's rude to show AI output to people". Thanks for linking the discussions!

  • rrr_oh_man an hour ago

    It's ironic, because the site has all the hallmarks of an LLM generated website.

    • spondyl an hour ago

      I think Claude Code's frontend design is quite a fan of serif fonts from what I've seen in the past.

      They did disclose AI usage which is good: https://github.com/ahgraber/stopsloppypasta?tab=readme-ov-fi...

    • namnnumbr an hour ago

      Oh, I 100% acknowledge the site itself was LLM generated. I'm not a web designer, so I needed a lot of help making a visually appealing site, even if that design language is at this point an LLM trope.

      However, the essay and the guidelines were all human-written!

      • Terretta 37 minutes ago

        Hits you in the first row of buttons with the classic gen-AI slop "Why It Matters".

        So trace* through ninerealmlabs and ahgraber and sure enough:

          I used AI:
          - to help build this website.
          - to help generate examples of sloppypasta
            based on my original guidance
          - to proofread and review the human-written
            copy to provide a critical review
          - to improve my arguments and ensure clarity.
        
        Kudos for being forthright.

        ---

        * Turns out clicking "Open Source" bottom right gets there faster!

        • namnnumbr 31 minutes ago

          I talked myself in circles on that "why it matters" heading but ultimately couldn't come up with a better one. "The problem" has a similar AI-slop feel, and "the rant" // "the rules" didn't really evoke the feeling I wanted.

          Happy to take suggestions on this!

      • rrr_oh_man 38 minutes ago

        Credit to you for your candor!

        I'm possibly too jaded / cynical already...

  • namnnumbr 7 hours ago

    Tired of people at work pasting raw ChatGPT output into chats, I coined the term "sloppypasta" and have written this rant to explain why it's rude, along with some guidelines for what to do instead.

    sloppypasta: Verbatim LLM output copy-pasted at someone, unread, unrefined, and unrequested. From slop (low-quality AI-generated content) + copypasta (text copied and pasted, often as a meme, without critical thought). It is considered rude because it asks the recipient to do work the sender did not bother to do themselves.

    • ares623 an hour ago

      I'm glad that the term "slop" really caught on. It's such a succinct way to describe the phenomenon, and at the same time it's so malleable. Sloppypasta, Microslop, Workslop, Ensloppification, etc.