Slop Terrifies Me

(ezhik.jp)

82 points | by Ezhik 4 hours ago

68 comments

  • bonoboTP a few seconds ago

    Commercial ventures have only ever had to care exactly to the extent that they are financially motivated by competitive forces and by regulation.

    In my experience, coding agents are actually better at doing the final polish and plugging the gaps that a developer under time pressure to ship would skip.

  • beardyw 2 hours ago

    I think this is far too nuanced. I am terrified by what the civilization we have known will become. People living in less advanced economies will do OK, but the rest of us not so much. We stand on the brink of a world where some wealthy people will get more wealthy, but very many will struggle without work or prospects.

    A society where a large percentage have no income is unsustainable in the short term, and ultimately liable to turn to violence. I can see it ending badly. Trouble is, who in power is willing to stop it?

    • zozbot234 a few seconds ago

      > very many will struggle without work or prospects.

      People always say this with zero evidence. What are some real examples of real people losing their jobs today because of LLMs? Apart from copywriters (i.e. the original human slop creators) having to rebrand as copyeditors because the first draft of their work now comes from a language model.

    • lm2s 2 hours ago

      Yes, that’s why they are racing to build the very advanced robots: to prevent the violence from reaching them.

      • direwolf20 32 minutes ago

        Gaza is kept as a testing ground for domestic spying and domestic military technology intended to be used on other groups. Otherwise they'd have destroyed it by now. Stuff like Palantir is always tested in Gaza first.

      • satisfice 2 hours ago

        That is exactly the motivation. The problem with being a billionaire is you still have to associate with poor people. But imagine a world where your wealth completely insulates you from the resentful poor.

        • gib444 an hour ago

          How does a billionaire have to associate with poor people? They can live in a complete bubble: a house in the hills, chauffeur-driven cars, private jets, private islands for holidays, etc.

          • aspaviento an hour ago

            The people who cook for them, the people who clean for them, the ones who take care of their kids, the ones who sell them stuff or serve them in restaurants...

            • pbhjpbhj 3 minutes ago

              Also, they're not building the house or the jet, they're not growing the food, ... The people close enough can be chosen for their willingness to be sycophants and their happiness to be servants, but unless you're feeding yourself from your own farm or manufacturing your own electronics, there are limits to even a billionaire's ability to control personnel.

          • nkrisc an hour ago

            Unless they’re living entirely by themselves, they will always be dependent on poor people.

          • sh3rl0ck an hour ago

            The poor maids and servants, the poor chauffeur, the poor chef, etc.

          • Der_Einzige 42 minutes ago

            The fact that people can see that the singularity is basically happening, yet can't imagine humanoid robots getting good rapidly, is why most people here are bad futurists.

            • wiseowise 28 minutes ago

              You’re delusional if you think the singularity is happening.

            • danaris 38 minutes ago

              > the singularity is happening

              [Citation needed]

              No LLM is yet being used effectively to improve LLM output in exponential ways. Personally, I'm skeptical that such a thing is possible.

              LLMs aren't AGI, and aren't a path to AGI.

              The Singularity is the Rapture for techbros.

              • Der_Einzige 33 minutes ago

                If you look at the rapid acceleration of progress and conclude this way, well, de Nile ain't just a river in Egypt.

                Also yes LLMs are indeed AGI: https://www.noemamag.com/artificial-general-intelligence-is-...

                This was Peter Norvig's take. AGI is a low bar because most humans are really stupid.

                • mattgreenrocks 21 minutes ago

                  If you think AGI is at hand why are you trying to sway a bunch of internet randos who don’t get it? :) Use those god-like powers to make the life you want while it’s still under the radar.

                • danaris 27 minutes ago

                  What rapid acceleration?

                  I look at the trajectory of LLMs, and the shape I see is one of diminishing returns.

                  The improvements in the first few generations came fast, and they were impressive. Then subsequent generations took longer, improved less over the previous generation, and required more and more (and more and more) resources to achieve.

                  I'm not interested in one guy's take that LLMs are AGI, regardless of his computer science bona fides. I can look at what they do myself, and see that they aren't, by most reasonable definitions of AGI.

                  If you really believe that the singularity is happening now...well, then, shouldn't it take a very short time for the effects of that to be painfully obvious? Like, massive improvements in all kinds of technology coming in a matter of months? Come back in a few months and tell me what amazing new technologies this supposed AGI has created...or maybe the one in denial isn't me.

    • KurSix an hour ago

      People in power won't act out of foresight or ethics. They'll act when the cost of not acting exceeds the cost of doing something messy and imperfect

      • relaxing 21 minutes ago

        Even that’s giving them too much credit. They’ll burn it all down to preserve their fragile egos.

    • kubb 2 hours ago

      I wonder, will the rich start hiring elaborate casts of servants including butlers, footmen, lady's maids, and so on, since they'll be the only ones with the income?

      • temp8830 2 hours ago

        They already do. In fact, we are all working in service of their power trips.

      • drzaiusx11 an hour ago

        As far as I can tell, the rich have never stopped employing elaborate casts of servants; these servants just go by different titles now: private chef, personal assistant, nanny, fashion consultant, etc.

      • chung8123 an hour ago

        Who do you think is building the machines for the rich? All of these tech companies are nothing without the employees that build the tech.

      • ted_bunny an hour ago

        This is what the service economy in the imperial core already is.

    • Muromec 2 hours ago

      It's regression to the mean in action. Everything eventually collapses into oligarchy, and we will simply join the unprivileged rest in their misery. Likely with a few wars, civil or not, here and there.

    • pydry an hour ago

      >We stand on the brink of a world where some wealthy people will get more wealthy, but very many will struggle without work or prospects.

      Brink? This has been the reality for decades now.

      >A society where a large percent have no income is unsustainable in the short term, and ultimately liable to turn to violence. I can see it ending badly. Trouble who in power is willing to stop it?

      Nobody. They will try to channel it.

      I think all the signals are pointing, pretty much inevitably, to three potential outcomes (in order of likelihood): WW3, a Soviet-style collapse of the West, or a Soviet-style collapse of the Sino-Russian bloc.

      If the promise of AI is real, I think it makes WW3 a much more likely outcome: a "freed-up" disaffected workforce pining for meaning and a revolutionized, AI-drone-first battlefield both tip the scales in favor of world war.

    • jjgreen 2 hours ago

      Welcome to capitalism!

      • smokel an hour ago

        Besides being a bit of a shallow comment, what exactly are you implying here? That capitalism logically implies that the rich become richer? I don't think this is necessarily the case; it just needs a stronger government than what the US currently has in place (e.g. progressive taxation and strong antitrust policy seem to work fairly well in Europe).

        • kubb 3 minutes ago

          But with how compounding works, isn't this outcome inevitable in capitalism? If the strong government prevents it then the first step for the rich is to weaken or co-opt the government, and exactly this has been happening.

        • beardyw 17 minutes ago

          Isn't that what Americans call socialism?

  • drzaiusx11 34 minutes ago

    I have deep concerns surrounding LLM-based systems in general, which you can see discussed in my other threads and comments. However in this particular article's case, I feel the same fears outlined largely predate mass LLM adoption.

    If you replace "artificial intelligence" with offshored labor (the "actually Indo-Asians" meme moniker), you have some parallels: cheap spaghetti code that "mostly works", just written by farms of humans instead of farms of GPUs. The result is largely the same. The primary difference is that we've now subsidized (through massive, unsustainable private investment) the cost of "offshoring" down to basically zero. Obviously that has its own set of problems, but the piper will need to be paid eventually...

  • roxolotl an hour ago

    LLMs are an embodiment of the Pareto principle. It turns out that if you can get an 80% solution in 1% of the time, no one gives a shit about the remaining 20%. I agree that’s terrifying. The existential AI risk crowd is afraid we’ll produce gods to destroy us. The reality is that we’ve instead exposed a major weakness in our culture: we’ve trained ourselves to care nothing about quality and instead to maximize consumption.

    This isn’t really news. Content farms already existed. Amusing Ourselves to Death was written in 1985. Critiques of the culture go back well before that. But the reality of seeing the end game of such a culture laid bare in the waste of the data center buildout is shocking and repulsive.

    • KurSix an hour ago

      The data center buildout feels obscene when framed this way. Not because computation is evil, but because we're burning planetary-scale resources to accelerate a culture that already struggles to articulate why quality matters at all

      • direwolf20 30 minutes ago

        There isn't nearly enough AI demand to make all of these projects turn a profit.

  • hereme888 2 hours ago

    "terrified".... overused word. As a man I literally can't relate. I get terrified when I see a shark next to me in the ocean. I get impatient when code is hard to debug.

    • KurSix an hour ago

      We're pretty good at naming fear when it has a physical trigger. We're much worse at naming the unease that comes from watching something you care about get quietly hollowed out over time. That doesn't make it melodrama, just a different category of discomfort.

    • relaxing 16 minutes ago

      Step 1: Start looking beyond your code, as the stuff beyond your code is looking at you.

    • mystraline an hour ago

      It's existential dread: of being useless and of not being able to thrive.

      It's being compared to a slop machine, and billionaires claiming that it's better than you in all ways.

      It's having integrity in your work, while the LLM slop machines can lie and go "You're actually right (tells more lies)".

      It all comes down to this: LLMs serve to 'fix' the trillion-dollar problem of people's wages. Especially those of engineers, developers, medical professionals, and more.

  • secretsatan 11 minutes ago

    I was watching a YouTube video the other day where the guy was complaining that his website was dropping off the Google search results. Long story short, he reworded it according to advice from Gemini; the more he did it, the better it performed, but he reflected on how the website no longer represented him.

    Soon, we'll all just be meatpuppets, guided by AI to suit AI.

  • outime 2 hours ago

    > 90% is a lot. Will you care about the last 10%? I'm terrified that you won't.

    I feel like long before LLMs, people already didn't care about this.

    If anything, software quality has been decreasing significantly, even at the "highest level" (see Windows, macOS, etc.). Are LLMs going to make it worse? I'm skeptical, because they might actually accelerate shipping bug fixes that (pre-LLMs) would have required more time and management buy-in, only to be met with "yeah, don’t bother, look at the usage stats, nobody cares".

    • KurSix an hour ago

      I don't think LLMs are the root cause or even a dramatic inflection point. They just tilt an already-skewed system a little further toward motion over judgment

    • intrasight 2 hours ago

      If it can enable very small teams to deliver big apps, I do think the quality will increase.

  • blaze33 2 hours ago

    As much as we speak about slop in the context of AI, slop as the cheap low-quality thing is not a new concept.

    As lots of people seem to always prefer the cheaper option, we now have single-use plastics, ultra-fast fashion, plastic stuff that'll break in the short term, brittle plywood furniture, cheap ultra-processed food, etc.

    Classic software development always felt like a tailor-made job to me: of course it's slow and expensive, but done by professionals it can give excellent results. Now, if you can get crappy but cheap and good-enough results, of course that will be the preferred option for mass production.

  • chung8123 an hour ago

    AI slop is similar to the cheap tools at Harbor Freight. Before, we had to buy really expensive tools that were designed to last forever and perform a ton of jobs. Now we can just go to Harbor Freight and get a tool that is good enough for most people.

    "80% as good" might be reframed as "100% OK for 80% of the people." It's when you are in the minority that cares about or needs that last 20% that it becomes a problem, because the 80% were subsidizing your needs by buying more than they needed.

  • KurSix an hour ago

    I don't think craft dies, but I do think it retreats

  • frankie_t an hour ago

    If slop doesn't get better, it would mean that at least I get to keep my job. In the areas where the remaining 10% don't matter, maybe I won't. I'm struggling to come up with an example of such software outside of one-off scripts and some home automation though.

    The job is going to be much less fun, yes, but I won't have to learn from scratch and compete with young people in a different area (one which, most likely, I will enjoy less). So, if anything, slop gives me hope.

  • Havoc an hour ago

    The slop is sad but a mild irritation at most.

    It's the societal-level impact of recent advances that I'd call "terrifying". There is a non-zero chance we end up with a "useless" class that can't compete against AI and machines at all, on any metric. And there doesn't seem to be much of a game plan for dealing with that without the social fabric tearing.

    • danaris 32 minutes ago

      Some of us have a perfectly good game plan for that. It's called Universal Basic Income.

      It's just that many powerful people have a vested interest in keeping the rest of us poor, miserable, and desperate, and so do everything they can to fight the idea that anything can ever be done to improve the lot of the poor without destroying the economy. Despite ample empirical evidence to the contrary.

  • RalfWausE 37 minutes ago

    The Butlerian Jihad has to happen. Destroy the datacenters and give the oligarchs the French treatment!

  • intrasight 2 hours ago

    > I'm terrified that our craft will die, and nobody will even care to mourn it.

    "Terrified" is a strong word for the death of any craft. And as long as there are thousands that love the craft, then it will not have died.

  • bartvk 2 hours ago

    I deeply hate the people who use AI to poison the music, videos, or articles that I consume. However, I really feel that it could make software cheaper.

    A couple of years ago, I worked for an agency as a dev. I had a chat with one of the sales people, and he said clients asked him why custom apps were so expensive, when the hardware had gotten relatively cheap. He had a much harder time selling mobile apps.

    Possibly, this will bring a new era of decent macOS desktop and mobile apps, not another web app that I have to run in my browser and have no control over.

    • tonyedgecombe 2 hours ago

      >Possibly, this will bring a new era of decent macOS desktop and mobile apps, not another web app that I have to run in my browser and have no control over.

      There has been no shortage of mobile apps; Apple frequently boasts that there are over 2 million of them in the App Store.

      I have little doubt there will be more; whether any of the extra ones will be decent remains to be seen.

      • vrighter 2 hours ago

        AI is trained on the stuff already written. Software has been taking a nosedive for ages (e.g. committing to ship something in 6 months before anyone has even figured out what to put in it). If anything, shit will get worse due to the deskilling being caused by AI.

  • fancyfredbot 2 hours ago

    > You get AI that can make you like 90% of a thing! 90% is a lot. Will you care about the last 10%? I'm terrified that you won't.

    Based on the Adobe stock price, the market thinks AI slop software will be good enough for about 20% of Adobe users (or Adobe will need to make its software 20% cheaper, or most likely somewhere in between).

    Interestingly, Workday, which is possibly slightly simpler software and more easily replicable using coding agents, is about the same (down 26%).

    • twoodfin 2 hours ago

      The bear case for Workday is not that it gets replicated as slop, but that its “user base” becomes dominated by agents.

      Agents don’t care about any of Workday’s value-adds: Customizable workflows, “intuitive” experiences, a decent mobile app. Agents are happy to write SQL against a few boring databases.

  • PlatoIsADisease 2 hours ago

    > What if the future of computing belongs not to artisan developers or Carol from Accounting, but to whoever can churn out the most software the fastest? What if good enough really is good enough for most people?

    Sounds like the cost of everything goes down. Instead of subscription apps, we have free F-Droid apps. Instead of only the 0.1% commissioning art, all of humanity gets to commission art.

    And when we do pay for things, instead of an app doing 1 feature well, we get apps that do 10 features well, with integration. (I am living this: instead of shipping software with 1 core feature, I can do 1 core feature and 6 different options for free, no change order needed.)

    • Ezhik 2 hours ago

      The future you describe seems closer to the "Carol from Accounting" future I am hoping for in the blog post. My worry is that the cost of everything goes down just enough to price out of existence all of the artists the 0.1% used to commission, without actually letting all of humanity do the same.

  • andrewstuart 2 hours ago

    I use AI/LLMs hard for my programming.

    They allow me to do work I could never have done before.

    But there’s no chance at all of an LLM one-shotting anything that I aim to build.

    Every single step in the process is an intensely human grind trying to understand the LLM and coax it to make the thing I have in mind.

    The people who are panicking aren’t using this stuff in depth. If they were, then they would have no anxiety at all.

    If only the LLM was smart enough to write the software. I wish it could. It can’t, nor even close.

    As for web browsers built in a few hours: no. No LLM is coming anywhere near building a web browser in a few hours, unless you're talking about some super-simple, super-minimal toy with some of the surface appearance of a web browser.

    • ChrisMarshallNY 2 hours ago

      This has been my experience. I tend to use chats, in a synchronous, single-threaded manner, as opposed to agents, in an asynchronous way. That’s because I think of the LLM as a “know-it-all smartass personal assistant”; not an “employee replacement.”

      I just enjoy writing my own software. If I have a tool that will help me to lubricate the tight bits, I’ll use it.

      • Tade0 an hour ago

        Same. I hit Tab a lot because even though the system doesn't actually understand what it's doing, it's really good at following patterns. Takes off the mental load of checking syntax.

        Occasionally of course it's way off, in which case I have to tell it to stfu ("snooze").

        Also it's great at presenting someone else's knowledge, as it doesn't actually know facts, just which token should come after a sequence of others. The other day I just pasted an error message from a system that I wasn't familiar with and it explained in detail what the problem was and how to solve it: brilliant, just what I wanted.

        • ChrisMarshallNY an hour ago

          > The other day I just pasted an error message from a system that I wasn't familiar with and it explained in detail what the problem was and how to solve it

          That’s probably the single most valuable aspect, for me.

    • Ezhik 2 hours ago

      I'm less afraid of people using LLMs for coding well than I am of people not caring to and just shipping slop.

      This is the browser engine I was alluding to in the post: https://github.com/wilsonzlin/fastrender

  • Der_Einzige an hour ago

    Our paper on removing AI slop got accepted to ICLR 2026, and it's under consideration for an Ig Nobel prize:

    https://arxiv.org/abs/2510.15061

    Our definition of slop (repetitive characteristic language from LLMs) is the original one as articulated by the LLM creative writing community circa 2022-2023. Folks trying to redefine it today to mean "lazy LLM outputs I don't like" should have chosen a different word.
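
    For illustration, here is a minimal, hypothetical sketch of how "repetitive characteristic language" could be quantified, by comparing n-gram rates in an LLM corpus against a human baseline. The function names, corpora, and threshold are invented for this example and are not the paper's method:

        from collections import Counter
        from itertools import islice

        def ngrams(text, n=3):
            # Split into lowercase word n-grams.
            tokens = text.lower().split()
            return zip(*(islice(tokens, i, None) for i in range(n)))

        def overrepresented_phrases(llm_texts, human_texts, n=3, min_ratio=10.0):
            # Relative frequency of each n-gram in the LLM corpus vs. the human corpus.
            llm_counts = Counter(g for t in llm_texts for g in ngrams(t, n))
            human_counts = Counter(g for t in human_texts for g in ngrams(t, n))
            llm_total = sum(llm_counts.values()) or 1
            human_total = sum(human_counts.values()) or 1
            flagged = {}
            for gram, count in llm_counts.items():
                llm_rate = count / llm_total
                # Add-one smoothing so phrases absent from the human corpus don't divide by zero.
                human_rate = (human_counts.get(gram, 0) + 1) / human_total
                ratio = llm_rate / human_rate
                if ratio >= min_ratio:
                    flagged[" ".join(gram)] = ratio
            # Most over-represented phrases first; these are the "slop" candidates.
            return sorted(flagged.items(), key=lambda kv: -kv[1])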

    • zbentley 8 minutes ago

      I was disappointed that your paper devoted less than a sentence in the introduction to qualifying "slop" before spending many pages quantifying it.

      The definitions you're operating under are mentioned thus:

      > characteristic repetitive phraseology, termed “slop,” which degrades output quality and makes AI-generated text immediately recognizable. (abstract)

      > ... some patterns occur over 1000× more frequently in LLM text than in human writing, leading to the perception of repetition and over-use – i.e. "slop". (introduction)

      And that's ... it, I think. No further effort is visible towards a definition of the term, nor do the background citations propose one that I could see (I'll admit to skimming them, though I did read the parts of your paper that seemed relevant; if I missed something, let me know).

      That might be suitable as an operating definition of "slop" to explain the techniques in your paper, but neither your paper nor any of your citations defend it as the common definition of an established term. Your paper's not making an incorrect claim per se; rather, it's taking your definition of "slop" for granted without evidence.

      That doesn't bode well for the rigor of the rest of the paper.

      Like, look: I get that this is an extremely fraught and important/popular area of research, and that your approach has "antislop" in the name. That's all great; I hope your approach is beneficial, truly. But you aren't claiming a definition of slop in your paper; you're just assuming one. Then you're coming here, asserting a definition attributed to "the LLM creative writing community circa 2022-2023" and alleging after-the-fact redefinition, both of which are extraordinary claims that require evidence.

      Again, not only do I think that mis-definition is untrue, I also think that you're not actually defining "slop" (the irony of my emphasizing that in a not-just-x-but-y sentence is not lost on me).

      I don't know which of the authors you are, but Ravid, at least, should know better: this is not how you establish terminology in academic writing, nor how you defend it.

    • direwolf20 27 minutes ago

      Slop is food scraps fed to pigs. Folks trying to redefine it in 2022–2023 as "repetitive characteristic language from LLMs" should have chosen a different word.

      A computer is a person employed to do arithmetic.

      • Der_Einzige 22 minutes ago

        "Sloppy Joe" is either a food item or a slur against the previous Democratic president. Checkmate.

    • suddenlybananas an hour ago

      Words expand their meanings all the time, and frankly I don't think your narrow definition of slop was ever a common one.