Software Developers Say AI Is Rotting Their Brains

(404media.co)

43 points | by SpyCoder77 an hour ago

47 comments

  • sd9 a minute ago

    AI agents have made me far more productive, but the work now feels like drudgery. For me personally, the most intellectually stimulating parts of the job were automated away first, and I am getting increasingly sick of dealing with project management frenzy and pressing enter all day.

    I got into software engineering because I was always fascinated by getting computers to do stuff, and I really enjoyed the manual task of programming. It's been a dream to earn a living doing something I would do in my spare time. I was pretty good at it too.

    I'm not having fun any more, so I've decided to leave the field and become a teacher. I won't earn nearly as much money but I expect to feel more fulfilled, and I hope I can help make a difference to some young people.

    I realise complaining about this is tone deaf. I've had an extraordinarily privileged career, and many people never get the luxury of enjoying their work at all. But I'd rather try to enjoy what I do day to day than persist in something that's lost its spark.

  • RugnirViking 24 minutes ago

    I don't think this article is correct exactly, but I do feel that I'm less proud of my work. Less likely to go the extra mile. At first, I tried to do all the due diligence - reading and understanding all of the black box's output. But it's clear what my workplace wants - more velocity, more code. If you take time reviewing, you're a blocker. If you lgtm that 3k LoC PR, that's great responsiveness. If you spend two days on a "simple fix" that involves broad cross-cutting changes to the system and multiple library updates, you should be doing something else. We are all working across more areas of the system, with less specialization and less understanding.

    And it is great. It does produce fixes, produce a facsimile of understanding. It answers my questions, and is often right. And tinkering with the process is satisfying. Integrating more and more data, writing better specs - you can get better results. It's tempting to think that this way of working could be sustainable, but it's also scary to lose the understanding, to not have confidence in how things work. Finding duplicated stacks using different libraries, or even the same library, is becoming more and more common. Even our debugging tools, our tracing, grow fragmented and unstandardized.

    I liked the old way of working. It was fun for me, if often frustrating. It was solving hard sudoku on the train. This new way is lower friction, but more stress. It's steering a rocket ship using chopsticks to hold the wheel. You desperately want to slow things down and work methodically, to be sure, and safe. But you won't get anywhere near as far if you do that.

    Somewhere quiet, the tech debt demon smiles.

    • svieira 14 minutes ago

      > Finding duplicated stacks using different libraries, or even the same library, is becoming more and more common. Even our debugging tools, our tracing grow fragmented.

      Same - I literally found a re-build of a library feature for use with that same library the other day (e.g. MyCustomFooProviderFor(Bar), when Bar already literally has a `.foo` method). No, it didn't need to be there.
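      A minimal sketch of the duplication described above (all names here are hypothetical, mirroring the comment's pseudo-names, not a real library): the generated wrapper class re-implements, by pure delegation, a capability the library class already exposes directly.

```python
class Bar:
    """Stand-in for the library class, which already has a .foo method."""
    def foo(self):
        return "result from Bar.foo"

class MyCustomFooProviderFor:
    """The redundant, AI-generated wrapper: it only delegates to Bar."""
    def __init__(self, bar):
        self.bar = bar

    def provide_foo(self):
        # Adds nothing over calling bar.foo() directly.
        return self.bar.foo()

# Both paths produce the same result - the wrapper is dead weight.
direct = Bar().foo()
wrapped = MyCustomFooProviderFor(Bar()).provide_foo()
assert direct == wrapped
```

      The point of the anecdote: a reviewer skimming only the wrapper can't tell it duplicates existing API surface; you have to already know the library to spot it.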

    • mooreds 20 minutes ago

      I have so many questions.

      How long have you been doing this?

      Are you at a product company, a consultancy, a place where technology is an enabler but not core, or somewhere else?

      What happens when there are bugs or an outage due to that 3k LoC PR?

      • RugnirViking 9 minutes ago

        7 years' experience as a developer, or thereabouts. It's probably been a year since the agentic coding stuff became really widespread, picking up pace a lot around January. Even the old hands, 20-plus years at the company, and those few holdouts who refused to use AI before are deep in it now.

        We're at a product company, not a consultancy. Hard to say exactly about tech - the tech is essentially the product, but it's B2B, so massive contracts move like glaciers, and customer purchase decisions are often as much or more about the claims we made as about the reality of the code.

        As for outages, it's the same as it always was. We have our testing, in layers: unit tests, integrations, e2e, staging envs. Layers and layers before it reaches the customer. If something ever does reach there, as has happened, it's so hard to pin the blame on AI, and of course we run a blameless culture here anyhow. Tickets are assigned, emergency patches are made, and the behemoth lumbers on.

        I don't pin blame on stupid management or whatever; I think this is complacency rather than a specific effort to push AI, as some claim. AI has just made it easier to work on more and understand less, and this is the result - no external intervention needed. I don't have a solution other than observing that trying to stop this is fighting the tides. People used to hate working on legacy codebases, where the original developers weren't around to explain themselves; now everything is a legacy codebase right from inception. Even if you personally don't use AI, the job is fundamentally different.

  • deweller 42 minutes ago

    "Developers talk not just about how the AI output is often flawed, but that using AI to get the job done is often a more time consuming, harder, and more frustrating experience because they have to go through the output and fix its mistakes."

    This has not been my experience. Sure, it sometimes feels like more work to fix the AI's code problems - it is a different skillset than writing code from scratch. But the speed at which I can deliver software has significantly increased by using coding agents.

    • rkozik1989 a minute ago

      Honestly, the effectiveness of LLMs in coding depends a lot on what you're working on. If you're dealing with a software package like Odoo that's been around for literal decades, an LLM's output can be borderline useless. The problem is that its training data has examples from every version that's ever been released, and each successive major version makes breaking changes to the previous one, so the LLM can't accurately tell which parts of its training data belong to which version before concocting a reply.

    • jjulius 28 minutes ago

      >This has not been my experience.

      >But the speed that I...

      • bensyverson 3 minutes ago

        First-hand experience is perfectly valid.

        I agree with the parent; I'm able to produce more. And with proper documentation and unit tests in place, I don't feel I need to review every line.

  • cowlby a minute ago

    I wonder how much this is correlated to token budgets? I'd be curious to see a split between $20/$100/$200/$500+ usage and see if there are different responses. I'm in the $400 range with a Claude + Cursor subscription, use Opus exclusively, and my experience is wildly different from this.

  • spicyusername 26 minutes ago

    You're going to keep seeing this because people don't like AI adoption.

    But the fact is this is not how it is. Every competent developer I know is delivering significantly more after being AI enabled.

    Anyone seriously using the tools without a chip on their shoulder is going to say the same.

    Are the tools delivering perfect code 100% of the time? No, of course not. But that's the new skill: guiding them so they deliver good-enough code at 5-50x the velocity. As the models improve and the ecosystem tries out new workflows, the skill changes and the output gets better and better.

    What we're capable of delivering now is incredible and would have been unimaginable just a few years ago.

    • zeroonetwothree a few seconds ago

      If we are talking about adding code to a large existing production code base, then there's no way I can see to get 5x, let alone 50x. My experience and the data I've seen are more like a 10-20% improvement. We see the volume of code increase more than that, but a lot of it is bug fixes that were only necessary because the initial commit wasn't adequately tested and reviewed. So the net effect is less.

      Now if you mean generating a one-off script or playing around with a prototype in an area you don't know, then I can see more like 5-10x, but these are typically not the bottleneck for shipping software.

    • thegrim33 15 minutes ago

      Of course your claim of 5-50x velocity is not borne out in any metrics which track industry software velocity, and you have to bend over backwards to come up with reasons to explain why it isn't.

      • kimjune01 14 minutes ago

        Are merged PRs a measure of velocity? github.com/kimjune01/

        • collingreen 2 minutes ago

          No, of course not? I don't even disagree with your main premise but obviously "raw number of merged PRs" is not a high signal metric, even more so in the age of agentic/vibe coding.

        • svieira 2 minutes ago

          This you?

          https://june.kim/speedrunning-open-source

          > tinygrad I picked on purpose. geohot narrates rejections in public, and a narrated rejection is data; a silent close is noise. Thirteen PRs, one merged, twelve closed. His comments tell the escalation story:

          >> be careful with AI usage, we never trade complexity for speed

          >> You need to stop with AI PRs, you will be banned.

          >> Last warning about low quality PRs before I ban you from our GitHub.

          >> I don’t even understand what this does. I’m not reading anything written by AI

          > Each line a little more done with my shit than the last.

          > Some of those PRs had real bugs with real fixes. The MATVEC pattern rejected equal-range elementwise reduces, a genuine correctness issue. But by that point the maintainer had stopped reading code and started reading provenance. “We never trade complexity for speed” is a valid engineering principle. “I’m not reading anything written by AI” is not.

          > I went there for maximum surprise and got it. He had a review queue and a quality bar to protect; I had a clanker and a question. The price was his afternoon, three warnings, an account ban, and real bugs left unfixed.

          Because this is Facebook-level "let's make people angry on the internet and see what happens" levels of treating people as if they were means to an end rather than an end in themselves. And you should stop.

        • gmueckl a few seconds ago

          No.

        • bluefirebrand 4 minutes ago

          No, not any more than lines of code written are measures of velocity

      • esafak 14 minutes ago

        I could easily enjoy a 10x improvement when working on something I'm not familiar with once you factor in the learning time.

    • ryandrake a minute ago

      This is a business mindset, though. AI is great if you care about "delivering" stuff and "velocity." I got into computer programming because I like to program computers, not whatever this is. So glad I changed roles away from software development and only do programming at home as a hobby.

    • agentultra 8 minutes ago

      Empirical studies are expensive to perform and rarely reproducible, but I'll wait for them before believing any claims.

      We can't even decide if type systems have made us more productive. It's barely been studied. Same with test-driven development.

      What it sounds like we'll see, from your description of AI-enabled developers, is a commensurate (perhaps linear) increase in the rate of errors reaching production systems. Every line of code is a liability. Now everyone has a fire-hose they can aim at a production environment.

      At least time and effort prevented some bad ideas and potentially bad code from reaching production.

      I'm sure the platforms providing these tools are going to be happy with the results when every business writing code this way becomes dependent on them and has no exit strategy. The prices increase, the service gets worse, and you're locked in. Sounds real productive.

    • svieira 13 minutes ago

      > 50x the velocity

      :blinks: You are producing in a week what used to take you a year?

      • spicyusername 3 minutes ago

        Lol, it's not sustained 50x for every task every minute.

        But there are definitely many tasks that used to take a very long time that now take almost no time at all, and that can be delivered in parallel with other tasks.

        • svieira a few seconds ago

          Yes, but what _percentage_ are they? Or is this the XKCD optimization graph all over again?

        • devmor 2 minutes ago

          That's a very silly claim to make, because you can make that same claim about writing a bash script.

      • discreteevent 6 minutes ago

        And at the same time they talk about "competent developer"s

    • Tade0 21 minutes ago

      > Guiding them so they deliver good enough code at 5-50x the velocity.

      A huge problem with this is the rate at which anyone can actually take accountability for the code produced.

      Of course you can let AI do reviews, but my experience so far is that it's, broadly speaking, not working.

    • neogodless 14 minutes ago

      Can someone translate this?

      > What we're capable of delivering now is incredible and would have been unimaginable just a few years ago

      What I mean is - are there concrete examples, real world "things" that came from AI programming, that are incredible, and someone can talk about and point to how AI led to the thing being possible?

    • mrbungie 18 minutes ago

      I always wonder who has the time to consume all the new code being produced. Sure, you can produce at 5-10x the speed, but is anyone using those features? I'm not sure the typical consumer's mind can keep up with such a pace of change.

    • d_silin 6 minutes ago

      After going back and forth, I stopped using AI for coding at all.

      Maybe I am not a "competent" developer, but the point has some merit.

    • zackify 13 minutes ago

      I feel the same way.

      Even if I'm reviewing more, I built the feature without even opening my editor.

      My workflow is:

      1. Plan mode
      2. Read thoroughly, or skim if it's an easy task
      3. /draft command that puts a draft PR on GitHub
      4. Review closely, then send to team

    • noveltyaccount 12 minutes ago

      I'm someone with 20 years in software, the last 10 in management. I have good instincts, good design-pattern knowledge, and understand system design well. But my actual coding skills are rusty; I can do it, but it takes a lot of time to RTFM because specific libraries and syntax aren't top of mind.

      With AI I can build. I'm having so much fun turning ideas into code. I can do a week's worth of work before lunch. I can ask AI to add comments so detailed that my code becomes a refresher tutorial.

      It's so exciting to be able to bring my ideas to life, make use of my experience, and not be hobbled by my somewhat atrophied hands-on coding skills. I for one welcome this revolution.

    • Jyaif 12 minutes ago

      I love AI, but it's possible that we are in a temporary golden age of software development because of 3 things:

      1. The software is simple because lowly humans wrote it, debugged it, and maintained it.

      2. The humans are competent in software engineering.

      3. All of a sudden we now have help from AI.

      Point 3 is here to stay, but 1 and 2 could disappear.

    • devmor 3 minutes ago

      5-50x, huh? Years of AI hype, and still to this day not a single person or organization can provide any kind of reputable evidence that it has significantly increased their productivity.

  • askllk 2 minutes ago

    The only use case for AI is for looking up historical references and current events. The latter is probably the most used part, which is why models are only useful if they scrape news sites.

    You can also use it for regurgitating manuals, but generative AI for coding is counterproductive. Only tool- and gaming-addicted people like it and pretend to be more productive, for which there is no public evidence. I don't see any software improving at a faster rate.

  • agentultra 14 minutes ago

    > At Meta, Google, Microsoft, and others, leadership says that AI generates a growing share of the overall code

    Probably because they mandate its adoption. And while there are plenty of developers who will happily comply and see it as a good thing, there are others who will do it because they have to or risk losing their jobs.

    It's a bit of a silly thing to claim. "We made everyone use it, so they did, and now adoption is going up!"

  • amelius 5 minutes ago

    Yesterday I talked to an AI all day and left work with a feeling of non-accomplishment, even though I probably did slightly more than I normally would. Time will tell whether the maintenance costs end up higher or not.

    And I used to love my work :(

    • pawelduda 2 minutes ago

      AI is not required for this to happen

  • pxtail 19 minutes ago

    In my case it's less about actual "rotting" and more about the feeling that any attempt of mine to write code is futile and meaningless. If my LLM limits are exhausted, it's actually more productive to go do something else (or write specs on how it should be done) and come back later to do LLM-assisted coding than to code without it, because in literally minutes I can then produce the equivalent of an hours-long "manual" coding session.

  • xiphias2 30 minutes ago

    I think Andrej Karpathy's quote summarizes well what all software engineers are going through:

    "you can outsource your thinking but not your understanding"

    There's just no way not to generate far more code with LLMs than we would as humans, so structuring code well matters much more than ever before.

  • 14 minutes ago
    [deleted]
  • general1465 2 minutes ago

    I usually have a positive experience with AI. What it excels at are tasks that have clear boundaries and proper context.

    I have also worked in customer support for some time, and I found that a huge problem for some people (oftentimes developers) is that they lack a theory of mind. They literally can't comprehend that I don't see into their heads and that they need to articulate their question with the correct context, or else I can't help them.

    AI is like a litmus test for it. People who have a theory of mind are capable of putting together a question that will get good results out of an AI. On the other hand, people who struggle with the fact that the AI can't see what they mean unless it is in the context window will have a bad time with it. These people also usually suck at managing other people because - once again - they are unable to provide tasks with enough context and properly set boundaries. At best they will give you some vague, poorly defined task and get mad when you do it differently than they had in mind.

  • giwook 13 minutes ago

    404 Media tends to put out quality articles in my opinion, but this one feels a bit like clickbait.

    It seems like they're overgeneralizing quite a bit here and focusing on a narrow subset of the population while ignoring the people who are actually thriving with their new AI-enabled dev workflows.

    LLMs are not a panacea by any means and they have lots of cons. But I for one would find it difficult to go back to a world where I can't lean on LLMs in my day-to-day.

    One very specific example that could not possibly contribute to the brainrot mentioned in this article: AI saves time and reduces the headache of having to pore through pages of documentation (if there even is any) to find how that one method works or what arguments it can take. This alone is immensely helpful and can keep you in a state of flow instead of sending you off on a potentially fruitless side quest that derails your whole train of thought.

    It's also taken me quite a bit of time, effort, and experimentation to find the right tools and the right ways to work AI into my workflows, which I would bet the developers mentioned in this article have not explored too deeply, if at all.

    Claiming AI is rotting your brain because you can't one-shot an entire app or even a single feature is a straw man fallacy.

  • hirvi74 6 minutes ago

    I feel the opposite, but then again, I do not use AI to actually write the code for me. It's like a faster StackOverflow search.

  • andai 18 minutes ago

    The emperor has semi-transparent clothes.

  • jesse_dot_id 17 minutes ago

    I'm experiencing the opposite.