Why developers using AI are working longer hours

(scientificamerican.com)

55 points | by birdculture 2 hours ago

44 comments

  • qzira 21 minutes ago

    When people talk about AI increasing developer productivity, they usually focus on the coding part. In my experience, the bigger change happens after the code is written. When you move from writing code to supervising agents, your output increases — but your cognitive load increases too. Instead of writing every line yourself, you're now monitoring systems: Did the agent go off-script? Did it retry 50 times while I was asleep? What did that run actually cost? The strange part is that the mental burden doesn't disappear just because the agent is autonomous. In some ways it gets worse, because failures become harder to notice early and harder to contain once they start. It starts to feel less like programming and more like running operations for a team of extremely fast, extremely literal junior developers. Curious if others are seeing the same shift.

    • Waterluvian 6 minutes ago

      That really sounds like micromanaging junior developers.

      I wonder if the interface for this kind of thing might be better presented as a sort of JIRA ticket system: define a dependency graph of work, with the ability to break any ticket down into more tickets, or to change priorities, relationships, etc.

      Though I think the micromanaging part still doesn’t fit into that model. You’d need the code-level view, not just a ticket covering the tests that satisfy the spec and performance goals.
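The ticket-graph idea above can be sketched in a few lines. Everything here is illustrative, not a real JIRA API: the `Ticket` class, `ready_tickets`, and the `APP-*` keys are all made up; the point is just the dependency graph plus break-down-into-subtickets operation.

```python
from dataclasses import dataclass, field

@dataclass
class Ticket:
    """A unit of agent work; fields are hypothetical, not a real JIRA schema."""
    key: str
    summary: str
    depends_on: list[str] = field(default_factory=list)  # keys of blocking tickets
    children: list["Ticket"] = field(default_factory=list)

    def split(self, *subtickets: "Ticket") -> None:
        """Break this ticket down into smaller tickets."""
        self.children.extend(subtickets)

def ready_tickets(tickets: dict[str, Ticket], done: set[str]) -> list[Ticket]:
    """Leaf tickets whose dependencies are all complete: what an agent may pick up."""
    return [
        t for t in tickets.values()
        if not t.children                          # only leaves are workable
        and t.key not in done
        and all(dep in done for dep in t.depends_on)
    ]

# Example: a spec ticket broken into an implementation ticket and a test ticket.
spec = Ticket("APP-1", "Implement rate limiter")
impl = Ticket("APP-2", "Token-bucket core")
tests = Ticket("APP-3", "Perf + spec tests", depends_on=["APP-2"])
spec.split(impl, tests)

board = {t.key: t for t in (spec, impl, tests)}
print([t.key for t in ready_tickets(board, done=set())])      # → ['APP-2']
print([t.key for t in ready_tickets(board, done={"APP-2"})])  # → ['APP-3']
```

The dependency check is what gives you the "always something unblocked for an agent" property; the code-level view the parent comment wants would hang off each leaf ticket rather than replace the graph.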

    • democracy 10 minutes ago

      Yeah, I'm not sure many people are going to hang around for this - I'm not sure I want to do this role. I like building and delivering, and AI is a great help, but I won't be happy supervising agents; there are better jobs. Unless the money is too good to refuse.

  • diavelguru 2 hours ago

    This is a real thing. I spent all of January doing greenfield development using Claude (I had finished the requirements), and all I can say is thank goodness I had the Max 5x plan and not the 20x, since I got breaks once the tokens were used up until the next cycle. I was forced to get up and do something else. That something else was biking, rowing, walking. My productivity has never been higher, but at what cost? My health? No thanks. So I'm glad I'm using the time until token reset for my health. I time it perfectly: I do a walk, row, or bike for an hour, and as I arrive back the tokens have reset. I get about 3 hours of nonstop use per token batch on the 5x plan. I've been thinking about going 20x but am scared...

    • TheAceOfHearts 41 minutes ago

      Hypothesis: limiting usage / tokens could have a positive effect on project quality, since it forces the developer to think more carefully about the problems they're working on. When you're forced to stop and slow down, you try to be more deliberate with token usage. But if you have unlimited tokens you can just keep generating infinite lines of code without thinking as hard about the problem.

      I've seen people on social media bragging about how they're able to produce a mountain of code, as if this were praiseworthy.

      • DrewADesign 32 minutes ago

        One might wonder if the trend holds when limiting token use to… zero?

    • unshavedyak 2 hours ago

      I don’t get this tbh. I use Claude too, and my issue is the opposite - too many small breaks. Every time I hit enter my brain wants to check out, because the agent just spins while it generates thousands of tokens and churns on the subject. Even if it’s only two minutes, that’s two minutes where my mind has nothing to work on.

      Hard to stay in flow and engaged.

      Feels weirdly similar to being interrupted over slack.

      • diavelguru an hour ago

        You are correct, flow is not achieved, because this isn't programming - it's more like system design, architecture, QA, and Product Owner work. It's using the swarm as your own dev team.

        • phil21 21 minutes ago

          > It's using the swarm as your own dev team.

          Managing high-performance dev/ops teams is its own form of a state of flow. In fact, for me it's much more addictive than any other, since the outcomes are usually many multiples of any IC role you could have. Even crazier when you have a "follow the sun" team involved, so the work just gets sequentially handed off and is always in constant motion.

          I imagine AI coding is like this for a lot of folks.

        • LoganDark 38 minutes ago

          But it's also programming, since you have to study the outputs to ensure they're correct. Some (it seems many) don't do this, and then their outputs usually aren't correct.

          • haliskerbas 31 minutes ago

            That’s what my teammates are for: I pipe Slack and Jira to Claude, and the asker and my teammates tell me if there’s a bug.

          • DrewADesign 31 minutes ago

            Sounds more like code-level QA to me.

      • androiddrew an hour ago

        I have never been in a flow state with an agent running. I use agents, but that isn’t flow.

        • diavelguru an hour ago

          And flow state is a luxury in 2026 - with an AI swarm it's most likely to be found sparingly, if at all. Good luck all!

        • diavelguru an hour ago

          Yes, agreed. I'm running 3-5 parallel Claude instances at once with requirements as the input. My prompt is something very specific, like "work on section 5.1". Then I'm monitoring the work across all instances.
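          That fan-out-and-monitor loop looks roughly like this sketch. `run_agent` is a placeholder for whatever actually invokes a Claude instance (a real setup would shell out to the agent CLI), and the section numbers are just examples:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

# Hypothetical stand-in for one agent run on one requirements section;
# a real version would launch a Claude instance here instead.
def run_agent(section: str) -> str:
    return f"section {section}: done"

sections = ["5.1", "5.2", "6.3", "7.0"]
results: dict[str, str] = {}

# Fan the sections out across a few parallel workers, then monitor
# completions in whatever order they finish.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = {pool.submit(run_agent, s): s for s in sections}
    for fut in as_completed(futures):
        results[futures[fut]] = fut.result()
        print(futures[fut], "->", results[futures[fut]])
```

          The `as_completed` loop is the "monitoring" part of the workflow: you react to whichever instance finishes (or fails) first rather than waiting on them in order.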

      • arjie 34 minutes ago

        I have a similar problem, but I have to switch contexts, and it makes the work a lot more intense.

      • MattGaiser an hour ago

        Are you a single agent user?

        At least in my case, flow is gone. It’s all context switching now.

      • amelius 21 minutes ago

        This. And another problem is that I don't feel proud after completing the task. No sense of achievement.

    • democracy 22 minutes ago

      Great shilling attempt )

    • cpncrunch 39 minutes ago

      Does a person review all the AI generated code?

      • DrewADesign 27 minutes ago

        Not at all, unless they're a) competent and b) making something worth anything at all that c) isn't a proof-of-concept or the like.

        • cpncrunch 18 minutes ago

          Yes, of course. I mean, is all production code reviewed?


  • butILoveLife 18 minutes ago

    We've become cashiers.

    My 6-year-old is doing my job.

    The best I can hope for is that HN article that said the word "Context".

    I know the magic words "Make me a single page html js web app"... or "Install Virtual Box with Fedora Cinnamon using CLI"....

    I'm 8x more productive than I was in 2022... And I jokingly say "I'm probably not going to have a job in 1 or 2 years"...

    We are going to create incredible value for humanity. At an 8x rate. I don't know what our hourly will be.

  • furyofantares 2 hours ago

    > Software engineering was supposed to be artificial intelligence’s easiest win.

    At what point in time? Did anyone foresee coding being one of the best and soonest applications of this stuff?

    • djeastm 20 minutes ago

      I seem to recall short snippets of IDE code completion being one of the first commercial applications of it.

    • throwaway314155 29 minutes ago

      No one saw it coming.

    • antonvs 2 hours ago

      They're probably talking about some point after the capabilities of LLMs started to become clear.

      It's why Codex, Claude Code, Gemini CLI etc. were developed at all - it was clear that if you wanted a concrete application of LLMs with clear productivity benefits, coding was low-hanging fruit, so all the AI vendors jumped on that and started hyping it.

      • whattheheckheck 19 minutes ago

        Because swe was the furthest advanced "collaborative cognition" field in terms of human workflows

      • furyofantares an hour ago

        Sure, but jumping from "it's amazing these things work for code at all" to "software engineering is solved" is something only grifters or those drunk on the Kool-Aid did.

        I do agree that it was thought these LLM agents would be extremely useful and that is why they were developed, and I happen to believe they in fact are extremely useful (without disagreeing that much of the stuff in the article definitely does happen).

        I just sort of resent the setup that it was supposed to be X but actually it failed, when not only is there only minor evidence that it failed, but it was only a brief period in time when it was supposed to be X.

  • rglover 38 minutes ago

    I use it every day and I'm taking off weekends for the first time in a decade. It's done wonders for my mental health. I think teams should pay more attention to the value of pumping the brakes vs. incessant redlining. We may actually be able to have a healthy relationship with AI then.

  • dwhitney 16 minutes ago

    I feel totally the opposite. I feel like I'm able to have more work-life balance. Our estimates are more accurate. I'm enjoying working on actual problems rather than boilerplate. These tools are amazing.

  • Fordec 2 hours ago

    Selection bias? The early adopters motivated to adopt tools to deliver more were typically working more to start with, and may already have been struggling with their rate of output.

  • poink 40 minutes ago

    Personally, I make a lot more "out of hour" commits than I used to because I'll batch up low priority tasks throughout the day and let the computer chug on them at night when I'm elsewhere. Commits are coming in at all hours, but I'm not actually looking at them until the next morning.
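    A minimal sketch of that batching pattern, assuming a local queue file and some nightly trigger (cron, a scheduler, whatever fires the drain). The `defer`/`drain` names and the file name are made up for illustration; a real version would hand each drained task to an agent run:

```python
import datetime
import json
import pathlib

QUEUE = pathlib.Path("night_queue.jsonl")  # hypothetical queue file

def defer(task: str) -> None:
    """During the day: append a low-priority task instead of doing it now."""
    with QUEUE.open("a") as f:
        f.write(json.dumps({"task": task,
                            "queued": datetime.datetime.now().isoformat()}) + "\n")

def drain() -> list[str]:
    """At night (e.g. from cron): read all queued tasks and clear the queue."""
    if not QUEUE.exists():
        return []
    tasks = [json.loads(line)["task"]
             for line in QUEUE.read_text().splitlines() if line]
    QUEUE.unlink()
    return tasks  # a real version would kick off one agent run per task here

defer("bump dependency versions")
defer("tidy flaky test")
print(drain())  # → ['bump dependency versions', 'tidy flaky test']
```

    The append-only file keeps the daytime cost near zero; all the actual compute (and the commits it produces) lands in the overnight window.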

  • ausbah an hour ago

    three unthought-out thoughts:

    1. llms allow devs to be more productive, so more free time is seen as opportunity for more work. ppl overshoot and just work more

    2. generalized tooling makes devs seem more replaceable putting downward pressure on job security (ie work harder or we’ll get someone who will, oh and for less money)

    3. llms allow for more “multitasking” (debatable) via many running background tasks, so more opportunities to “just finish one more thing”

  • SoftTalker 2 hours ago

    No silver bullet. We've known this since at least the 1980s. The fact that the authors of the code might not be human doesn't change this.

  • dworks an hour ago

    Thoroughly reviewing, and especially testing, ends up faster than skipping manual review and tests.

    • cpncrunch 38 minutes ago

      I'm just curious how much of this AI generated code is reviewed by humans at all, and if that is factored into the productivity gains.

      • 0xcafefood 21 minutes ago

        In my experience, code validation (unit testing, code review, manual testing, etc.) was more of a bottleneck than producing code for the most part. This means that faster code generation wouldn't produce significant gains in throughput unless the code validation speeds up too. In my workplace, I've seen evidence that the people showing the biggest productivity gains from AI coding are now shipping enormous commits that are barely getting any validation. Given the Zeitgeist, others are for some reason more lenient towards that than they normally would be (or should be).

  • antonvs 2 hours ago

    I can't deny that this might be a trend in practice, but at companies with reasonably self-aware practices, it isn't, or doesn't need to be.

    There's this weird thing that happens with new tools where people seem to surrender their autonomy to them, e.g. "welp, I just get pings from [Slack|my phone|etc] all the time, nothing I can do but be interrupted constantly." More recently, it's "this failed because Claude chose..." No, Claude didn't choose; the person who submitted the PR chose to accept it.

    It's possible to use tools responsibly and effectively. It's also possible to encourage and mentor employees to do that. The idea that a dev has to be effectively on call because they're pushing AI slop is just wrong on so many levels.

    • fnimick 34 minutes ago

      > It's possible to use tools responsibly and effectively. It's also possible to encourage and mentor employees to do that.

      It's not in the company's interest to stop employees from overworking. Having people overwork for the same pay under pressure is the desired outcome, actually.

    • cejast an hour ago

      > More recently, it's "this failed because Claude chose..." No, Claude didn't choose, the person who submitted the PR chose to accept it.

      I can relate to this, unfortunately these tools are becoming a very convenient way to offload any kind of responsibility when something goes wrong.

      • democracy 15 minutes ago

        Well, if management wants more AI, they're going to get more AI, and no, I'm not going to run around making sure their dreams work smoothly under my human supervision - I'm going to let it go exactly the way they want. In the meantime, I'll focus on improving my skills.