AI doesn’t reduce work, it intensifies it

(simonwillison.net)

153 points | by walterbell 7 hours ago

131 comments

  • latexr 24 minutes ago

    > the productivity boost these things can provide is exhausting.

    What I personally find exhausting is Simon¹ constantly discovering the obvious. Time after time after time it’s just “insights” every person who smoked one blunt in college has arrived at.

    Stop for a minute! You don’t have to keep churning out multiple blog posts a day, every day. Just stop and reflect. Sit back in your chair and let your mind rest. When a thought comes to you, let it go. Keep doing that until you regain your focus and learn to distinguish what matters from what is shiny.

    Yes, of course, you’re doing too much and draining yourself. Of course your “productivity” doesn’t result in extra time but is just filled with more of the same, that’s been true for longer than you’ve been alive. It’s a variation of Parkinson’s law.

    https://en.wikipedia.org/wiki/Parkinson%27s_law

    ¹ And others, but Simon is particularly prevalent on HN, so I bump into these more often.

    • alain13 19 minutes ago

      I find Simon’s blog and TILs to be some of the highest signal to noise content on the internet. I’ve picked up an incredible number of useful tips and tricks. Many of them I would not have found if he did not share things as soon as he discovered something that felt “obvious.” I also love how he documents small snippets and gists of code that are easy to link to and cross-reference. Wish I did more of that myself.

    • g-mork 12 minutes ago

      They have established themselves as a reliable communicator of the technology, they are read far and wide, that means they are in a great position to influence the industry-wide tone, and I'm personally glad they are bringing light to this issue. If it upsets you that someone else wrote about something you understood, perhaps consider starting a blog of your own.

    • co_king_3 7 minutes ago

      > You don’t have to keep churning out multiple blog posts a day, every day.

      How do you know that? You don't think he's being paid for all this marketing work?

      • simonw a minute ago

        I'm paid by my GitHub sponsors, who get a monthly summary of what I've been writing about, on the basis that I don't want to put a paywall on my content but I'm happy for people to pay me to send them less stuff.

        I also make ~$600/month from the ads on my site - run by EthicalAds.

        I don't take payment to write about anything. That goes against my principles. It would also be illegal in the USA (FTC rules) if I didn't disclose it - and most importantly it would damage my credibility as a writer, which is the thing I value most.

        I have a set of disclosures here: https://simonwillison.net/about/#disclosures

    • wolfcola 20 minutes ago

      Also the absolute lack of historical or political awareness to suggest companies will want to find a “balance” for their employees…

      • ttoinou 18 minutes ago

        It’s called the market. If you can compete while providing better life balance, go ahead and compete with those bad companies

        • FeteCommuniste 11 minutes ago

          It's called the market. If you can compete while not employing eight year olds on your assembly lines and dumping carcinogens in the river, go ahead and compete with those bad companies.

        • co_king_3 6 minutes ago

          It's called the market. Get back to work, slave.

        • mcny 10 minutes ago

          > It’s called the market. If you can compete while providing better life balance, go ahead and compete with those bad companies

          With friends like you, who needs enemies? Imagine if we said that about everything. Go ahead and start a garment factory with unlocked exit doors and see if you can compete against these bad garment companies. Go ahead and start your own coal mines that pay in real money and not funny money only redeemable at the company store. Go ahead and start your own factory and guarantee eight hours work, eight hours sleep, eight hours recreation. It is called a market, BRO‽

  • falloutx an hour ago

    I am becoming more and more convinced that AI can't be used to make something better than what could have been built before AI.

    You never needed 1000s of engineers to build software anyway; Winamp & VLC were built by fewer than four people. You only needed 1000s of people because the executive vision is always to add more useless junk into each product. And now with AI that might be even harder to avoid. This would mean there would be 1000s of do-everything websites in the future in the best case, or billions of apps doing one thing terribly in the worst case.

    The percentage of good, well-planned, consistent and coherent software is going to approach zero in both cases.

    • cedws 25 minutes ago

      I’m finding that the code LLMs produce is just average. Not great, not terrible. Which makes sense, the model is basically a complex representation of the average of its training data right? If I want what I consider ‘good code’ I have to steer it.

      So I wouldn’t use LLMs to produce significant chunks of code for something I care about. And publishing vibe coded projects under my own GitHub user feels like it devalues my own work, so for now I’m just not publishing vibe coded projects. Maybe I will eventually, under a ‘pen name.’

    • jasode 26 minutes ago

      > Winamp & VLC were built by fewer than four people. You only needed 1000s of people because the executive vision is always to add more useless junk into each product.

      Many types of software have essential complexity and minimal features that still require hundreds/thousands of software engineers. Having just 4 people is simply not enough man-hours to build the capabilities customers desire.

      Think of complex software like 3D materials modeling and simulation, or logistics software like factory and warehouse planning. Even the Linux kernel and userspace have thousands of contributors, and the baseline features (drivers, sandbox, GUI, etc.) that users want from a modern operating system cannot be delivered by a 4-person team.

      All that said, there are lots of great projects with tiny teams. SQLite is 3 people. Foobar2000 is one person. The ShareX screen capture tool I think is 1 developer in Turkey.

      • Lalabadie a few seconds ago

        But big projects are where the quality of LLM contributions falls the most, and where they require (continuous, exhausting, thankless) supervision!

    • pousada an hour ago

      > percentage of good, well planned, consistent and coherent software is going to approach zero

      So everything stays exactly the same?

      • co_king_3 33 minutes ago

        > So everything stays exactly the same?

        No, we get applications so hideously inefficient that your $3000 developer machine feels like it's running a Pentium II with 256 MB of RAM.

        We get software that's as slow as it was 30 years ago, for no reason other than our own arrogance and apathy.

        • rkomorn 25 minutes ago

          I find it hard to disagree with this (sadly).

          I do feel things in general are more "snappy" at the OS level, but once you get into apps (local or web), things don't feel much better than 30 years ago.

          The two big exceptions for me are video and gaming.

          I wonder how people who work in CAD, media editing, or other "heavy" workloads feel.

          • co_king_3 17 minutes ago

            > I wonder how people who work in CAD, media editing, or other "heavy" workloads etc, feel.

            I would assume (generally speaking) that CAD and video editing applications are carefully designed for efficiency because it's an important differentiator between different applications in the same class.

            In my experience, these applications are some of the most exciting to use, because I feel like I'm actually able to leverage the power of my hardware.

            IMO the real issue is bloated desktop apps like Slack, Discord, Spotify, or Claude's TUI, which consume massive amounts of resources without doing much beyond displaying text or streaming audio files.

      • falloutx an hour ago

        I get this comment every time I say this, but there are levels to this. What you think is bad today could be considered artisan when things become worse than today.

        • bartread 29 minutes ago

          I mean, you've never used the desktop version of Deltek Maconomy, have you? Somehow I can tell.

          My point here is not to roast Deltek, although that's certainly fun (and 100% deserved), but to point out that the bar for how bad software can be and still, somehow, be commercially viable is already so low it basically intersects the Earth's centre of gravity.

          The internet has always been a machine that allows for the ever-accelerated publishing of complete garbage of all varieties, but it's also meant that in absolute terms more good stuff also gets published.

          The problem is one of volume not, I suspect, that the percentages of good versus crap change that much.

          So we'll need better tools to search and filter but, again, I suspect AI can help here too.

      • Chance-Device an hour ago

        Underrated comment. The reason that everyone complains about code all the time is because most code is bad, and it’s written by humans. I think this can only be a step up. Nailing validation is the trick now.

        • oblio 28 minutes ago

          Validation was always the hard part, outside of truly novel areas - think edges of computer science (which generally happen very rarely and only need to be explored once or a handful of times).

          Validation was always the hard part because great validation requires great design. You can't validate garbage.

    • zsoltkacsandi an hour ago

      Completely agree. There is a common misconception in product development that more features = a better product.

      I’ve never seen a product/project manager questioning themselves: does this feature add any value? Should we remove it?

      In agile methodologies we measure the output of the developers. But we don't care whether that output carries any meaningful value to the end user/business.

      • graemep 26 minutes ago

        It's also about marketing. People buy because of features.

        The people making the buying decisions may not have a good idea of what maximises "meaningful value" but they compare feature sets.

      • antupis an hour ago

        It’s more about operational resilience and serving customers than product development. If you run an early-WhatsApp-like organisation, just 1 person leaving can create awful problems. The same goes for serving customers: especially big clients need all kinds of reports and resources that a skeleton organisation cannot provide.

        • zsoltkacsandi 44 minutes ago

          Yeah, that’s a misconception too based on my experience.

          I’ve seen many people (even myself) thinking the same: if I quit/something happens to me, there will be no one who knows how this works/how to do this. It turned out the businesses always survived. There was a tiny inconvenience, but other than that: nothing. There is always someone willing to pick up/take over the task in no time.

          I mean I agree with you, in theory. But that’s not what I’ve seen in practice.

      • 9rx an hour ago

        > I’ve never seen a product/project manager questioning themselves: does this feature add any value? Should we remove it?

        To be fair, it is a hard question to contend with. It is easier to keep users who don't know what they're missing happier than users who lost something they now know they want. Even fixing bugs can sometimes upset users who have come to depend on the bug as a feature.

        > In agile methodologies we measure the output of the developers.

        No we don't. "Individuals and interactions over processes and tools". You are bound to notice a developer with poor output as you interact with them, but explicitly measure them you will not. Remember, agile is all about removing managers from the picture. Without managers, who is even going to do the measuring?

        There are quite a few pre-agile methodologies out there that try to prepare a development team to operate without managers. It is possible you will find measurement in there, measuring to ensure that the people can handle working without managers. Even agile itself recognizes in the 12 principles that it requires a team of special people to be able to handle agile.

        • zsoltkacsandi 34 minutes ago

          I didn’t mean the Agile Manifesto prescribes individual productivity measurement. I meant what often happens in “agile in the wild”: we end up tracking throughput proxies (story points completed, velocity, number of tickets closed, burndown charts) and treating that as success, while the harder question (“did this deliver user/business value?”) is weakly measured or ignored.

          Also, agile isn’t really “removing managers from the picture” so much as shifting management from command-and-control to enabling constraints, coaching, and removing impediments. Even in Scrum, you still have roles with accountability, and teams still need some form of prioritization and product decision-making (otherwise you just get activity without direction).

          So yeah: agile ideals don’t say “measure dev output.” But many implementations incentivize output/throughput, and that’s the misconception I was pointing at.

          • 9rx 27 minutes ago

            > we end up tracking throughput proxies (story points completed, velocity, number of tickets closed, burndown charts) and treating that as success

            That sounds more like scrum or something in that wheelhouse, which isn't agile, but what I earlier called pre-agile. They are associated with agile as they are intended to be used as a temporary transitional tool. Walking in one day and telling your developers "Good news, developers. We fired all the managers. Go nuts!" would obviously be a recipe for disaster. An organization wanting to adopt agile needs to slowly work into it and prove that the people involved can handle it. Not everyone can.

            > Also, agile isn’t really “removing managers from the picture” so much as shifting management from command-and-control to enabling constraints, coaching, and removing impediments.

            That's the pre-agile step. You don't get rid of managers immediately, you put them to work stepping in when necessary and helping developers learn how to manage without a guiding hand. "Business people" remain involved in agile. Perhaps you were thinking of that instead? Under agile they aren't managers, though, they are partners who work together with the developers.

    • imiric 15 minutes ago

      Wait, surely adding 10x more agents to my project will speed up development, improve the end product, and make me more productive by that same proportion, right?

      I will task a few of them to write a perfectly detailed spec up front, break up the project into actionable chunks, and then manage the other workers into producing, reviewing, and deploying the code. Agents can communicate and cooperate now, and hallucination is a solved problem. What could go wrong?

      Meanwhile, I can cook or watch a movie, and occasionally steer them in the right direction. Now I can finally focus on the big picture, instead of getting bogged down by minutiae. My work is so valuable that no AI could ever replace me.

      /s

    • kvgr 31 minutes ago

      I just built a programming language in a couple of hours, complete with an interpreter, with Claude Code. I know nothing about designing and implementing programming languages: https://github.com/m-o/MoonShot. It's crazy.

      • falloutx 13 minutes ago

        Yes, my point is that it was possible to build it before AI, and with much less effort than people imagine. People in college build an interpreter in less than a couple of weeks anyway, and that probably has more utility.

        Consider two scenarios:

        1) I try to build an interpreter. I go and read some books, understand the process, build it in 2 weeks. Result: I have a toy interpreter. I understand said toy interpreter. I learnt how to do it, learnt ideas in the field, applied my knowledge practically.

        2) I try to build an interpreter. I go and ask Claude to do it. It spits out something which works. Result: I have a black box interpreter. I don't understand said interpreter. I didn't build any skills in building it. Took me less than an hour.

        The toy interpreter is useless in both scenarios, but scenario 1 pays you back for the 2-week effort, while scenario 2 is a vanity project.

        • kvgr 8 minutes ago

          Yes, but you can combine the approaches. Aka, you know what you are working on, so you can make it much faster. Or you build something and learn from it.

          I think there will be a lot of slop and a lot of useful stuff. But also, what I did was just an experiment to see if it is possible; I don't think it is usable, nor do I have any plans to make it into a new language. And it was done in less than 3 hours total.

          So for example, if you want to try new language features, like let's say total immutability, or nullability as a type, you can build a small language and try to write code in it. Instead of working on it for weeks, you can do it in hours.
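          That kind of immutability experiment can be sketched as a tiny tree-walking interpreter, for instance in Python (a minimal hypothetical illustration, not code from the linked repo):

```python
# Minimal sketch of a tree-walking interpreter for a toy language where
# every binding is immutable: rebinding a name is a runtime error.
# All names here are illustrative, not taken from the MoonShot repo.

class Env:
    def __init__(self):
        self.vars = {}

    def bind(self, name, value):
        # Immutability rule: a name can only be bound once.
        if name in self.vars:
            raise RuntimeError(f"'{name}' is immutable and already bound")
        self.vars[name] = value

    def get(self, name):
        return self.vars[name]

# Expressions are nested tuples: ("let", name, expr), ("var", name),
# ("add", a, b), or a bare number literal.
def eval_expr(expr, env):
    if isinstance(expr, (int, float)):
        return expr
    op = expr[0]
    if op == "let":
        _, name, value_expr = expr
        value = eval_expr(value_expr, env)
        env.bind(name, value)
        return value
    if op == "var":
        return env.get(expr[1])
    if op == "add":
        return eval_expr(expr[1], env) + eval_expr(expr[2], env)
    raise ValueError(f"unknown operation: {op}")

env = Env()
eval_expr(("let", "x", 1), env)
print(eval_expr(("add", ("var", "x"), 2), env))  # 3
try:
    eval_expr(("let", "x", 99), env)  # rebinding is rejected
except RuntimeError as e:
    print(e)
```

          A few dozen lines like this are enough to write small programs in the toy language and feel out whether the rule is pleasant to work with, which is the hours-not-weeks point above.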

      • lietuvis 22 minutes ago

        Took a quick look; this seems like a copy of the Writing an Interpreter in Go book by Thorsten Ball, but just much worse.

        Also using double equals to mutate variables, why?

        • kvgr 20 minutes ago

          Just because I wanted it to. I made some design choices that I found interesting.

      • oblio 27 minutes ago

        You built something.

        Now comes the hard or impossible part: is it any good? I would bet against it.

        • kvgr 19 minutes ago

          Oh, thank you for informing me.

  • fhd2 2 hours ago

    I feel agentic development is a time sink.

    Previously, I'd have an idea, sit on it for a while. In most cases, conclude it's not a good idea worth investing in. If I decided to invest, I'd think of a proper strategy to approach it.

    With agentic development, I have an idea, waste a few hours chasing it, then switch to other work, often abandoning the thing entirely.

    I still need to figure out how to deal with that, for now I just time box these sessions.

    But I feel I'm trading thinking time for execution time, and understanding time for testing time. I'm not yet convinced I like those tradeoffs.

    Edit: Just a clarification: I currently work in two modes, depending on the project. In some, I use agentic development. In most, I still do it "old school". That's what makes the side effects I'm noticing so surprising. Agentic development pulls me down rabbit holes and makes me lose the plot and focus. Traditional development doesn't; its side effects apparently keep me focused and in control.

    • jcims 17 minutes ago

      >With agentic development, I have an idea, waste a few hours chasing it, then switch to other work, often abandoning the thing entirely.

      How much of this is because you don't trust the result?

      I've found this same pattern in myself, and I think the lack of faith that the output is worth asking others to believe in is why it's a throwaway for me. Just yesterday someone mentioned a project underway in a meeting that I had ostensibly solved six months ago, but I didn't even demo it because I didn't have any real confidence in it.

      I do find that's changing for myself. I actually did demo something last week that I 'orchestrated into existence' with these tools. In part because the goal of the demo was to share a vision of a target state rather than the product itself. But also because I'm much more confident in the output. In part because the tools are better, but also because I've started to take a more active role in understanding how it works.

      Even if the LLMs come to a standstill in their ability to generate code, I think the practice of software development with them will continue to mature to a point where many (including myself) will start to have more confidence in the products.

    • darkwater 37 minutes ago

      > Previously, I'd have an idea, sit on it for a while.

      > With agentic development, I have an idea, waste a few hours chasing it,

      What's the difference between these 2 periods? Weren't you wasting time when sitting on it and thinking about your idea?

      • latexr 19 minutes ago

        Sitting on an idea doesn’t have to mean literally sitting and staring at the ceiling, thinking about it. It means you have an idea and let it stew for a while, your mind coming back to it on its own while you’re taking a shower, doing the dishes, going for a walk… The idea which never comes back is the one you abandon and would’ve been a waste of time to pursue. The idea which continues to be interesting and popping into your head is the worthwhile one.

        When you jump straight into execution because it’s easy to do so, you lose the distinction.

      • shakna 25 minutes ago

        Sitting on an idea doesn't necessarily mean being inactive. You can think at the same time as doing something else. "Shower thoughts" are often born of that process.

    • rvz 2 hours ago

      If you do not know what you want to build, how to ask the AI for what you want, or what the correct requirements are, then it becomes a waste of time and money.

      More importantly, as the problem becomes more complex, it matters more whether you know where the AI falls short.

      Case study: Security researchers were having a great time finding vulnerabilities and security holes in Openclaw.

      The Openclaw creators had a very limited background in security, even though the AI built Openclaw entirely, and the authors had to collaborate with security experts to secure the whole project.

      • yason 2 hours ago

        > If you do not know what you want to build

        That describes the majority of cases actually worth working on as a programmer in the traditional sense of the word. You build something to begin to discover the correct requirements and to picture the real problem domain in question.

        • embedding-shape an hour ago

          > You build something to begin to discover the correct requirements and to picture the real problem domain in question.

          That's one way; another way is to keep the idea in your head (both actively and "in the background") for days/weeks, and then eventually you sit down and write a document, and you'll get 99% of the requirements down perfectly. Then implementation can start.

          Personally I prefer this hammock-style development and to me it seems better at building software that makes sense and solves real problems. Meanwhile "build something to discover" usually is best when you're working with people who need to be able to see something to believe there is progress, but the results are often worse and less well-thought out.

          • rvz 19 minutes ago

            This.

            It's better to have a solid, concrete idea of the entire system written down, one that has ironed out the limitations, requirements and constraints, before jumping into the code implementation or getting the agent to write it for you.

            The build-something-to-discover approach is not for building robust solutions in the long run. Starting with the code before you know what you are solving, or getting the AI to generate something half-working that breaks easily and then changing it yet again until it becomes even more complicated, just wastes more time and tokens.

            Someone still has to read the code and understand why the project was built on a horrible foundation and needs to know how to untangle the AI vibe-coded mess.

    • yieldcrv an hour ago

      with agentic development, I've finally considered doing open source work for no reason aside from a utility existing

      before, I would narrow things down to only the most potentially economically viable, and laugh at ideas guys that were married to the one single idea in their life as if it was their only chance, seemingly not realizing they were competing with people that get multiple ideas a day

      back to the aforementioned epiphany, it reminds me of the world of Star Trek where everything was developed for its curiosity and utility instead of money

  • jillesvangurp 7 minutes ago

    That's the bane of all productivity increasing tools, any time you free up immediately gets consumed by more work.

    People keep on making the same naive assumption that the total amount of work is a constant when you mess with the cost of that work. The reality is that if you make something cheaper, people will want more of it. And it adds up to way more than what was asked before.

    That's why I'm not worried about losing my job. The whole notion is based on a closed world assumption, which is always a bad assumption.

    If you look at the history of computers and software engineering, including compilers, CI/CD, frameworks/modules/etc., functional and OO programming paradigms, type inference, and so on, there's something new every few years. Every time we make something easier and cheaper, demand goes up and the number of programmers increases.

    And every time you have people being afraid to lose their jobs. Sometimes jobs indeed disappear because that particular job ceases to exist, because technique X got replaced with technique Y. But mostly people just keep their jobs and learn the new thing on the job. Or they change jobs and skill up as they go. People generally only lose their jobs when companies fail or start shrinking. It's more tied to economic cycles than to technology. And some companies just fail to adapt. AI is going to be similar. Lots of companies are flirting with it but aren't taking it seriously yet. Adoption cycles are always longer than people seem to think.

    AI prompting is just a form of higher-level programming, and being able to program is a non-optional skill for prompting effectively. I'd use the word meta-programming, but of course that's one of those improvements we already had.

    • FeteCommuniste 3 minutes ago

      > That's why I'm not worried about losing my job. The whole notion is based on a closed world assumption, which is always a bad assumption.

      You might be right, but some of us haven't quite warmed to the idea that our new job description will be something like "high-level planner and bot-wrangler," with nary a line of code in sight.

  • localhoster an hour ago

    TBH, I have found AI addictive: you use it for the first time, and it's incredible. You get a nice kick of dopamine. This kick of dopamine decreases with every win you get. What once felt incredible is just another prompt today.

    Those things don't excite you any more. Plus the fact that you no longer exercise your brain at work. Plus the constant feeling of FOMO.

    It deflates you, faster.

    • m_fayer 21 minutes ago

      What felt incredible was getting the setup and prompting right and then producing reasonable working code at 50x human speed. And you're right, that doesn't excite after a while.

      But I've found my way to what, for me, is a more durable and substantial source of satisfaction, if not excitement, and that is value. Excuse the cliche, but it's true.

      My life has been filled with little utilities that I've been meaning to put together for years but never found the time. My homelab is full of various little applications that I use, that are backed up and managed properly. My home automation does more than it ever did, and my cabin in the countryside is monitored and adaptive to conditions to a whole new degree of sophistication. I have scripts and workflows to deal with a fairly significant administrative load - filing and accounting is largely automated, and I have a decent approximation of an always up-to-date accountant and lawyer on hand. Paper letters and PDFs are processed like its nothing.

      Does all the code that was written at machine-speed to achieve these things thrill me? No, that's the new normal. Is the fact that I'm clawing back time, making my Earthly affairs orderly in a whole new way, and breathing software-life into my surroundings without any cloud or big-tech encroachment thrilling? Yes, sometimes - but more importantly it's satisfying and calming.

      As far as using my brain - I devote as much of my cognitive energy to these things as I ever have, but now with far more to show for it. As the agents work for me, I try to learn and validate everything they do, and I'm the one stitching it all into a big cohesive picture. Like directing a film. And this is a new feeling.

    • raincole 26 minutes ago

      Isn't it just like programming?

      Many programmers became programmers because they found the idea of programming fascinating, probably in their middle school days. And then they went on to become professionals. Then they burned out and, if they were lucky, transitioned to management.

      Of course not everyone is like that, but you can't say it isn't common, right?

    • altmanaltman an hour ago

      If what once felt incredible is just another prompt today, what is incredible today? Addictive personalities usually double down to get a bigger dopamine kick - that's why they stay addicted. So I don't think you truly found it addictive in the conventional sense of the term. Also, exercising the brain has been optional in software for quite a while, tbh.

      • rtrav an hour ago

        Apart from the addicts, AI also helps the liars, marketeers and bloggers. You can outsource the lies to the AI.

    • ap99 an hour ago

      Yeah if you want to keep your edge you have to find other ways to work your programming brain.

      But as far as output - we all have different reasons for enjoying software development but for me it's more making something useful and less in the coding itself. AI makes the fun parts more fun and the less fun parts almost invisible (at small scale).

      We'll all have to wrestle with this going forward.

      • co_king_3 an hour ago

        If you use an LLM you've given up your edge.

        • 9875325996435 23 minutes ago

          If you use a compiler you've given up your edge.

  • sph 4 minutes ago

    Is a blog reposting other content worth its own post?

    Previous discussion of the original article: https://news.ycombinator.com/item?id=46945755

  • singularfutur 2 hours ago

    This is not a technology problem. AI intensifies work because management turns every efficiency gain into higher output quotas. The solution is labor organization, not better software.

    • bckr 42 minutes ago

      Labor organization yes! I don't quite know how to achieve it. I also worry that my desire to become a manager is in direct conflict with my desire to contribute to labor organization.

      On a separate note, I have the intensification problem in my personal work as well. I sit down to study, but, first, let me just ask Claude to do some research in the background... Oh, and how is my Cursor doing on the dashboard? Ah, right, studying... Oh, Claude is done...

      • co_king_3 24 minutes ago

        > I don't quite know how to achieve it.

        Definitely not by posting on right-wing social media websites.

        > I also worry that my desire to become a manager is in direct conflict with my desire to contribute to labor organization.

        It is.

    • zozbot234 43 minutes ago

      The driving force is not management or even developers; it's always the end users. They get to do more with less, thanks to the growing output. This is something to be celebrated, not a problem to be "solved" with artificial quotas.

    • sawmurai 25 minutes ago

      I am all for labor organization. I just don’t see how it would be of benefit in this particular case.

      • co_king_3 23 minutes ago

        If I'm not mistaken it would appear that you're saying that you are in fact *not* for labor organization in this case.

        • sawmurai a minute ago

          No, absolutely not. I would even be for labor organization if it had no impact on this matter, primarily because I don't see why it would be a negative.

    • MrBuddyCasino an hour ago

      The leftist thought process never ceases to amaze me:

      "This time, it's going to be the correct version of socialism."

    • ap99 2 hours ago

      This argument has been used against every new technology since forever.

      And the initial gut reaction is to resist by organizing labor.

      Companies that succumb to organized labor get locked into that speed of operating. New companies get created that adopt 'the new thing' and blow old companies away.

      Repeat.

      • falloutx an hour ago

        > And the initial gut reaction is to resist by organizing labor.

        Yeah, like tech workers have similar rights to union workers. We literally have zero power compared to any previous group of workers. Organizing labour can't even happen in tech, as tech has a large percentage of immigrant labour who have even fewer rights than citizens.

        Also, there is no shared pain like union workers had: we have all been given different incentives, working under different corporations, and without shared pain it's impossible to organize. AI is the first shared pain we have had, and even this caused no resistance from tech workers. Resistance has come from the users, which is the first good sign. Consumers have shown more ethics than workers, and we have to applaud that. Any resistance to buying chatbot subscriptions has to be celebrated.

        • co_king_3 an hour ago

          Labor organizing is (obviously) banned on HackerNews.

          This isn't the place to kvetch about this; you will literally never see a unionization effort on this website because the accounts of the people posting about it will be [flagged] and shadowbanned.

        • ap99 an hour ago

          I'm curious as to what previous group you're comparing yourself (and the rest of us) to.

          I'm also curious as to what you do, where you do it, and who you work for that makes you feel like you have zero power.

          • falloutx 39 minutes ago

            Just a regular senior SDE at one of the Mag7. I can tell you everyone at these companies is replaceable within a day - even within an hour. Even the heads of departments have no real power; they can be fired on short notice.

      • wiseowise an hour ago

        So race to the bottom where you work more and make less per unit of work? Great deal, splendid idea.

        The only winners here are CEOs/founders who make obscene money, liquidate/retire early while suckers are on the infinite treadmill justifying their existence.

      • wolfcola 11 minutes ago

        Do you like working 8 hours a day instead of 12? 5 days a week instead of 7? You can thank organized labor.

      • foxes an hour ago

        Maybe society shouldn't be optimising for that.

  • sprightlytogo 24 minutes ago

    As someone who prefers to do one task at a time, using AI tools makes me feel productive and unproductive at the same time: productive because I am able to finish my task faster, unproductive because I feel like I am wasting my time while waiting for the AI to respond.

  • teekert 17 minutes ago

    Had a similar experience recently: set up Claude Code, wrote plans, CLAUDE.md, etc. The plan was to end up with a nice-looking Hugo/Bootstrap website.

    Long story short, it was ugly and didn't really work as I wanted. So I'm learning Hugo myself now... The whole experience was kind of frustrating tbh.

    When I finally settled in and did some hours of manual work, I felt much better because of it. I did benefit from my planning with Claude, though...

    • fendy3002 6 minutes ago

      Now that you've become accustomed to Hugo, I wonder whether the way you plan and prompt now will produce better results or not.

      • teekert 4 minutes ago

        It probably will. I use AI extensively, but mostly when I can't remember tedious syntax or suspect something can be done in a better way, and that works well for me... If I go too far towards vibe coding, the fun is sucked away for me.

  • bilekas 2 hours ago

    > I've had conversations with people recently who are losing sleep because they're finding building yet another feature with "just one more prompt" irresistible.

    This is actually a really good point that I have kind of noticed when using AI for side projects, i.e. on my own time. The allure of thinking, "Oh, I wonder how it will perform with this feature request if I give it this amount of info."

    Can't say I would put off sleep for it but I get the sentiment for sure.

    • fhd2 2 hours ago

      What kills me personally is that I'm constantly 80% there, but the remaining 20% can be just insurmountable. It's really like gambling: Just one more round and it'll be useful, OK, not quite, just one more, for hours.

      • CSSer an hour ago

        Do you mean in terms of adding one more feature or in terms of how a feature you're adding almost works but not quite right?

        I find it a lot harder to cut my losses with the latter when it's on a good run (and often even when I know I could just write it by hand), especially because there's as much if not more intrigue about whether the tool can accomplish it or not. These are the moments where my mind has drifted to thinking about it exactly the way you describe it here.

      • co_king_3 an hour ago

        If you think it's getting 80% right on its own, you're a victim of Anthropic and OpenAI's propaganda.

        • Melonai 8 minutes ago

          No, I kind of see this too, but the 80% is very much the simpler stuff. AI genuinely saves me some time, but I always notice that if I try to "finish" a relatively complex task that's a bit unique in some regards - when a bit more complex work is necessary, something slightly domain-related maybe - I start prompting and prompting and banging my head against the terminal window to make it try to understand the issue, but somehow it still doesn't turn out well at all, and I end up throwing out most of the work done from that point on.

          Sometimes it looks like some of that comes from the AI generally being very, very sure of its initial idea ("The issue is actually very simple, it's because...") and then running around in circles once it tries and fails. You can pull it out with a bit more prompting, but it's tough. The thing is, it is sometimes actually right from the very beginning, but if it isn't...

          This is just my own perspective after working with these agents for some time, I've definitely heard of people having different experiences.

      • gedy an hour ago

        And let's get real: AI companies will not be satisfied with you paying $20 or even $200 month if you can actually develop your product in a few days with their agents. They are either going to charge a lot more or string you along chasing that 20%.

        • bilekas 31 minutes ago

          That's an interesting business model, actually: "Oh hey there, I see you're almost finished with your project and ready to launch - watch these adverts and participate in this survey to get the last 10% of your app completed."

  • dubeye 9 minutes ago

    Tell that to my family with whom I have been spending a lot more time recently having benefited a lot from increased productivity.

  • digikata 36 minutes ago

    A couple of historical notes that come to mind.

    When washing machines were introduced, the number of hours spent on the chore of laundry did not necessarily decrease until 40 years after their introduction.

    When project management software was introduced, it made the task of managing project tasks easier. One could create an order of magnitude or more of detailed plans in the same amount of time - poorly used, this decreased the odds of project success by eating up everyone's time. And the software itself has not moved the needle on the project success factors of completing within the budget, time, and resources planned.

  • Flavius 40 minutes ago

    This intensification is really a symptom of the race to the bottom. It only feels 'exhausting' for people who don't want to lose their job or business to an agent; for everyone else, the AI is just an excuse to do less.

    • zozbot234 32 minutes ago

      The way you avoid losing your job to an AI agent is not 'intensifying' its use, but learning to drive it better. Much of what people are calling 'intensification' here is really just babysitting and micromanaging their agent because it's perpetually running on vibes and fumes instead of being driven effectively with very long, clearly written (with AI assistance!) prompts and design documents. Writing clear design documentation for your agent is a light, sustainable and even enjoyable activity; babysitting its mistakes is not.

    • co_king_3 9 minutes ago

      I'm sorry but if you're losing your job to this shit you were too dumb to make it in the first place.

      Edit: Not to mention, this is what you get for not unionizing earlier. Get good or get cut.

  • apexalpha 15 minutes ago

    I just built my k8s homelab with AI.

    It’s insane how productive I am.

    I used to have "breaks" looking for specific keywords or values to enter while crafting YAML.

    Now the AI makes me skip all of that, essentially.

  • bsenftner an hour ago

    I've been saying this since ChatGPT first came out: AI enables the lazy to dig intellectual holes they cannot dig out of, while also enabling those with active critical analysis and good secondary considerations to literally become the fabled 10x (or more) developer / knowledge worker.

    Which creates interesting scenarios as AI is being evaluated and adopted: the short-sighted are loudly declaring success - which will be short-term success - and they are bullying their work-peers into following their method. That method is intellectually lazy: letting the AI code for them, verifying it with testing, and believing they are done. Meanwhile, the quiet ones are figuring out how to eliminate the need for their coworkers at all.

    Managers are observing productivity growth, which falters with the loud ones but not with the quiet ones... AI is here to make the scientifically minded excel, and the shortcut takers can footgun themselves out of there.

    • rtrav an hour ago

      Surely managers will finally recognize the contributions of the quiet ones! I cannot believe what I read here.

      We just saw that productivity growth in the vibe-coded GitHub outages.

      • bsenftner 34 minutes ago

        Don't bet on it. Those managers are the previously loud short sighted thinkers that finagled their way out of coding. Those loud ones are their buddies.

    • falloutx 31 minutes ago

      This is cope. Managers are not magicians who will finally understand who is good and who is just vibe coding demos. In fact, it's now going to become even harder for managers to tell the difference. It's also more likely that the managers are at the same risk, because without a clique of software engineers they would have nothing to manage.

  • Xarovin 41 minutes ago

    I like working on my own projects, and where I found AI really shone was by having something there to bounce ideas off and get feedback.

    That changes if you get it to write code for you. I tried vibe-coding an entire project once, and while I ended up with a pretty result that got some traction on Reddit, I didn't get any sense of accomplishment at all. It's kinda like doomscrolling in a way, it's hard to stop but it leaves you feeling empty.

  • mentalgear an hour ago

    I'm also coming to the conclusion that LLMs have basically the same value as when I tried them out with GPT-3: good for semantic search / debugging, bad for generation, as you constantly have to check and correct it, and the parts you trust it to get "right" are often the ones that bite you afterwards - or, if right, introduce gaps in your own knowledge that slowly make you inefficient in your "generation controller" role.

    • co_king_3 39 minutes ago

      I've been saying since 2024 that these things are not getting meaningfully better at all.

      I think these companies have been manipulating social media sentiment for years in order to cover up their bunk product.

  • camgunz 31 minutes ago

    Comments on the original article: https://news.ycombinator.com/item?id=46945755

  • trash_cat an hour ago

    My two cents: this is part of the learning curve. With collective experience, this type of work will become better understood, shared, and explored. It is intense in the beginning because we are still discovering how to work with it. The other part is that this is a non-deterministic tool, which does increase cognitive load.

  • everdrive an hour ago

    People are a gas, and they expand to fill the space they're in. Tools that produce more work don't make people's lives easier; they just mean an individual needs to do more work using those tools. This is a disposition that most people have, and therefore it's unavoidable. AI is not exciting to me. I only need to use it so I don't fall behind my peers. Why would I ever be interested in that?

    • graemep 26 minutes ago

      "Work expands to fill the time available"

  • erelong 2 hours ago

    Doesn't have to be that way, it's about managers being realistic and not pushing people too far

    • wiseowise an hour ago

      Managers don’t even need to push anything. FOMO does all the work.

      Overheard a couple of conversations in the office about how one IC spent all weekend setting up OpenClaw, while another was vibe coding some bullshit application.

      I see hundreds of crazy people in our company Slack just posting/reposting twitter hype threads and coming up with ridiculous ideas about how to "optimize" workflows with AI.

      Once this becomes the baseline, you’ll be seen as the slow one, because you’re not doing 5x work for the same pay.

      • co_king_3 41 minutes ago

        Don't worry about these people.

        These are internet cult victims.

    • Pingk an hour ago

      You do it to yourself, you do, and that's why it really hurts.

      > Importantly, the company did not mandate AI use (though it did offer enterprise subscriptions to commercially available AI tools). On their own initiative workers did more because AI made “doing more” feel possible, accessible, and in many cases intrinsically rewarding.

  • mrtksn 2 hours ago

    Computer languages were the lathe for shaping machines to make them do whatever we want; AI is a CNC. Another abstraction layer for making machines do whatever we want them to do.

    • co_king_3 an hour ago

      AI is one of those early-2000s SUVs that gets 8 miles to the gallon and has a TV screen in the back of every seat.

      It's about presenting externally as a "bad ass" while:

      A) Constantly drowning out every moment of your life with low quality background noise.

      B) Aggressively polluting the environment and depleting our natural resources for no reason beyond pure arrogance.

      • co_king_3 an hour ago

        I feel that the popularization of bloated UI "frameworks" like React and Electron, coupled with the inefficiency tolerated in the "JS ecosystem", has been a precursor to this dynamic.

        It seems perfectly fitting to me that Anthropic is using a wildly overcomplicated React renderer in their TUI.

        React devs are the perfect use case for "AI" dev tools. It is perfectly tolerated for them to write highly inefficient code, and these frameworks are both:

        A) Arcane and inconsistently documented

        B) Heavily overrepresented in open-source

        Meaning there are meaningful gains to be had from querying these "AI" tools for framework development.

        In my opinion, the shared problem is the acceptance of egregious inefficiency.

    • taosx an hour ago

      I don't disagree with the concept of AI being another abstraction layer (maybe), but I feel that's an insult to a CNC machine, which is a very precise and accurate tool.

      • mrtksn an hour ago

        LLMs are quite accurate for programming; these days they almost always create code that compiles without errors, and errors are almost always fixable by feeding them back into the LLM. I would say this is extremely precise text generation, much better than most humans.

        Just like with a CNC, though, you need to feed it the correct instructions. It's still on you for the machined output to do the expected thing. CNCs are also not perfect, and their operators need to know the intricacies of machining.

        • co_king_3 41 minutes ago

          > LLMs are quite accurate for programming, these days they almost always create a code that will compile without errors and errors are almost always fixable by feeding the error into the LLM.

          What domains do you work in? This description does not match my experience whatsoever.

          • mrtksn 26 minutes ago

            I'm primarily into mobile apps these days, but using LLMs I'm able to write software in languages that I don't know, with tech that I don't understand well (like Bluetooth).

            What did you try to do where the LLM failed you?

    • falloutx an hour ago

      100% disagree. A CNC is a precision machine, while AI is the literal opposite of precision.

      • rvz 12 minutes ago

        Tell them again.

  • blitzar an hour ago

    > I've had conversations with people recently who are losing sleep because they're finding building yet another feature with "just one more prompt" irresistible.

    alpha sigma grindset

  • maininformer 2 hours ago

    I have found that attending to one task keeps me going for longer.

    I prompt and sit there. Scrolling makes it worse. It's a good mental practice to just stay calm and watch the AI work.

    • co_king_3 43 minutes ago

      Isn't the point of AI that you can scroll endlessly while something else works for you?

      If you're going to stay single-minded, why wouldn't you just write the code yourself? You're going to have to double check and rewrite the AI's shitty work anyway

  • captainmuon an hour ago

    It's like the invention of the power loom, but for knowledge workers. Might be interesting to look at the history of industrialisation and the reactions to it.

  • zozbot234 an hour ago
  • ChrisMarshallNY an hour ago

    > help avoid burnout

    Yeah, good luck with that.

    Corporations have tried to reduce employee burnout exactly zero times.

    That’s something that starts at the top. The execs tend to be “type A++” personalities, who run close to burnout, and don’t really have much empathy for employees in the same condition.

    But they also don’t believe that employees should have the same level of reward, for their stress.

    For myself, I know that I am not "getting maximum result" from using LLMs, but I feel as if they have been a real force multiplier in my work, and I don't feel burnt out at all.

  • rvz 2 hours ago

    Maybe it would be much better to just link to the original article for the full context, instead of somewhere else. [0]

    Also, this post should link to the original source.

    As per the submission guidelines [1]:

    ”Please submit the original source. If a post reports on something found on another site, submit the latter.”

    [0] https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies...

    [1] https://news.ycombinator.com/newsguidelines.html

    • wiseowise an hour ago

      It is in the title if you open the page.

  • wiseowise an hour ago

    > I've had conversations with people recently who are losing sleep because they're finding building yet another feature with "just one more prompt" irresistible.

    Literal work junkies.

    And what’s the point? If you’re working on your own project, then “just one more feature, bro” isn’t going to make the next Minecraft/Photopea/Stardew Valley/name your one-man wonder. If you’re working for someone, then you’re a double fool, because you’re doing the work of two people for the pay of one.

  • debarshri an hour ago

    intensification = productivity for me.

  • Noaidi an hour ago

    40 years ago, when I was a history major in college, one of my brilliant professors gave us a book to read called "The Myth of Domesticity".

    In the book, the researcher explains that when washing machines were invented, women faced a whole new expectation of clean clothes all the time, because washing clothes was much less of a labor. And statistics pointed out that women were actually washing clothes more often - doing more work - after the washing machine was invented than before.

    This happens with any technology. AI is no different.

  • yieldcrv an hour ago

    this matches my experience

    it's good that people so quickly see it as impulsive and addictive, as opposed to the slow creep of doomscrolling and algorithmic feeds

    • wiseowise an hour ago

      Hopefully it will be like the Ebola virus, so that everyone will see how deadly it is, instead of like smoking, where you die of cancer 40 years down the line.

      • co_king_3 43 minutes ago

        Frankly, it seems more like the Crack epidemic to me.

      • yieldcrv an hour ago

        I think I'll strike a happier medium when I get additional inputs set up, such as just talking to a CLI running my full code base on a VPS, but through my phone and AirPods, only when it needs help

        at least I won't be vegetating at a laptop, or shirking other possible responsibilities to get back to a laptop