> the productivity boost these things can provide is exhausting.
What I personally find exhausting is Simon¹ constantly discovering the obvious. Time after time after time it’s just “insights” every person who smoked one blunt in college has arrived at.
Stop for a minute! You don’t have to keep churning out multiple blog posts a day, every day. Just stop and reflect. Sit back in your chair and let your mind rest. When a thought comes to you, let it go. Keep doing that until you regain your focus and learn to distinguish what matters from what is shiny.
Yes, of course, you’re doing too much and draining yourself. Of course your “productivity” doesn’t result in extra time but is just filled with more of the same, that’s been true for longer than you’ve been alive. It’s a variation of Parkinson’s law.
https://en.wikipedia.org/wiki/Parkinson%27s_law
¹ And others, but Simon is particularly prevalent on HN, so I bump into these more often.
I find Simon’s blog and TILs to be some of the highest signal-to-noise content on the internet. I’ve picked up an incredible number of useful tips and tricks. Many of them I would not have found if he did not share things as soon as he discovered something that felt “obvious.” I also love how he documents small snippets and gists of code that are easy to link to and cross-reference. Wish I did more of that myself.
They have established themselves as a reliable communicator of the technology, and they are read far and wide, which puts them in a great position to influence the industry-wide tone; I'm personally glad they are bringing light to this issue. If it upsets you that someone else wrote about something you understood, perhaps consider starting a blog of your own.
> You don’t have to keep churning out multiple blog posts a day, every day.
How do you know that? You don't think he's being paid for all this marketing work?
I'm paid by my GitHub sponsors, who get a monthly summary of what I've been writing about, on the basis that I don't want to put a paywall on my content but I'm happy for people to pay me to send them less stuff.
I also make ~$600/month from the ads on my site - run by EthicalAds.
I don't take payment to write about anything. That goes against my principles. It would also be illegal in the USA (FTC rules) if I didn't disclose it - and most importantly it would damage my credibility as a writer, which is the thing I value most.
I have a set of disclosures here: https://simonwillison.net/about/#disclosures
Also the absolute lack of historical or political awareness to suggest companies will want to find a “balance” for their employees…
It’s called the market. If you can compete while providing better life balance, go ahead and compete with those bad companies
It's called the market. If you can compete while not employing eight year olds on your assembly lines and dumping carcinogens in the river, go ahead and compete with those bad companies.
It's called the market. Get back to work, slave.
> It’s called the market. If you can compete while providing better life balance, go ahead and compete with those bad companies
With friends like you, who needs enemies? Imagine if we said that about everything. Go ahead and start a garment factory with unlocked exit doors and see if you can compete against these bad garment companies. Go ahead and start your own coal mines that pay in real money and not funny money only redeemable at the company store. Go ahead and start your own factory and guarantee eight hours work, eight hours sleep, eight hours recreation. It is called a market, BRO‽
I am becoming more and more convinced that AI can't be used to make something better than what could have been built before AI.
You never needed 1000s of engineers to build software anyway; Winamp & VLC were built by fewer than four people. You only needed 1000s of people because the executive vision is always to add more useless junk into each product. And now with AI that might be even harder to avoid. This would mean there would be 1000s of do-everything websites in the future in the best case, or billions of apps doing one thing terribly in the worst case.
The percentage of good, well planned, consistent and coherent software is going to approach zero in both cases.
I’m finding that the code LLMs produce is just average. Not great, not terrible. Which makes sense, the model is basically a complex representation of the average of its training data right? If I want what I consider ‘good code’ I have to steer it.
So I wouldn’t use LLMs to produce significant chunks of code for something I care about. And publishing vibe coded projects under my own GitHub user feels like it devalues my own work, so for now I’m just not publishing vibe coded projects. Maybe I will eventually, under a ‘pen name.’
> Winamp & VLC were built by fewer than four people. You only needed 1000s of people because the executive vision is always to add more useless junk into each product.
Many types of software have essential complexity and minimal feature sets that still require hundreds or thousands of software engineers. Just 4 people simply cannot supply enough man-hours to build the capabilities customers desire.
Think of complex software like 3D materials modeling and simulation, or logistics software like factory and warehouse planning. Even the Linux kernel and userspace have thousands of contributors, and the baseline features (drivers, sandboxing, GUI, etc.) that users want from a modern operating system cannot be delivered by a 4-person team.
All that said, there are lots of great projects with tiny teams. SQLite is 3 people. Foobar2000 is one person. ShareX, the screen-capture tool, is I think 1 developer in Turkey.
But big projects are where the quality of LLM contributions falls the most, and they require (continuous, exhausting, thankless) supervision!
> percentage of good, well planned, consistent and coherent software is going to approach zero
So everything stays exactly the same?
> So everything stays exactly the same?
No, we get applications so hideously inefficient that your $3000 developer machine feels like it's running a Pentium II with 256 MB of RAM.
We get software that's as slow as it was 30 years ago, for no reason other than our own arrogance and apathy.
I find it hard to disagree with this (sadly).
I do feel things in general are more "snappy" at the OS level, but once you get into apps (local or web), things don't feel much better than 30 years ago.
The two big exceptions for me are video and gaming.
I wonder how people who work in CAD, media editing, or other "heavy" workloads, etc., feel.
> I wonder how people who work in CAD, media editing, or other "heavy" workloads, etc., feel.
I would assume (generally speaking) that CAD and video editing applications are carefully designed for efficiency because it's an important differentiator between different applications in the same class.
In my experience, these applications are some of the most exciting to use, because I feel like I'm actually able to leverage the power of my hardware.
IMO the real issue is bloated desktop apps like Slack, Discord, Spotify, or Claude's TUI, which consume massive amounts of resources without doing much beyond displaying text or streaming audio files.
I get this comment every time I say this, but there are levels to this. What you think is bad today could be considered artisan when things become worse than today.
I mean, you've never used the desktop version of Deltek Maconomy, have you? Somehow I can tell.
My point here is not to roast Deltek, although that's certainly fun (and 100% deserved), but to point out that the bar for how bad software can be and still, somehow, be commercially viable is already so low it basically intersects the Earth's centre of gravity.
The internet has always been a machine that allows for the ever-accelerated publishing of complete garbage of all varieties, but it's also meant that in absolute terms more good stuff also gets published.
The problem is one of volume; I suspect the percentages of good versus crap don't change that much.
So we'll need better tools to search and filter, but, again, I suspect AI can help here too.
Underrated comment. The reason that everyone complains about code all the time is because most code is bad, and it’s written by humans. I think this can only be a step up. Nailing validation is the trick now.
Validation was always the hard part, outside of truly novel areas - think edges of computer science (which generally happen very rarely and only need to be explored once or a handful of times).
Validation was always the hard part because great validation requires great design. You can't validate garbage.
Completely agree. There is a common misunderstanding/misconception in product development that more features = a better product.
I’ve never seen a product/project manager asking themselves: does this feature add any value? Should we remove it?
In agile methodologies we measure the output of the developers. But we don’t care whether that output carries any meaningful value to the end user/business.
It's also about marketing. People buy because of features.
The people making the buying decisions may not have a good idea of what maximises "meaningful value", but they compare feature sets.
It’s more about operational resilience and serving customers than product development. If you run an early-WhatsApp-like organisation, just 1 person leaving can create awful problems. The same goes for serving customers: big clients especially need all kinds of reports and resources that a skeleton organisation cannot provide.
Yeah, that’s a misconception too based on my experience.
I’ve seen many people (even myself) thinking the same: if I quit/something happens to me, there will be no one who knows how this works/how to do this. It turned out the businesses always survived. There was a tiny inconvenience, but other than that: nothing. There is always someone willing to pick up/take over the task in no time at all.
I mean I agree with you, in theory. But that’s not what I’ve seen in practice.
> I’ve never seen a product/project manager asking themselves: does this feature add any value? Should we remove it?
To be fair, it is a hard question to contend with. It is easier to keep users who don't know what they're missing happier than users who lost something they now know they want. Even fixing bugs can sometimes upset users who have come to depend on the bug as a feature.
> In agile methodologies we measure the output of the developers.
No we don't. "Individuals and interactions over processes and tools". You are bound to notice a developer with poor output as you interact with them, but explicitly measure them you will not. Remember, agile is all about removing managers from the picture. Without managers, who is even going to do the measuring?
There are quite a few pre-agile methodologies out there that try to prepare a development team to operate without managers. It is possible you will find measurement in there, measuring to ensure that the people can handle working without managers. Even agile itself recognizes in the 12 principles that it takes a team of special people to be able to handle agile.
I didn’t mean the Agile Manifesto prescribes individual productivity measurement. I meant what often happens in “agile in the wild”: we end up tracking throughput proxies (story points completed, velocity, number of tickets closed, burndown charts) and treating that as success, while the harder question (“did this deliver user/business value?”) is weakly measured or ignored.
Also, agile isn’t really “removing managers from the picture” so much as shifting management from command-and-control to enabling constraints, coaching, and removing impediments. Even in Scrum, you still have roles with accountability, and teams still need some form of prioritization and product decision-making (otherwise you just get activity without direction).
So yeah: agile ideals don’t say “measure dev output.” But many implementations incentivize output/throughput, and that’s the misconception I was pointing at.
> we end up tracking throughput proxies (story points completed, velocity, number of tickets closed, burndown charts) and treating that as success
That sounds more like scrum or something in that wheelhouse, which isn't agile, but what I earlier called pre-agile. They are associated with agile because they are intended to be used as a temporary, transitional tool. Standing up one day and telling your developers "Good news, developers. We fired all the managers. Go nuts!" would obviously be a recipe for disaster. An organization wanting to adopt agile needs to work into it slowly and prove that the people involved can handle it. Not everyone can.
> Also, agile isn’t really “removing managers from the picture” so much as shifting management from command-and-control to enabling constraints, coaching, and removing impediments.
That's the pre-agile step. You don't get rid of managers immediately, you put them to work stepping in when necessary and helping developers learn how to manage without a guiding hand. "Business people" remain involved in agile. Perhaps you were thinking of that instead? Under agile they aren't managers, though, they are partners who work together with the developers.
Wait, surely adding 10x more agents to my project will speed up development, improve the end product, and make me more productive by that same proportion, right?
I will task a few of them to write a perfectly detailed spec up front, break up the project into actionable chunks, and then manage the other workers into producing, reviewing, and deploying the code. Agents can communicate and cooperate now, and hallucination is a solved problem. What could go wrong?
Meanwhile, I can cook or watch a movie, and occasionally steer them in the right direction. Now I can finally focus on the big picture, instead of getting bogged down by minutiae. My work is so valuable that no AI could ever replace me.
/s
I just built a programming language in a couple of hours, complete with an interpreter, with Claude Code. I know nothing about designing and implementing programming languages: https://github.com/m-o/MoonShot. It's crazy.
Yes, my point is that it was possible to build it before AI, and with much less effort than people imagine. People in college build an interpreter in less than a couple of weeks anyway, and that probably has more utility.
Consider two scenarios:
1) I try to build an interpreter. I go and read some books, understand the process, build it in 2 weeks. Result: I have a toy interpreter. I understand said toy interpreter. I learnt how to do it, learnt ideas in the field, and applied my knowledge practically.
2) I try to build an interpreter. I go and ask Claude to do it. It spits out something which works. Result: I have a black-box interpreter. I don't understand said interpreter. I didn't build any skills in building it. Took me less than an hour.
The toy interpreter is useless in both scenarios, but scenario 1 pays for the 2-week effort, while scenario 2 is a vanity project.
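(For a sense of scale: the kind of toy interpreter scenario 1 produces is on the order of the sketch below, a minimal arithmetic evaluator in Python. It is purely illustrative and assumed, not anything from the MoonShot repo.)

    # Minimal "toy interpreter": a recursive-descent evaluator
    # for arithmetic expressions with precedence and parentheses.
    import re

    def tokenize(src):
        return re.findall(r"\d+|[()+\-*/]", src)

    def parse_expr(toks):   # expr := term (('+'|'-') term)*
        val = parse_term(toks)
        while toks and toks[0] in "+-":
            op = toks.pop(0)
            rhs = parse_term(toks)
            val = val + rhs if op == "+" else val - rhs
        return val

    def parse_term(toks):   # term := atom (('*'|'/') atom)*
        val = parse_atom(toks)
        while toks and toks[0] in "*/":
            op = toks.pop(0)
            rhs = parse_atom(toks)
            val = val * rhs if op == "*" else val / rhs
        return val

    def parse_atom(toks):   # atom := NUMBER | '(' expr ')'
        tok = toks.pop(0)
        if tok == "(":
            val = parse_expr(toks)
            toks.pop(0)     # consume the closing ')'
            return val
        return int(tok)

    print(parse_expr(tokenize("2*(3+4)")))  # prints 14

Scenario 1 leaves you able to write and extend something like this yourself; scenario 2 leaves you with a much bigger version of it that you can't.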
Yes, but you can combine the approaches. I.e., you know what you are working on, so you can make it much faster. Or you build something and learn from it.
I think there will be a lot of slop and a lot of useful stuff.
But also, what I did was just an experiment to see if it is possible. I don't think it is usable, nor do I have any plans to make it into a new language. And it was done in less than 3 hours of total time.
So for example, say you want to try new language features, like total immutability, or nullability as a type. Then you can build a small language and try to write code in it. Instead of spending weeks on it, you can do it in hours.
Took a quick look; this seems like a copy of the Writing an Interpreter in Go book by Thorsten Ball, but just much worse.
Also, using double equals to mutate variables, why?
Just because I wanted it to. I made some design choices that I found interesting.
You built something.
Now comes the hard or impossible part: is it any good? I would bet against it.
Oh, thank you for informing me.
I feel agentic development is a time sink.
Previously, I'd have an idea, sit on it for a while. In most cases, conclude it's not a good idea worth investing in. If I decided to invest, I'd think of a proper strategy to approach it.
With agentic development, I have an idea, waste a few hours chasing it, then switch to other work, often abandoning the thing entirely.
I still need to figure out how to deal with that; for now I just time-box these sessions.
But I feel I'm trading thinking time for execution time, and understanding time for testing time. I'm not yet convinced I like those tradeoffs.
Edit: Just a clarification: I currently work in two modes, depending on the project. In some, I use agentic development. In most, I still do it "old school". That's what makes the side effects I'm noticing so surprising. Agentic development pulls me down rabbit holes and makes me lose the plot and focus. Traditional development doesn't; its side effects apparently keep me focused and in control.
>With agentic development, I have an idea, waste a few hours chasing it, then switch to other work, often abandoning the thing entirely.
How much of this is because you don't trust the result?
I've found this same pattern in myself, and I think the lack of faith that the output is worth asking others to believe in is why it's a throwaway for me. Just yesterday someone mentioned a project underway in a meeting that I had ostensibly solved six months ago, but I didn't even demo it because I didn't have any real confidence in it.
I do find that's changing for myself. I actually did demo something last week that I 'orchestrated into existence' with these tools. In part because the goal of the demo was to share a vision of a target state rather than the product itself. But also because I'm much more confident in the output. In part because the tools are better, but also because I've started to take a more active role in understanding how it works.
Even if the LLMs come to a standstill in their ability to generate code, I think the practice of software development with them will continue to mature to a point where many (including myself) will start to have more confidence in the products.
> Previously, I'd have an idea, sit on it for a while.
> With agentic development, I have an idea, waste a few hours chasing it,
What's the difference between these 2 periods? Weren't you wasting time when sitting on it and thinking about your idea?
Sitting on an idea doesn’t have to mean literally sitting and staring at the ceiling, thinking about it. It means you have an idea and let it stew for a while, your mind coming back to it on its own while you’re taking a shower, doing the dishes, going for a walk… The idea which never comes back is the one you abandon and would’ve been a waste of time to pursue. The idea which continues to be interesting and popping into your head is the worthwhile one.
When you jump straight into execution because it’s easy to do so, you lose the distinction.
Sitting on an idea doesn't necessarily mean being inactive. You can think at the same time as doing something else. "Shower thoughts" are often born of that process.
If you do not know what you want to build, how to ask the AI for what you want, or what the correct requirements are, then it becomes a waste of time and money.
More importantly, as the problem becomes more complex, it matters more whether you know where the AI falls short.
Case study: Security researchers were having a great time finding vulnerabilities and security holes in Openclaw.
The Openclaw creators had a very limited background in security, and even though the AI built Openclaw entirely, the authors had to collaborate with security experts to secure the whole project.
That describes the majority of cases actually worth working on as a programmer in the traditional sense of the word. You build something to begin to discover the correct requirements and to picture the real problem domain in question.
> You build something to begin to discover the correct requirements and to picture the real problem domain in question.
That's one way; another way is to keep the idea in your head (both actively and "in the background") for days/weeks, and then eventually you sit down and write a document, and you'll get 99% of the requirements down perfectly. Then implementation can start.
Personally I prefer this hammock-style development and to me it seems better at building software that makes sense and solves real problems. Meanwhile "build something to discover" usually is best when you're working with people who need to be able to see something to believe there is progress, but the results are often worse and less well-thought out.
It's better to first have a solid, concrete idea of the entire system you want to build written down, one that has ironed out the limitations, requirements, and constraints, before jumping into the code implementation or getting the agent to write it for you.
The build-something-to-discover approach is not for building robust solutions in the long run. Starting with the code first without knowing what you are solving, or getting the AI to generate something half-working that breaks easily, then changing it yet again so it becomes even more complicated, just wastes more time and tokens.
Someone still has to read the code and understand why the project was built on a horrible foundation and needs to know how to untangle the AI vibe-coded mess.
with agentic development, I've finally considered doing open source work for no reason aside from a utility existing
before, I would narrow things down to only the most potentially economically viable, and laugh at ideas guys that were married to the one single idea in their life as if it was their only chance, seemingly not realizing they were competing with people that get multiple ideas a day
back to the aforementioned epiphany, it reminds me of the world of Star Trek where everything was developed for its curiosity and utility instead of money
That's the bane of all productivity increasing tools, any time you free up immediately gets consumed by more work.
People keep on making the same naive assumption that the total amount of work is a constant when you mess with the cost of that work. The reality is that if you make something cheaper, people will want more of it. And it adds up to way more than what was asked before.
That's why I'm not worried about losing my job. The whole notion is based on a closed world assumption, which is always a bad assumption.
If you look at the history of computers and software engineering (compilers, CI/CD, frameworks/modules, functional and OO programming paradigms, type inference, etc.), there's something new every few years. Every time we make something easier and cheaper, demand goes up and the number of programmers increases.
And every time, you have people being afraid of losing their jobs. Sometimes jobs indeed disappear, because that particular job ceases to exist when technique X gets replaced with technique Y. But mostly people just keep their jobs and learn the new thing on the job. Or they change jobs and skill up as they go. People generally only lose their jobs when companies fail or start shrinking. It's more tied to economic cycles than to technology. And some companies just fail to adapt. AI is going to be similar. Lots of companies are flirting with it but aren't taking it seriously yet. Adoption cycles are always longer than people seem to think.
AI prompting is just a form of higher-level programming, and being able to program is a non-optional skill for prompting effectively. I'd use the word meta-programming, but of course that's one of those improvements we already had.
> That's why I'm not worried about losing my job. The whole notion is based on a closed world assumption, which is always a bad assumption.
You might be right, but some of us haven't quite warmed to the idea that our new job description will be something like "high-level planner and bot-wrangler," with nary a line of code in sight.
TBH, I have found AI addictive: you use it for the first time, and it's incredible. You get a nice kick of dopamine. That kick of dopamine decreases with every win you get. What once felt incredible is just another prompt today.
Those things don't excite you any more.
Plus, the fact that you no longer exercise your brain at work.
Plus, the constant feeling of FOMO.
What felt incredible was getting the setup and prompting right and then producing reasonable working code at 50x human speed. And you're right, that doesn't excite after a while.
But I've found my way to what, for me, is a more durable and substantial source of satisfaction, if not excitement, and that is value. Excuse the cliche, but it's true.
My life has been filled with little utilities that I've been meaning to put together for years but never found the time. My homelab is full of various little applications that I use, that are backed up and managed properly. My home automation does more than it ever did, and my cabin in the countryside is monitored and adaptive to conditions to a whole new degree of sophistication. I have scripts and workflows to deal with a fairly significant administrative load - filing and accounting is largely automated, and I have a decent approximation of an always up-to-date accountant and lawyer on hand. Paper letters and PDFs are processed like it's nothing.
Does all the code that was written at machine-speed to achieve these things thrill me? No, that's the new normal. Is the fact that I'm clawing back time, making my Earthly affairs orderly in a whole new way, and breathing software-life into my surroundings without any cloud or big-tech encroachment thrilling? Yes, sometimes - but more importantly it's satisfying and calming.
As far as using my brain - I devote as much of my cognitive energy to these things as I ever have, but now with far more to show for it. As the agents work for me, I try to learn and validate everything they do, and I'm the one stitching it all into a big cohesive picture. Like directing a film. And this is a new feeling.
Many programmers became programmers because they found the idea of programming fascinating, probably in their middle school days. And then they went on to become professionals. Then they burned out and, if they were lucky, transitioned to management.
Of course not everyone is like that, but you can't say it isn't common, right?
If what once felt incredible is just another prompt today, what is incredible today? Addictive personalities usually double down to get a bigger dopamine kick - that's why they stay addicted. So I don't think you truly found it addictive in the conventional sense of the term. Also, exercising the brain has been optional in software for quite a while, tbh.
Yeah if you want to keep your edge you have to find other ways to work your programming brain.
But as far as output - we all have different reasons for enjoying software development but for me it's more making something useful and less in the coding itself. AI makes the fun parts more fun and the less fun parts almost invisible (at small scale).
We'll all have to wrestle with this going forward.
This is not a technology problem. AI intensifies work because management turns every efficiency gain into higher output quotas. The solution is labor organization, not better software.
Labor organization yes! I don't quite know how to achieve it. I also worry that my desire to become a manager is in direct conflict with my desire to contribute to labor organization.
On a separate note, I have the intensification problem in my personal work as well. I sit down to study, but, first, let me just ask Claude to do some research in the background... Oh, and how is my Cursor doing on the dashboard? Ah, right, studying... Oh, Claude is done...
The driving force is not management or even developers; it's always the end users. They get to do more with less, thanks to the growing output. This is something to be celebrated, not a problem to be "solved" with artificial quotas.
No, absolutely not. I would be for labor organization even if it had no impact on this matter, primarily because I don't see why it would be a negative.
This argument has been used against every new technology since forever.
And the initial gut reaction is to resist by organizing labor.
Companies that succumb to organized labor get locked into that speed of operating. New companies get created that adopt 'the new thing' and blow old companies away.
> And the initial gut reaction is to resist by organizing labor.
Yeah, as if tech workers have rights similar to union workers. We literally have 0 power compared to any previous group of workers. Organizing of labour can't even happen in tech, as tech has a large percentage of immigrant labour who have even fewer rights than citizens.
Also, there is no shared pain like union workers had; we have all been given different incentives, working under different corporations, so without shared pain it's impossible to organize. AI is the first shared pain we've had, and even this has caused no resistance from tech workers. Resistance has come from the users, which is the first good sign. Consumers have shown more ethics than workers, and we have to applaud that. Any resistance to buying chatbot subscriptions has to be celebrated.
Labor organizing is (obviously) banned on HackerNews.
This isn't the place to kvetch about this; you will literally never see a unionization effort on this website because the accounts of the people posting about it will be [flagged] and shadowbanned.
Just a regular senior SDE at one of the Mag7 here. I can tell you everyone at these companies is replaceable within a day. Even within an hour. Even department heads have no power: those above them can fire them on short notice.
So race to the bottom where you work more and make less per unit of work? Great deal, splendid idea.
The only winners here are CEOs/founders who make obscene money, liquidate/retire early while suckers are on the infinite treadmill justifying their existence.
As someone who prefers to do one task at a time, using AI tools makes me feel productive and unproductive at the same time: productive because I am able to finish my task faster, unproductive because I feel like I am wasting my time while I am waiting for the AI to respond.
I probably will. I use AI extensively, but mostly when I can't remember tedious syntax or suspect something can be done in a better way. And that works well for me... If I go too far towards vibe coding, the fun is sucked away for me.
> I've had conversations with people recently who are losing sleep because they're finding building yet another feature with "just one more prompt" irresistible.
This is actually a really good point that I have kind of noticed when using AI for side projects, i.e. being on my own time. The allure of thinking "Oh, I wonder how it will perform with this feature request if I give it this amount of info".
Can't say I would put off sleep for it but I get the sentiment for sure.
What kills me personally is that I'm constantly 80% there, but the remaining 20% can be just insurmountable. It's really like gambling: Just one more round and it'll be useful, OK, not quite, just one more, for hours.
Do you mean in terms of adding one more feature or in terms of how a feature you're adding almost works but not quite right?
I find the latter a lot more challenging to cut my losses on when it's on a good run (and often even when I know I could just write this by hand), especially because there's as much if not more intrigue about whether the tool can accomplish it or not. These are the moments where my mind has drifted to thinking about it the exact way you describe it here.
No, I kind of see this too, but the 80% is very much the simpler stuff. AI genuinely saves me some time, but I always notice that if I try to "finish" a relatively complex task that's a bit unique in some regards, when a bit more complex work is necessary, something slightly domain-related maybe, I start prompting and prompting and banging my head against the terminal window to make it understand the issue, but somehow it still doesn't turn out well at all, and I end up throwing out most of the work done from that point on.
Sometimes it looks like some of that comes from AI generally being very very sure of its initial idea "The issue is actually very simple, it's because..." and then it starts running around in circles once it tries and fails, you can pull it out with a bit more prompting, but it's tough. The thing is, it is sometimes actually right, from the very beginning, but if it isn't...
This is just my own perspective after working with these agents for some time, I've definitely heard of people having different experiences.
And let's get real: AI companies will not be satisfied with you paying $20 or even $200 month if you can actually develop your product in a few days with their agents. They are either going to charge a lot more or string you along chasing that 20%.
That's an interesting business model actually: "Oh hey there, I see you're almost finished with your project and ready to launch; watch these adverts and participate in this survey to get the last 10% of your app completed"
When washing machines were introduced, the number of hours spent on the chore of laundry did not necessarily decrease until 40 years after their introduction.
When project management software was introduced, it made the task of managing project tasks easier. One could create an order of magnitude (or more) of detailed plans in the same amount of time; poorly used, this decreased the odds of project success by eating up everyone's time. And the software itself has not moved the needle on project success rates, i.e. completing within the planned budget, time, and resources.
This intensification is really a symptom of the race to the bottom. It only feels 'exhausting' for people who don't want to lose their job or business to an agent; for everyone else, the AI is just an excuse to do less.
The way you avoid losing your job to an AI agent is not 'intensifying' its use, but learning to drive it better. Much of what people are calling 'intensification' here is really just babysitting and micromanaging their agent because it's perpetually running on vibes and fumes instead of being driven effectively with very long, clearly written (with AI assistance!) prompts and design documents. Writing clear design documentation for your agent is a light, sustainable and even enjoyable activity; babysitting its mistakes is not.
I've been saying this since ChatGPT first came out: AI enables the lazy to dig intellectual holes they cannot dig out of, while also enabling those with active critical analysis and good secondary considerations to literally become the fabled 10x-or-more developer / knowledge worker. Which creates interesting scenarios as AI is being evaluated and adopted: the short-sighted are loudly declaring success, which will be short-term success, and they are bullying their work-peers into following their method. That method being intellectually lazy: letting the AI code for them, which they then verify with testing and believe they are done. Meanwhile, the quiet ones are figuring out how to eliminate the need for their coworkers at all. Managers are observing productivity growth, which falters with the loud ones, but not with the quiet ones... AI is here to make the scientifically minded excel, and the shortcut-takers can footgun themselves out of there.
Don't bet on it. Those managers are the previously loud short sighted thinkers that finagled their way out of coding. Those loud ones are their buddies.
This is a cope. Managers are not magicians who will finally understand who is good and who is just vibe-coding demos. In fact, it's now going to become even harder for managers to tell the difference. It's more likely that the managers are at the same risk, because without a clique of software engineers they would have nothing to manage.
I like working on my own projects, and where I found AI really shone was by having something there to bounce ideas off and get feedback.
That changes if you get it to write code for you. I tried vibe-coding an entire project once, and while I ended up with a pretty result that got some traction on Reddit, I didn't get any sense of accomplishment at all. It's kinda like doomscrolling in a way, it's hard to stop but it leaves you feeling empty.
I'm also coming to the conclusion that LLMs have basically the same value as when I tried them out with GPT-3: good for semantic search / debugging. Bad for generation, as you constantly have to check it and correct it, and the parts you trust it to get "right" are often those that bite you afterwards - or, if right, introduce gaps in your own knowledge that slowly make you inefficient in your "generation controller" role.
My two cents: this is part of the learning curve. With collective experience, this type of work will become better understood, shared, and explored. It is intense in the beginning because we are still discovering how to work with it. The other part is that this is a non-deterministic tool, which does increase cognitive load.
People are a gas, and they expand to fill the space they're in. Tools that help produce more work do make individual tasks easier, but they just mean an individual needs to do more work using those tools. This is a disposition most people have, and therefore it's unavoidable. AI is not exciting to me. I only need to use it so I don't fall behind my peers. Why would I ever be interested in that?
Managers don’t even need to push anything. FOMO does all the work.
Overheard a couple of conversations in the office about how one IC spent all weekend setting up OpenClaw, while another was vibe coding some bullshit application.
I see hundreds of crazy people in our company Slack just posting/reposting Twitter hype threads and coming up with ridiculous ideas for how to “optimize” workflow with AI.
Once this becomes the baseline, you’ll be seen as the slow one, because you’re not doing 5x work for the same pay.
You do it to yourself, you do, and that's why it really hurts.
> Importantly, the company did not mandate AI use (though it did offer enterprise subscriptions to commercially available AI tools). On their own initiative workers did more because AI made “doing more” feel possible, accessible, and in many cases intrinsically rewarding.
Computer languages were the lathe for shaping the machines to make them do whatever we want, AI is a CNC. Another abstraction layer for making machines do whatever we want them to do.
I feel that the popularization of bloated UI "frameworks", like React and Electron, coupled with the inefficiency tolerated in the "JS ecosystem" have been precursors to this dynamic.
It seems perfectly fitting to me that Anthropic is using a wildly overcomplicated React renderer in their TUI.
React devs are the perfect use case for "AI" dev tools. It is perfectly tolerated for them to write highly inefficient code, and these frameworks are both:
A) Arcane and inconsistently documented
B) Heavily overrepresented in open-source
Meaning there are meaningful gains to be had from querying these "AI" tools for framework development.
In my opinion, the shared problem is the acceptance of egregious inefficiency.
I don't disagree with the concept of AI being another abstraction layer (maybe) but I feel that's an insult to a CNC machine which is a very precise and accurate tool.
LLMs are quite accurate for programming; these days they almost always create code that will compile without errors, and errors are almost always fixable by feeding the error back into the LLM. I would say this is extremely precise text generation, much better than most humans.
Just like with CNC, though, you need to feed it the correct instructions. It's still on you for the machined output to do the expected thing. CNCs are also not perfect, and their operators need to know the intricacies of machining.
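(Mechanically, the "feed the error back in" loop described above amounts to something like the sketch below. ask_llm is a hypothetical stand-in for whatever model API or CLI you actually drive, and the compile check assumes a local C compiler named cc; both are assumptions, not any real tool's interface.)

    # Sketch of a compile-and-fix loop: generate code, try to compile it,
    # and paste any compiler errors back into the model until it builds.
    import os
    import subprocess
    import tempfile

    def compile_ok(source):
        """Try to compile C source; return (success, compiler stderr)."""
        with tempfile.NamedTemporaryFile("w", suffix=".c", delete=False) as f:
            f.write(source)
            path = f.name
        try:
            result = subprocess.run(["cc", "-c", path, "-o", os.devnull],
                                    capture_output=True, text=True)
            return result.returncode == 0, result.stderr
        finally:
            os.unlink(path)

    def generate_until_it_compiles(task, ask_llm, max_rounds=5):
        source = ask_llm(f"Write a single C file that does: {task}")
        for _ in range(max_rounds):
            ok, errors = compile_ok(source)
            if ok:
                return source
            # The step the comment describes: feed the error into the LLM.
            source = ask_llm(f"Fix these compiler errors:\n{errors}\n\n{source}")
        raise RuntimeError("still not compiling; time for a human to look")

Compiling cleanly is of course only the precision half of the CNC analogy; whether the output does the expected thing is still on the operator.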
> LLMs are quite accurate for programming; these days they almost always create code that will compile without errors, and errors are almost always fixable by feeding the error back into the LLM.
What domains do you work in? This description does not match my experience whatsoever.
I'm primarily into mobile apps these days, but using LLMs I'm able to write software in languages that I don't know, with tech that I don't understand well (like Bluetooth).
> I've had conversations with people recently who are losing sleep because they're finding building yet another feature with "just one more prompt" irresistible.
Isn't the point of AI that you can scroll endlessly while something else works for you?
If you're going to stay single-minded, why wouldn't you just write the code yourself? You're going to have to double check and rewrite the AI's shitty work anyway
It's like the invention of the power loom, but for knowledge workers. Might be interesting to look at the history of industrialisation and the reactions to it.
Corporations have tried to reduce employee burnout exactly zero times.
That’s something that starts at the top. The execs tend to be “type A++” personalities, who run close to burnout, and don’t really have much empathy for employees in the same condition.
But they also don’t believe that employees should have the same level of reward, for their stress.
For myself, I know that I am not “getting maximum result” from using LLMs, but I feel as if they have been a real force multiplier, in my work, and don’t feel burnt out, at all.
> I've had conversations with people recently who are losing sleep because they're finding building yet another feature with "just one more prompt" irresistible.
Literal work junkies.
And what’s the point?
If you’re working on your own project, then “just one more feature, bro” isn’t going to make the next Minecraft/Photopea/Stardew Valley/name your one-man wonder.
If you’re working for someone, then you’re a double fool, because you’re doing work of two people for the pay of one.
40 years ago when I was a history major in college one of my brilliant professors gave us a book to read called "the myth of domesticity".
In the book, the researcher explains that when washing machines were invented, women faced a whole new expectation of clean clothes all the time, because washing clothes was much less of a labor. And statistics pointed out that women were actually washing clothes more often, and doing more work, after the washing machine was invented than before.
This happens with any technology. AI is no different.
Hopefully it will be like an Ebola virus, so that everyone will see how deadly it is instead of like smoking where you die of cancer 40 years down the line.
I think I'll have a happier medium when I get additional inputs set up, such as just talking to a CLI running my full code base on a VPS, but through my phone and AirPods, only when it needs help
at least I won't be vegetating at a laptop, or shirking other possible responsibilities to get back to a laptop
> the productivity boost these things can provide is exhausting.
What I personally find exhausting is Simon¹ constantly discovering the obvious. Time after time after time it’s just “insights” every person who smoked one blunt in college has arrived at.
Stop for a minute! You don’t have to keep churning out multiple blog posts a day, every day. Just stop and reflect. Sit back in your chair and let your mind rest. When a thought comes to you, let it go. Keep doing that until you regain your focus and learn to distinguish what matters from what is shiny.
Yes, of course, you’re doing too much and draining yourself. Of course your “productivity” doesn’t result in extra time but is just filled with more of the same, that’s been true for longer than you’ve been alive. It’s a variation of Parkinson’s law.
https://en.wikipedia.org/wiki/Parkinson%27s_law
¹ And others, but Simon is particularly prevalent on HN, so I bump into these more often.
I find Simon’s blog and TILs to be some of the highest signal to noise content on the internet. I’ve picked up an incredible number of useful tips and tricks. Many of them I would not have found if he did not share things as soon as he discovered something that felt “obvious.” I also love how he documents small snippets and gists of code that are easy to link to and cross-reference. Wish I did more of that myself.
They have established themselves as a reliable communicator of the technology, they are read far and wide, that means they are in a great position to influence the industry-wide tone, and I'm personally glad they are bringing light to this issue. If it upsets you that someone else wrote about something you understood, perhaps consider starting a blog of your own.
> You don’t have to keep churning out multiple blog posts a day, every day.
How do you know that? You don't think he's being paid for all this marketing work?
I'm paid by my GitHub sponsors, who get a monthly summary of what I've been writing about on the basis that I don't want to out a paywall on my content but I'm happy for people to pay me to send them less stuff.
I also make ~$600/month from the ads on my site - run by EthicalAds.
I don't take payment to write about anything. That goes against my principals. It would also be illegal in the USA (FTC rules) if I didn't disclose it - and most importantly it would damage my credibility as a writer, which is the thing I value most.
I have a set of disclosures here: https://simonwillison.net/about/#disclosures
Also the absolute lack of historical or political awareness to suggest companies will want to find a “balance” for their employees…
It’s called the market. If you can compete while providing better life balance, go ahead and compete with those bad companies
It's called the market. If you can compete while not employing eight year olds on your assembly lines and dumping carcinogens in the river, go ahead and compete with those bad companies.
It's called the market. Get back to work, slave.
> It’s called the market. If you can compete while providing better life balance, go ahead and compete with those bad companies
With friends like you, who needs enemies? Imagine if we said that about everything. Go ahead and start a garment factory with unlocked exit doors and see if you can compete against these bad garment companies. Go ahead and start your own coal mines that pay in real money and not funny money only redeemable at the company store. Go ahead and start your own factory and guarantee eight hours work, eight hours sleep, eight hours recreation. It is called a market, BRO‽
I am becoming more and more convinced that AI cant be used to make something better than what could have built before AI.
You never needed 1000s of engineers to build software anyway, Winamp & VLC were build by less than four people. You only needed 1000s of people because the executive vision is always to add more useless junk into each product. And now with AI that might be even harder to avoid. This would mean there would be 1000s of do-everything websites in the future in the best case, or billions of doing one-thing terribly apps in the worst case.
percentage of good, well planned, consistent and coherent software is going to approach zero in both cases.
I’m finding that the code LLMs produce is just average. Not great, not terrible. Which makes sense, the model is basically a complex representation of the average of its training data right? If I want what I consider ‘good code’ I have to steer it.
So I wouldn’t use LLMs to produce significant chunks of code for something I care about. And publishing vibe coded projects under my own GitHub user feels like it devalues my own work, so for now I’m just not publishing vibe coded projects. Maybe I will eventually, under a ‘pen name.’
, Winamp & VLC were build by less than four people. You only needed 1000s of people because the executive vision is always to add more useless junk into each product.
Many types of software have essential complexity and minimal features that still require hundreds/thousands of software engineers. Having just 4 people is simply not enough man-hours to build the capabilities customers desire.
Complex software like 3D materials modeling and simulation, logistics software like factory and warehouse planning. Even the Linux kernel and userspace has thousands of contributors and the baseline features (drivers, sandbox, GUI, etc) that users want from a modern operating system cannot be done by a 4-person team.
All that said, there a lots of great projects with tiny teams. SQLite is 3 people. Foobar2000 is one person. ShareX screensaver I think is 1 developer in Turkey.
But big projects are where the quality of LLM contributions fall the most, and require (continuous, exhausting, thankless) supervision!
> percentage of good, well planned, consistent and coherent software is going to approach zero
So everything stays exactly the same?
> So everything stays exactly the same?
No, we get applications so hideously inefficient that your $3000 developer machine feels like it it's running a Pentium II with 256 MB of RAM.
We get software that's as slow as it was 30 years ago, for no reason other than our own arrogance and apathy.
I find it hard to disagree with this (sadly).
I do feels things in general are more "snappy" at the OS level, but once you get into apps (local or web), things don't feel much better than 30 years ago.
The two big exceptions for me are video and gaming.
I wonder how people who work in CAD, media editing, or other "heavy" workloads etc, feel.
> I wonder how people who work in CAD, media editing, or other "heavy" workloads etc, feel.
I would assume (generally speaking) that CAD and video editing applications are carefully designed for efficiency because it's an important differentiator between different applications in the same class.
In my experience, these applications are some of the most exciting to use, because I feel like I'm actually able to leverage the power of my hardware.
IMO the real issue are bloated desktop apps like Slack, Discord, Spotify, or Claude's TUI, which consume massive amounts of resources without doing much beyond displaying text or streaming audio files.
I get this comment everytime I say this but there are levels to this. What you think is bad today could be considered artisan when things become worse than today.
I mean, you've never used the desktop version of Deltek Maconomy, have you? Somehow I can tell.
My point here is not to roast Deltek, although that's certainly fun (and 100% deserved), but to point out that the bar for how bad software can be and still, somehow, be commercially viable is already so low it basically intersects the Earth's centre of gravity.
The internet has always been a machine that allows for the ever-accelerated publishing of complete garbage of all varieties, but it's also meant that in absolute terms more good stuff also gets published.
The problem is one of volume not, I suspect, that the percentages of good versus crap change that much.
So we'll need better tools to search and filter but, again, I suspect AI can help here too.
Underrated comment. The reason that everyone complains about code all the time is because most code is bad, and it’s written by humans. I think this can only be a step up. Nailing validation is the trick now.
Validation was always the hard part, outside of truly novel areas - think edges of computer science (which generally happen very rarely and only need to be explored once or a handful of times).
Validation was always the hard part because great validation requires great design. You can't validate garbage.
Completely agree. There is a common misunderstanding/misconception in product development, that more features = better product.
I’ve never seen a product/project manager questioning themselves: does this feature add any value? Should we remove it?
In agile methodologies we measure the output of the developers. But we don’t care about that the output carries any meaningful value to the end user/business.
Its also about marketing. People buy because of features.
The people making the buying decisions may not have a good idea of what maximises "meaningful value" but they compare feature sets.
It’s more about operational resilience and serving customers than product development. If you run early WhatsApp like organisation just 1 person leaving can create awful problems. Same for serving customers especially big clients need all kinds of reports and resources that skeleton organisation can not provide.
Yeah, that’s a misconception too based on my experience.
I’ve seen many people (even myself) thinking the same: if I quit/something happens to me, there will be no one who knows how this works/how to do this. Turned out the businesses always survived. There was a tiny inconvenience, but other than that: nothing. There is always someone who is willing to pick up/take over the task in zero amount of time.
I mean I agree with you, in theory. But that’s not what I’ve seen in practice.
> I’ve never seen a product/project manager questioning themselves: does this feature add any value? Should we remove it?
To be fair, it is a hard question to contend with. It is easier to keep users who don't know what they're missing happier than users who lost something they now know they want. Even fixing bugs can sometimes upset users who have come to depend on the bug as a feature.
> In agile methodologies we measure the output of the developers.
No we don't. "Individuals and interactions over processes and tools". You are bound to notice a developer with poor output as you interact with them, but explicitly measure them you will not. Remember, agile is all about removing managers from the picture. Without managers, who is even going to do the measuring?
There are quite a few pre-agile methodologies out there that try to prepare a development team to operate without managers. It is possible you will find measurement in there, measuring to ensure that the people can handle working without mangers? Even agile itself recognizes in the 12 principles that it requires a team of special people to be able to handle agile.
I didn’t mean the Agile Manifesto prescribes individual productivity measurement. I meant what often happens in “agile in the wild”: we end up tracking throughput proxies (story points completed, velocity, number of tickets closed, burndown charts) and treating that as success, while the harder question (“did this deliver user/business value?”) is weakly measured or ignored.
Also, agile isn’t really “removing managers from the picture” so much as shifting management from command-and-control to enabling constraints, coaching, and removing impediments. Even in Scrum, you still have roles with accountability, and teams still need some form of prioritization and product decision-making (otherwise you just get activity without direction).
So yeah: agile ideals don’t say “measure dev output.” But many implementations incentivize output/throughput, and that’s the misconception I was pointing at.
> we end up tracking throughput proxies (story points completed, velocity, number of tickets closed, burndown charts) and treating that as success
That sounds more like scrum or something in that wheelhouse, which isn't agile, but what I earlier called pre-agile. They are associated with agile as they are intended to be used as a temporary transitionary tool. One day up and telling your developers "Good news, developers. We fired all the managers. Go nuts!" obviously would be a recipe for disaster. An organization wanting to adopt agile needs to slowly work into it and prove that the people involved can handle it. Not everyone can.
> Also, agile isn’t really “removing managers from the picture” so much as shifting management from command-and-control to enabling constraints, coaching, and removing impediments.
That's the pre-agile step. You don't get rid of managers immediately, you put them to work stepping in when necessary and helping developers learn how to manage without a guiding hand. "Business people" remain involved in agile. Perhaps you were thinking of that instead? Under agile they aren't managers, though, they are partners who work together with the developers.
Wait, surely adding 10x more agents to my project will speed up development, improve the end product, and make me more productive by that same proportion, right?
I will task a few of them to write a perfectly detailed spec up front, break up the project into actionable chunks, and then manage the other workers into producing, reviewing, and deploying the code. Agents can communicate and cooperate now, and hallucination is a solved problem. What could go wrong?
Meanwhile, I can cook or watch a movie, and ocasionally steer them in the right direction. Now I can finally focus on the big picture, instead of getting bogged down by minutiae. My work is so valuable that no AI could ever replace me.
/s
I just build a programming language in couple of hours, complete with interpreter with claude code. I know nothing about designing and implementing programming languages: https://github.com/m-o/MoonShot. Its crazy.
Yes, my point is that it was possible to build it before AI and in much less effort than people imagine. People in college build an interpreter in the less than couple weeks anyway and that probably has more utility.
Consider two scenarios:
1) I try to build an interpreter. I go and read some books, understand the process, build it in 2 weeks. Results: I have a toy interpreter. I understand said toy interpreter. I learnt how to do it, Learnt ideas in the field, applied my knowledge practically.
2) I try to build an interpreter. I go and ask claude to do it. It spits out something which works: Result: I have black box interpreter. I dont understand said interpreter. I didnt build any skills in building it. Took me less than an hour.
Toy interpreter is useless in both scenarios but Scenario one pay for the 2 week effort, while Scenario 2 is a vanity project.
Yes, but you can combine the solutions. Aka, you know what you are working on.You can make it much faster. Or you builds something and learn from it.
I think there will be a lot of slop and a lot of usefull stuff. But also, what i did was just an experiment to see if it is possible, i don't think it is usable, nor do i have any plans to make it into new language. And it was done in less than 3 hours total time.
So for example, if you want to try new language features. Like let's say total immutability, or nullability as a type. Then you can build small language and try to write a code in it. Instead of writing it for weeks, you can do it in hours.
Took a quick look; this seems like a copy of the Writing an Interpreter in Go book by Thorsten Ball, but much worse.
Also using double equals to mutate variables, why?
Just because I wanted it to. I made some design choices that I found interesting.
You built something.
Now comes the hard or impossible part: is it any good? I would bet against it.
Oh, thank you for informing me.
I feel agentic development is a time sink.
Previously, I'd have an idea, sit on it for a while. In most cases, conclude it's not a good idea worth investing in. If I decided to invest, I'd think of a proper strategy to approach it.
With agentic development, I have an idea, waste a few hours chasing it, then switch to other work, often abandoning the thing entirely.
I still need to figure out how to deal with that, for now I just time box these sessions.
But I feel I'm trading thinking time for execution time, and understanding time for testing time. I'm not yet convinced I like those tradeoffs.
Edit: Just a clarification: I currently work in two modes, depending on the project. In some, I use agentic development. In most, I still do it "old school". That's what makes the side effects I'm noticing so surprising. Agentic development pulls me down rabbit holes and makes me lose the plot and focus. Traditional development doesn't; its side effects apparently keep me focused and in control.
> With agentic development, I have an idea, waste a few hours chasing it, then switch to other work, often abandoning the thing entirely.
How much of this is because you don't trust the result?
I've found this same pattern in myself, and I think the lack of faith that the output is worth asking others to believe in is why it's a throwaway for me. Just yesterday someone mentioned a project underway in a meeting that I had ostensibly solved six months ago, but I didn't even demo it because I didn't have any real confidence in it.
I do find that's changing for myself. I actually did demo something last week that I 'orchestrated into existence' with these tools. In part because the goal of the demo was to share a vision of a target state rather than the product itself. But also because I'm much more confident in the output. In part because the tools are better, but also because I've started to take a more active role in understanding how it works.
Even if the LLMs come to a standstill in their ability to generate code, I think the practice of software development with them will continue to mature to a point where many (including myself) will start to have more confidence in the products.
> Previously, I'd have an idea, sit on it for a while.
> With agentic development, I have an idea, waste a few hours chasing it,
What's the difference between these 2 periods? Weren't you wasting time when sitting on it and thinking about your idea?
Sitting on an idea doesn’t have to mean literally sitting and staring at the ceiling, thinking about it. It means you have an idea and let it stew for a while, your mind coming back to it on its own while you’re taking a shower, doing the dishes, going for a walk… The idea which never comes back is the one you abandon and would’ve been a waste of time to pursue. The idea which continues to be interesting and popping into your head is the worthwhile one.
When you jump straight into execution because it’s easy to do so, you lose the distinction.
Sitting on an idea doesn't necessarily mean being inactive. You can think at the same time as doing something else. "Shower thoughts" are often born of that process.
If you do not know what you want to build, how to ask the AI for what you want, or what the correct requirements are, then it becomes a waste of time and money.
More importantly, as the problem becomes more complex, it matters more whether you know where the AI falls short.
Case study: Security researchers were having a great time finding vulnerabilities and security holes in Openclaw.
The Openclaw creators had a very limited background in security; even though the AI built Openclaw almost entirely, the authors had to collaborate with security experts to secure the whole project.
> If you do not know what you want to build
That describes the majority of cases actually worth working on as a programmer in the traditional sense of the word. You build something to begin to discover the correct requirements and to picture the real problem domain in question.
> You build something to begin to discover the correct requirements and to picture the real problem domain in question.
That's one way. Another way is to keep the idea in your head (both actively and "in the background") for days or weeks, and then eventually you sit down and write a document, and you'll get 99% of the requirements down perfectly. Then implementation can start.
Personally I prefer this hammock-style development and to me it seems better at building software that makes sense and solves real problems. Meanwhile "build something to discover" usually is best when you're working with people who need to be able to see something to believe there is progress, but the results are often worse and less well-thought out.
This.
It's better to first have a solid, concrete idea of the entire system written down, one that has ironed out the limitations, requirements, and constraints, before jumping into the code implementation or getting the agent to write it for you.
The build-something-to-discover approach is not for building robust solutions in the long run. Starting with the code first without knowing what you are solving, or getting the AI to generate something half-working that breaks easily and then changing it yet again until it becomes even more complicated, just wastes more time and tokens.
Someone still has to read the code and understand why the project was built on a horrible foundation and needs to know how to untangle the AI vibe-coded mess.
with agentic development, I've finally considered doing open source work for no reason aside from a utility existing
before, I would narrow things down to only the most potentially economically viable, and laugh at ideas guys that were married to the one single idea in their life as if it was their only chance, seemingly not realizing they were competing with people that get multiple ideas a day
back to the aforementioned epiphany, it reminds me of the world of Star Trek where everything was developed for its curiosity and utility instead of money
That's the bane of all productivity increasing tools, any time you free up immediately gets consumed by more work.
People keep making the same naive assumption that the total amount of work is a constant when you mess with the cost of that work. The reality is that if you make something cheaper, people will want more of it (essentially Jevons paradox), and it adds up to way more than what was asked for before.
That's why I'm not worried about losing my job. The whole notion is based on a closed world assumption, which is always a bad assumption.
If you look at the history of computers and software engineering (compilers, CI/CD, frameworks and modules, functional and OO programming paradigms, type inference, etc.), there's something new every few years. Every time we make something easier and cheaper, demand goes up and the number of programmers increases.
And every time, you have people afraid of losing their jobs. Sometimes jobs do disappear, because a particular job ceases to exist when technique X gets replaced with technique Y. But mostly people just keep their jobs and learn the new thing on the job. Or they change jobs and skill up as they go. People generally only lose their jobs when companies fail or start shrinking. It's more tied to economic cycles than to technology. And some companies just fail to adapt. AI is going to be similar. Lots of companies are flirting with it but aren't taking it seriously yet. Adoption cycles are always longer than people seem to think.
AI prompting is just a form of higher-level programming, and being able to program is a non-optional skill for prompting effectively. I'd use the word metaprogramming, but of course that's one of those improvements we already had.
> That's why I'm not worried about losing my job. The whole notion is based on a closed world assumption, which is always a bad assumption.
You might be right, but some of us haven't quite warmed to the idea that our new job description will be something like "high-level planner and bot-wrangler," with nary a line of code in sight.
TBH, I have found AI addictive. You use it for the first time, and it's incredible. You get a nice kick of dopamine. That kick of dopamine decreases with every win you get. What once felt incredible is just another prompt today.
Those things don't excite you any more. Plus, you no longer exercise your brain at work. Plus, the constant feeling of FOMO.
It deflates you faster.
What felt incredible was getting the setup and prompting right and then producing reasonable working code at 50x human speed. And you're right, that doesn't excite after a while.
But I've found my way to what, for me, is a more durable and substantial source of satisfaction, if not excitement, and that is value. Excuse the cliche, but it's true.
My life has been filled with little utilities that I've been meaning to put together for years but never found the time for. My homelab is full of various little applications that I use, that are backed up and managed properly. My home automation does more than it ever did, and my cabin in the countryside is monitored and adaptive to conditions to a whole new degree of sophistication. I have scripts and workflows to deal with a fairly significant administrative load: filing and accounting are largely automated, and I have a decent approximation of an always up-to-date accountant and lawyer on hand. Paper letters and PDFs are processed like it's nothing.
Does all the code that was written at machine-speed to achieve these things thrill me? No, that's the new normal. Is the fact that I'm clawing back time, making my Earthly affairs orderly in a whole new way, and breathing software-life into my surroundings without any cloud or big-tech encroachment thrilling? Yes, sometimes - but more importantly it's satisfying and calming.
As far as using my brain - I devote as much of my cognitive energy to these things as I ever have, but now with far more to show for it. As the agents work for me, I try to learn and validate everything they do, and I'm the one stitching it all into a big cohesive picture. Like directing a film. And this is a new feeling.
Isn't it just like programming?
Many programmers became programmers because they found the idea of programming fascinating, probably back in middle school. Then they went on to become professionals. Then they burned out and, if they were lucky, transitioned to management.
Of course not everyone is like that, but you can't say it isn't common, right?
If what once felt incredible is just another prompt today, what is incredible today? Addictive personalities usually double down to get a bigger dopamine kick; that's why they stay addicted. So I don't think you truly found it addictive in the conventional sense of the term. Also, exercising the brain has been optional in software for quite a while, tbh.
Apart from the addicts, AI also helps the liars, marketeers and bloggers. You can outsource the lies to the AI.
Yeah if you want to keep your edge you have to find other ways to work your programming brain.
But as far as output - we all have different reasons for enjoying software development but for me it's more making something useful and less in the coding itself. AI makes the fun parts more fun and the less fun parts almost invisible (at small scale).
We'll all have to wrestle with this going forward.
If you use an LLM you've given up your edge.
If you use a compiler you've given up your edge.
Is a blog reposting other content worth its own post?
Previous discussion of the original article: https://news.ycombinator.com/item?id=46945755
This is not a technology problem. AI intensifies work because management turns every efficiency gain into higher output quotas. The solution is labor organization, not better software.
Labor organization yes! I don't quite know how to achieve it. I also worry that my desire to become a manager is in direct conflict with my desire to contribute to labor organization.
On a separate note, I have the intensification problem in my personal work as well. I sit down to study, but, first, let me just ask Claude to do some research in the background... Oh, and how is my Cursor doing on the dashboard? Ah, right, studying... Oh, Claude is done...
> I don't quite know how to achieve it.
Definitely not by posting on right-wing social media websites.
> I also worry that my desire to become a manager is in direct conflict with my desire to contribute to labor organization.
It is.
The driving force is not management or even developers; it's always the end users. They get to do more with less, thanks to the growing output. This is something to be celebrated, not a problem to be "solved" with artificial quotas.
I am all for labor organization. I just don’t see how it would be of benefit in this particular case.
If I'm not mistaken it would appear that you're saying that you are in fact *not* for labor organization in this case.
No, absolutely not. I would be for labor organization even if it had no impact on this matter, primarily because I don't see why it would be a negative.
The leftist thought process never ceases to amaze me:
"This time, its going to be the correct version of socialism."
This argument has been used against every new technology since forever.
And the initial gut reaction is to resist by organizing labor.
Companies that succumb to organized labor get locked into that speed of operating. New companies get created that adopt 'the new thing' and blow old companies away.
Repeat.
> And the initial gut reaction is to resist by organizing labor.
Yeah, like tech workers have rights similar to union workers. We literally have zero power compared to any previous group of workers. Organizing labour can't even happen in tech, because tech has a large percentage of immigrant labour, who have even fewer rights than citizens.
Also, there is no shared pain like union workers had; we have all been given different incentives and work under different corporations, so without shared pain it's impossible to organize. AI is the first shared pain we've had, and even this caused no resistance from tech workers. Resistance has come from the users, which is the first good sign. Consumers have shown more ethics than workers, and we have to applaud that. Any resistance to buying chatbot subscriptions is to be celebrated.
Labor organizing is (obviously) banned on HackerNews.
This isn't the place to kvetch about this; you will literally never see a unionization effort on this website because the accounts of the people posting about it will be [flagged] and shadowbanned.
I'm curious as to what previous group you're comparing yourself (and the rest of us) to.
I'm also curious as to what you do, where you do it, and who you work for that makes you feel like you have zero power.
Just a regular senior SDE at one of the Mag7. I can tell you everyone at these companies is replaceable within a day. Even within an hour. Even department heads have no protection from above; they can be fired on short notice.
So race to the bottom where you work more and make less per unit of work? Great deal, splendid idea.
The only winners here are CEOs/founders who make obscene money, liquidate/retire early while suckers are on the infinite treadmill justifying their existence.
Do you like working 8 hours a day instead of 12? 5 days a week instead of 7? You can thank organized labor.
Maybe society shouldn't be optimising for that.
As someone who prefers to do one task at a time, using AI tools makes me feel productive and unproductive at the same time: productive because I am able to finish my task faster, unproductive because I feel like I am wasting my time while waiting for the AI to respond.
Had a similar experience recently: set up Claude Code, wrote plans, CLAUDE.md, etc. The plan was to end up with a nice-looking Hugo/Bootstrap website.
Long story short, it was ugly and didn't really work as I wanted. So I'm learning Hugo myself now... The whole experience was kind of frustrating, tbh.
When I finally settled in and did some hours of manual work, I felt much better for it. I did benefit from my planning with Claude, though...
Now that you've become accustomed to Hugo, I wonder whether the way you plan and prompt now will produce better results or not.
I probably will. I use AI extensively, but mostly when I can't remember tedious syntax or suspect something can be done in a better way. And that works well for me... If I go too far towards vibe coding, the fun is sucked away for me.
> I've had conversations with people recently who are losing sleep because they're finding building yet another feature with "just one more prompt" irresistible.
This is actually a really good point that I have kind of noticed when using AI for side projects, i.e., on my own time. The allure of thinking "Oh, I wonder how it will perform with this feature request if I give it this amount of info."
Can't say I would put off sleep for it but I get the sentiment for sure.
What kills me personally is that I'm constantly 80% there, but the remaining 20% can be just insurmountable. It's really like gambling: Just one more round and it'll be useful, OK, not quite, just one more, for hours.
Do you mean in terms of adding one more feature or in terms of how a feature you're adding almost works but not quite right?
With the latter, I find it a lot more challenging to cut my losses when it's on a good run (and often even when I know I could just write the thing by hand), especially because there's as much if not more intrigue about whether the tool can accomplish it or not. These are the moments where my mind has drifted to thinking about it exactly the way you describe it here.
If you think it's getting 80% right on its own, you're a victim of Anthropic and OpenAI's propaganda.
No, I kind of see this too, but the 80% is very much the simpler stuff. AI genuinely saves me some time, but I always notice that if I try to "finish" a relatively complex task that's a bit unique in some regards, when a bit more complex work is necessary, something slightly domain-related maybe, I start prompting and prompting and banging my head against the terminal window trying to make it understand the issue, but somehow it still doesn't turn out well at all, and I end up throwing out most of the work done from that point on.
Sometimes it looks like some of that comes from the AI being very, very sure of its initial idea ("The issue is actually very simple, it's because...") and then running around in circles once it tries and fails. You can pull it out with a bit more prompting, but it's tough. The thing is, it sometimes actually is right from the very beginning, but if it isn't...
This is just my own perspective after working with these agents for some time, I've definitely heard of people having different experiences.
And let's get real: AI companies will not be satisfied with you paying $20 or even $200 a month if you can actually develop your product in a few days with their agents. They are either going to charge a lot more or string you along chasing that last 20%.
That's an interesting business model, actually: "Oh hey there, I see you've almost finished your project and are ready to launch. Watch these adverts and participate in this survey to get the last 10% of your app completed."
Tell that to my family, with whom I have been spending a lot more time recently, having benefited a lot from increased productivity.
A couple of historical notes that come to mind.
When washing machines were introduced, the number of hours spent on the chore of laundry did not necessarily decrease until 40 years after their introduction.
When project management software was introduced, it made the task of managing project tasks easier. One could create an order of magnitude more detailed plans in the same amount of time; poorly used, this decreased the odds of project success by eating up everyone's time. And the software itself has not moved the needle on project success rates: completing within the planned budget, time, and resources.
This intensification is really a symptom of the race to the bottom. It only feels 'exhausting' for people who don't want to lose their job or business to an agent; for everyone else, the AI is just an excuse to do less.
The way you avoid losing your job to an AI agent is not 'intensifying' its use, but learning to drive it better. Much of what people are calling 'intensification' here is really just babysitting and micromanaging their agent because it's perpetually running on vibes and fumes instead of being driven effectively with very long, clearly written (with AI assistance!) prompts and design documents. Writing clear design documentation for your agent is a light, sustainable and even enjoyable activity; babysitting its mistakes is not.
I'm sorry but if you're losing your job to this shit you were too dumb to make it in the first place.
Edit: Not to mention, this is what you get for not unionizing earlier. Get good or get cut.
I just built my k8s homelab with AI.
It’s insane how productive I am.
I used to have "breaks" looking for specific keywords or values to enter while crafting a YAML file.
Now the AI makes me skip all of that, essentially.
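For anyone who hasn't felt that pain, here's a hedged sketch in Python (PyYAML assumed installed; the app name and image are placeholders I made up) of the kind of boilerplate I mean, where selector.matchLabels has to mirror the pod template's labels and the apiVersion is exactly the sort of detail I used to tab out to check:

```python
# Emit a minimal k8s Deployment manifest; names and image are placeholders.
import yaml

app = "homelab-dashboard"  # made-up name for illustration

deployment = {
    "apiVersion": "apps/v1",          # the kind of detail I used to look up
    "kind": "Deployment",
    "metadata": {"name": app},
    "spec": {
        "replicas": 1,
        # selector.matchLabels must mirror the pod template's labels below
        "selector": {"matchLabels": {"app": app}},
        "template": {
            "metadata": {"labels": {"app": app}},
            "spec": {
                "containers": [{
                    "name": app,
                    "image": "nginx:1.27",   # placeholder image
                    "ports": [{"containerPort": 80}],
                }],
            },
        },
    },
}

print(yaml.safe_dump(deployment, sort_keys=False))
```

Piping the output to `kubectl apply -f -` would create the Deployment.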
I've been saying this since ChatGPT first came out: AI enables the lazy to dig intellectual holes they cannot climb out of, while enabling those with active critical analysis and good secondary considerations to literally become the fabled 10x-or-more developer / knowledge worker. That creates interesting scenarios as AI is evaluated and adopted: the short-sighted are loudly declaring success, which will be short-term success, and they are bullying their work peers into following their method. That method is intellectually lazy: let the AI code for you, verify it with testing, and believe you are done. Meanwhile, the quiet ones are figuring out how to eliminate the need for their coworkers at all. Managers are observing productivity growth, which falters with the loud ones but not with the quiet ones... AI is here to make the scientifically minded excel, while the shortcut takers can footgun themselves right out of there.
Surely managers will finally recognize the contributions of the quiet ones! I cannot believe what I read here.
We just saw the productivity growth in the vibe coded GitHub outages.
Don't bet on it. Those managers are the previously loud short sighted thinkers that finagled their way out of coding. Those loud ones are their buddies.
This is cope. Managers are not magicians who will finally understand who is good and who is just vibe-coding demos. In fact, it's now going to become even harder for managers to tell the difference. In fact, it's more likely that the managers are at the same risk, because without a clique of software engineers they would have nothing to manage.
I like working on my own projects, and where I found AI really shone was by having something there to bounce ideas off and get feedback.
That changes if you get it to write code for you. I tried vibe-coding an entire project once, and while I ended up with a pretty result that got some traction on Reddit, I didn't get any sense of accomplishment at all. It's kinda like doomscrolling in a way, it's hard to stop but it leaves you feeling empty.
I'm also coming to the conclusion that LLMs have basically the same value as when I tried them out with GPT-3: good for semantic search and debugging, bad for generation, since you constantly have to check and correct the output, and the parts you trust it to get "right" are often the ones that bite you afterwards, or, if right, introduce gaps in your own knowledge that slowly make you inefficient in your "generation controller" role.
I've been saying since 2024 that these things are not getting meaningfully better at all.
I think these companies have been manipulating social media sentiment for years in order to cover up their bunk product.
Comments on the original article: https://news.ycombinator.com/item?id=46945755
My two cents: this is part of the learning curve. With collective experience, this type of work will become better understood, shared, and explored. It is intense in the beginning because we are still discovering how to work with it. The other part is that this is a non-deterministic tool, which does increase cognitive load.
People are a gas; they expand to fill the space they're in. Tools that produce more work may make individual tasks easier, but they just mean a person ends up doing more work with them. This is a disposition most people have, and therefore it's unavoidable. AI is not exciting to me. I only need to use it so I don't fall behind my peers. Why would I ever be interested in that?
"Word expands to fill the time available"
It doesn't have to be that way; it's about managers being realistic and not pushing people too far.
Managers don’t even need to push anything. FOMO does all the work.
Overheard a couple of conversations in the office about how one IC spent all weekend setting up OpenClaw, and another was vibe-coding some bullshit application.
I see hundreds of crazy people in our company Slack just posting/reposting twitter hype threads and coming up with ridiculous ideas how to “optimize” workflow with AI.
Once this becomes the baseline, you’ll be seen as the slow one, because you’re not doing 5x work for the same pay.
Don't worry about these people.
These are internet cult victims.
You do it to yourself, you do, and that's why it really hurts.
> Importantly, the company did not mandate AI use (though it did offer enterprise subscriptions to commercially available AI tools). On their own initiative workers did more because AI made “doing more” feel possible, accessible, and in many cases intrinsically rewarding.
Computer languages were the lathe for shaping the machines to make them do whatever we want, AI is a CNC. Another abstraction layer for making machines do whatever we want them to do.
AI is one of those early-2000s SUVs that gets 8 miles to the gallon and has a TV screen in the back of every seat.
It's about presenting externally as a "bad ass" while:
A) Constantly drowning out every moment of your life with low quality background noise.
B) Aggressively polluting the environment and depleting our natural resources for no reason beyond pure arrogance.
I feel that the popularization of bloated UI "frameworks", like React and Electron, coupled with the inefficiency tolerated in the "JS ecosystem" have been precursors to this dynamic.
It seems perfectly fitting to me that Anthropic is using a wildly overcomplicated React renderer in their TUI.
React devs are the perfect use case for "AI" dev tools. It is perfectly tolerated for them to write highly inefficient code, and these frameworks are both:
A) Arcane and inconsistently documented
B) Heavily overrepresented in open-source
Meaning there are meaningful gains to be had from querying these "AI" tools for framework development.
In my opinion, the shared problem is the acceptance of egregious inefficiency.
I don't disagree with the concept of AI being another abstraction layer (maybe) but I feel that's an insult to a CNC machine which is a very precise and accurate tool.
LLMs are quite accurate for programming; these days they almost always produce code that compiles without errors, and errors are almost always fixable by feeding the error back into the LLM. I would say this is extremely precise text generation, much better than most humans manage.
Just like with CNC, though, you need to feed it the correct instructions. It's still on you for the machined output to do the expected thing. CNCs are also not perfect, and their operators need to know the intricacies of machining.
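To make that loop concrete, here's a rough sketch of the compile-and-feed-back cycle. `ask_llm` is a hypothetical stand-in for whatever model call you use, and `py_compile` stands in for a real compiler; the shape of the loop is the point, not the specific calls.

```python
# A rough sketch of the "feed the error back into the LLM" loop.
import subprocess

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for your actual model/API call.
    raise NotImplementedError

def fix_until_it_compiles(path: str, max_rounds: int = 5) -> bool:
    for _ in range(max_rounds):
        # "Compile" the file (syntax check only) and capture complaints.
        result = subprocess.run(
            ["python", "-m", "py_compile", path],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            return True  # compiles cleanly; correctness is another matter
        with open(path) as f:
            source = f.read()
        # Hand the model the code plus the error, write back its attempt.
        patched = ask_llm(
            "This code fails to compile:\n" + source
            + "\nCompiler output:\n" + result.stderr
            + "\nReturn the corrected file, nothing else."
        )
        with open(path, "w") as f:
            f.write(patched)
    return False  # gave up after max_rounds attempts
```

Whether "it compiles" is the right bar is, of course, exactly what the CNC comparison is about.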
> LLMs are quite accurate for programming; these days they almost always produce code that compiles without errors, and errors are almost always fixable by feeding the error back into the LLM.
What domains do you work in? This description does not match my experience whatsoever.
I'm primarily into mobile apps these days, but using LLMs I'm able to write software in languages that I don't know, with tech that I don't understand well (like Bluetooth).
What did you try to do where the LLM failed you?
100% disagree. CNC is a precision machine while AI is the literal opposite of precision.
Tell them again.
> I've had conversations with people recently who are losing sleep because they're finding building yet another feature with "just one more prompt" irresistible.
alpha sigma grindset
I have found that attending to one task keeps me going for longer.
I prompt and sit there. Scrolling makes it worse. It's a good mental practice to just stay calm and watch the AI work.
Isn't the point of AI that you can scroll endlessly while something else works for you?
If you're going to stay single-minded, why wouldn't you just write the code yourself? You're going to have to double check and rewrite the AI's shitty work anyway
It's like the invention of the power loom, but for knowledge workers. Might be interesting to look at the history of industrialisation and the reactions to it.
Discussed https://news.ycombinator.com/item?id=46945755
> help avoid burnout
Yeah, good luck with that.
Corporations have tried to reduce employee burnout exactly never.
That’s something that starts at the top. The execs tend to be “type A++” personalities, who run close to burnout, and don’t really have much empathy for employees in the same condition.
But they also don’t believe that employees should have the same level of reward, for their stress.
For myself, I know that I am not "getting maximum results" from using LLMs, but I feel as if they have been a real force multiplier in my work, and I don't feel burnt out at all.
Maybe it would be much better to just link to the original article, rather than somewhere else, for the full context. [0]
Also, this post should link to the original source.
As per the submission guidelines [1]:
“Please submit the original source. If a post reports on something found on another site, submit the latter.”
[0] https://hbr.org/2026/02/ai-doesnt-reduce-work-it-intensifies...
[1] https://news.ycombinator.com/newsguidelines.html
It is in the title if you open the page.
> I've had conversations with people recently who are losing sleep because they're finding building yet another feature with "just one more prompt" irresistible.
Literal work junkies.
And what’s the point? If you’re working on your own project, then “just one more feature, bro” isn’t going to make the next Minecraft/Photopea/Stardew Valley/name your one-man wonder. If you’re working for someone else, then you’re a double fool, because you’re doing the work of two people for the pay of one.
There's a word for that mindset: https://en.wikipedia.org/wiki/Karoshi
intensification = productivity for me.
40 years ago, when I was a history major in college, one of my brilliant professors gave us a book to read called "The Myth of Domesticity".
In the book, the researcher explains that when washing machines were invented, women faced a whole new expectation of clean clothes all the time, because washing clothes was much less of a labor. And statistics showed that women were actually washing clothes more often, and doing more work, after the washing machine was invented than before.
This happens with any technology. AI is no different.
this matches my experience
it's good that people so quickly see it as impulsive and addictive, as opposed to the slow creep of doomscrolling and algorithmic feeds
Hopefully it will be like an Ebola virus, so that everyone will see how deadly it is instead of like smoking where you die of cancer 40 years down the line.
Frankly, it seems more like the Crack epidemic to me.
I think I'll have reached a happier medium when I get additional inputs set up, such as just talking to a CLI running against my full code base on a VPS, but through my phone and AirPods, and only when it needs help
at least then I won't be vegetating at a laptop, or shirking other responsibilities to get back to a laptop