Every large corporation is stuck in communication problems and approval processes. They have grown so large as to have minimal alignment between what the company attempts to produce, what makes the company profitable, and what people actually do. Enshittification, The Gervais Principle, Bullshit Jobs. Pick your favorite flawed way to look at what is going on; it's all blind people touching different parts of the same elephant.
The way AI makes your processes go faster will have little to do with cutting software development time in itself, and more to do with letting an organization run with fewer people, which in itself lowers your misalignment issues. A giant company of 200K people will still be about as messy as one today, but you might be able to do a lot more with the same number of people, just like a lone programmer today, without AI, already does quite a bit more than anyone could do by themselves in the '80s.
Maybe some of the advantages are that you don't need quite as many developers, or maybe you can use a smaller marketing team, or you don't need to spend as much time answering questions, because an LLM is doing it for you and tracking what's been asked of it, turning the questions into product research. Either way, the gains come from being able to run leaner, and therefore minimizing organizational misalignment.
I think when LLMs first came out people thought they could just say something like, "Make a Facebook clone". But now we're realizing we need to be more exact with our requirements and define things better. That has always been the bottleneck in software.
When I was working we used to get requirements that literally said things like, "Get data and give it to the user". No definition of what data is, where it's stored, or in what format to return it. We would then spend a significant amount of time with the product person trying to figure out what they really wanted.
In order to get good results with LLMs we need to do something similar. Vague requirements get vague results.
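To make that concrete, here is a toy sketch of how a vague requirement like "get data and give it to the user" can be pinned down into something a developer or an LLM can actually implement. Every name in it (`OrderSummary`, `recent_orders`, the stub data) is hypothetical, invented purely for illustration:

```python
from dataclasses import dataclass
from typing import List

# Vague requirement: "Get data and give it to the user."
# A more exact version answers: which data, stored where, returned how.

@dataclass
class OrderSummary:
    order_id: str
    total_cents: int   # money as integer cents, not floats
    status: str        # expected: "open", "shipped", or "cancelled"

def recent_orders(user_id: str, limit: int = 20) -> List[OrderSummary]:
    """Return the user's most recent orders, newest first."""
    # Stub datastore standing in for "where it's stored".
    store = {
        "u1": [OrderSummary("o2", 1250, "open"),
               OrderSummary("o1", 999, "shipped")],
    }
    return store.get(user_id, [])[:limit]
```

Once the requirement is this precise, "vague results" mostly stop being possible: the types answer most of the questions the product person never did.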
It's worse. Vague requirements still only power vague interpretations of the problem. Even if you provide good requirements, you still only have vague interpretations at your fingertips. The promise is that such things won't be a problem in the future, which is obviously not materialising.
"Make a facebook clone" is the vague human promise to the end user. The reality is that it leads to so many assumptions, insurmountable because of the vague interpretation, that you end up changing your requirements to claim success.
Thus everything turns into a mediocre compromise. There is no exceptional outcome, which is what makes a marketable product. There are just corpses everywhere.
You need something better to both define requirements and implement them than this technology.
> I think when LLMs first came out people thought they could just say something like, "Make a Facebook clone". But now we're realizing we need to be more exact with our requirements and define things better. That has always been the bottleneck in software.
This was substantially predicted by Fred Brooks in 1986 in the classic No Silver Bullets [1] essay under the sections "Expert Systems" and "Automatic Programming".
In it, he lays out the core features of vibe coding and exactly the experience we are having now with it: initial success in a few carefully chosen domains, then a reasonable but not groundbreaking increase in productivity as it expands outside of those domains.
The LLMs turn out fully formed clones of stuff for which there exists copious amounts of code openly searchable on the web doing the exact same thing.
Where such example code doesn't already exist, LLMs require developer-like specification, task/subtask breakdown, and detail.
As a professional prior to LLMs, how many of the problems you worked on had existing free solutions that you neglected to use, deciding instead to spend days doing it yourself?
We now have product owners trying to farm out their work to an LLM. The process didn’t work before because the person writing the requirements either put out vague requirements or bad requirements because they didn’t understand the business intent (or were careless).
LLMs just take the same vague or poor requirements and make them look believable until you dig into them.
> When I was working we used to get requirements that literally said things like, "Get data and give it to the user". No definition of what data is, where it's stored, or in what format to return it. We would then spend a significant amount of time with the product person trying to figure out what they really wanted.
This is a big divide in HN LLM discussions. I am in the same no-specs work background camp, so the idea that the humans who feed that into dev teams are suddenly going to get anything out of an LLM by typing in the same thing is laughable. In most orgs in my career there has been no product person at all; we just talked directly to end users.
For that kind of org, it will accelerate some parts of the SWEs job at different multipliers, but all the non-dev work to get there with discussions, discovery, iteration, rework, etc remains.
If the input to your work is a 20-page specification document accompanying multi-paragraph Jira tickets with embedded acceptance criteria / test cases / etc., then yes, there is a danger the person creating that input just feeds it into an LLM.
I’ve never understood engineers who complain about vague specs… if the spec was complete, it would be code and the job would be done already! Getting a 20-page spec delivered from on high and mechanically translating it to code without any chance to send feedback up the chain sounds like… a compiler.
Yes, I don't think a job where I am programmed by a product manager would be terribly interesting. I would move on to be the product manager if I found myself in such a role.
On the one hand, this is a clean post that explains exactly what a lot of us have been thinking and seeing on the job at large organizations doing tech work. Dear Author, I agree with you 110% and want everybody else to come to understand what you have written.
On the other hand, it feels like we've been over this dozens of times recently, on HN specifically and IRL at work. Another blog post isn't going to convince leaders that this is how the world works when they are socially and financially incentivized to pretend like AI really will speed things up. So now I just wait for their AI projects to fail or go as slowly as previous projects and hope they learn something.
Sadly I think you’re right. I even shy away from sharing these types of posts at work because it feels like anything that doesn’t mesh with the status quo isn’t received well.
I disagree, I think the visuals, Gantt charts, are precisely the kind of "PM speak" that can be understood. Sure it won't solve anything as long as C-suite and investors do innovation signaling but that itself can only last so long.
Yep. I have the luxury of having my mortgage paid off and being able to be a bit picky about my work for a little bit.
So I am spending my days gardening and obsessively working on personal coding projects with these agentic tools. Y'know, building a high performance OLTP database from scratch, and a whole new logic relational persistent programming environment, a synthesizer based on some funky math, an FPGA soft processor. Y'know, normal things normal people do.
So I know what these tools are capable of in a single person's hands. They're amazing.
But I hear the stories from my friends employed at companies setting minimum token quotas or having leaderboards of people who are "star AI coders" telling people "not to do code reviews" and "stop doing any coding by hand" and I shake my head.
I dipped my toes into some contract work in the winter and it was fine but it mostly degraded into dueling LLMs on code reviews while the founder vibe coded an entire new project every weekend.
These tools suck for team work or any real team software engineering work.
I'll just let this shake out and sit out until the industry figures it out. The only places that are going to be sane to work at are places with older wiser people on staff who know how to say "slow down!" and get away with it.
In the meantime, quantities of cut rhubarb $5 a bunch in Hamilton, Ontario area for sale. Also asparagus. Lots and lots of asparagus.
If that sounds familiar, it’s because it’s what dang did over the course of several years.
It’s taken a few weeks. I started right around May, and now it’s able to render large HN threads (900+ comments) within a factor of five of production HN performance. (Thank you to dang for giving actual performance numbers to compare against.)
A couple days ago, mostly out of curiosity, I ran Claude with “/goal make this as fast as HN.” Somewhat surprisingly, it got the job done within a couple hours. I kept the experiment on separate branches, because the code is a mess, just like all AI-generated code is at first. But the remarkable part is that it worked, and I can technically claim to have recreated HN within a few weeks.
The real work is in the specifications. My port of HN is missing around a hundred features: everything from favorited comments, to hiding threads, to being able to unvote and re-vote.
But catching up to HN is clearly a matter of effort (time spent actually working on the problem with Claude), not complexity. Each feature in isolation is relatively easy. Getting them all done within a short time span without ruining the codebase is the hard part. And I think that’s where a lot of people get tripped up: you can do a lot, but you have to manage it tightly, or else the codebase explodes into an unreadable mess.
It’s true that if you don’t do that crucial step of “manage the results”, you’ll end up making more work for yourself in the long run, by a large factor. But it’s also true that AI sped me up so much that I was able to do in weeks what would’ve otherwise taken years (and did take dang years). I’m not claiming parity, just that I got close enough to be an interesting comparison point.
AI can clearly accelerate us. But we need to be disciplined in how we use it, just like any other new tool. That doesn’t change the fact that it does work, and I think people might be underestimating how good the results can be.
I've had a handful of software projects in my career land essentially on the day I predicted, sometimes several months out, and the commonality across all of those projects was that the specification was crystal clear. Two of them were actual ports of an existing piece of software over to a new system. And so any time we had a question about the implementation, we could look at the existing version and immediately have our questions answered about what "correct" was.
I think projects where correct is very clearly defined can benefit from LLM acceleration, as you're describing here.
But so much of modern software development is figuring out what the right thing to build is. And in those situations, I don't think LLMs provide nearly as much benefit.
I think there's an interesting dichotomy. I find that for things I'm already capable at, LLMs are relatively inconsequential. But for things I'm no good at, they're a huge game changer. For a large company, which can hire for most of the roles any given project needs, this means the overall effect is going to be relatively inconsequential. At best, they may be able to cut labor costs by having one guy do a mediocre job at 5 people's jobs in exchange for a worse product. Short-term gains for long-term costs, wcgw?
But for a small studio, or independent developer, LLMs are a big game changer. Being able to do a mediocre job at 5 people's jobs is a huge leap over trying to get by without those jobs - relying on third-party assets or other sorts of content, or even worse - doing a really awful job of trying to improvise those jobs. See the UI of basically any program ever that was clearly laid out by a programmer and not a designer. Or there's the whole trying to rip off stuff from Dribbble, but lacking the skills to do so. Whereas with AI, you can suddenly competently rip off everything and everybody - it's basically their entire MO.
Handholding is an issue affected by three factors: the model, the tooling, and the human expertise. Of the three, the last is the weakest link, because it takes the longest to nurture.
Once tooling (e.g. agent harnesses, external tools) becomes more mature and consistent, the other two will become less of a bottleneck.
If I were to take a gamble here, I would argue that development will at some point reach that more ideal scenario, while project planning and scoping will take longer. Documentation will also take almost as long as development, slightly longer at the edges.
The new AI-assisted era will most likely push companies to adopt Waterfall management rather than Agile.
> Yes, AI can generate code quickly (whether that’s a good thing is open for debate), but that doesn’t mean it’s generating the correct code.
No, the code is actually almost always correct. The way it’s added is probably not what you’re going to like, if you know your code base well enough. You know there’s some ceremony around where things are added, how they are named, how many comments you’d like to add and where exactly. Stuff like that seems to irritate people like me when it’s not done right by the agent, and it seems to fail even if it’s in the AGENTS.md.
> If you were to give human developers the same amount of feature/scope documentation you would also see your productivity skyrocket.
Almost 2 decades in IT and I absolutely do not believe this can ever happen. And if it does, it’s so rare, it’s not even worth talking about it.
This is all substantially correct and gives us hints as to where to focus for AI to make the processes go faster.
Eg: I had a product manager say to me that he envisions a future where any meeting with stakeholders that does not result in an interactive prototype by the end of the meeting would be considered a failure. This feels directionally correct to me.
The other thing I expect to see is Vibecoding being the "Excel 2.0" where it allows significant self-serve of building interactive apps that's engaged in a continual war with IT to turn them into something with better security guarantees, proper access control & logging, scalability, change management etc.
But the larger historical point here is that every revolutionary transition produces, in the early stages, "Steam Horses". The invention of the steam engine had people imagining that the future of transportation would involve horse shaped objects, powered by steam, pulling along conventional carts. It wasn't until later developments that we understood the function of transportation as divorced from the form.
I started talking about Steam Horses originally in the context of MOOCs, which was a classic Steam Horse idea.
Instead of mandatory AI workshops simply cancel all meetings with more than 3 people and no written agenda. Instead block the meeting time for productive work.
That’ll be $2,000 of advisory fees for the insane productivity gains I just unlocked for you. You’re welcome
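Tongue in cheek, the no-agenda rule above is even mechanical enough to code. A stdlib-only toy sketch (all meeting data and names invented for illustration):

```python
from dataclasses import dataclass

@dataclass
class Meeting:
    title: str
    attendees: int
    agenda: str = ""  # empty = no written agenda

def should_cancel(m: Meeting) -> bool:
    # The rule above: more than 3 people AND no written agenda.
    return m.attendees > 3 and not m.agenda.strip()

calendar = [
    Meeting("Daily sync", 8),
    Meeting("Design review", 5, agenda="Walk through schema v2 proposal"),
    Meeting("1:1 with manager", 2),
]
kept = [m.title for m in calendar if not should_cancel(m)]
```

The freed slots are, of course, where the blocked "productive work" time goes.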
Yes, there are MANY in tech/non-tech management that will quietly admit that a lot of this top-down stuff is to create the appearance of motion to appease a higher more tech/AI ignorant authority.
Some organizations added a ton of process around software development because it is expensive and risky. They require a ton of approvals and sign-offs, then some managing overhead on top to check if their investment is on the right track. This approval process is bound to change now that development is far cheaper and faster.
Another aspect that is not captured here is that the lawyers and subject matter experts will also be using AI to speed up their parts.
> Every software developer knows that you can’t make projects go faster just by typing faster. If that were the case we would all be taking typing lessons.
So well said.
AI is unveiling how the bureaucracy is the slow part.
> AI is unveiling how the bureaucracy is the slow part.
Computing has been doing that for decades. If your process is fucked, computers make it fucked faster.
It’s just that now, we have entire generations alive that have never seen a world without digital computers. ~LLMs~ AI is a fun new lever in some uses, so clearly it is finally the hammer that will drive the screws and bolts for us, with less effort on our part!
They just have to learn from experience. It’s what you do when you can’t be bothered to learn the lessons of the past.
Bureaucracy cannot learn from the problems of past bureaucracy, because doing so is against its self-interest.
Work in large orgs long enough and you will recognize these creatures. Ladder climbing is a skill orthogonal to adding any value to the customer/company.
You're right it's just like any other mechanization/automation revolution. Except it's not.
It's happening about 10x faster than any other I've seen or read about.
Consider how long it took just to get barcode scanners rolled out in grocery stores. Or direct payment terminals. Or how many decades it's been getting robotics into the manufacturing of cars at scale. I worked through the .com boom and I can tell you that "webification" took 10 years or more for most businesses (and many of them have now just given up and have a Facebook page instead, etc.)
This is a little insane what's happening now. It really does change everything. People who don't work in software I don't think have any idea what's coming.
It's highly salient to management, and being forced top-down by them at 10x speed, for sure, because they see a future cost save to reduce headcount.
For certain technical roles it's a force multiplier and already very saturated, for sure.
On the other hand there's a lot of solution-looking-for-problem going on in large orgs where layers of management have been banging the table for 2-3 years on AI KPIs without any value being delivered.
In the weekly AI wins mail at a friend's company, multiple non-technical people were bragging that AI has saved them 15 minutes a day by summarizing their morning inbox. This was the big game changer for them.
The promise of AI is in doing things at all that couldn't be automated before, at least economically. And when you find a use case where a bit of automated inference is sufficient and can replace human inference, it can wildly speed up a process, from when Susan has time for it, to right now.
Delivering more complete details for a task at hand is a noble goal, but there is a problem.
Programming is a logical circuit breaker. There is a wide range of incompleteness that halts development or puts the solutions in an unpublishable state.
A product person has no compiler, no RAM, no database, no state machine. There is nothing that can fail. There are probably strategies to weed out some issues, but none will be perfect.
We need to combine reality with computers. Computers set the constraints and we can only check if we are in bounds of the constraints by solving the problems with computers.
Oddly enough, AI so far has nothing to offer to improve the "product people" problems.
Large corporations with orthodox methodologies will take time to extract the best benefits from AI. Small teams, which still remember the original Agile Manifesto, will soar and overtake their competitors.
Speaking about the middle: I was once shown advice from AI that a particular ticket would stall at “frozen middle management” and should be shelved until “coordination” improved. That sounds accurate, but can you imagine what a token-obsessed PM might say?
Yes, it is true for large enterprises, but not for startups and individual creators. AI is accelerating speed for anyone who is not stuck in corporate bureaucratic processes.
> ...but that doesn’t mean it’s generating the correct code.
Something I'm observing is that now a lot of the pressure moves to the product team to actually figure out the correct thing to build. Some product teams are simply not used to this and are YOLO-ing prototypes now, iterating, finding out they built and shipped the wrong thing, and then unwinding.
Before, when there was the notion that "building is expensive", product teams would think things through, do user interviews up-front, actually do discovery around the customer + business context + underlying human process being facilitated with software.
This has shortened the cycle to first working prototype, but I'd guess that in the longer scale, it extends the time to final product because more time is wasted shifting the deliverable and experience on the user during this process of discovery versus nailing most of the product experience in big, stable chunks through design.
At the end of the day, there is a hidden cost to fast iterative shifts on the fundamental design of the software intended for humans to use and for which humans are responsible for operation. First is the cost on the end users who have to stop, provide feedback, and then retrain on each cycle. Second is that such compounding complexities in the underlying implementation as product learns requirements and vibe-codes the solution creates a system that becomes very challenging for humans to operationalize and maintain.
Ultimately, I think the bookends of the software development process are being neglected (as the author points out) to the detriment of both the end users and the teams that end up supporting the software. I do wonder if we're entering an "Ikea era" of software where we should just treat everything as disposable artifacts instead.
If the underlying workflow is noisy, ambiguous, or overloaded with coordination overhead, faster generation just produces more low-context output to review and reconcile.
Our current most popular methods of using AI with software development is either waterfall or autocomplete. We aren't at a great pair programming experience yet. I presume that would improve speed and accuracy, but it's still unclear.
It’s amazing to see some people talk with 100% confidence about the macro view of AI assisted development when we have had strong coding agents available for less than a year.
Maybe it won't make existing processes faster, but it can help you enormously.
I literally found a problem with AI by analyzing packets in Wireshark; it hinted and steered me in the right direction until I found the faulty setting in the end. Could a senior network guy have found it? Yes, but probably not much faster. Could I, an L2 SWE unfamiliar with much of networking and the company's stack (I'd been at this company about a month), have found it without AI? Absolutely not.
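For illustration only, here's a stdlib-only toy of the kind of pattern such an analysis surfaces. Real work would use Wireshark's own display filters (e.g. `tcp.analysis.retransmission`); the capture records below are synthetic `(stream_id, seq_number)` pairs, not real Wireshark output:

```python
from collections import Counter

# Repeated sequence numbers within a TCP stream suggest retransmissions,
# a classic symptom of a bad link/MTU/offload setting.

def retransmission_counts(packets):
    """packets: iterable of (stream_id, seq_number) tuples from a capture."""
    seen = Counter()
    retrans = Counter()
    for stream, seq in packets:
        if seen[(stream, seq)]:
            retrans[stream] += 1  # we've sent this seq on this stream before
        seen[(stream, seq)] += 1
    return retrans

capture = [(1, 100), (1, 200), (1, 100), (1, 100), (2, 50)]
suspects = sorted(s for s, n in retransmission_counts(capture).items() if n >= 2)
```

The value of the AI in a story like this isn't the counting; it's knowing that "count repeated sequence numbers" is the question worth asking.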
So we have spent 40 years trying to get management and investors to understand that 9 people can't make a baby in one month.
There's no point in falling under the illusion that they'll finally get it now. This will all fall on deaf ears. They're convinced they're automating us out of existence when in fact they'll need the services of people who can surf complex systems more than ever.
We will be able to do more than ever and potentially faster. The issue remains that most of the things these people ask us to do and want us to do and pay us to do remains basically stupid and as TFA points out, the last mile of getting shit properly shipped isn't going to speed up. It's going to slow down.
If you want to see what happens when you put people in charge who sincerely believe in the "AI automates SWEs out of existence" mantra, take a look at the code quality of Claude Code and the recent "bun rewrite in Rust" fiasco.
It absolutely will make some things faster. Anyone that has ever churned out some boilerplate code with it knows that.
...but yeah most organizational processes & people aren't set up for leveraging it and roll out will be slow (same on learning where it does / doesn't work).
I’m not convinced. I’ve been using AI pretty heavily for about 18 months and agents for a little over 6 months.
I’m currently working on a data migration for an enormous dataset. I’m writing the tooling in Go, which is a language I used to be very familiar with, but that I hadn’t touched in about 12 years when I started this. It definitely helped me get back into Go faster.
But after the initial speed up, I found myself in the last 10% takes the other 90% of the time phase. And it definitely took longer for me to wrap my head around the code than it would have if I’d skipped the AI. I might have some overall speed up, but if so it’s on the order of 10-20%. Nothing revolutionary.
I have been able to vibe code a few little one off tools that have made my life a little easier. And I have vibe coded a few iPad games for my kids for car trips, but for work I still have to understand the code and reading code is still harder than writing it.
This is also not for lack of trying: I spent $1,000 last week during a company-wide “AI week”, mostly on trying to get AI to replicate my migration tooling, complete with verification agents, testing agents, quality gates, elaborate test harnesses, etc.
I’d let Claude (opus 4.7 max effort) crank away overnight, only to immediately find that it had added some horrible new bug or managed to convince the verification agent that it wasn’t really cheating to pass my quality tests.
What I learned from last week is that we are so far away from not needing to understand the code that everyone who says otherwise is probably full of shit. Other people who I trust who have been running the same experiments have told me the same thing.
Until and unless we get to that point, it’s always going to be a 10-50% speed up (if that).
>if so it’s on the order of 10-20%. Nothing revolutionary.
For many businesses that is revolutionary.
Not sure that's enough magic to make the math work for the trillions being invested, but on a ground level within companies even small wins stack up. You may have burned through $1000 without getting much done, but from a company perspective they've probably got an employee with better instincts as to what does or doesn't work
Exactly. The larger the organization, the smaller the percentage of time devs actually spend doing dev work, and the less direct benefit there is from AI-assisted coding tools.
I have a colleague who vibes the shit out of his part, and it results in large commits that take a lot of time to understand, and that makes cooperation practically impossible. LLMs are not team players.
I get much different results than others when using these tools. Turns out there is some skill in wielding them, and knowing the domain in which you do.
[1] https://worrydream.com/refs/Brooks_1986_-_No_Silver_Bullet.p...
It's interesting how predictable some of this is.
Plausible requirement generators as inputs to plausible code generators… what could go wrong!
It’s a giant tragedy of the commons. I’ve fired remote people who pretended to work, knowing that I wouldn’t hire remote workers ever again after AI.
We arrived to that state today with Codex and Claude Code. I really don't know what people are doing wrong?
> When I was working we used to get requirements that literally said things like, "Get data and give it to the user". No definition of what data is, where its stored, or in what format to return it. We would then spend a significant amount of time with the product person trying to figure out what they really wanted.
This is a big HN LLM discussion divide. I am in the same no-specs work background camp, and so the idea that the humans who input that into dev teams are suddenly going to get anything out of an LLM if they directly input the same is laughable. In my career most orgs there has been no product person and we just talked directly to end users.
For that kind of org, it will accelerate some parts of the SWE's job at different multipliers, but all the non-dev work to get there, with discussions, discovery, iteration, rework, etc., remains.
If the input to your work is a 20-page specification document accompanying multi-paragraph Jira tickets with embedded acceptance criteria / test cases / etc., then yes, there is a danger the person creating that input just feeds it into an LLM.
I’ve never understood engineers who complain about vague specs... if the spec were complete, it would be code and the job would be done already! Getting a 20-page spec delivered from on high and mechanically translating it to code without any chance to send feedback up the chain sounds like... a compiler.
Yes, I don't think a job where I am programmed by a product manager would be terribly interesting. I would move on to be the product manager if I found myself in such a role.
Probably why I haven't ended up in any.
On the one hand, this is a clean post that explains exactly what a lot of us have been thinking and seeing on the job at large organizations doing tech work. Dear Author, I agree with you 110% and want everybody else to come to understand what you have written.
On the other hand, it feels like we've been over this tens of times recently, on HN specifically and IRL at work. Another blog post isn't going to convince leaders that this is how the world works when they are socially and financially incentivized to pretend like AI really will speed things up. So now I just wait for their AI projects to fail or go as slowly as previous projects and hope they learn something.
Sadly I think you’re right. I even shy away from sharing these types of posts at work because it feels like anything that doesn’t mesh with the status quo isn’t received well.
I disagree; I think the visuals (Gantt charts) are precisely the kind of "PM speak" that can be understood. Sure, it won't solve anything as long as the C-suite and investors do innovation signaling, but that itself can only last so long.
Yep. I have the luxury of having my mortgage paid off and being able to be a bit picky about my work for a little bit.
So I am spending my days gardening and obsessively working on personal coding projects with these agentic tools. Y'know, building a high performance OLTP database from scratch, and a whole new logic relational persistent programming environment, a synthesizer based on some funky math, an FPGA soft processor. Y'know, normal things normal people do.
So I know what these tools are capable of in a single person's hands. They're amazing.
But I hear the stories from my friends employed at companies setting minimum token quotas or having leaderboards of people who are "star AI coders" telling people "not to do code reviews" and "stop doing any coding by hand" and I shake my head.
I dipped my toes into some contract work in the winter and it was fine but it mostly degraded into dueling LLMs on code reviews while the founder vibe coded an entire new project every weekend.
These tools suck for team work or any real team software engineering work.
I'll just let this shake out and sit out until the industry figures it out. The only places that are going to be sane to work at are places with older wiser people on staff who know how to say "slow down!" and get away with it.
In the meantime: cut rhubarb, $5 a bunch, for sale in the Hamilton, Ontario area. Also asparagus. Lots and lots of asparagus.
I actually have data on this. I’ve been building sharc, a Common Lisp port of Hacker News. https://www.github.com/shawwn/sharc
If that sounds familiar, it’s because it’s what dang did over the course of several years.
It’s taken a few weeks. I started right around May, and now it’s able to render large HN threads (900+ comments) within a factor of five of production HN performance. (Thank you to dang for giving actual performance numbers to compare against.)
A couple days ago, mostly out of curiosity, I ran Claude with “/goal make this as fast as HN.” Somewhat surprisingly, it got the job done within a couple hours. I kept the experiment on separate branches, because the code is a mess, just like all AI-generated code starts out. But the remarkable part is that it worked, and I can technically claim to have recreated HN within a few weeks.
The real work is in the specifications. My port of HN is missing around a hundred features. Things from favorited comments, to hiding threads, to being able to unvote and re-vote.
But catching up to HN is clearly a matter of effort (time spent actually working on the problem with Claude), not complexity. Each feature in isolation is relatively easy. Getting them all done within a short time span without ruining the codebase is the hard part. And I think that’s where a lot of people get tripped up: you can do a lot, but you have to manage it tightly, or else the codebase explodes into an unreadable mess.
It’s true that if you don’t do that crucial step of “manage the results”, you’ll end up making more work for yourself in the long run, by a large factor. But it’s also true that AI sped me up so much that I was able to do in weeks what would’ve otherwise taken years (and did take dang years). I’m not claiming parity, just that I got close enough to be an interesting comparison point.
AI can clearly accelerate us. But we need to be disciplined in how we use it, just like any other new tool. That doesn’t change the fact that it does work, and I think people might be underestimating how good the results can be.
I've had a handful of software projects in my career land essentially on the day I predicted, sometimes several months out, and the commonality across all of those projects was that the specification was crystal clear. Two of them were actual ports of an existing piece of software over to a new system. And so any time we had a question about the implementation, we could look at the existing version and immediately have our questions answered about what "correct" was.
I think projects where correct is very clearly defined can benefit from LLM acceleration, as you're describing here.
But so much of modern software development is figuring out what the right thing to build is. And in those situations, I don't think LLMs provide nearly as much benefit.
I think there's an interesting dichotomy. I find that for things I'm already capable at, LLMs are relatively inconsequential. But for things I'm no good at, they're a huge game changer. For a large company, which can hire for most of the roles any given project needs, this means the overall effect is going to be relatively inconsequential. At best, they may be able to cut labor costs by having one guy do a mediocre job at 5 people's jobs in exchange for a worse product. Short-term gains for long-term costs, wcgw?
But for a small studio or independent developer, LLMs are a big game changer. Being able to do a mediocre job at 5 people's jobs is a huge leap over trying to get by without those jobs - relying on third-party assets or other stock content, or even worse, doing a really awful job of trying to improvise those roles. See the UI of basically any program ever that was clearly laid out by a programmer and not a designer. Or there's the whole business of trying to rip off stuff from dribbble while lacking the skills to do so. Whereas with AI, you can suddenly competently rip off everything and everybody - it's basically their entire MO.
Handholding is an issue affected by 3 factors: the model, the tooling, and the human expertise. Of the three, the last is the weakest link, because it takes the longest to nurture.
Once tooling (e.g. agent harnesses, external tools) becomes more mature and consistent, the other 2 will become less of a bottleneck.
If I were to gamble here, I would argue that development will at some point reach that more ideal scenario, while project planning and scoping will take longer. Documentation will also take almost as long as the development itself, slightly longer at the edges.
The new AI-assisted era will most likely push companies to adopt Waterfall management rather than Agile.
> Yes, AI can generate code quickly (whether that’s a good thing is open for debate), but that doesn’t mean it’s generating the correct code.
No, the code is actually almost always correct. The way it’s added is probably not what you’re going to like, if you know your code base well enough. You know there’s some ceremony about where things are added, how they are named, how many comments you’d like to add and where exactly. Stuff like that seems to irritate people like me when the agent doesn’t get it right, and it seems to fail even if it’s in the AGENTS.md.
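To be concrete, the kind of ceremony I mean might be written down something like this in an AGENTS.md (the paths and rules here are hypothetical, just to illustrate the sort of instruction agents still drift away from):

```markdown
# Conventions (illustrative sketch)

- New HTTP handlers go in `internal/api/`, one file per resource.
- Test files are named `<subject>_test.go`; never mix unit and
  integration tests in one file.
- Comment only exported functions, one line each, unless the behavior
  is genuinely surprising.
- Do not introduce new top-level packages without asking first.
```

Rules like these are unambiguous to a human reviewer, yet in my experience the agent will happily drop a handler in the wrong package or sprinkle redundant comments anyway.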
> If you were to give human developers the same amount of feature/scope documentation you would also see your productivity skyrocket.
Almost 2 decades in IT and I absolutely do not believe this can ever happen. And if it does, it’s so rare it’s not even worth talking about.
This is all substantially correct and gives us hints as to where to focus for AI to make the processes go faster.
Eg: I had a product manager say to me that he envisions a future where any meeting with stakeholders that does not result in an interactive prototype by the end of the meeting would be considered a failure. This feels directionally correct to me.
The other thing I expect to see is Vibecoding being the "Excel 2.0" where it allows significant self-serve of building interactive apps that's engaged in a continual war with IT to turn them into something with better security guarantees, proper access control & logging, scalability, change management etc.
But the larger historical point here is that every revolutionary transition produces, in its early stages, "Steam Horses". The invention of the steam engine had people imagining that the future of transportation would involve horse-shaped objects, powered by steam, pulling along conventional carts. It wasn't until later developments that we understood the function of transportation as divorced from the form.
I started talking about Steam Horses originally in the context of MOOCs, which was a classic Steam Horse idea.
Instead of mandatory AI workshops, simply cancel all meetings with more than 3 people and no written agenda. Block that meeting time for productive work instead. That’ll be $2000 of advisory fees for the insane productivity gains I just unlocked for you. You’re welcome.
If people got paid for telling the truth you’d be rich.
Yes, there are MANY in tech/non-tech management that will quietly admit that a lot of this top-down stuff is to create the appearance of motion to appease a higher more tech/AI ignorant authority.
Some organizations added a ton of process around software development because it is expensive and risky. They require a ton of approvals and sign-offs, then some managing overhead on top to check if their investment is on the right track. This approval process is bound to change now that development is far cheaper and faster.
Another aspect that is not captured here is that the lawyers and subject matter experts will also be using AI to speed up their parts.
> Every software developer knows that you can’t make projects go faster just by typing faster. If that were the case we would all be taking typing lessons.
So well said.
AI is unveiling how the bureaucracy is the slow part.
> AI is unveiling how the bureaucracy is the slow part.
Computing has been doing that for decades. If your process is fucked, computers make it fucked faster.
It’s just that now we have entire generations alive that have never seen a world without digital computers. ~LLMs~ AI is a fun new lever in some uses, so clearly it is finally the hammer that will drive the screws and bolts for us, with less effort on our part!
They just have to learn from experience. It’s what you do when you can’t be bothered to learn the lessons of the past.
Bureaucracy cannot learn from the bureaucratic problems of the past because doing so is against its self-interest.
Work in large orgs long enough and you will recognize these creatures. Ladder climbing is a skill orthogonal to adding any value to the customer/company.
Completely agree. It amazes me how some folks think AI is unlike any other technology revolution. History repeats.
You're right it's just like any other mechanization/automation revolution. Except it's not.
It's happening about 10x faster than any other I've seen or read about.
Consider how long it took just to get barcode scanners rolled out in grocery stores. Or direct payment terminals. Or how many decades it's taken to get robotics into car manufacturing at scale. I worked through the .com boom and I can tell you that "webification" took 10 years or more for most businesses (and many of them have since given up and just have a Facebook page instead).
What's happening now is a little insane. It really does change everything. I don't think people who don't work in software have any idea what's coming.
It both is & isn't moving 10x faster.
It's highly salient to management, and being forced top-down by them at 10x speed, for sure, because they see a future cost save to reduce headcount.
For certain technical roles it's a force multiplier and already very saturated, for sure.
On the other hand there's a lot of solution-looking-for-problem going on in large orgs where layers of management have been banging the table for 2-3 years on AI KPIs without any value being delivered.
In the weekly AI wins mail at a friends company, multiple non-technicals were bragging how AI has saved them 15 minutes a day by summarizing their morning inbox. This was the big game changer for them.
The promise of AI is in doing things at all that couldn't be automated before, at least economically. And when you find a use case where a bit of automated inference is sufficient and can replace human inference, it can wildly speed up a process, from when Susan has time for it, to right now.
Delivering more complete details for a task at hand is a noble goal, but there is a problem.
Programming is a logical circuit breaker. There is a wide range of incompleteness that halts development or puts the solutions in an unpublishable state.
A product person has no compiler, no RAM, no database, no state machine. There is nothing that can fail. There are probably strategies to weed out some issues, but none will be perfect.
We need to combine reality with computers. Computers set the constraints, and we can only check whether we are within those constraints by actually solving the problems with computers.
Oddly enough, AI has so far had nothing to offer to improve the "product people" problems.
> If you were to give human developers the same amount of feature/scope documentation you would also see your productivity skyrocket.
This is how I felt when I first started seeing people discuss things like AGENTS.md etc.
People are far too charitable about an industry with chronic short-term thinking. We'll just lower the standards to whatever fits the success story.
Someone I know said "software is made of decisions". <https://siderea.dreamwidth.org/1219758.html> Seems very applicable here.
Large corporations with orthodox methodologies will take time to extract the best benefits from AI. Small teams, which still remember the original Agile Manifesto, will soar and overtake their competitors.
Speaking of the middle: I was once shown advice from an AI that a particular ticket would stall at “frozen middle management” and should be shelved until “coordination” improved. That sounds accurate, but can you imagine what a token-obsessed PM might say?
Yes, it is true for large enterprises, but not for startups and individual creators. AI is accelerating anyone who is not stuck in corporate bureaucratic processes.
Before, when there was the notion that "building is expensive", product teams would think things through, do user interviews up-front, actually do discovery around the customer + business context + underlying human process being facilitated with software.
This has shortened the cycle to first working prototype, but I'd guess that, on a longer timescale, it extends the time to final product, because more time is wasted shifting the deliverable and the experience on the user during this process of discovery, versus nailing most of the product experience up front in big, stable chunks through design.
At the end of the day, there is a hidden cost to fast iterative shifts on the fundamental design of the software intended for humans to use and for which humans are responsible for operation. First is the cost on the end users who have to stop, provide feedback, and then retrain on each cycle. Second is that such compounding complexities in the underlying implementation as product learns requirements and vibe-codes the solution creates a system that becomes very challenging for humans to operationalize and maintain.
Ultimately, I think the bookends of the software development process are being neglected (as the author points out) to the detriment of both the end users and the teams that end up supporting the software. I do wonder if we're entering an "Ikea era" of software where we should just treat everything as disposable artifacts instead.
If the underlying workflow is noisy, ambiguous, or overloaded with coordination overhead, faster generation just produces more low-context output to review and reconcile.
Our current most popular methods of using AI with software development is either waterfall or autocomplete. We aren't at a great pair programming experience yet. I presume that would improve speed and accuracy, but it's still unclear.
It’s amazing to see some people talk with 100% confidence about the macro view of AI assisted development when we have had strong coding agents available for less than a year.
Maybe not my existing processes, but it can help you enormously. I literally found a problem by having AI analyze packets in Wireshark; it hinted at things and steered me in the direction of finding the faulty setting in the end. Could a senior network guy have found it? Yes, but probably not any faster. Could I, an L2 SWE unfamiliar with much of networking and the company's stack (I'd been at this company about a month), have found it without AI? Absolutely not.
So we have spent 40 years trying to get management and investors to understand that 9 people can't make a baby in one month.
There's no point in falling under the illusion that they'll finally get it now. This will all fall on deaf ears. They're convinced they're automating us out of existence when in fact they'll need the services of people who can surf complex systems more than ever.
We will be able to do more than ever and potentially faster. The issue remains that most of the things these people ask us to do and want us to do and pay us to do remains basically stupid and as TFA points out, the last mile of getting shit properly shipped isn't going to speed up. It's going to slow down.
If you want to see what happens when you put people in charge who sincerely believe in the "AI automates SWEs out of existence" mantra, take a look at the code quality of Claude Code and the recent "bun rewrite in Rust" fiasco.
It absolutely will make some things faster. Anyone that has ever churned out some boilerplate code with it knows that.
...but yeah, most organizational processes & people aren't set up to leverage it, and rollout will be slow (same with learning where it does / doesn't work).
I’m not convinced. I’ve been using AI pretty heavily for about 18 months and agents for a little over 6 months.
I’m currently working on a data migration for an enormous dataset. I’m writing the tooling in Go, a language I used to be very familiar with but hadn’t touched in about 12 years when I started this. It definitely helped me get back into Go faster.
But after the initial speed up, I found myself in the last 10% takes the other 90% of the time phase. And it definitely took longer for me to wrap my head around the code than it would have if I’d skipped the AI. I might have some overall speed up, but if so it’s on the order of 10-20%. Nothing revolutionary.
I have been able to vibe code a few little one off tools that have made my life a little easier. And I have vibe coded a few iPad games for my kids for car trips, but for work I still have to understand the code and reading code is still harder than writing it.
This is also not for lack of trying: I spent $1000 last week during a company-wide “AI week”, mostly on trying to get AI to replicate my migration tooling, complete with verification agents, testing agents, quality gates, elaborate test harnesses, etc.
I’d let Claude (Opus 4.7, max effort) crank away overnight, only to immediately find it had added some horrible new bug or had managed to convince the verification agent that it wasn’t really cheating to pass my quality tests.
What I learned from last week is that we are so far away from not needing to understand the code that everyone who says otherwise is probably full of shit. Other people who I trust who have been running the same experiments have told me the same thing.
Until and unless we get to that point, it’s always going to be a 10-50% speed up (if that).
>if so it’s on the order of 10-20%. Nothing revolutionary.
For many businesses that is revolutionary.
Not sure that's enough magic to make the math work for the trillions being invested, but on a ground level within companies even small wins stack up. You may have burned through $1000 without getting much done, but from a company perspective they've probably got an employee with better instincts as to what does or doesn't work
cars are not faster than horses
It makes small teams without organizational overhead go lightning fast.
It might be the ultimate tool of disruption.
Exactly. The larger the organization the less percentage time devs actually are doing dev work and the less direct benefit there is from AI assisted coding tools.
I have a colleague who vibes the shit out of his part, and it results in large commits that take a lot of time to understand, and that makes cooperation practically impossible. LLMs are not team players.
I get much different results than others when using these tools. Turns out there is some skill in wielding them, and knowing the domain in which you do.
That's a you guys problem. Maybe one or both of you.
My LLM outputs are intentional, in my style, and tightly reviewed by myself.
I'm also emitting Rust, which I've found to be the very best language to work with AI in.