The year is 2026. The unemployment rate just printed 4.28%, AI capex is 2% of GDP (650bn), AI adjacent commodities are up 65% since Jan-23 and approximately 2,800 data centers are planned for construction in the US. In spite of the current displacement narrative – job postings for software engineers are rising rapidly, up 11% YoY. ... We wrote last week that we see the near-term dynamics around the AI capex story as inflationary, but given markets are focused on the forward narrative, we outline a more constructive take on the end state below. Before that, however, it’s worth reflecting that the imminent disintermediation narrative rests on the speed of diffusion.
The chart "Job Postings For Software Engineers Are Rapidly Rising" seems to show a rise from 65 to 71 for "Indeed job postings" from October 2025 to March 2025. That's a 9% increase. Then they inflate that by extrapolating it to a year. The graph exaggerates the change by depressing the zero line to way off the bottom and expanding the scale. This could just be noise.
The chart "Adoption Rate of Generative AI at Work and Home versus the Rate for Other Technologies" has one (1) data point for Generative AI.
This article bashes some iffy numbers into shape to support its narrative.
Suggested reading: [1]
[1] https://en.wikipedia.org/wiki/How_to_Lie_with_Statistics
Worth seeing the whole chart in perspective:
https://fred.stlouisfed.org/series/IHLIDXUSTPSOFTDEVE
Worth also noting that this chart has the bottom of the Y-axis cut off, exaggerating differences and making visual intuition basically useless.
Wow. Huge crash between 2022 and 2023, from 230 down to around 80. Why? That's the real question. What happened? It's post-COVID.
Then stuck in the 60-80 range since 2023. The sample period chosen by Citadel is wildly deceptive.
This is an important question and these crap stats are not helping.
Interesting chart that confirms hiring dynamics for SEs have not had much to do with AI, despite all the PR: in 2023, model and agent capabilities were quite limited, and now that capabilities have increased, hiring is picking up. I hope more journalists will start to challenge that narrative.
Wow, that says a lot with data. Thank you.
While I like that you debunked the article... I want to hear an argument for where the SWE job market can grow in a post-Claude world. I might expect something like: “CEOs are naturally greedy. So after trimming the team, they then recognize (rather than 'replacing' people with AI) that they could actually accomplish _more_ with more engineers, each empowered with AI.”
But I do like folks calling out the OP for being AI spam.
I'm not sure whether it's AI spam, or somebody at an investment company who actually writes like that. It's an exaggerated version of the style in McKinsey reports.
They're addressing a very important question, and one for which there is surprisingly little hard data. It's too soon to try to see a trend from low-quality data. Three years of this data might be meaningful.
As long as software engineers are needed to leverage AI (they can manage the output, refine the prompts, check the BS), there is plenty of software to write, and having fewer SWEs just means less of it gets written.
Personally, I prefer vibe coding in the sense of stitching things together at the function-to-method level.
Unlike people who take the extreme position that vibe coders are useless, I do think LLMs often write individual functions or methods better than I do. But in a way, that does not fundamentally change the nature of the work. Even before LLMs, many functions and methods were effectively assembled from libraries, Stack Overflow snippets, documentation examples, and copied patterns.
The real limitation comes from the nature of transformer-based LLMs and their context windows. Agentic coding has a ceiling. Once the codebase reaches a scale where the agent can no longer hold the relevant structure in context, you need a programmer again.
At that point, software engineering becomes necessary: knowing how to split things according to cohesion and coupling, using patterns to constrain degrees of freedom, and designing boundaries that keep the system understandable.
In my experience, agentic coding is useful for building skeletons. But if you let the agent write everything by itself, the codebase tends to degrade. The human role is to divide the work into task units that the agent can handle well.
Eventually, a person is still needed.
If you make an agent do everything, it tends to create god objects, or it strangely glues things together even when the structure could have been separated with a simpler pattern. Thinking about it now, this may be exactly why I was drawn to books like EIB: they teach how to constrain freedom in software design so the system does not collapse under its own flexibility.
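To make that concrete, here is a minimal sketch of what I mean by dividing work into agent-sized task units. This is purely illustrative; every name in it is made up, and the print loop is a stand-in for dispatching to whatever coding agent you use:

    # Hypothetical sketch: the human draws the boundaries along cohesion/
    # coupling lines, and the agent only ever sees one well-scoped unit.
    from dataclasses import dataclass, field

    @dataclass
    class TaskUnit:
        goal: str                  # one narrow, verifiable objective
        files: list[str]           # the only files the agent may touch
        interfaces: list[str] = field(default_factory=list)  # contracts to respect
        done_when: str = ""        # acceptance criterion a human can check

    units = [
        TaskUnit(
            goal="Implement the retry policy for the payment client",
            files=["payments/client.py", "payments/retry.py"],
            interfaces=["PaymentGateway.charge(order) -> Receipt"],
            done_when="all tests in tests/test_retry.py pass",
        ),
        TaskUnit(
            goal="Add pagination to the orders listing endpoint",
            files=["api/orders.py"],
            done_when="existing endpoints unchanged; new contract tests pass",
        ),
    ]

    for unit in units:
        # Stand-in for handing the unit to an agent; the point is the scoping.
        print(f"agent task: {unit.goal} (scope: {', '.join(unit.files)})")

Each unit fits comfortably in context, and the god-object failure mode mostly disappears because the agent never gets to decide the boundaries.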
The models are improving. The software that harnesses them is also improving. It wasn't that long ago that the models were quite bad at a lot of the tasks that they are excelling at today. I do agree there's probably a ceiling to what we can get out of these, but I also don't think we have quite hit that point yet.
I agree with what you said. And perhaps my belief that “people like me are still needed” is just a desperate form of self-persuasion.
If AI replaces everything, then I become unnecessary. So maybe I am simply trying to convince myself that developers like me are still needed.
That said, realistically, I still think there are limits unless the essence of architecture itself changes. I also acknowledge part of your perspective.
Those of us who are not in the AI field tend to experience AI progress not as a linear or continuous process, but as a series of discrete events, such as major model releases. Because of that, there is inevitably a gap in perspective.
People inside the industry, at least those who are not just promoting hype, often seem to feel that technological progress is exponential. But since we are not part of that industry, we experience it more episodically, as separate events.
At the same time, capital has a self-fulfilling quality. If enough capital concentrates in one direction, what looked like linear progress may suddenly accelerate in an almost exponential way.
However, even that kind of model can eventually hit a specific limit. I do not know when that limit will arrive, because I am not an AI industry insider. More precisely, I am closer to someone who uses Hugging Face models, builds around them, and serves them, rather than someone working on AI R&D itself.
> “people like me are still needed” is just a desperate form of self-persuasion.
No, no it's not. I've seen what a "PM armed with an LLM" will do. Trust me, if you're a decent enough Full Stack software engineer who can take an idea and run with it to implement it, you'll have a leg up over the PM with the idea who has no idea how to "do computers".
Most of what these PMs can produce nowadays turns boardroom heads, sure. But it's just that: visuals and just enough prototype functionality to fool the people you're demoing to. Seen enough of these in the recent past.
Will there be some PMs that can become "software developers" while armed with an LLM? Sure!
But that's not the majority. On the other hand, yes, there are going to be "software developers" out of a job because of LLMs: the devs who were FS and could already take an idea from 0 to 1 with very little overhead can now do so much faster and go further without handing off to the intermediates and juniors. They mentor their LLM intern instead. The perpetual intermediate devs with 20 years of experience are the ones that are gonna have a larger and larger problem, I'd say.
The Staff engineer who was able to run circles around others all along? They'll train their LLM intern up into an intermediate rather than having to "10x" a bunch of perpetual intermediates with 20 years of experience.
What I'd love to see is videos of nontechnical folks using language models to create software.
When I use them myself, I just see them crushing it and think, this thing is now doing my job for basically $0, I am no longer economically relevant. But I've spent a lifetime learning to program, so it's possible I only get good results because of the way I think to prompt it.
I really can't get the outside view so I can't decide whether AI is going to make me homeless or not. I think we need the videos.
I agree with you. So far what I see is that AI amplifies an individual's output in many domains, but the value of that is 100% contingent on their judgment. It changes the economics of many tasks, but fundamentally it can't really help you if you don't actually know what you want, which describes a shocking number of people in the corporate world, where most are there for a paycheck, and perhaps to pursue some social marker of "success".
I'm under no illusions about AI company execs wanting to justify their valuations (and expenses!) by capturing a huge chunk of global employment value, or about the CEOs of many big companies whose financials are getting squeezed for all sorts of reasons and who are all too happy to jump on the AI efficiency narrative to justify layoffs that would have been necessary anyway. Also, AI will keep getting better, and it could certainly move up the food chain: it's already replaced a lot of what I did, and I assume capabilities will continue improving for a while even after model capabilities plateau, as we improve harnessing, tooling and practice.
So yeah, it can replace a lot of what we do, but I'm not running scared, because every step of the way I've seen that software people are the ones who actually get the most out of LLMs. Sure, it can write all the code, so the job changes; but even as our workflows completely change, it's giving us more of an edge (if we're open to it) than it gives anyone non-technical. At this stage it still feels empowering on an individual level.
Now I do worry about the consolidation of power and wealth in a tech oligarchy, but that's an issue we need to deal with at a societal and government policy level. Essentially, I can see AI as having radically different outcome potential based on how it's governed. In one way it can be very empowering to small teams, and reduce coordination costs, and increase competition by allowing smaller groups of people to make more scalable companies. But it could also lead to unprecedented concentration of wealth and power if a small set of AI companies are allowed to capture all the economic gains. I don't think there are any easy answers, but I do feel hopeful that we can figure something out as a society—it certainly seems to be creating some unified sentiment across political lines that have been so polarized and divisive over the last decade.
The problem for our jobs is that it amplifies output 1000x. However, I do agree that developers with experience are needed to actually harness these tools. I've been able to do wonders with them, but I can't see a junior dev doing 10% of the work that I can with them.
It's a strategy problem, and the current version of the US is spectacularly bad at strategy.
Once upon a time the US had visionaries steering DARPA and making useful bets on the future.
Now strategy is defined by stonks-go-up, quarterly returns, democracy bad, and CEO narcissism, and that's a potently catastrophic combination.
I'm with you at the "bargaining" phase of AI grief (sure AI is useful but it won't replace me!).
I think my reasoning is you still need a tech person to translate from feature to architecture. AI can do both but not everyone knows they need the latter.
Of course, but unfortunately it reduces the number of jobs by 100x or more. You don't need 30 software developers at a startup anymore. You just need one.
At $800B collective spend, you would hope these things are improving. The point is: have the improvements been worth $800B and counting?
The ceiling will soon be super-human.
What do you base this on? For me it is almost impossible to guess what fits into the context of an LLM. Sometimes trivial tasks fail, sometimes quite complex things get one-shotted.
It's not necessarily better, but it's certainly good enough, if you're already used to distributing work to different people.
The scale of the code doesn't really matter that much, as long as a programmer can point it at the right places.
I think you actually want to be really involved in the skeleton, since from what I've seen the agent is quite bad at making skeletons that it can do a good job extending.
If you get the base right, though, the agent can make precise changes in large codebases.
Thinking about it, what is interesting about the output of agentic coding is this:
I mostly agree with the general tendency that it starts to break down as the context grows. But there is also a difference in how people evaluate it. Some people say agents are good at building the skeleton, while others say they are better at extending an existing structure.
I think this depends on the setup, and it is ultimately a trade-off.
In my case, I usually work on codebases of around 60,000 to 80,000 lines of code. I think I can fairly call myself a specialist at that scale, since I have personally delivered close to 40 projects of that size.
At that scale, I felt that agentic coding was actually very good at building the initial skeleton.
I do not know what kind of work you usually do, but if your work involves highly precise, low-level tasks, then I can understand why you might feel differently.
In my case, I mostly assemble high-level libraries and frameworks into working systems, so that may be why I experience it this way.
The coding agents are good at growing code.
Like a child growing up!
Also, like a cancer.
Similar process, different outcomes.
I find LLMs are good at skeletons, but only if you are meticulous about writing down what you want before you start. Then give that text to GPT 5.5 Pro and be prepared for a number of iterations.
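For example (purely illustrative, not from any real project), "writing down what you want" can be as mundane as:

    Goal: CLI tool that watches a folder and converts new .md files to HTML.
    Constraints: Python 3.12, stdlib only, single file, no background daemon.
    Skeleton wanted: argument parsing, watch loop, convert step, error handling.
    Out of scope: templating, config files, tests (I'll write those myself).

The iterations then refine the skeleton rather than renegotiating the requirements.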
We've all got agents at work now, and still the engineers haven't equalized.
EIB?
I've found the LLM's codebase-size limitation is removed by correct design of the codebase.
If you organize your product into a collection of appropriately scoped libraries (each library the right size for the LLM to comprehend the whole thing), then the project size is not limited by the LLM's comprehension.
Your task management has to match: the organization of your ticketing system has to parallel the codebase.
With this, the LLM can think at different scales at different times.
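A rough sketch of the mechanics, with made-up library names (this is the idea, not a real tool): each ticket is tagged with exactly one library, and the agent's context is assembled from that library alone, so the unit it reasons about always fits in the model's window.

    # Hypothetical: libs/* are the "appropriately scoped" libraries.
    from pathlib import Path

    LIBRARIES = {
        "billing": Path("libs/billing"),
        "auth": Path("libs/auth"),
        "reporting": Path("libs/reporting"),
    }

    def context_for_ticket(library: str, max_chars: int = 200_000) -> str:
        """Gather only the tagged library's sources as model context."""
        root = LIBRARIES[library]
        chunks, total = [], 0
        for path in sorted(root.rglob("*.py")):
            text = path.read_text()
            total += len(text)
            if total > max_chars:
                raise ValueError(f"{library} no longer fits; time to split it")
            chunks.append(f"# file: {path}\n{text}")
        return "\n\n".join(chunks)

    # A ticket tagged 'billing' gets libs/billing as context and nothing else.

The ValueError is the useful part: it tells you when a library has outgrown the model, which is exactly when a human needs to split it.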
I agree. Language models are good at codegen; in some sense they are just another codegen tool, except instead of transforming a structured language (like a config file or markdown) into code, they can convert natural language into code. Genuinely useful for the repetitive boilerplate grunt work. If that's all you do, then I can see fearing getting replaced. Thankfully, by handling the drudgery, they free us up to work on more complex and cutting-edge work.
Like, it's not surprising that the developers who frequently talk about 90%+ of their work being delegated to LLMs are web developers. That is a field with very little innovative or complex code; it's mostly grunt work translating knowledge of style rules and markup into code, or managing CRUD. I'm really thankful I can have a language model do that drudgery for me.
But compare that to, e.g., writing a multithreaded multiplayer networking service in Rust, and they fall woefully short at generating code for me. They can be used in auxiliary aspects, like search or debugging, but the code they produce without substantial steering is not usable. It's often faster for me to write the code myself, because what's required is not a substantial amount of low-impact code but a small amount of complex, high-impact code that needs to satisfy many invariants. That is fast to type; the majority of the work is elsewhere. At the end of the day, they work really well for replacing the typing of boilerplate, which is much appreciated.
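For anyone who hasn't done classic codegen, here is a toy version of the comparison I'm making, with a made-up schema: structured input in, boilerplate out. An LLM plays the same role, except the structured input is replaced by natural language.

    # Toy "classic" codegen: a schema dict in, a dataclass definition out.
    schema = {
        "name": "User",
        "fields": {"id": "int", "email": "str", "active": "bool"},
    }

    def generate_dataclass(schema: dict) -> str:
        lines = [
            "from dataclasses import dataclass",
            "",
            "@dataclass",
            f"class {schema['name']}:",
        ]
        lines += [f"    {f}: {t}" for f, t in schema["fields"].items()]
        return "\n".join(lines)

    print(generate_dataclass(schema))  # prints a ready-to-paste class User

Deterministic, narrow, and boring, which is why it never threatened anyone's job; the LLM version just widens the input language.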
Companies hiring more people to build AI-based, self-healing and self-developing systems faster? "We don't need those old programmers, we need new people who know how to build harnesses around AI." Then hiring those "old" programmers anyway, but from other companies.
I foresee the need for engineers to be really "wavy".
I have personally never been busier or more productive. It's like all the "work" of my work has disappeared. There are no more blockers and I can just run free and get as much done as I want and the only thing slowing me down is Jira.
The real downturn is going to be the SaaS apocalypse. In the next year or two there will be a reckoning where all these expensive low-code/no-code middleware applications suddenly don't make sense.
So I think it will be less about the ranks of engineers being thinned out across the board, and more about large swathes of products becoming obsolete.
Which SaaS companies/products do you think are at risk?
We are already in the process of removing two: back-office software that we moved to an in-house React app, and a library that has a license fee.
Neither of these is really because of cost, but more because we can get a superior product by doing so.
None of them, because those who think SaaS companies are just a bunch of bad code that is going to be quickly rewritten have no clue what they're talking about. No sane company is going to vibecode a replacement for Salesforce, because then they'd have a half-assed, buggy, broken pile of code to maintain, instead of outsourcing that problem, along with legal, compliance, and support, to someone else.
It's honestly tiresome to keep having to debunk this with people who have no clue at all how large companies operate.
On the off chance you care: you can keep JavaScript disabled on this article and just use a No Style page style to read it.
So there will again be waves of hiring developers, only for companies to realize after 5 years that they have too many employees and fire them again?
Like James Franco said in The Ballad of Buster Scruggs, "First time?"
Title is editorialized and the report is from two months ago.
90% of the job ads I see have the word "AI" in them. It can be a startup hoping for a get-rich-quick opportunity from the AI hype, or an established company.
Both types expect you to spend as many tokens as possible so that the AI bubble doesn't burst (presumably because leadership has a financial interest in this).
Your actual productivity isn't important. If you point out that you're much faster writing code on your own in 90% of cases, you will be told you're not good at AI, that you're not prompting it correctly, and that generally you're not AI-native and will be left behind. To be precise, token usage is a performance metric, so you'll be let go if Claude is not running continuously 8 hours a day.
I'd like to know how many places have mandates to write 100% of your code using AI, as well as to max out your AI agent's plan. For some reason nobody talks about it even though I know several companies around the world that are forcing this on their employees.
If you're looking for a job, then you don't have a choice; it's better to have an income. But if you're looking to change jobs to get away from AI, to actually be productive and gain experience, then it's a very bad job market.
I've been programming for 25 years; I'd struggle to think of a scenario where I'm faster writing code manually than prompting AI to do it.
[edit 25 years not 20]
You read the AI-generated code, right? That takes time and effort. Whereas if you wrote it yourself, you've already read it.
"AI" is everywhere, because it's the fashion. A lot of jobs do not require AI mastery, or even heavy use.
I've been searching for a job for many months, and I do see the uptick quite clearly.
> "AI" is everywhere, because it's the fashion.
Fashion is when developers jump on the next web framework because they got bored of the old one.
But when you get fired for not enough token usage, that's something else. When bosses start demanding you write 100% of your code using AI, and a few months later Anthropic reports a 30% increase in usage, that's not fashion. People who invested in AI are putting a lot of pressure on developers to ensure their investment pays off.
It feels like when Java and object-oriented programming were popular: you had to use object orientation, it was the future, imagine not being able to reuse code, etc.
> your AI agent's plan
Token billing is coming very, very soon; there won't be a "plan".
What will these companies do then?
I will probably use a local model
"AI-native" lmao, what a term
Our labor market is cyclic: relatively short busts and long booms that start slowly and accelerate. We had busts in 2000-2003, 2008-2010 (2011?), and 2022 to, I guess, 2026. I wasn't in the US in the 1990s, but I'd guess the beginning of the 1990s was also a bit tough.
Unavoidable AI-based productivity growth, in software and in all the other industries, will lead to software, specifically AI in this case, not just eating the world but devouring it. Such an AI revolution will mean even more need for software engineers, just like the Personal Computer revolution and the Internet revolution did in their times. Of course, software engineering will change, as it did in those previous revolutions.
> Unavoidable AI-based productivity growth
There is no productivity growth attributed to AI. In fact, serious attempts to measure AI performance show that even if AI makes some code entry tasks faster, total product delivery times are, in fact, increased.
(This should be obvious once you realize coding AIs are technical debt generation machines.)
There's no "productivity growth attributed to AI" -- yet.
I think we've gone beyond anecdotal evidence of experienced engineers finding true value in this new tech. It may not have registered yet, but skilled people are unequivocally finding value in these tools.
I agree that we have yet to settle on the true costs involved (which will probably end up at "slightly less than a junior engineer" or something like that), but we are months beyond the idea that it's all smoke and mirrors and no one is getting value out of it.
I think part of the problem is that it is such a generic catch-all term:
- AI will replace all workers (unlikely today)
- AI speeds up programming (yes, today)
> but skilled people are unequivocally finding value in these tools.
Sure, whatever. That would be anecdotal evidence.
I get you, but as the months progress, we keep seeing more and more experienced engineers finding a lot of time-saving value in this new tech.
I think we are past the point where we can just dismiss their input - these new tools do legitimately add value, it appears.
That is today. The first cars (with steam engines; the very first in 1769!) and even the ones from the first half of the 19th century also didn't look like an improvement. The AI of today is more like the internal combustion engine toward the end of the 19th century: on the brink of becoming the dominant tech while using a horse was still a viable option for a time.
What did they write that article with?