It's funny, but I think the accidental complexity is through the roof. It's skyrocketing.
Nothing about cajoling a model into writing what you want is essential complexity in software dev.
In addition, when you do a lot of building with no theory, you tend to make lots and lots of new non-essential complexity.
Devtools are no exception. There was already a lot of nonessential complexity in them, and in the model era is that gone? No, don't worry, it's all still there. We built all the shiny new layers right on top of all the old decaying layers, like putting lipstick on a pig.
To the point on Jevons Paradox, the number of people/developers joining GitHub had been accelerating as of the last Octoverse report. Related: "In 2023, GitHub crossed 100 million developers after nearly three years of growth from 50 million to 100 million. But the past year alone has rewritten that curve with our fastest absolute growth yet. Today, more than 180 million developers build on GitHub."
https://github.blog/news-insights/octoverse/octoverse-a-new-...
Fortran is all about symbolic programming. There is no probability in the internal workings of a Fortran compiler. Almost anyone can learn the rules and count on them.
LLMs are all about probabilistic programming. While they are harnessed by a lot of symbolic processing (tokens being a simple example), the core is probabilistic. No hard rules can be learned.
And, for what it's worth, "Real programmers don't use Pascal" [1] was not written about assembler programmers; it was written about Fortran programmers, a new Priesthood.
[1] https://web.archive.org/web/20120206010243/http://www.ee.rye...
Thus, what I expect is a new Priesthood to emerge: prompt-writing specialists. And this is what we see, actually.
Fortran is about numerical programming without having to deal with explicit addresses. Symbolic programming is something like Lisp.
LLMs do not manipulate symbols according to rules; they predict tokens (arbitrary glyphs, in human parlance) based on statistical rules.
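To make that contrast concrete, here is a toy sketch in Python (the rules, tokens, and weights are made up purely for illustration; this is not how any real compiler or LLM is implemented): a symbolic rewrite is a deterministic lookup, while token prediction samples from a distribution.

    import random

    # Symbolic: a fixed rewrite rule; the same input gives the same output every time.
    REWRITE_RULES = {"X + 0": "X", "X * 1": "X"}

    def simplify(expr: str) -> str:
        return REWRITE_RULES.get(expr, expr)  # deterministic table lookup

    # Probabilistic: the next token is sampled from a distribution,
    # so the same prompt can yield different continuations.
    NEXT_TOKEN_PROBS = {"X + ": [("0", 0.6), ("1", 0.3), ("y", 0.1)]}  # made-up weights

    def predict_next(prompt: str) -> str:
        tokens, weights = zip(*NEXT_TOKEN_PROBS[prompt])
        return random.choices(tokens, weights=weights)[0]

    print(simplify("X + 0"))     # always "X"
    print(predict_next("X + "))  # usually "0", but not always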
> LLMs ... completing tasks at the scale of full engineering teams.
Ah, a work of fiction.
StrongDM is doing it. In fact, their Attractor agentic loop, which generates, tests, and deploys code written as specs, has been released as a spec, not as code. Their installation instructions are pretty much "feed this into your LLM". They are building out not only complete applications, but test harnesses for those applications that clone popular web apps like Slack and JIRA, with no humans in the loop beyond writing the initial spec and giving final approval to deploy (a rough sketch of what such a loop might look like is below).
We're witnessing a "horses to automobiles" moment in software development. Programming, as a professional discipline, is going to be over in a year or two at the outside. We're getting the "end of software engineering in six months" before we're getting a real "year of the Linux desktop". Or GTA VI.
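For anyone curious about the shape of such a loop, here's a hypothetical, heavily simplified sketch in Python. This is not Attractor's actual code; every function below is a made-up stub standing in for a model call, a test harness, and a deploy step.

    import random

    def llm_generate(spec: str, feedback: str) -> str:
        # Stub standing in for a real model call; returns a fake "implementation".
        return f"implementation of: {spec} (addressing: {feedback or 'nothing yet'})"

    def run_test_harness(code: str) -> tuple[bool, str]:
        # Stub standing in for running the code against a cloned-app test harness.
        passed = random.random() < 0.5
        return passed, "" if passed else "3 integration tests failed"

    def agentic_loop(spec: str, max_iters: int = 10):
        feedback = ""
        for _ in range(max_iters):
            code = llm_generate(spec, feedback)
            ok, report = run_test_harness(code)
            if ok:
                return code    # in the real workflow, a human approves the deploy here
            feedback = report  # feed failures back into the next attempt
        return None            # spec not satisfied within the iteration budget

    print(agentic_loop("a minimal Slack-like chat app"))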
StrongDM is attempting that. The code they produced is not inspiring confidence at a relatively small scale [0], and based on what I saw in a cursory inspection, I have little doubt I would find much deeper issues if I took the time to really dig into their code.
Don't get me wrong, "sort of works if you squint at it" is downright miraculous by the standards of five years ago, but current models and harnesses are not sufficient to replace developers at this scale.
[0] https://news.ycombinator.com/item?id=46927737
That's really cool, and I can see the utility in it, but why on earth would anyone want this for anything other than internal integration tests?
I can see a company spending upwards of $2000 a month on Slack alone, but can you justify building, deploying, and maintaining your own personal Slack clone even if you can get an AI to do all of that? What happens when, inevitably, there are issues that the AI gets stumped on? You will need some level of human intervention, or at least a human to review the issue before letting the autonomous loop run again. Even if you ignore all of that, if the loop doesn't catch a bug but a human does, they're still the one left reporting it.
You're shifting the entire process of owning a product to an AI, which even in the best case leaves you paying a bill probably larger than $2000/mo. You're 100% better off just deploying an OSS alternative, and maybe sponsoring them for some feature requests, than managing an AI loop to build, deploy, and maintain your own.
Maybe at the scale of paying $20,000/mo or $50,000/mo you could start to justify it in some way, but when you're able to pay that much for a productivity-enhancing service, what are you really going to do better than the vendor? Their incentive is to earn and yours is to save; theirs is a much stronger incentive to deliver. The argument that SaaS is dead is very poorly focused. I get that your specific workplace pays for a service it only uses 2-3 features of, but that's not the case for major SaaS products. Something like Dynamics 365 takes months for process engineers to implement business processes, and there is zero coding involved, just interviewing, documenting, and configuring. The actual act of programming might be a single-digit percentage of the total cost of delivering a complete deployment.
Ignoring all the business talk, I don't think anyone denies LLMs' ability to bring a huge productivity boost. I can wholeheartedly support the argument that companies no longer have an excuse to significantly over-hire and micro-delegate tasks across hundreds of engineers, because it's more efficient to let fewer engineers have higher productivity.
I'm part of a small team that embraced AI for code assistance right at ChatGPT 3.5. There are still cases where, when the AI doesn't have enough examples of a problem in its training data, no matter how long you let it ruminate, debug, build harnesses, etc., it won't be able to solve the issue. If you let it run long enough, it may completely reimplement the library on its own, fail, and then attempt to reimplement the entire SaaS product as an MVP replacement.
Even if we set up the AI to implement, test, debug, and deploy features fully autonomously, the cases where it runs into an issue, services fail to deploy, or new business use cases need implementing can lead to huge costs and lost revenue, and that risk isn't worth not having software engineers for. If your company's business needs require 30 servers with 300-500 instances of different services deployed across them, and you keep software engineers around for that worst-case scenario, you can't keep just 2-3, because a human can only stretch their cognitive load so far. The number of software engineers needed to maintain and manage it all will remain more or less the same; the only difference is they'll have more time to contribute to more important parts of the business. Maybe even manage that Slack clone.
> My concerns about obsolescence have shifted toward curiosity about what remains to be built. The accidental complexity of coding is plummeting, but the essential complexity remains. The abstraction is rising again, to tame problems we haven't yet named.
What if AI is better at tackling essential complexity too?
The essential complexity isn't solvable by computer systems. That was the point Fred Brooks was making.
You can reduce it by process re-engineering, by changing the requirements, by managing expectations. But not by programming.
If we get an LLM to manage the rest of the organisation, then conceivably we could get it to reduce the essential complexity of the programming task. But that's putting the cart before the horse: getting an LLM to rearrange the organisation's processes so that it has less complexity to deal with when coding seems like a bad deal.
And complexity is one of the things we're still not seeing much improvement in LLMs managing. The common experience from people using LLM coding agents is that simple systems become easy, but complex systems will still cause problems with LLM usage. LLMs are not coping well with complexity. That may change, of course, but that's the situation now.
> With the price of computation so high, that inefficiency was like lighting money on fire. The small group of contributors capable of producing efficient and correct code considered themselves exceedingly clever, and scoffed at the idea that they could be replaced.
There will always be someone ready to drive the price of computation down low enough that it becomes democratized for all. Some may disagree, but eventually that will mean local inference, as computer hardware improves alongside clever software algorithms.
In this AI story, you can take a guess at who "The Priesthood" of the 2020s is.
> You still have to know what you want the computer to do, and that can be very hard. While not everyone wrote computer programs, the number of computers in the world exploded.
One could say the number of AI agents will explode and surpass the number of humans on the internet in the next few years, and that reading code generated by an AI and understanding what it does will be even more important than writing it.
That way you do not get horrific issues like this [0]: the comments in the code are now consumed by the LLM, and due to their inherently probabilistic and unpredictable nature, different LLMs produce different code; nothing short of a team of expert humans can guarantee that it is correct.
We'll see if you're ready to read (and fix) an abundance of AI slop and messy architectures built by vibe-coders as maintenance costs and security risks skyrocket.
[0] https://sketch.dev/blog/our-first-outage-from-llm-written-co...
This is well worth a read!
Why? What is compelling about it?