It's interesting to revisit Brooks' "surgical team" in light of AI. For example, I frequently have Claude act as a "toolsmith", creating bespoke project-specific tools on the fly, which are then documented in Skills that Claude can use going forward. What has changed is that a) One person (or rather, one person-AI hybrid) plays all the roles within the surgical team, and b) Internal frictions such as cost, development time, and communication overhead have all been dramatically slashed.
Notably, his essay "No Silver Bullet" states that there has never been a new technology or way of thinking or working that has led to a 10x increase in the speed of software development.
That was true for almost seventy years until roughly last year.
AI is the silver bullet: my output is genuinely 10x what it was before Claude Code existed.
I haven't yet seen anyone with a concrete example project (public ideally, but even describing private efforts in enough detail to enable potential criticism would be fine) making a claim as strong as 10x. Are you willing to break the mould and show us what we're all missing?
It's more like ∞x (or N/Ax if you prefer), because the majority of the projects I did with LLM agents wouldn't have existed without them; I would never have found enough time to work on them otherwise.
One of the latest things I made with Claude was a tool that allowed me to move a bunch of very low traffic Cloud Run services to a single VPS without losing any of the Cloud Run benefits such as easy Docker-based deployment and automatic certificate provisioning. I thought about making something like that for quite some time, and Claude finally made it possible, which makes me quite happy.
The fun thing here is that no other soul genuinely cares about it, or any other code I might publish. The code, especially AI generated, is so cheap that if anyone wants to repeat my steps to get rid of Cloud Run services, they will probably vibe-code their own tool instead of figuring out how to use mine, just like I did that instead of spending time on learning Dokku or similar solutions.
So, yes, 10x and more, but no one cares about the result, which makes the whole 10x measurement less useful.
The incredulity at 10x claims is often unearned: how much do these skeptics actually notice and appreciate the depth of work of ten developers collaborating on something outside their own org? Dev output slips by quietly. There are reams of unnoticed projects, even at the scale of a life's work.
AI is certainly able to increase coding speed, especially for experienced engineers who can design the analytical parts themselves (data structures, interfaces, invariants, and process), but in large projects and/or organizations, queuing theory (especially as understood by lean development practitioners like Don Reinertsen) is going to be nasty.
Lean development theory teaches us that in a multi-workstream, multi-stage development process, developers should be kept at roughly 65-75% utilization. Otherwise, counterintuitively, work queue lengths blow up as utilization approaches 100%. The reason is that slack in the system absorbs and smooths perturbations and variability, which are inevitable.
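A minimal sketch of why queues blow up near full utilization, using the textbook M/M/1 queue result (an illustrative assumption on my part; Reinertsen's argument is more general than this model): the expected number of items in the system is rho / (1 - rho), where rho is utilization.

    # Expected items in the system for an M/M/1 queue: L = rho / (1 - rho),
    # where rho = utilization (arrival rate / service rate), 0 <= rho < 1.
    def mm1_queue_length(rho: float) -> float:
        if not 0 <= rho < 1:
            raise ValueError("utilization must be in [0, 1)")
        return rho / (1 - rho)

    for u in (0.50, 0.65, 0.75, 0.85, 0.95, 0.99):
        print(f"utilization {u:.0%}: avg items in system = {mm1_queue_length(u):.1f}")

Going from 75% to 95% utilization takes the average queue from 3 items to 19; at 99% it is 99. Strictly, the growth is hyperbolic rather than exponential, but the practical lesson is the same: the last quarter of "utilization" is paid for in queue time.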
Furthermore, underutilization is also highly comparable to stock market options: their value increases as variability increases. Slack enables quick pivots with less advance notice. It builds responsiveness into the system. And as the Agile Manifesto tells us, excellent software development is more characterized by the ability to respond to change than the mere ability to follow a plan. Customers appreciate responsiveness from software vendors; it builds trust.
But AI-driven development threatens to increase, not decrease, individual engineer utilization. More is expected, more is possible, and frankly, once you learn how to guardrail the AI and keep the analytical design in your own hands, the speed a senior engineer can achieve often feels intoxicating.
I think we're going to go through a whole new spate of hard, counterintuitive lessons, similar to those that many 1960s and '70s developers like Fred Brooks and his IBM team learned the hard way.
I'm curious to see how much faster AAA games hit the market in the coming years compared to the pre-LLM era. Or how much of the aging COBOL code base out there disappears in the next decade.
When concrete things like that start to happen, then I will start to believe in the 10x claim.
I'm not sure those are great examples. Why not just consider normal apps?
I don't think we'll see AAA game velocity change until asset generation progresses quite a bit, not to mention stuff like rigging. Even then, there's still a layer between code and engine where you have to wire everything together which an LLM will struggle with.
Replacing some old COBOL is probably more of a management decision based on appetite for change and politics rather than development speed.
Aren't there some measurable things like GitHub repo creation, PRs, app store additions, etc. that can be correlated with LLM adoption? Didn't Show HN have to get throttled after LLMs arrived?
I feel like that's tied to the hardware the companies are using. All the banks I've worked at run z/OS mainframes. Can they even deploy modern, run-of-the-mill Go/Python/Rust code, or is getting off COBOL reliant on hardware changes?
This was true as programming languages evolved too. It was so much easier to write code in scripting languages than in C. You could crap out scripts like crazy, with no cc refusing to give you a binary to get in your way.
Clearly, it still wasn't a silver bullet, because output as a metric is a bad one. I thought it was only one managers valued, but apparently Anthropic has finally convinced devs to value it? I guess it definitely hits that dopamine receptor hard.
The main point of The Mythical Man-Month was that communication cost across people becomes the dominant cost as projects grow in complexity.
So increasing individual output by itself is not enough to affect the argument. It could be, if you also reduce the number of people needed for a project, where "people" means everyone involved in the project, not just software engineers. But there are strong forces in large orgs pulling toward larger project sizes: budgeting overhead and similar "large orgs optimize for legibility" kinds of arguments.
IMO the only way this will change is when new companies challenge the existing big players. I think AI will help achieve this (e.g. agentic e-commerce challenging the incumbents), but it will take time.
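To make the communication-cost point concrete: with n people there are n(n-1)/2 potential communication channels, so coordination grows quadratically while raw capacity grows only linearly. A toy model (the fixed cost per channel is my assumption, not Brooks' formula):

    # Toy model of Brooks' intuition: headcount adds capacity linearly,
    # but coordination channels grow as n*(n-1)/2. If each channel costs a
    # fixed fraction of a person's time, effective output peaks, then falls.
    def effective_output(n: int, cost_per_channel: float = 0.02) -> float:
        channels = n * (n - 1) / 2
        return max(0.0, n - cost_per_channel * channels)

    for n in (1, 5, 10, 25, 50, 100):
        print(f"{n:3d} people -> {effective_output(n):5.1f} person-equivalents")

The numbers are invented; the shape is the point. Past some team size, each additional person costs more in coordination than they add in output, which is why shrinking the project, not just speeding up individuals, is what changes the argument.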
First counterexample that comes to mind: Rails vs. 90s networked/shared line-of-business CRUD app development was a 10x factor. It also enabled a lot of internal tools that wouldn't have been worth doing without it.
But after people's expectations adjusted it was just back on the treadmill.
I don't think we've found a new steady-state yet, but I have some gut feeling guesses about where it's going to be.
10x the amount of code or features =/= 10x the speed of software development.
There are sizes of projects where it's true, and that size is growing.
And 20x the bugs.
I too can vastly increase my speed of development when I stop caring about the quality.
Doesn’t necessarily but does sometimes unless you have a concrete alternative
How are you defining speed of software development?
How is that not the same thing?
Most of my work has been in core infra at large companies. Having the code written faster does not change rollout velocity all that much. It does help with signals and idiot-proofing on bugs, but when things break and cost real (very real) dollars, AI is not an explanation. In that instance, it's not even close. Development might be 10-20 percent of the actual work to get a change out.
AI can also speed up the release processes
Code is always easy to multiply fruitlessly, always has been.
Features are harder to show the limits of, but have you ever had a client or boss who didn't know what they wanted, they just kept asking for stuff? 100 sequential tickets to change the contrast of some button can be closed in record time, but the final impact is still just the final one of the sequence.
Or have you experienced bike-shedding* from coworkers in meetings? It doesn't matter what metaphorical colour the metaphorical bike shed gets painted.
Or, as a user, had a mandatory update that either didn't seem to do anything at all, or worse, moved things around in the UX so you couldn't find features you actually did use? Something I get with many apps and operating systems; I'd say macOS's UX peaked back when versions were named after cats. Non-UX stuff got better since then, but the UX (even the creation of SwiftUI as an attempt to replace UIKit and AppKit) feels like it was CV-driven development, not something that benefits me as a user.
You can add a lot of features and close a lot of tickets while adding zero-to-negative business value. When code was expensive, that cost could be used directly as a reason to say "let's delay this"; now, instead of an expensive gamble quietly becoming a cheap one, you have to explain directly to the boss or the client why they're asking for an actively bad thing. This is not something most of us are trained to do well, I think. Worse, even for those of us who are skilled at that kind of client interaction, code suddenly being cheap means that many of us have mis-trained instincts about what's actually important, in exactly the way those customers and bosses should be suspicious of.
* https://en.wikipedia.org/wiki/Law_of_triviality
At Microsoft I wrote a feature to let customers set a preferred AZ for their database. It took a couple of weeks as a side project. Nearly two years later it reached customers.
Extreme example, but it illustrates the point.
"nine women can't have a baby in a month". Speed of software development is not pure output.
For certain monkeys, they think it is, though.
There are entire C-corps of monkeys out there.
Writing code is a part (sometimes a big part, sometimes not) of delivering software to production. The overall system throughput is the interesting thing to look at.
If AI is the silver bullet, I do not understand why so many shot-up projects are still wandering around the freelance market.
Horses weren't replaced overnight.
Also, I know there will be a lot of boilerplate applications that just don't look good or don't seem to have been thought through early on.
Folks will use that as a coping mechanism, but huge changes are coming.
I've been thinking about this and have wanted to discuss it with people. I think the 10x thing has been broken, but I don't think it's because the premise of "No Silver Bullet" was false - I think it's because LLMs have the ability to navigate some of the _essential_ complexity of problems.
I don't think anyone has really wrestled with the implications of that yet. We've started talking about "deskilling" and "cognitive debt", but mostly in the context of "programmers are going to forget how to structure code, how to use the syntax of their languages, etc." I'm not worried about that, as it's the same sort of thing we've seen for decades: compilers, higher-order languages, better abstractions, and so on.
The fact that LLMs are able to wrestle with essential complexity means that using them is going to push us further and further from the actual problems we're trying to solve. Right now, it's the wrestling with problems that helps us understand what those problems are. As our organizations adopt LLMs that are able to take on _those_ problems - that is, customer problems, not problems of data, scaling, and so forth - will we hit a brick wall where we lose that understanding? Where we keep shipping stuff but it gets further and further from what our customers need? How do we avoid that?
For your sake I hope that your pay is determined by your “output”, and not your long-term usefulness.
> that has led to a 10X increase in the speed of software development.
> AI is the silver bullet - my output is genuinely 10X what it was before claude code existed.
Those are not the same.
You can add 5 different features to a project and still provide less value than the 5-line diff that resolves a performance bottleneck.
I agree with this sentiment but I think LLMs are really close to the Brooks idea of a silver bullet.
I don't know if, overall, it's a 10x improvement or 6x or 14x but it's a serious contender. Part of it is the LLMs are very uneven in their performance across domains. If all I build is simple landing pages, it might be a 100x improvement. If I work on more complex, proprietary work where there aren't great examples in the training data then it might be a 10% improvement (it helps me write better comments or something)
"claude, connect to a k8s pod in prod and grab a 30s cpu profile, analyze and create a performance test locally for the top outlier, verify your fix and create a PR"
Just because code has been put out does not mean the software is “developed”.
10x would only be possible if your output was low before Claude Code
I've found that I can have 10x output, so long as I don't expect anyone to review my code...
I can get 100x output, if we're counting lines of code!
The premise of "No Silver Bullet" is wrong (LLMs just made that clear, but it has always been wrong).
The premise is that software development is mostly "essential complexity" rather than "accidental complexity." But I think anyone who has worked as a software engineer in the past decade will have found the opposite to be true.
It's not only that software development is full of accidental complexity; programmers (and the decision makers above them) have always been actively creating it. Making a GUI program hasn't gotten easier since Visual Basic. In fact, with each JavaScript framework and technique that wraps the DOM render engine, it has gotten harder over the years. Until LLMs made it easier again (by creating a permanent dependency on LLMs: if you intend to edit the code manually afterwards, it became even harder!)
Fortunate to be reminded of this right now, especially the pull-quote about conceptual integrity.
This is the reason why AI-assisted programming has not turned out to be the silver bullet we have been hoping for, at least not yet. Muddled prompting by humans gets you the Homer Simpson car you wished for, which will eventually collapse under its own weight.
I've been thinking a lot about Programming as Theory Building [0] as the missing piece in AI-assisted engineering. Perhaps there are approaches which naturally focus on the essence while ignoring the accidents, but I'm still looking for them. Right now the state of the art I see ignores both accident and essence alike, and degrades the ability to make progress.
Please inform me if there are any approaches you know that work! And lest this sound pessimistic, far from it. This state of affairs is actually intoxicatingly motivating. Feels like we have found silver, and just need to start learning to mould bullets.
[0] Another classic required reading of the industry https://pages.cs.wisc.edu/~remzi/Naur.pdf
The bearing of a child takes nine months, no matter how many women are assigned.
For the human makers of things, the incompletenesses and inconsistencies of our ideas become clear only during implementation.
Conceptual integrity is the most important consideration in system design.
There is no single development, in either technology or management technique, which by itself promises even one order-of-magnitude improvement in productivity.
---
These ideas still apply very well to modern society. But personally, I hope science advances to the point where nine women really can have a baby in parallel.
We may need that to prevent demographic collapse and keep the pension system from running out of money.
Nine women can already have babies in parallel. That is, nine women cannot have a baby in one month, but nine women can have nine babies in nine months.
It would probably be more practical to make old age less expensive than to inject more people into the bottom of the demographic pyramid. Those young people eventually get old too. I am looking forward to my sentient robot caretaker:
“Open the refrigerator door, HAL”
“I can’t do that right now”
If he had saved enough money to subscribe to the Pro tier, HAL might have opened it.
Once we ditch our centrally controlled economies, perhaps life can become affordable enough that willing parents are no longer prevented from having children.
Oddly from your comment I can't quite tell which end of the political spectrum you're on. I think I agree with you, but I'm not sure until I know which team you're on.
I bet depending on the questions you ask me I could be on either side :)
Life has never been more affordable than it is now. Virtually all of your ancestors were impoverished to a degree you can't even imagine.
Life was becoming increasingly affordable, but that stopped being the case years ago; affordability is now declining. I would like it to either hold where it was years ago or start improving again.
I think Brooks would call that an optimistic schedule estimate.
As a software engineering manager, I always look to staff up a project at the beginning as much as possible, looking for doing as much in parallel up-front as we can. If some things take longer than expected, then I already have a team of engineers with all the context since the project kicked off that can help each other with any longer running tasks. An engineer that has completed a smaller chunk of work can help out with the items on the critical path, for example.
Fred Brooks would not necessarily endorse this.
Please, say more!
>I always look to staff up a project at the beginning as much as possible, looking for doing as much in parallel up-front as we can.
Ah, maybe this is what you think he would take issue with? Fair enough. Perhaps I should have said:
>I always look to staff up as much as is economically and organizationally optimal, to exploit all genuine parallelism opportunities, being careful not to overstaff.
Your mileage may vary, but in my (unfortunate) experience, staffing up for any reason other than a grassroots "we need more hands" raised by the engineers themselves typically backfires. Teams that are constrained on people often find creative ways to work smarter. Teams that have an abundance of labor often end up working unnecessarily harder: duplicating work, reinventing the wheel, not solving the right problems, etc. See also intensive vs. extensive development.
I agree with that
I love this book, and I often recommend it to new folks on my team. I used to carry a few extra paperback copies to give away, just in case.
It’s easy to see the conceptual integrity in good software, architecture, design and movies — or the lack of this quality in the bad ones.
Vibe coded software is the Marvel green screen movie equivalent.
"The programmer, like the poet, works only slightly removed from pure thought-stuff. He builds his castles in the air, from air, creating by exertion of the imagination." -FB
Y'all are Fucking STUPID!
Look, I read it and loved it 25 years ago.
Fred Brooks wrote that book when they were programming IBM operating systems in assembly language.
Times have really, really changed. Do not pay attention to the messages of this book except for historical fun.
The lessons in that book have broadly held true for nearly every single one of my employers throughout the entirety of my career.
Indeed a lot of things have changed. A worthwhile exercise is to read the book, contemplate how things have changed, and try to map lessons from the book onto modern technology and organizational practices. A LOT of the core principles are still relevant IMO, even if many of the implementation details are not.
Your comment and the OP both mention some things that are outdated about the book. What are those things?
Our field is full of vague, terrible opinions and useless advice, and of arrogant people who think they're better than others.
That book isn't like that; it's built from humility, and it's a rare bright light in this god-forsaken field.
The book is good. As you say, the author, Fred Brooks, is not at all arrogant.
Martin Fowler, the author of the blog, may be a bit different than that.
IMHO, Brooks's Law applies more today than ever.
I was half expecting Fowler to tie it in to right-sizing agent teams.