I'm surprised no one has already mentioned this, but this idea has been expressed before in Peter Naur's "Programming as Theory Building" (1985): he argues that a program can’t be reduced to its source text; it’s a theory shared by the programmers. When the original team is gone, maintainers must rebuild that theory (often painfully) from the remaining traces.
https://pages.cs.wisc.edu/~remzi/Naur.pdf
Not to say the article doesn't have value, as great foundational ideas are always worth repeating and revisiting.
Coding agents let me build and throw away prototypes extremely fast. A major value, for me, is that they help me understand early what users truly want and need — rather than relying on assumptions or lingering in abstraction. They help me discover and reduce my ignorance.
"product is the knowledge in the code, not the code itself".. and other interesting observations. That might be relevant in current to-AI-or-not-AI questions
Published as a book: The Laws of Software Process: A New Model for the Production and Management of Software, 2003, Phillip G. Armour.
https://www.amazon.com/Laws-Software-Process-Production-Mana...
This is essentially the idea of a context window in modern LLMs: every task carries implicit domain knowledge, and no matter how capable the model may be, if that knowledge is not in the context, the resulting software will not be functional.
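A minimal sketch of that point, with hypothetical names (build_context, CONTEXT_BUDGET, and the knowledge directory are my own illustration, not anything from the article or a specific API): whatever domain knowledge is not explicitly packed into the prompt simply does not exist for the model.

```python
# Hypothetical illustration: the model only "knows" what we put into its context.
# All names here (build_context, CONTEXT_BUDGET, knowledge_dir) are made up.
from pathlib import Path

CONTEXT_BUDGET = 8_000  # rough character budget standing in for a token limit


def build_context(task: str, knowledge_dir: str) -> str:
    """Concatenate written-down domain knowledge with the task description."""
    chunks = [
        f"## {doc.name}\n{doc.read_text()}"
        for doc in sorted(Path(knowledge_dir).glob("*.md"))
    ]
    knowledge = "\n\n".join(chunks)[:CONTEXT_BUDGET]
    # Anything truncated here, or never written down at all, is invisible to
    # the model regardless of how capable it is.
    return f"{knowledge}\n\n## Task\n{task}"
```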
Incredibly prescient given what’s happening now 25 years later. This message resonates with me quite strongly! Thanks for sharing.
The intro is really good and stands alone. I'd point any outsider to this as a decent description of hacking, programming, software engineering, prototyping in general.
>As a development life-cycle model, prototyping acknowledges that our job is not to build a system, but to acquire knowledge.
So if there is any hope of making software development faster, we need to focus more on the specification part - to get it right faster.
Calling it specification makes it sound like someone knows what should be developed but didn’t put the time and effort to specify what they wanted.
In my experience, most people don’t know what should be developed. Even users don’t know if you ask them.
As the article outlines: you need to acquire that knowledge. And there are many ways to do that: talking to customers, testing with customers, having smart people who understand and care (!) about what outcomes people want to achieve, and so on.
> In my experience, most people don’t know what should be developed. Even users don’t know if you ask them.
That's also my experience. The most productive use of coding agents I've found so far is rooted in this.
I wanted to build a tool, for my own use, to let me specify a simple 2D geometry using a textual representation that looks like a simple version of the geometry itself. I'm the user and the implementer, so it should be easy, right? I had some ideas for the basics of the textual representation, but the details were far from obvious.
I spent ~10 hours having AI give me suggestions and refinements for the textual representation along with evolving code to parse the representation. It turned out to be a highly productive way to explore the space of trade-offs. This representation looks really good, but it's ambiguous in some cases. That one is not ambiguous, but parsing is tricky. That one is too annoying for the user. It was all about quickly gathering the knowledge and therefore understanding what to build.
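For a rough sense of what "a textual representation that looks like a simple version of the geometry itself" could mean (purely my own guess; the commenter never shows their actual format), here is a toy sketch where the position of a character in the text becomes its 2D coordinate:

```python
# Toy illustration, not the commenter's real format: '*' marks a vertex, and
# its column/row in the text becomes its (x, y) coordinate (y grows downward).

def parse_points(text: str) -> list[tuple[int, int]]:
    """Return an (x, y) pair for every '*' in the ASCII sketch."""
    points = []
    for y, line in enumerate(text.splitlines()):
        for x, ch in enumerate(line):
            if ch == "*":
                points.append((x, y))
    return points


sketch = """
*----*
|    |
*----*
"""
print(parse_points(sketch))  # [(0, 1), (5, 1), (0, 3), (5, 3)]
```

Even a toy like this surfaces the trade-offs described above: overlapping symbols are ambiguous, anything off-grid is awkward to express, and the parser gets trickier the more the text is allowed to look like the drawing.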
When I finally settled on a representation that had a good set of trade-offs, the code was pretty ugly. It worked, but it was about 1,200 lines of code, with weak tests. I tried to have the AI refactor it, and even restart from the final representation choices. Its best version was 1,000 lines of code that were difficult to understand.
I was getting on a plane with no internet the next day. So, armed with all the gathered knowledge, I decided to see what I could do with the code on the plane. It was too complicated to refactor as I went. So I rewrote it from scratch, employing the knowledge I'd built up. In 2-3 hours, I had a complete implementation that was simple to understand and less than 300 lines of code. ~40% of those lines of code were tests that were quite robust.
That amount of iteration in the knowledge gathering step would have taken me closer to a couple of weeks without the AI. And, by employing Fred Brooks' "build one to throw away" (a concept that I think is largely about the same thing as the article), I had a solid implementation without much more work.
I'm sure this workflow is not for everyone. But it (accidentally) leaned into the topic of the article, and I think that's exactly why it worked so well for me.
No, that workflow isn't for everyone and everything.
But something like that is exactly what you need to do.
Somehow experience or test reality, and with that feedback go back and build.
> As the article outlines: you need to acquire that knowledge. And they’re many ways to do that.
Domain-driven design is all about this.
This would be such a useful model for personal development, life, and more! Incredible take on this.
Not so much, actually. The better-than-default "process" for their 3rd level is to interview the customers, users, or domain experts, which is something you should do already in a sane software development process. Transposed and generalized to everyday life, this just means talk to people, ask questions and listen. This is generally called being "open-minded".
... And here are the first three orders mentioned in a famously quoted press conference from 2002:
https://www.youtube.com/watch?v=REWeBzGuzCc
Funny how something written in 2000 can sound so modern.
Ok so he’s got KK, KDK, and DKDK, but he’s missing DKK.
> the real job is not writing the code, or even building the system — it is acquiring the necessary knowledge to build the system.
Not only very true, but the grammar will trigger those who insist on forcing the "that's written by AI" meme. I love it.
This is one reason why artificial general intelligence is impossible: most of the knowledge it would need does not already exist in text form.