From the article: "And yet these tools have opened a world of creative potential in software that was previously closed to me, and they feel personally empowering."
I keep seeing things like this, about "democratization" etc. But coding has been essentially free to learn for about 25 years now, with all the online resources and open source tools anyone can use.
Is this just a huge boon to those too lazy to learn? And what does that mean for later security and tech debt issues?
You, like most Hacker News folks, are likely too rich to imagine the type of person who can't spend hours every day learning a billion different difficult things. Some folks have to work menial jobs, take care of their kids, and fix their leaky roof themselves.
As a software developer with a life after work, I don't have time to learn to be a mechanic either. So I don't claim to be one without having done the work.
Bad assumption. I taught myself to code in the early 2000s while scraping by on just above minimum wage in Vancouver, one of the most expensive cities in the world.
So uh, you would be hard pressed to be more wrong. And I used free resources.
Think about the difference between having to assemble a car yourself (you can find specs and tutorials online, say) before you can drive it, versus asking engineers and mechanics to assemble it and then taking the car they built for a drive.
You can always ask ChatGPT how to disassemble and reassemble your shiny new Ford F-150. /s
> Is this just a huge boon to those too lazy to learn? And what does that mean for later security and tech debt issues?
In the same way that GPS is a boon to people too lazy to learn celestial navigation or read a paper map.
In the same way that word processors are a boon to people too lazy to use a typewriter and white-out.
In the same way that supermarkets are a boon to people too lazy to hunt and gather their own food.
In the same way that synthesizers are a boon to people too lazy to hire a 40-piece orchestra.
In the same way that power drills are a boon to people too lazy to use a hand crank.
Those are all false equivalences. The GP speaks of the "democratization of learning", which had already happened. It's more as if I said "now people can finally vote" when remote voting expanded to civilians. It's not like people couldn't vote before, and in fact the expansion had only a modest impact on turnout.
Then people would ask "is this just a huge boon to those too lazy to vote?", and the answer would be "no actually, voting is still a thing where one must do their own thinking."
If anything, it's a boon to people too lazy to drive, similar to LLMs being a boon for those too lazy to type.
"Too lazy to learn" is a bit harsh, and your statement lacks empathy.
Coding has been free to learn for a long time, and the quality of educational resources has only improved over time. But that doesn't mean it's easy, and it doesn't decrease the time it takes to learn.
I’ll use myself as an example. I’m pretty creative, I have a lot of ideas and interests, but I struggle a lot with the logic and syntax of coding. I find it interesting at the surface level but every time I’ve tried to learn I just can’t get it to click. And, to be frank, I don’t find it very enjoyable.
But at the same time I have random website and app ideas quite frequently. I'll use apps that have terrible UI/UX and imagine ways they could be better, or maybe even design something in Figma if I'm feeling frisky. But actually making an app? Always way too far out of my wheelhouse. Plus I work 40-50 hours a week and prioritize socializing on the weekends, so a lot of those ideas have to be relegated to a list in Obsidian. Does that make me a lazy person? Maybe to you, but I don't think of myself that way.
The tools available now have unlocked something new for me. My ideas can start to come to life because the coding part doesn't hold me back anymore. I've made silly websites with domains I've owned for years. I've made apps that solve annoying issues I've had forever, like a file media viewer app for my iPhone, since file viewing sucks with Files/Preview and every app on the App Store was infested with ads or didn't fit my use case. Just for fun, I made an app that can play against me in MTG by using Continuity Camera from my iPhone to my Mac to read the playing field.
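(For the curious, here's a rough sketch of how the capture side of something like that could work. On macOS, an iPhone connected via Continuity Camera shows up as an ordinary camera device, so a script can read its frames with OpenCV. The device index and the recognize_board stub below are hypothetical placeholders for illustration, not how the actual app works.)

    # Rough sketch only: Continuity Camera exposes the iPhone as a
    # normal macOS camera device, so OpenCV can grab its frames.
    import cv2

    def recognize_board(frame) -> str:
        """Hypothetical stub: card/board detection would go here."""
        return "board state: not implemented"

    cap = cv2.VideoCapture(1)  # iPhone's device index; varies by machine
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imshow("playing field", frame)
        print(recognize_board(frame))  # would feed the game-playing logic
        if cv2.waitKey(1000) != -1:    # any key stops the loop
            break
    cap.release()
    cv2.destroyAllWindows()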
I get where you’re coming from but you probably think every vibe coder is lazy because you’re good at coding. Not everybody has the talent/time/desire to learn how to code. Does that mean we can’t let our ideas come to life?
Honestly, stories like yours are the best part of this whole AI revolution. It's genuinely cool that the technical barrier is no longer killing creative ideas. You've essentially skipped the "coder" stage and jumped straight to orchestrator (Product Owner + QA rolled into one), with AI acting as the diligent junior dev. That is a totally valid model.
The only downside is that sooner or later, you hit the scaling trap. When a project grows from a silly website into a real product with users, not understanding how exactly the AI stitched those code blocks together becomes a critical risk. AI is great at putting up walls and painting facades, but the foundation (architecture and security) still needs human verification, especially once other people's data gets involved.
> The only downside is that sooner or later, you hit the scaling trap.
Which they might be able to overcome faster with the help of AI, again.
Agreed on the point that people will become busier. It's the Jevons Paradox in its purest form: increasing the efficiency with which a resource is used leads not to saving it, but to consuming more of it.
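A toy constant-elasticity model makes the tipping point concrete (the numbers and elasticities below are illustrative assumptions, not from the article or this thread):

    # Toy Jevons model: a gain in coding efficiency makes each feature
    # cheaper, which induces more demand for features. Total hours rise
    # whenever demand elasticity exceeds 1. Numbers are illustrative.

    def total_hours(efficiency_gain: float, elasticity: float,
                    baseline_hours: float = 100.0) -> float:
        features = efficiency_gain ** elasticity   # induced demand
        hours_per_feature = 1.0 / efficiency_gain  # cheaper per feature
        return baseline_hours * features * hours_per_feature

    print(total_hours(5, elasticity=0.5))  # ~44.7h: efficiency saves work
    print(total_hours(5, elasticity=1.5))  # ~223.6h: Jevons kicks in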
We used to skip building internal tools or complex integrations because it was expensive; now it's cheap, so naturally the business wants everything. Future engineers won't be writing code so much as orchestrating the chaos generated by agents. We won't stop working, but the workload is definitely going up.
> 8. People may become busier than ever
This is so true, and the opposite of what was expected.
LLMs provide the benefit of lossy compression of all the text on the Internet.
It’s a crappier reddit in your pocket.
Use it well.
Nice write-up of things that are only obvious if you spend time with AI. Pretty much everything applies to non-agentic AI work as well, code or not, if you are aiming beyond average quality and conventional design, that is. People who give up somewhat early won't give up much later just because they use AI or teach an AI agent.
But the article is also mostly what people outside the field, or only tangentially related to it, already expect. The tooling is here, but that big thing isn't.
I could say I dabbled in woodworking, but really I just used a chainsaw to cut down some trees, made slabs, and then used a drill and screws to construct the cheapest, fastest MVP of a piece of furniture, which I used until the shed burned down. But that's not woodworking, not really.
"AI coding agents" is just an autoiterating chat of/with a large coding model, that you still have to iterate over, which is as obvious as an apprentice in a woodworking shop doing a lot--if not all of the work--alone until the meister points out all the mistakes and lets him do it all over again.
> I was soon spending eight hours a day during my winter vacation shepherding about 15 Claude Code projects at once
If you are a "computer person", spending 8 hours a day on multiple projects is normal, although 15 projects is, IMO, way too freaking much. But I have ADHD and am not really a computer person: while I run dozens of narratives in parallel all the time, I only "shepherd" and iterate over a handful of them, at flexible intervals.
The reason for the burnout might be, and I can relate due to my ADHD, the following:
> Due to what might poetically be called “preconceived notions” baked into a coding model’s neural network (more technically, statistical semantic associations), it can be difficult to get AI agents to create truly novel things, even if you carefully spell out what you want.
The expectation of creating something "truly novel" out of ideas that aren't themselves truly novel is weird enough, but then expecting that an AI coding agent, an apprentice, will make it novel, even though the entire thing basically already exists and the novelty makes no sense conceptually until the core elements are separated,
> a UFO (instead of a circular checker piece) flying over a field of adjacent squares,
is quite analogous to semi-functional ADHD people who believe they will get at least some of their ideas out if they "work" on, or dream about, all of them. It can work, but you have to separate concerns. In the case of ADHD people, that means becoming functional first: don't consume stuff that impedes body and brain, do the things that eliminate bio-physical distractions and keep hormonal and neural morale high most of the time, and only then work. In the case of AI coding agents, it means separating concepts that are programmatically, mathematically, or linguistically intertwined, and only then defining mechanics and features within or beyond the individual or combined constraints.
> The first 90 percent of an AI coding project comes in fast and amazes you. The last 10 percent involves tediously filling in the details through back-and-forth trial-and-error conversation with the agent. Tasks that require deeper insight or understanding than what the agent can provide still require humans to make the connections and guide it in the right direction.
So then why not at this point switch to the human being primary author and only have the AI do reviews and touch ups? Or are we restricting ourselves to vibe coding only?
> The last 10 percent involves tediously filling in the details through back-and-forth trial-and-error conversation with the agent
It so often sounds like "traditional coding" flows like an orchestra during an opera, while vibe and "agentic" coding flows like a bunch of big bands practicing.
Are they trying to tell the story that "it's the same", or that "it's just not the same"? Is the toolchain changing so much that there is no reason to learn the baseline anymore? Should the next ten years of AI development be left to those who already wield the basic tools, just like the economy? Is the narrative meant to establish a singularity-driven relationship with young coders, computer scientists, and those who use code to entertain, inform, and sell via media, while simultaneously pushing the outliers to the edge of the sphere, locking them out, via their lack of AI skills and experience with such tools, from ever reaching a proper chunk of the mob?
On the one hand, it's a personal decision. Trends and narratives can be convincing. Defectors are rare nowadays, while polarization and the status quo are the de facto standard. So on the other hand, it's a depersonalized decision, reinforcing the hierarchies (hardware) that dictate which tools (hardware, the cloud) dominate the mainstream either way.
> Or are we restricting ourselves to vibe coding only?
> why not at this point switch to the human being primary author
It's the only choice. You are either the primary author of the code or of the learning material. In the former case, the latter is implied and you can't teach if you don't know.
In essence, all this "AI hype" should really only motivate. But these perceptions of "the end of stuff as we know it" and "NOW it's definitely not in my/our hands anymore" are everywhere, and they weigh heavy. So the only "residue outcome" really is that making money is the only thing left... again reinforcing the hierarchies (hardware) that dictate which tools (hardware, the cloud) dominate the mainstream either way, whether you break under the weight or not, whether you shrug it off or become versed enough to just carry it along, while establishing a singularity-driven relationship of the system with its constituents.
This is the way.
This is an excellent article. I resonated with all ten of his points. This section at the end particularly made sense.
"Regardless of my own habits, the flow of new software will not slow down. There will soon be a seemingly endless supply of AI-augmented media (games, movies, images, books), and that’s a problem we’ll have to figure out how to deal with. These products won’t all be “AI slop,” either; some will be done very well, and the acceleration in production times due to these new power tools will balloon the quantity beyond anything we’ve seen."
The problem is finding the pearls amongst the slop.
How is that any different than, say, all of human history?
It's not different per se, it's just been made much more difficult. I.e., if you used to look for one pearl in a pile of 200 barnacles, now you have to scan through 3000.