Similar to bragging about LOC, I have noticed in my own field of computational fluid dynamics that some vibe coders brag about how large or rigorous their test suites are. The problem is that whenever I look more closely at the tests, they are unremarkable and less rigorous than my own manually created tests. There are often big gaps in vibe-coded tests. I don't care if you have 1 million tests. 1 million easy tests, or 1 million tests that don't cover the right parts of the code, aren't worth much.
> Generally, though, most of us need to think about using more abstraction rather than less.
Maybe this was true when Programming Perl was written, but I see the opposite much more often now. I'm a big fan of WET - Write Everything Twice (stolen from comments here), then the third time think about maybe creating a new abstraction.
Writing twice makes sense if time permits, or the opportunity presents itself. The first time may be somewhat exploratory (maybe a throw-away prototype); the second time you better understand the problem and can do a better job.
A third time, with a new abstraction, is where you need to be careful. Fred Brooks ("The Mythical Man-Month") refers to it as the "second-system effect", where the confidence of having done something once (for real, not just a prototype) may lead to an over-engineered and unnecessarily complex "version 2", as you are tempted to "make it better" by adding layers of abstraction and bells and whistles.
I agree with what you're saying about writing something twice or even three times to really understand it but I think you might have misunderstood the WET idea: as I understand it, it's meant in opposition to DRY, in the sense of "allow a second copy of the same code", and then when you need a third copy, start to consider introducing an abstraction, rather than religiously avoiding repeated code.
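The rule-of-three flavor of WET described above can be sketched in a few lines. This is a contrived example with invented names (none of it comes from the thread): tolerate the second near-copy, and only extract the abstraction once a third use reveals what actually varies.

```python
from datetime import date

# First use: written inline. Second use: copied and tweaked -- fine under WET.
def report_header(title):
    return f"{title} -- {date(2024, 1, 15).isoformat()}"

def invoice_header(number):
    return f"Invoice #{number} -- {date(2024, 1, 15).isoformat()}"

# The third use is the signal: by now we can see what actually varies (the
# label), so the abstraction is extracted from evidence rather than guessed
# up front.
def header(label, on=date(2024, 1, 15)):
    return f"{label} -- {on.isoformat()}"
```

The point of waiting is that a premature `header()` would have been designed around one example and guessed wrong about the varying part.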
Totally agree with this, the beauty of software is the right abstractions have untold impact, spanning many orders of magnitude. I'm talking about the major innovations, things like operating systems, RDBMS, cloud orchestration. But the majority of code in the world is not like that, it's just simple business logic that represents ideas and processes run by humans for human purposes which resist abstraction.
That doesn't stop people from trying, though; platform creation is rife within big tech companies as a technical form of empire building and career-driven development. My rule of thumb in tech reviews is that you can't have a platform until you have three proven use cases and have shown that coupling them together is not a net negative, given the autonomy constraint a shared system imposes.
Laziness makes you understand the problem before writing anything. An LLM will happily generate 500 lines for something that needed 20 because it never has to maintain any of it.
German General Kurt von Hammerstein-Equord (a high-ranking army officer in the Reichswehr/Wehrmacht era):
“I divide my officers into four groups. There are clever, diligent, stupid, and lazy officers. Usually two characteristics are combined.
Some are clever and diligent — their place is the General Staff.
The next lot are stupid and lazy — they make up 90% of every army and are suited to routine duties.
Anyone who is both clever and lazy is qualified for the highest leadership posts, because he possesses the intellectual clarity and the composure necessary for difficult decisions.
One must beware of anyone who is both stupid and diligent — he must not be entrusted with any responsibility because he will always cause only mischief.”
I think we put too much negative emphasis on people who aren’t as gifted intellectually.
In reality, the world works because of human automatons: honest people doing honest work, living their lives in a hopefully comforting, complete and wholesome way, quietly contributing their piece to society.
There is no shame in this, yet we act as though there is.
This is what pains me with how many people respond negatively toward the idea of everyone being able to earn an honest living and raise a family. Too often the idea of "deserving it" comes into it as if doing your small part to contribute to society is not enough.
I'm not blaming you here, but I think "automatons" may be inaccurate. A lot of the jobs that seem menial would be utterly bollixed if done by an automaton. The people continually handle the edge cases and tiny discrepancies between formal procedures and how things actually work. Consider the many stories of people experiencing AI bots when they try to get vendor support for products. "Please let me talk to a real person."
Many of those people, probably including most bureaucrats, are working on systems that have already been automated to the fullest extent possible. This is one of the reasons why bureaucracies seem chaotic and inefficient -- the stuff that works is happening automatically and is invisible. You only see the exceptions.
The automation can be improved, but it's a laborious process and fraught with the risks associated with the software crisis. You never know when a project is going to fall into the abyss and never emerge, and the best models of project failure are stochastic.
Human automatons? Why would you have mercy for automatons?
Just call them cattle, we might feel more compassion towards them if we don't think of them as machinelike.
I don’t know why you’re being downvoted. Using that sort of terminology already shows you don’t care about them more than the sort of energy someone has saying they would never consider keying _their_ car.
People don’t need to be exceptional to have intrinsic value.
I’m here man. Just want to make money and support my family. Couldn’t care less what some German general thinks about me. Even less care about online clowns trying to put people in buckets.
As dumb as it is to loudly proclaim you wrote 200k loc last week with an LLM, I don’t think it’s much better to look at the code someone else wrote with an LLM and go “hah! Look at how stupid it is!” You’re making exactly the same error as the other guy, just in the opposite direction: you’re judging the profession of software engineering based on code output rather than value generation.
Now, did Garry Tan actually produce anything of value that week? I dunno, you’ll have to ask him.
I also struggle with this all the time: the balance between bringing value/joy and the level of craft. Most human-written stuff might look really ugly or be written in a weird way, but as long as it's useful, it's OK.
What I don't like here is the bragging about the LoC. He's not bragging about the value it could provide. Yes, people also write shitty code, but they don't brag about it; most of the time they are even ashamed.
The Horizon IT scandal was not caused by poor code quality, the scandal was the corrupt employees of the UK government/Post Office. Poor quality code might have caused the error, but the failure to investigate the errors and sweep them under the rug was made by humans.
> Poor quality code might have caused the error, but the failure to investigate the errors and sweep them under the rug was made by humans.
That's not quite correct.
The root set of errors was made by the accounting software. The branch sets of errors were made by humans taking Horizon IT's word for it that there was no fault in the code, and instead blaming the workers for the differences in the balance sheets.
If there were no errors in the accounting software (i.e. it had been properly designed and tested), then none of that would have happened.
> Now, did Garry Tan actually produce anything of value that week? I dunno, you’ll have to ask him.
Let’s not be naive. Garry is not a nobody. He absolutely doesn’t care about how many lines of code are produced or deleted. He made that post as advertisement: he’s advertising AI because he’s the CEO of YC, whose profitability depends on AI.
At the extreme end you'll get invited to conferences but further down you could have other products you are pushing. Even non-AI related that takes advantage of your "smart person" public persona.
> You’re making exactly the same error as the other guy, just in the opposite direction: you’re judging the profession of software engineering based on code output rather than value generation.
But the true metric isn't either one, it's value created net of costs. And those costs include the cost to create the software, the cost to understand and maintain it, the cost of securing it and deploying it and running it, and consequential costs, such as the cost of exploited security holes and the cost of unexpected legal liabilities, say from accidental copyright or patent infringement or from accidental violation of laws such as the Digital Markets Act and Digital Services Act. The use of AI dramatically decreases some of these costs and dramatically increases other costs (in expectation). But the AI hypesters only shine the spotlight on the decreased costs.
It isn't worth the time. I am not going to read the 200k LOC to prove it was a bad idea to generate that much code in a short time and ship it to production; it is on the vibe coder to prove it wasn't. And if it is just tweets being exchanged, and I want to judge someone who is boasting about LOC and aiming for more LOC/second, then yep, I'll judge 'em. It is stupid.
"Value generation" is a term I would be somewhat wary of.
To me, in this context, it's similar to driving economic growth with fossil fuels.
Whether in the end it results in a net benefit (the value is larger than the cost of interacting with it plus the cost of sorting out the mess later) is likely impossible to say, but I don't think it can simply be judged by short-sighted value.
Given the framing of the article, I can understand where the "opposite direction" comment is coming from. The author also gives mixed signals, by simultaneously suggesting that the "laziness" of the programmer and of the code are virtues. Yet I don't think they are ignoring value generation. Rather, I think they are suggesting that the value is in the quality of the code instead of the problem being solved. This seems to be an attitude held by many developers who are interested in the pursuit of programming rather than the end product.
LLMs not being lazy enough definitely feels true. But it's unclear to me if it's a permanent issue, one that will be fixed in the next model upgrade, or just one that your agent framework/CI/CD setup takes care of.
E.g., right now when using agents, after I'm "done" with the feature and I commit, I usually prompt "Check for any bugs or refactorings we should do." I could see a CI/CD step that says "Look at the last N commits and check if the code in them could be simplified or refactored to have a better abstraction."
Agreed. When I look at what it proposes, about half the time I don't make the changes. If this were fully automated, you would need an addendum like "Only make the change if it saves over 100 lines of code or removes 3 duplicate pieces of logic".
There are other scenarios you would want to check for but you get the idea.
I've had this exact sentiment in the past couple of months after seeing a few PRs that were definitely the wrong solution to a problem. One was implementing its own parsing functions for a problem where well-established solutions (like JSON) almost certainly existed. I think any non-LLM programmer could have thought this up, but would then immediately decide to look elsewhere; their human emotions would have kicked in and said "that's way too much (likely redundant) work, there must be a better way". But the LLM has no emotions; it isn't lazy, and that can be a problem, because it makes it a lot easier to do the wrong thing.
It also doesn't bother checking what's already in your project. Grep around a bit and you'll find three `formatTimestamp` functions all doing almost the same thing.
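The duplication pattern looks something like this. A contrived Python sketch (the function names echo the comment above; all three bodies are invented for illustration):

```python
import datetime

# Three near-duplicate helpers of the kind an agent leaves behind when it
# never greps the project for prior art before writing a "new" one:
def formatTimestamp(ts):
    return datetime.datetime.fromtimestamp(ts, datetime.timezone.utc).strftime("%Y-%m-%d %H:%M")

def formatTimestampUtc(ts):
    dt = datetime.datetime.fromtimestamp(ts, tz=datetime.timezone.utc)
    return dt.strftime("%Y-%m-%d %H:%M")

def format_ts_for_display(ts):
    return datetime.datetime.fromtimestamp(ts, datetime.timezone.utc).strftime("%Y-%m-%d %H:%M")

# All three produce identical output -- which is exactly why two of them
# never needed to exist.
```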
I have noticed LLMs have a propensity to create full single page web applications instead of simpler programs that just print results to the terminal.
I've also struggled with getting LLMs to keep spec.md files succinct. They seem incapable of simplifying documents while doing another task (e.g. "update this doc with xyz and simplify the surrounding content") and really need to be specifically tasked with simplifying/summarizing. If you want something human readable, you probably just need to write it yourself. Editing LLM output is so painful, and it also helps to keep yourself in the loop if you actually write and understand something.
I'm so happy about this article. Over the last couple of days I was forming a thought in my head: how to describe what it is that makes AI code practically unusable in good systems.
And one of the reasons is the one described in this article; the other is that you skip training your mental model when you don't grind through these laziness patterns. If you are not in the code, grinding away at your codebase, you don't see the fundamental issues that block the next level, nor do you have the itch to name and abstract them properly so you won't have to worry about them in the future, when somebody (or you) has to extend the code.
Knowing your shit is so powerful.
I believe now that my competitive advantage is grinding code, whilst others are accumulating slop.
Great article, I've been saying something similar (much less eloquently) at work for months and will reference this one next time it comes up.
Quite often I see inexperienced engineers trying to ship the dumbest stuff. Back before LLMs, these would be projects that would take them days or weeks to research, write, and test, and somewhere along the way they could come to the realization "hold on, this is dumb or not worth doing". Now they just send a 10k-line PR before lunch and pat themselves on the back.
I very much agree; I think laziness / friction is basically a critically important regularizer for what to build and for what to not build. LLMs remove that friction and it requires more discipline now. (Wrote some of this up a while ago here: https://matthiasplappert.com/blog/2026/laziness-in-the-age-o...)
The more people boast about AI while delivering absolute garbage like in the example here, the more I feel happier toiling around in Nginx configurations and sysadmin busy work. Why worry about AI when it's the same old idiots using it as a crutch, like any new fad.
Since we all, stupidly, are leaning into LoC as a metric (because we can't handle subjectivity), at the very least we could just use orders of magnitude. Was it a 10/100/1,000/10,000-LoC hour/day/week/month? A 1, 2, 3, 4 or 5. DTrace's ~60k LoC would then be a 5, the Linux kernel an 8 (40M), Firefox also an 8, and Notepad++ a 6.
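The scale being proposed is just the rounded base-10 logarithm of the line count. A minimal sketch (the function name is mine, not from the comment):

```python
import math

def loc_rating(loc):
    """Order-of-magnitude LoC score: round(log10(loc)).

    10 LoC -> 1, 100 -> 2, ..., 10,000 -> 4.
    """
    return round(math.log10(loc))

# Matches the examples given: DTrace at ~60k LoC rates a 5,
# and the Linux kernel at ~40M rates an 8.
```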
Abstractions and a strong basis give us the freedom to think freely at high levels.
The slop is drowning us and impinging on our ability to do good hammock-driven development.
Love it. Thanks Bryan.
It's invaluable framing, and well stated. There's a pretty steady background drum-beat of "do we still need frameworks/libraries?" that shows up now, and how to talk to that is always hard. https://news.ycombinator.com/item?id=47711760
To me, the separation of concerns and a strong conceptual basis to work from seem like such valuable clarity. But these are also anchor points that can limit us, and I hope we see faster, stronger panning for good reusable architectures and platforms to hang our apps and systems upon. I hope we try a little harder than we have been, and that there's more experimentation, because it sure felt like the bandwagon effect was keeping us in a couple of local areas. I do think islands of stability to work from make all the sense, and are almost always better than the drift/accumulation of letting a big ball of mud architecture accrue.
Interesting times ahead. Amid so much illegible, miring slop, hopefully there will be some complementary new finding-out too.
Oh, this hits all the right notes for me! I am just the demographic that tried to Perl my way into the earliest web server builds, and read those exact words carefully while looking at the very mixed-quality, cryptic ASCII line noise that is everyday Perl. And as someone who had already built multi-thousand-line C++ systems, the "virtues" by Larry Wall seemed spot on! And now to combine that hindsight with the current snotty Lord Fauntleroy LLM action coming from San Francisco... perfect!
Disregarding the fact that Bryan runs Oxide, a company with multiple investors and customers (I'd say this proves valuable knowledge), the crazier fact is that people think HTML is useless knowledge.
React USES HTML. Understanding HTML is core to understanding React. React does not in any way devalue HTML, any more than driving an automatic devalues knowing how to drive a manual.
Go to Facebook.com, right-click, View Source, and tell me HTML is not being devalued. No person who wants to write aesthetic HTML would write that stuff.
When it matters, it matters. Even in Facebook's case, they made React fit their use case. Do you think the React devs didn't understand HTML? Do you think quality frontends can be written without any understanding of HTML?
Like the article says, we've moved an abstraction up. That does not make HTML knowledge useless.
What he prides himself in (in this context) is craft, which LLM use probably can enable, but definitely isn't commoditized by the kind of vibe coding that Garry Tan is doing.
It's a struggle to get LLMs to generate tests that aren't entirely stupid.
Like grepping source code for a string, or `assert(1==1, true)`.
You have to have a curated list of every kind of test not to write, or you get hundreds of pointless-at-best tests.
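The gap between a vacuous test and one that constrains behavior is easy to show. A small Python sketch (`parse_rate` and both tests are invented for illustration):

```python
def parse_rate(s):
    """Parse a percentage string like '75%' into a float in [0, 1]."""
    value = float(s.rstrip("%")) / 100.0
    if not 0.0 <= value <= 1.0:
        raise ValueError(f"rate out of range: {s}")
    return value

def test_vacuous():
    # The kind of test the parent comment is complaining about:
    # it always passes and constrains nothing.
    assert 1 == 1

def test_meaningful():
    # Pins down the actual contract: happy path, boundary, and the
    # error case a sloppy implementation would silently accept.
    assert parse_rate("75%") == 0.75
    assert parse_rate("0%") == 0.0
    try:
        parse_rate("150%")
    except ValueError:
        pass
    else:
        raise AssertionError("expected ValueError for out-of-range rate")
```

A thousand tests of the first kind tell you nothing; a handful of the second kind actually document and defend behavior.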
Time to teach the LLMs and the vibe coders one of the timeless lessons of software development:
https://www.folklore.org/Negative_2000_Lines_Of_Code.html
>WET - Write Everything Twice
I've always heard this as the "Rule of three": https://en.wikipedia.org/wiki/Rule_of_three_(computer_progra...
That will still result in more abstraction than the average programmer produces.
More than twice is a rather low bar, I don’t think that it conflicts with the quote from Programming Perl.
I've been advocating for writing everything twice since college.
Where my fellow ninety-percenters at?
The movie Perfect Days captures this perfectly.
Hard disagree with the initial assumption: Abstractions do not make a system simpler.
Note: I would have added usually but I really do mean always.
Yeah! It's not like code quality matters in terms of negative value or lives lost, right?!
https://en.wikipedia.org/wiki/Horizon_IT_scandal
Furthermore,
> As for the artifact that Tan was building with such frenetic energy, I was broadly ignoring it. Polish software engineer Gregorein, however, took it apart, and the results are at once predictable, hilarious and instructive: A single load of Tan’s "newsletter-blog-thingy" included multiple test harnesses (!), the Hello World Rails app (?!), a stowaway text editor, and then eight different variants of the same logo — one of which with zero bytes.
Do you think any of the... /things/ bundled in this software increased the surface area that attacks could be leveraged against?
> a stowaway text editor
?!
Was it hiding in one of the lifeboats?
> included multiple test harnesses (!)
I've seen plenty of real code written by real people with multiple test harnesses and multiple mocking libraries.
It's still kinda irrelevant to whether the code does anything useful; it's only a descriptor of the funding model.
Nobody blames THERAC-25 on the human operator.
He’s just shipping ads.
"Follow the money" was always relevant, but especially when it comes to any kind of LLM news or investment-du-jour.
The cautionary/pessimist folks at least don't make money by taking the stance.
A few do.
The main value he generated from that exercise was the screenshot. It's a kind of credentialism.
It’s difficult to define a termination criterion for that. When you ask LLMs to find any X, they usually find something they claim qualifies as X.
I agree, it's not a fundamental characteristic but a limitation of how the tool is being used.
If you just tell these things to add, they'll absolutely do that indiscriminately. You end up with these huge piles of slop.
But if I tell an LLM backed harness to reduce LOC and DRY during the review phase, it will do that too.
I think you're more likely to get the huge piles if you delegate a large task and don't review it (either yourself or with an agent).
Man, I cannot imagine how nice it must be to work with leadership like this, who just gets it.
I've had this exact sentiment in the past couple months after seeing a few PRs that were definitely the wrong solution to a problem. One was implementing its own parsing functions for a job where well-established solutions (like JSON libraries) almost certainly existed. Any non-LLM programmer could have thought this up but would then immediately decide to look elsewhere; their human emotions would have kicked in and said "that's way too much (likely redundant) work, there must be a better way". But the LLM has no emotion, it isn't lazy, and that can be a problem, because it makes it a lot easier to do the wrong thing.
It also doesn't bother checking what's already in your project. Grep around a bit and you'll find three `formatTimestamp` functions all doing almost the same thing.
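That "grep around a bit" can be mechanized. A rough sketch of a duplicate-definition finder (illustrative only: the regex covers simple `function foo(` / `def foo(` styles, and the extensions list is an assumption):

```python
import re
from collections import Counter
from pathlib import Path

# Find function definitions that share a name across a source tree --
# the way three near-identical formatTimestamp helpers sneak in.
DEF_RE = re.compile(r"\b(?:function|def)\s+([A-Za-z_]\w*)\s*\(")

def duplicate_definitions(root: str, exts=(".js", ".ts", ".py")) -> dict:
    """Return {function_name: definition_count} for names defined more than once."""
    names = Counter()
    for path in Path(root).rglob("*"):
        if path.suffix in exts:
            names.update(DEF_RE.findall(path.read_text(errors="ignore")))
    return {name: n for name, n in names.items() if n > 1}
```

Running this over a project before accepting an agent's PR is a cheap way to catch the "fourth `formatTimestamp`" problem early.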
I have noticed LLMs have a propensity to create full single page web applications instead of simpler programs that just print results to the terminal.
I've also struggled with getting LLMs to keep spec.md files succinct. They seem incapable of simplifying documents while doing another task (e.g. "update this doc with xyz and simplify the surrounding content") and really need to be specifically tasked with simplifying/summarizing. If you want something human-readable, you probably just need to write it yourself. Editing LLM output is so painful, and it also helps to keep yourself in the loop if you actually write and understand something.
I'm so happy about this article. I had been forming a thought in my head over the last couple of days: how to describe what it is that makes AI code practically unusable in good systems.
One of the reasons is the one described in this article; the other is that you skip training your mental model when you don't grind through these laziness patterns. If you are not in the code, grinding through your codebase, you don't see the fundamental issues that block the next level, nor do you have the itch to name and abstract them properly so you won't have to worry about them in the future, when you or somebody else has to extend it.
Knowing your shit is so powerful.
I believe now that my competitive advantage is grinding code, whilst others are accumulating slop.
Great article, I've been saying something similar (much less eloquently) at work for months and will reference this one next time it comes up.
Quite often I see inexperienced engineers trying to ship the dumbest stuff. Back before LLMs, these would be projects that would take them days or weeks to research, write, and test, and somewhere along the way they could come to the realization "hold on, this is dumb or not worth doing". Now they just send a 10k-line PR before lunch and pat themselves on the back.
At this point, I almost feel bad that people are piling on Garry Tan. Almost.
I very much agree; I think laziness / friction is basically a critically important regularizer for what to build and for what to not build. LLMs remove that friction and it requires more discipline now. (Wrote some of this up a while ago here: https://matthiasplappert.com/blog/2026/laziness-in-the-age-o...)
The more people boast about AI while delivering absolute garbage like in the example here, the happier I feel toiling around in Nginx configurations and sysadmin busywork. Why worry about AI when it's the same old idiots using it as a crutch, like with any new fad?
Since we all, stupidly, are leaning into LoC as a metric because we can't handle subjectivity, at the very least we could just use orders of magnitude of LoC. Was it a 10/100/1,000/10,000 LoC hour/week/day/month? 1, 2, 3, 4, or 5. DTrace's 60 kLoC would then be a 5, the Linux kernel is an 8 (40M), Firefox is also an 8, and Notepad++ is a 6.
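The proposed scale is just the rounded base-10 logarithm of the line count; a one-liner shows the mapping (the example figures are the ones from the comment above):

```python
from math import log10

def loc_magnitude(loc: int) -> int:
    """Score a codebase by the order of magnitude of its LoC: round(log10)."""
    return round(log10(loc))

assert loc_magnitude(60_000) == 5       # DTrace, per the comment above
assert loc_magnitude(40_000_000) == 8   # Linux kernel, per the comment above
```

Rounding (rather than flooring) is what puts 60k at 5 instead of 4, matching the scale as stated.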
Abstractions and a strong basis give us the freedom to think freely at high levels.
The slop is drowning us and impinging on our ability to do good hammock-driven development.
Love it. Thanks Bryan.
It's invaluable framing, and well stated. There's a pretty steady background dumb-beat of "do we still need frameworks/libraries?" that shows up now, and how to talk to that is always hard. https://news.ycombinator.com/item?id=47711760
To me, the separation of concerns and a strong conceptual basis to work from seem like such valuable clarity. But these are also anchor points that can limit us, and I hope we see faster, stronger panning for good reusable architectures & platforms to hang our apps and systems upon. I hope we try a little harder than we have been, that there's more experimentation, 'cause it sure felt like the bandwagon effect was keeping us in a couple of local optima. I do think islands of stability to work from make all the sense, and are almost always better than the drift/accumulation of letting a big-ball-of-mud architecture accrue.
Interesting times ahead. Amid so much illegible, miring slop, hopefully there's some complementary new finding-out too.
Oh, this hits all the right notes for me! I am just the demographic that tried to Perl my way into the earliest web server builds, and read those exact words carefully while looking at the very mixed quality, cryptic ASCII line noise that is everyday Perl. And as someone who had built multi-thousand-line C++ systems already, the "virtues" per Larry Wall seemed spot on! And now to combine that hindsight with the current snotty LLM Lord Fauntleroy action coming out of San Francisco... perfect!
This is a person clearly grieving that his hard-earned knowledge in his field is now not that valuable.
It is *exactly* the same as a person who spent years perfecting hand-written HTML, only to face the wrath of React.
Disregarding the fact that Bryan runs Oxide, a company with multiple investors and customers (I'd say this proves valuable knowledge), the crazier fact is that people think HTML is useless knowledge.
React USES HTML. Understanding HTML is core to understanding React. React does not in any way devalue HTML, in the same way that driving an automatic does not devalue driving a manual.
Go to Facebook.com, right-click, View Source, and tell me HTML is not being devalued. No person who wants to write aesthetic HTML would write that stuff.
Do the same to Google.com
When it matters, it matters. Even in Facebook's case, they made React fit their use case. You think the React devs didn't understand HTML? Do you think quality frontends can be written without any understanding of HTML?
Like the article says, we've moved up an abstraction. That does not make the HTML knowledge useless.
https://xkcd.com/1053/
I recommend you go look at some of his talks on YouTube; his best five talks are probably all in my all-time top-ten list!
> This is a person clearly grieving that his hard earned knowledge in his field is now not that valuable.
He's co-founder and CTO of his own company, so I think he's doing fine in his field.
It doesn't change the fact that much of what (I think) he prides himself on is getting commoditised.
LLMs have dissolved your brain if you think they commoditize what a guy like this[0] prides himself on.
https://bcantrill.dtrace.org/about/
What he prides himself on (in this context) is craft, which LLM use can probably enable, but which definitely isn't commoditized by the kind of vibe coding that Garry Tan is doing.
Your account name is so fitting
Now look up who he actually is.