This "Code is cheap. Show me the talk." punchline gets overused as a bait these days. It is an alright article but that's a lot of words to tell us something we already know. There's nothing here that we don't already know. It's not just greedy companies riding the AI wave. Bloggers and influencers are also riding the AI wave. They know if you say anything positive or negative about AI with a catchy title it will trend on HN, Reddit, etc.
Also credit where credit is due. Origin of this punchline:
In January 2026, prototype code is cheap. Shitty production code is cheap. If that's all you need—which is sometimes the case—then go for it.
But actually good code, with a consistent global model for what is going on, still won't come from Opus 4.5 or a Markdown plan. It still comes from a human fighting entropy.
Getting eyes on the code still matters, whether it's plain old AI slop, or fancy new Opus 4.5 "premium slop." Opus is quite smart, and it does its best.
But I've tried seriously using a number of high-profile, vibe-coded projects in the last few weeks. And good grief what unbelievable piles of shit most of them are. I spend 5% of the time using the vibe-coded tool, and 95% of the time trying to uncorrupt my data. I spend plenty of time having Opus try to look at the source to figure out what went wrong in 200,000 lines of vibe-coded Go. And even Opus is like, "This never worked! It's broken! You see, there's a race condition in the daemonization code that causes the daemon to auto-kill itself!"
And at that point, I stop caring. If someone can't be bothered to even read the code Opus generates, I can't be bothered to debug their awful software.
> Ignoring outright bad code, in a world where functional code is so abundant that “good” and “bad” are indistinguishable, ultimately, what makes functional AI code slop or non-slop?
I'm sorry, but this is an indicator for me that the author hasn't had a critical eye for quality in some time. There is massive overlap between "bad" and "functional." More than ever. The barrier to entry to programming got irresponsibly low for a time there, and it's going to get worse. The toolchains are not in a good way. Windows and macOS are degrading in both performance and usability, LLVM still takes 90% of a compiler's CPU time in unoptimized builds, Notepad has AI (and crashes), simple social (mobile) apps are >300 MB downloads when eight years ago they were hovering around a tenth of that, a site like Reddit only works on hardware which is only "cheap" in the top 3 GDP nations in the world... The list goes on. Whatever we're doing, it is not scaling.
One issue is that tooling and internals are currently optimized for individual people's tastes. Heterogeneous environments make the models spikier. As we shift toward building more homogenized systems optimized for agent accessibility, I think we'll see significant improvements.
Elegantly, agents finally give us an objective measure of what "good" code is. It's code that maximizes the likelihood that future agents will be able to successfully solve problems in this codebase. If code is "bad" it makes future problems harder.
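As a purely illustrative sketch of that measure (the agent and task APIs here are hypothetical, not any real library):

    def code_quality_score(codebase, tasks, agent, attempts=5):
        # "Good" code maximizes the odds that a future agent succeeds.
        # Run the agent at each maintenance task several times and report
        # the fraction of runs whose patch passes that task's tests.
        passed = 0
        for task in tasks:
            for _ in range(attempts):
                patch = agent.attempt(codebase, task)             # hypothetical API
                passed += bool(task.tests_pass(codebase, patch))  # hypothetical API
        return passed / (len(tasks) * attempts)

By this yardstick, "bad" code is simply code that drags the score down for every future change.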
I'd think there'll be an initial dip in code quality (compared to human-written code) due to the immaturity of the "AI machinery". But over time, on a mass scale, we are going to see an improvement in the quality of software artifacts.
It is easier to 'discipline' the top 5 AI agents on the planet than to get a million distributed devs ("artisans") to produce high-quality results.
It's like the clothing or manufacturing industry, I think. Artisans were able to produce better individual results than the average industrial machinery, at least initially. But over time, industrial machinery could match the average artisan or even beat the average, while decisively winning on scale, speed, energy efficiency, and so on.
> industry machinery could match the average artisan or even beat the average
Whether it could is distinct from whether it will. I'm sure you've noticed the decline in the quality of clothing. Markets are mercurial and subject to manipulation through hype (fast fashion is just a marketing scheme to generate revenue, but people bought into the lie).
With code, you have a complicating factor, namely, that LLMs are now consuming their own shit. As LLM use increases, the percentage of code that is generated vs. written by people will increase. That risks creating an echo chamber of sorts.
> it is easier to 'discipline' the top 5 AI agents in the planet - rather than try to get a million distributed devs ("artisans") to produce high quality results.
Your take is essentially "let's live in a shoebox; packaging pipelines produce them cheaply en masse, so who needs slowpoke construction engineers and architects anymore?"
> Historically, it would take a reasonably long period of consistent effort and many iterations of refinement for a good developer to produce 10,000 lines of quality code that not only delivered meaningful results, but was easily readable and maintainable. While the number of lines of code is not a measure of code quality—it is often the inverse—a codebase with good quality 10,000 lines of code indicated significant time, effort, focus, patience, expertise, and often, skills like project management that went into it. Human traits.
> Now, LLMs can not only one-shot generate that in seconds,
Evidence please. Ascribing many qualities to LLM code that I haven't (personally) seen at that scale. I think if you want to get an 'easily readable and maintainable' codebase of 10k lines with an LLM you need somebody to review its contributions very closely, and it probably isn't going to be generated with a 1 shot prompt.
Feels like this website is yelling at me with its massive text size. Had to drop down to -50% to get it readable.
Classical indicators of good software are still very relevant and valid!
Building something substantial and material (i.e. not an API wrapper + GUI, or a to-do list) that is undeniably well made, while faster and easier than it used to be, still takes a _lot_ of work. Even though you don't have to write a line of code, it moves so fast that you are now spending 3.5-4 days of your work week reading code, using the project, running benchmarks and experimental test lanes, reviewing specs and plans, drafting specs, defining features and tests.
The level of granularity needed to get earnestly good results is more than most people are used to. It's directly centered at the intersection between spec heavy engineering work and writing requirements for a large, high quality offshore dev team that is endearingly literal in how they interpret instructions. Depending on the work, I've found that I average around one 'task' per 22-35 lines of code.
You'll discover a new sense of profound respect for the better PMs, QA Leads, Eng Directors you have worked with. Months of progress happen each week. You'll know you're doing it right when you ask an Agent to evaluate the work since last week and it assumes it is reviewing the output of a medium sized business and offers to make Jira tickets.
Talk is never cheap. Communicating your thoughts to people without the exact same kind of expertise as you is the most important skill.
This quote is from Torvalds, and I'm quite sure that if he weren't able to write eloquent English no one would know Linux today.
Code is important when it's the best medium to express the essence of your thoughts. Just like a composer cannot express the music in his head with English words.
I don't think Linus is a people person. This is something he talks about himself in the famous TED interview.
I just re-watched the video (currently halfway through) & I feel like you're forgetting that Linux was never intended to grow so much. When asked, Linus says he never had a moment where he went, "oh, this got big."
In fact he talks about when the project was little: how grateful he was when the project had 10, maybe 100 people working on it, and then things only grew over a very long time frame (about 34 years now).
He talks about how he got ideas from other people that he couldn't have thought of himself. When he first created the project he just wanted to show off to the world ("look at what I did"), and he did it both for the end result and for the programming itself. Then a friend introduced him to open source (free software) and he decided to open source it.
My point is it was neither the code nor the talk. Linus is the best person to maintain Linux. Why? Because he has been passionate about it for over three decades. I feel like he would talk about the code and any improvements today with the same vigour as 34 years ago. He loves his creation & we love Linux too :)
Another small point I wish to add: if talk were the only thing, you'd be missing that Linux was created because Hurd was getting delayed (all talk, no code).
Linus himself says that if the Hurd kernel had been released earlier, Linux wouldn't have been created.
So the all-talk-no-code Hurd project (which, from what I hear, is still in limbo while everyone [rightfully?] uses Linux) is what led to the creation of Linux.
Everyone who hasn't watched Linus's TED interview should definitely watch it.
>> Remember the old adage, “programming is 90% thinking and 10% typing”? It is now, for real.
> Proceeds to write literal books of markdown to get something meaningful
>> It requires no special training, no new language or framework to learn, and has practically no entry barriers—just good old critical thinking and foundational human skills, and competence to run the machinery.
> Wrote a paragraph about how it is important to have serious experience to understand the generated code prior to that
>> For the first time ever, good talk is exponentially more valuable than good code. The ramifications of this are significant and disruptive. This time, it is different.
> This time is different bro I swear, just one more model, just one more scale-up, just one more trillion parameters, bro we’re basically at AGI
AI was never the problem. We have been having a downgrade in software in general; AI just amplifies how badly you can build software. The real problem is people who just don't care about the craft pushing out human slop, whether because the business goes "we can come back to that, don't worry" or what have you. At least with AI, my coming back to something happens right here and right now, not never, or when it causes a production-grade issue.
> The real concern is for generations of learners who are being robbed of the opportunity to acquire the expertise to objectively discern what is slop and what is not.
How do new developers build the skills that seniors earned over time? I see my seniors having higher success in vibe-coding than me. How can I short-circuit, for myself, the time they put in?
I for one am quite happy to outsource this kind of simple memorisation to a machine. Maybe it's the thin end of the slippery slope? It doesn't FEEL like it is but...
>The cost of just trying things out is so exponentially high that the significant majority of ideas are simply never tried out.
But dumping code into files was never the hard part then either. It's understanding the subtleties of the implementation and the problem domain as you work with it and build experience with it.
>codebase with good quality 10,000 lines of code indicated significant time, effort, focus, patience, expertise, and often, skills like project management that went into it. Human traits.
>Now, LLMs can not only one-shot generate that in seconds
It will also one-shot hundreds more without the "good quality" qualifier of the human-made example. In either case you're slowly refining an idea of what an implementation might be into one that is appropriate.
Okay, I was writing a comment to Simon (I have elaborated some there), but I wanted this to be something catchy that shows how I feel and that people might discuss:
Both code and talk are cheap. Show me the trust. Show me how I can trust you. Show me your authenticity. Show me your passion.
Code used to be the sign of authenticity. This is what's changing. You can no longer guarantee that large amounts of code are authentic, which previously used to be the case (for the most part).
I have been shouting into the void about this many times, but trust seems to be the most important factor.
I am speaking from a consumer perspective, but suppose you write AI-generated code and deploy it, having talked to an AI to get there. Now I can do the same too and create a project that is sometimes (mostly?) more customizable to my needs, for free or very cheap.
So you have to justify why you are charging me. I feel that's only possible if there is something additional added to the value: trust. I trust the decisions you make, and personally I trust people and decisions that feel like they take me and my ideas into account. Essentially, not ripping me off while actively helping. The thing I hate most is the feeling of getting ripped off. A justifiable, sustainable business that is open and transparent about the whole deal, about what they get and what I get, just earns my respect and my trust. Quite frankly, I am not seeing many people do that, but hopefully this changes.
I am curious what you folks on HN think about this & what trust means to you in this (new?) ever-changing world.
Like, y'know, I feel like everything changes all the time, but at the same time nothing really changes. We are still humans & we will always be driven by our human instincts. Perhaps the community I envision is a more tight-knit community online, not complete mega-sellers.
I hate this trend of using adjectives to describe systems.
Fast
Secure
Sandboxed
Minimal
Reliable
Robust
Production grade
AI ready
Lets you _____
Enables you to _____
But I somewhat agree: code is essentially free, you can shit out infinite amounts of it. Unless it's good; then show the code instead.
If your code is shit, show the program.
If your program is shit, your code is worse, but you're still pursuing an interesting idea (in your eyes), so show the prompt instead of the generated slop. Or even better, communicate an elaborated version of the prompt.
> One can no longer know whether such a repository was “vibe” coded…
I think that's always been true. The ideas and reasoning process matter. So does the end product. If you produced it with an LLM and it sucks, it still sucks.
I think if your job is to assemble a segment of a car based on a spec using provided tools and pre-trained processes, it makes sense if you worry that giant robot arms might be installed to replace you.
But if your job is to assemble a car in order to explore what modifications to make to the design, experiment with a single prototype, and determine how to program those robot arms, you’re probably not thinking about the risk of being automated.
I know a lot of counter arguments are a form of, “but AI is automating that second class of job!” But I just really haven’t seen that at all. What I have seen is a misclassification of the former as the latter.
A software engineer with an LLM is still infinitely more powerful than a commoner with an LLM. The engineer can debug, guide, change approaches, and give very specific instructions if they know what needs to be done.
The commoner can only hammer the prompt repeatedly with "this doesn't work can you fix it".
So yes, our jobs are changing rapidly, but this doesn't strike me as being obsolete any time soon.
I listened to a segment on the radio where a college teacher told their class that it was okay to use AI to assist you during a test, provided you:
1. Declare in advance that AI is being used.
2. Provide, verbatim, the question-and-answer session.
3. Explain why the answer given by the AI is a good answer.
Part of the grade will include grading steps 1, 2, and 3.
Fair enough.
This is actually a great way to foster the learning spirit in the age of AI. Even if the student uses AI to arrive at an answer, they will still need to, at the very least, ask the AI for an explanation that teaches them how it arrived at the solution.
I think it's a bit like the Dunning-Kruger effect. You need to know what you're even asking for and how to ask for it. And you need to know how to evaluate if you've got it.
This actually reminds me so strongly of the Pakleds from Star Trek TNG. They knew they wanted to be strong and fast, but the best they could do is say, "make us strong." They had no ability to evaluate that their AI (sorry, Geordi) was giving them something that looked strong, but simply wasn't.
Agree totally.
You are describing traditional (deterministic?) automation, before AI. With AI systems as general as today's SOTA LLMs, they'll happily take on the job regardless of whether the task falls into class I or class II.
Ask a robot arm "how should we improve our car design this year", it'll certainly get stuck. Ask an AI, it'll give you a real opinion that's at least on par with a human's opinion. If a company builds enough tooling to complete the "AI comes up with idea -> AI designs prototype -> AI robot physically builds the car -> AI robot test drives the car -> AI evaluates all prototypes and confirms next year's design" feedback loop, then theoretically this definitely can work.
This is why AI is seen as such a big deal - it's fundamentally different from all previous technologies. To an AI, there is no line that would distinguish class I from II.
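Purely as an illustration of that loop, a sketch where every function is a made-up stub (the scoring is fake and exists only so the closed loop runs end to end):

    import random

    def propose_idea():          return {"drag_coeff": random.uniform(0.25, 0.35)}
    def design_prototype(idea):  return {"design": idea}
    def build_and_test(proto):   return 1.0 - proto["design"]["drag_coeff"]

    def yearly_design_loop(n_prototypes=5):
        prototypes = [design_prototype(propose_idea()) for _ in range(n_prototypes)]
        scores = [build_and_test(p) for p in prototypes]
        # "AI evaluates all prototypes and confirms next year's design"
        return max(zip(scores, prototypes), key=lambda t: t[0])[1]

    print(yearly_design_loop())

Nothing in the loop's shape cares whether a human or an AI fills each step; that is the whole point.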
This is actually a really good description of the situation. But I will say, as someone who prided myself on being the second type you described, I am becoming very concerned about how much of my work was misclassified. It does feel like a lot of the work I did in the second class is being automated; maybe it previously just overinflated my ego.
SWE is more like Formula 1, where each race presents a unique combination of track, car, driver, and conditions. You may have tools to build the thing, but designing the thing is the main issue. The code editor, linter, test runner, and build tools are for building the thing. Understanding the requirements and the technical challenges is designing the thing.
The other day I said something along the lines of "be interested in the class, not the instance", meaning to articulate a sense of metaprogramming and meta-analysis of a problem.
Y is causing Z, and we should fix that. But if we stop and study the problem, we might discover that X causes the whole class of Y problems, so we can fix the entire class, not just the instance. And perhaps W causes the class of X issues. I find my job is more and more about how far up this causality tree I can reason, how confident I am in my findings, and how far up it makes business sense to address right now, later, or ever.
Is it? As an F1 fan, I really fail to see the metaphor. The cars do not change that much; only the setup does, based on track and conditions. The drivers are fairly consistent through the season. Once a car is built and a pecking order is established in the season, it is pretty unrealistic to expect a team with a slower car to outcompete a team with a faster car, no matter what track it is (since the conditions affect everyone equally).
Over the last 16 years, Red Bull has won 8 times, Mercedes 7 times and McLaren once. Which means that, regardless of the change in tracks and conditions, the winners are usually the same.
So either every other team sucks at "understanding the requirements and the technical challenges" on a clinical basis or the metaphor doesn't make a lot of sense.
I wonder how true this was historically. I imagine race car driving had periods of rapid, exciting innovation. But I can see how a lot of it has probably reached levels of optimization where the rules, safety, and technology change well within the realm of diminishing returns. I'm sure there's still a ridiculous amount of R&D though? (I don't really know race car driving.)
Most projects don’t change that much either. Head over to a big open source project, and more often than not you will only see tweaks. Being able to do the tweaks requires a very good understanding of the whole project (Naur’s theory of programming).
Also, in software we can do big refactors. F1 teams are restricted to the version they’ve put in the first race. But we do have a lot of projects that were designed well enough that they’ve never changed the initial version, just built on top of it.
My job is to make people who have money think I'm indispensable to achieving their goals. There's a good chance AI can fake this well enough to replace me. Faking it would be good enough in an economy with low levels of competition; everyone can judge for themselves if this is our economy or not.
> I know a lot of counter arguments are a form of, “but AI is automating that second class of job!”
Uh, it's not the issue. The issue is that there isn't that much demand for the second class of job. At least not yet. The first class of job is what feeds billions of families.
Yeah, I'm aware of the lump of labour fallacy.
Discussing what we should do about the automation of labour is nothing new and is certainly a pretty big deal here. But I think you're reframing/redirecting the intended topic of conversation by suggesting that "X isn't the issue, Y is."
It wanders off the path like if I responded with, "that's also not the issue. The issue is that people need jobs to eat."
It depends a lot on the type of industry I would think.
I keep wondering how much of the AI embrace is marketing-driven. Yes, it can produce value and cut corners. But it reminds me of Musk's 2016 prediction of imminent self-driving, which never happened. With IPO and stock valuations closely tied to hype, I wonder if we are all witnessing a giant bubble in the making.
How much of this is mass financial engineering rather than real value? I'm reading a lot of nudges about how everyone should have Google or other AI stock in their portfolio/retirement accounts.
What you are saying may have made sense at the start of 2025, when people were still using GitHub Copilot tab autocompletes (at least I did) and were just toying with things like Cursor, unsure.
Things have changed drastically now; engineers with these tools (like Claude Code) have become unstoppable.
At least for me, I have been able to contribute to codebases I was unfamiliar with, even in different tech stacks. No, I am not talking about generating AI slop; I have been enabled to write principal-engineer-level code, unlike before.
So I don't agree with the above statement. It's actually generating real value, and I have become more valuable because of the tools available to me.
Maybe we haven't seen much economic value or productivity increase given all the AI hype. But I don't think we can deny that programming has been through a paradigm shift where humans aren't the only ones writing code, and the amount of code written by humans is, I would say, decreasing.
No need to wonder, just look at the numbers - investments versus revenue are hugely disparate, growth is plateauing.
There's nothing to wonder about. It's obviously marketing.
The whole narrative of "inevitability" is the stock behavior of tech companies who want to push a product onto the public. Why fight the inevitable? All you can do is accept and adapt.
And given how many companies ask vendors whether their product "has AI" without having the slightest inkling of what that even means or whether it even makes sense, as if it were some kind of magical fairy dust - yeah, the stench of hype is thick enough you could cut it with a knife.
Of course, that doesn't mean it lacks all utility.
I realize many are disappointed (especially by technical churn and star-based-development JS projects on GitHub without technical rigour). I don't trust any claim on the open web if I don't know the technical background of the person making it.
However, I think Nadh, Ronacher, the Redis bro: these are people who can be trusted. I find Nadh's article (OP) quite balanced.
When you mention Redis bro, I think you are talking about Antirez correct?
yeah, forgot his name.
The original phrase "talk is cheap" is generally used to mean "it's easy to say a whole lot of shit and that talk often has no real value." So this cleaver headline is telling me the code has even less value than the talk. That alone betrays a level of ignorance I would expect from the author's work. I go to read the article and it confirmed my suspicion.
I think you are hyper-focusing on the headline, which is just a joke. The underlying article does not indicate to me that the author is ignorant of code, and if you care to look, they seem to have a substantial body of public open source contributions that proves this quite conclusively.
The underlying point is just that while it was very cognitively expensive to back up a good design with good code back in 2000, it's much cheaper now. And therefore, making sure the design is good is the more important part. That's it really.
And… the design (artistry) aspect is always the toughest part. So explain to me: where do the returns come from, if seemingly only those who are very well informed about their domains, or who possess general intelligence, can benefit from this tool?
Personally, I don’t see it happening. This is the bitter reality the LLM producers have to face at some point.
Did you get very far in? They're referring to a pretty specific contextual usage of the phrase (Linus, back in 2000), not the adage as a whole.
I think I made it to about here haha
> One can no longer know whether such a repository was “vibe” coded by a non-technical person who has never written a single line of code, or an experienced developer, who may or may not have used LLM assistance.
I am talking about what it means to invert that phrase.
I read the whole thing, and GP is right. Code is important, whether it is generated or handwritten. At least until true AGI is here.
It's directly an inversion of https://www.goodreads.com/quotes/437173-talk-is-cheap-show-m...
Yes, the original phrase has a specific meaning. But in another context, "talk" is more important than the code.
In software development, code is in a real sense less important than the understanding and models that developers carry around in their heads. The code is, to use an unflattering metaphor, a kind of excrement of the process. It means nothing without a human interpreter, even if it has operational value. The model is never part of the implementation, because software apart from human observers is a purely syntactic construct, at best (even there, I would argue it isn't even that, as syntax belongs to the mind/language).
This has consequences for LLM use.
I see a lot of the same (well thought out) pushback on here whenever these kinds of blind hype articles pop up.
But my biggest objection to this "engineering is over" take is one that I don't see much. Maybe this is just my Big Tech glasses, but I feel like for a large, mature product, if you break down the time and effort required to bring a change to production, the actual writing of code is like... ten, maybe twenty percent of it?
Sure, you can bring "agents" to bear on other parts of the process to some degree or another. But their value to the design and specification process, or to live experiment, analysis, and iteration, is just dramatically less than in the coding process (which is already overstated). And that's without even getting into communication and coordination across the company, which is typically the real limiting factor, and in which heavy LLM usage almost exclusively makes things worse.
Takes like this seem to just have a completely different understanding of what "software development" even means than I do, and I'm not sure how to reconcile it.
To be clear, I think these tools absolutely have a place, and I use them where appropriate and often get value out of them. They're part of the field for good, no question. But this take that it's a replacement for engineering, rather than an engineering power tool, consistently feels like it's coming from a perspective that has never worked on supporting a real product with real users.
> Takes like this seem to just have a completely different understanding of what "software development" even means than I do, and I'm not sure how to reconcile it.
You're absolutely right about coding being less than 20% of the overall effort. In my experience, 10% is closer to the median. This will get reconciled as companies apply LLMs and track the ROI. Over a single year the argument can be made that "We're still learning how to leverage it." Over multiple years the 100x increase in productivity claims will be busted.
We're still on the upslope of Gartner's hype cycle. I'm curious to see how rapidly we descend into the Trough of Disillusionment.
I'm not sure you're actually in disagreement with the author of this piece at all.
They didn't say that software engineering is over - they said:
> Software development, as it has been done for decades, is over.
You argue that writing code is 10-20% of the craft. That's the point they are making too! They're framing the rest of it as the "talking", which is now even more important than it was before thanks to the writing-the-code bit being so much cheaper.
> Software development, as it has been done for decades, is over.
Simon, I guess vb-8558's comment in here is really nice (definitely worth a read); they mention how much coding has changed from, say, 1995 to 2005 to 2015 to 2025.
Directly copying a line from their comment here: "For sure, we are going through some big changes, but there is no 'as it has been done for decades'."
Recently Economic Media made a relevant video about all of this too: How Replacing Developers With AI is Going Horribly Wrong [https://www.youtube.com/watch?v=ts0nH_pSAdM]
My point is that this pure "code is cheap, show me the talk" mentality is weird/net negative (even if I may talk more than I code), simply because code and coding practices are something I can learn and hone over my experience, whereas the talk, to me, constitutes non-engineers trying to create software. That's all great, but it means not really understanding the limitations (that still exist).
So the point I am trying to make is that when the OP mentioned code is 10-20% of the craft, I feel they didn't mean the rest is talk. They meant the rest is architectural decisions & everything surrounding the code. Quite frankly, the idea behind AI/LLMs is to automate that too and convert it into pure text, and I feel like the average layman significantly overestimates what AI can and cannot do.
So the whole notion of "show me the talk", at least in a more non-engineering context as more people try it, might be net negative: people not really understanding the tech as it is. Quite frankly, even engineers are having a hard time catching up with everything that is happening.
I do feel like the AI industry just has too many words floating around right now. To be honest, I don't want to talk right now; let me use the tool, see how it goes, and have a moment of silence. The whole industry is moving faster than the "days since the last new JS framework" counter.
To have a catchy end to my comment: there is just too much talk nowadays. Show me the trust.
I do feel like information has become saturated and we are transitioning from the "information" age to the "trust" age. Human connections, between businesses and elsewhere, matter more now than ever. I wish to support projects which are sustainable, fair, and driven by passion; then I might be okay with the AI use case, imo.
Yeah, in a lot of ways my assertion is that “Code is cheap” actually means the opposite of what everyone thinks it does. Software engineering is even more about the practices we’ve been developing over the past 20 or so years, not less.
Like Linus’ observation still stands. Show me that the code you provided does exactly what you think it should. It’s easy to prompt a few lines into an LLM, it’s another thing to know exactly the way to safely and effectively change low level code.
Liz Fong-Jones told a story about this on LinkedIn, from Honeycomb: she got called out for dropping a bad set of PRs in a repo because she didn’t really think about the way the change was presented.
The book Software Engineering at Google makes a distinction between software engineering and programming. The main difference is that software engineering occurs over a longer time span than programming. In this sense, AI tools can make programming faster, but not necessarily software engineering.
They're also great for writing design docs, which is another significant time sink for SWEs.
> Software development, as it has been done for decades, is over.
I'm pretty sure the way I was doing things in 2005 was completely different compared to 2015. Same for 2015 and 2025. I'm not old enough to know how they were doing things in 1995, but I'm pretty sure things were very different compared to 2005.
For sure, we are going through some big changes, but there is no "as it has been done for decades".
I don't think things have changed that much in the time I've been doing it (roughly 20 years). Tools have evolved and new things were added but the core workflow of a developer has more or less stayed the same.
I don't think that's true, at least for everywhere I've worked.
Agile has completely changed things, for better or for worse.
Being a SWE today is nothing like 30 years ago, for me. I much preferred the earlier days as well, as it felt far more engineered and considered as opposed to much of the MVP 'productivity' of today.
MVP is not necessarily opposed to engineered and considered. It's just that many people who throw that term around have little regard for engineering, which they hide behind buzzwords like "agile".
I also wonder what those people have been doing all this time... I also have been mostly working as a developer for about 20 years and I don't think much has changed at all.
I also don't feel less productive or lacking in anything compared to the newer developers I know (including some LLM users) so I don't think I am obsolete either.
At some point I could straight-up call functions from the Visual Studio debugger Watch window instead of editing and recompiling. That was pretty sick.
Yes I know, Lisp could do this the whole time. Feel free to offer me a Lisp job drive-by Lisp person.
1995 vs 2005 was definitely a larger change than subsequent decades; in 1995 most information was gathered through dead trees or reverse engineering.
Yeah, I remember being amazed at the immediate incremental compilation on save in Visual Age for Java many years ago. Today's neovim users have features that even the most advanced IDEs didn't have back then.
I think a lot of people in the industry forget just how much change has come from 30 years of incremental progress.
Talk is even cheaper; still show me the code. People claim 10x productivity, which should translate to 10x the work done in a month, yet even with Opus 4.5 out since November 2025 I haven't seen signs of this. AI makes the complexity of modern systems bearable; it was getting pretty bad before, and AI kind of saved us. A non-trivial React app is still a pain to write. Building a harness around the non-deterministic API that an AI provides is also a pain. At least we don't have to fight through typing errors or search through relevant examples before copying and pasting. AI is good at automating typing, but the lack of reasoning and the knowledge cutoff still make coding very tedious.
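To make "harness for a non-deterministic API" concrete, here is a minimal sketch of what such a harness tends to look like. Everything in it is hypothetical: call_llm stands in for whatever client you actually use, and the retry/validation policy is just one plausible choice, not anyone's real implementation.

    import json

    def call_llm(prompt: str) -> str:
        """Hypothetical stand-in for a real, non-deterministic LLM client."""
        raise NotImplementedError

    def generate_json(prompt: str, required_keys: set, max_retries: int = 3) -> dict:
        """Retry until the output parses as JSON and contains the fields
        we need, since no single call is guaranteed to produce either."""
        last_error = None
        for _ in range(max_retries):
            raw = call_llm(prompt)
            try:
                data = json.loads(raw)
            except json.JSONDecodeError as err:
                last_error = err      # malformed JSON; ask again
                continue
            if required_keys <= data.keys():
                return data           # parsed and structurally complete
            last_error = KeyError(required_keys - data.keys())
        raise RuntimeError(f"no valid response after {max_retries} tries") from last_error

The tedium is that every call site needs this parse/validate/retry ceremony (or a shared wrapper like it), which the deterministic APIs we grew up with never required.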
The best example of that React pain is Claude's own terminal program. Apparently it renders React at 60fps, translates the output into ANSI characters, diffs that against the terminal's contents, and overwrites the difference...
All to basically mimic what curses has done easily for decades.
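To make that concrete, here's roughly how little it takes with curses to repaint only what changed. This is a throwaway Python sketch (the counter UI is invented for illustration, and curses is Unix-only); curses keeps a virtual screen and, on refresh(), sends the terminal only the cells that differ.

    import curses
    import time

    def main(stdscr):
        curses.curs_set(0)        # hide the cursor
        stdscr.nodelay(True)      # make getch() non-blocking
        stdscr.addstr(0, 0, "frames rendered:")
        frame = 0
        while stdscr.getch() != ord("q"):   # quit on 'q'
            # Only this cell changes; curses diffs its virtual screen
            # against the real terminal and writes just the delta.
            stdscr.addstr(0, 17, str(frame))
            stdscr.refresh()
            frame += 1
            time.sleep(1 / 60)    # ~60 updates per second

    curses.wrapper(main)

No virtual DOM, no reconciliation pass; the diffing the terminal needs is already built into the library.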
> because one is hooked on and dependent on the genie, the natural circumstances that otherwise would allow for foundational and fundamental skills and understanding to develop, never arise, to the point of cognitive decline.
After using AI to code, I came to the same conclusion myself. Interns and juniors are fully cooked:
- Companies will replace them with AI, telling seniors to use AI instead of juniors
- As a junior, AI is a click away, so why would you spend sleepless nights painstakingly acquiring those fundamentals?
Their only hope is to use AI to accelerate their own _learning_, not their performance. Performance will come after the learning phase.
If you're young, use AI as a personal TA, don't use it to write the code for you.
> Code was always a means to an end. Unlike poetry or prose, end users don’t read or care about code.
Yes and no. Code is not art, but software is art.
What is art, then? Not something that's "beautiful", as beauty is of course mostly subjective. Not even something that works well.
I think art is a thing that was made with great care.
It doesn't matter if some piece of software was vibe-coded in part or in full, if it was edited, tested, retried enough times for its maker to consider it "perfect". Trash is something that's done in a careless way.
If you truly love and use what you made, it's likely someone else will. If not, well... why would anyone?
Well, why do humans read code:
1. To maintain it (to refactor or extend it).
2. To test it.
3. To debug it (to detect and fix flaws in it).
4. To learn (to get better by absorbing how the pros do it).
5. To verify and improve it (code review, pair programming).
6. To grade it (because a student wrote it).
7. To enjoy its beauty.
These are all I can think of right now, ordered from most common to rarest.
Personally, I have certainly read and re-read SICP code to enjoy its beauty (7), perhaps mixed in with a desire to learn (4) how to write equally beautiful code.
Art is expression. The software provides something (an experience) that the artist (the software engineer) expresses in code.
This "Code is cheap. Show me the talk." punchline gets overused as bait these days. It's an alright article, but that's a lot of words to tell us something we already know. It's not just greedy companies riding the AI wave; bloggers and influencers are riding it too. They know that if you say anything positive or negative about AI under a catchy title, it will trend on HN, Reddit, etc.
Also credit where credit is due. Origin of this punchline:
https://nitter.net/jason_young1231/status/193518070341689789...
https://programmerhumor.io/ai-memes/code-is-cheap-show-me-th...
In January 2026, prototype code is cheap. Shitty production code is cheap. If that's all you need—which is sometimes the case—then go for it.
But actually good code, with a consistent global model for what is going on, still won't come from Opus 4.5 or a Markdown plan. It still comes from a human fighting entropy.
Getting eyes on the code still matters, whether it's plain old AI slop, or fancy new Opus 4.5 "premium slop." Opus is quite smart, and it does its best.
But I've tried seriously using a number of high-profile, vibe-coded projects in the last few weeks. And good grief what unbelievable piles of shit most of them are. I spend 5% of the time using the vibe-coded tool, and 95% of the time trying to uncorrupt my data. I spend plenty of time having Opus try to look at the source to figure out what went wrong in 200,000 lines of vibe-coded Go. And even Opus is like, "This never worked! It's broken! You see, there's a race condition in the daemonization code that causes the daemon to auto-kill itself!"
And at that point, I stop caring. If someone can't be bothered to even read the code Opus generates, I can't be bothered to debug their awful software.
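For what it's worth, the race Opus found is a well-known bug class, not something exotic. Here's a deliberately broken, hypothetical Python sketch of it (the original was Go; all names here are invented): the parent signals its own process group to "clean up" while the child hasn't yet escaped that group via setsid().

    import os
    import signal
    import sys
    import time

    def daemonize_badly():
        """Deliberately broken: reproduces the self-kill race."""
        pid = os.fork()
        if pid > 0:
            # Parent: shield itself, then "clean up stray workers" by
            # signalling its own process group before exiting.
            signal.signal(signal.SIGTERM, signal.SIG_IGN)
            # RACE: if this runs before the child's setsid() below, the
            # brand-new daemon is still in this group and kills itself.
            os.killpg(os.getpgrp(), signal.SIGTERM)
            sys.exit(0)
        # Child: detach into its own session -- safe only if it happens
        # before the parent's killpg(), which nothing here guarantees.
        os.setsid()
        while True:
            time.sleep(60)        # stand-in for real daemon work

Whether the daemon survives depends entirely on which process the scheduler runs first, which is why this kind of bug "works on my machine" right up until it doesn't.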
> Ignoring outright bad code, in a world where functional code is so abundant that “good” and “bad” are indistinguishable, ultimately, what makes functional AI code slop or non-slop?
I'm sorry, but this is an indicator for me that the author hasn't had a critical eye for quality in some time. There is massive overlap between "bad" and "functional," more than ever. The barrier to entry for programming got irresponsibly low for a while there, and it's going to get worse. The toolchains are not in a good way. Windows and macOS are degrading in both performance and usability, LLVM still takes 90% of a compiler's CPU time in unoptimized builds, Notepad has AI (and crashes), simple social (mobile) apps are >300 MB downloads when eight years ago they hovered around a tenth of that, a site like Reddit only works on hardware that is only "cheap" in the top 3 GDP nations in the world... The list goes on. Whatever we're doing, it is not scaling.
One issue is that tooling and internals are currently optimized for individual people's tastes. Heterogeneous environments make the models spikier. As we shift toward more homogenized systems optimized for agent accessibility, I think we'll see significant improvements.
Elegantly, agents finally give us an objective measure of what "good" code is: code that maximizes the likelihood that future agents will be able to successfully solve problems in the codebase. "Bad" code makes future problems harder.
This is the "artisanal clothing argument".
I'd expect an initial dip in code quality (compared to humans') due to the immaturity of the "AI machinery." But over time, at mass scale, we are going to see an improvement in the quality of software artifacts.
It is easier to 'discipline' the top 5 AI agents on the planet than to try to get a million distributed devs ("artisans") to produce high-quality results.
It's like the clothing or manufacturing industry, I think. Artisans could produce better individual results than the average industry machinery, at least initially. But over time, industry machinery could match the average artisan or even beat the average, while decisively beating them on scale, speed, energy efficiency, and so on.
> industry machinery could match the average artisan or even beat the average
Whether it could is distinct from whether it will. I'm sure you've noticed the decline in the quality of clothing. Markets are mercurial and subject to manipulation through hype (fast fashion is just a marketing scheme to generate revenue, but people bought into the lie).
With code, you have a complicating factor, namely, that LLMs are now consuming their own shit. As LLM use increases, the percentage of code that is generated vs. written by people will increase. That risks creating an echo chamber of sorts.
> This is the "artisanal clothing argument".
> It is easier to 'discipline' the top 5 AI agents on the planet than to try to get a million distributed devs ("artisans") to produce high-quality results.
Your take is essentially, "let's live in a shoebox; packaging pipelines produce them cheaply en masse, so who needs slowpoke construction engineers and architects anymore?"
From the article:
Evidence, please. The article ascribes many qualities to LLM code that I haven't (personally) seen at that scale. I think if you want an 'easily readable and maintainable' codebase of 10k lines out of an LLM, you need somebody to review its contributions very closely, and it probably isn't going to come from a one-shot prompt.
Feels like this website is yelling at me with its massive text size. Had to drop down to -50% to get it readable.
Classical indicators of good software are still very relevant and valid!
Building something substantial and material (i.e., not an API wrapper plus a GUI, or a to-do list) that is undeniably well made, while faster and easier than it used to be, still takes a _lot_ of work. Even though you don't have to write a line of code, it moves so fast that you now spend 3.5-4 days of your work week reading code, using the project, running benchmarks and experimental test lanes, reviewing specs and plans, drafting specs, and defining features and tests.
The level of granularity needed to get earnestly good results is finer than most people are used to. It sits directly at the intersection of spec-heavy engineering work and writing requirements for a large, high-quality offshore dev team that is endearingly literal in how it interprets instructions. Depending on the work, I've found that I average around one 'task' per 22-35 lines of code.
You'll discover a new sense of profound respect for the better PMs, QA Leads, Eng Directors you have worked with. Months of progress happen each week. You'll know you're doing it right when you ask an Agent to evaluate the work since last week and it assumes it is reviewing the output of a medium sized business and offers to make Jira tickets.
Talk is never cheap. Communicating your thoughts to people without the exact same kind of expertise as you is the most important skill.
This quote is from Torvalds, and I'm quite sure that if he weren't able to write eloquent English no one would know Linux today.
Code is important when it's the best medium to express the essence of your thoughts. Just like a composer cannot express the music in his head with English words.
You want a real mind bender? Imagine a universe where Linus's original usenet post didn't go viral.
I don't think Linus is a people person. This is something he talks about himself in the famous TED interview.
I just re-watched the video (I'm currently halfway through), and I feel like you're forgetting the point: Linux was never intended to grow so much. Linus himself says in the video that he never had a moment where he went, "oh, this got big."
In fact, he talks about when the project was little, about the gratitude he felt when maybe 10, then 100 people were working on it, and how things only grew over a very long time frame (more than 25-30 years? Maybe 34 now, I just searched).
He talks about how he got ideas from other people that he couldn't have thought of himself. When he first created the project, he just wanted to show the world what he had done (he did it both for the end result and for the programming itself). Then a friend introduced him to open source (free software), and he decided to make it open source.
My point is that it was neither the code nor the talk. Linus is the best person to maintain Linux. Why? Because he has been passionate about it for all these years. I feel like Linus would talk about the code and any improvements today with the same vigour as 34 years ago. He loves his creation, and we love Linux too :)
Another small point I wish to add: if talk were the only thing, you'd be missing that Linux was created because Hurd was getting delayed (so, all talk and no code).
Linus himself says that if the Hurd kernel had been released earlier, Linux wouldn't have been created.
So the all-talk-no-code Hurd project (which, from what I hear, is still in limbo, since now everyone [rightfully?] uses Linux) is what led to the creation of the Linux project.
Everyone who hasn't watched Linus's TED interview should definitely watch it.
The Mind Behind Linux | Linus Torvalds | TED : https://www.youtube.com/watch?v=o8NPllzkFhE
>> Remember the old adage, “programming is 90% thinking and 10% typing”? It is now, for real.
> Proceeds to write literal books of markdown to get something meaningful
>> It requires no special training, no new language or framework to learn, and has practically no entry barriers—just good old critical thinking and foundational human skills, and competence to run the machinery.
> Wrote a paragraph about how it is important to have serious experience to understand the generated code prior to that
>> For the first time ever, good talk is exponentially more valuable than good code. The ramifications of this are significant and disruptive. This time, it is different.
> This time is different bro I swear, just one more model, just one more scale-up, just one more trillion parameters, bro we’re basically at AGI
My latest take on AI assisted coding is that AI tools are an amplifier of the developer.
- A good and experienced developer who knows how to organize and structure systems will become more productive.
- An inexperienced developer will also be able to produce more code but not necessarily systems that are maintainable.
- A sloppy developer will produce more slop.
AI was never the problem; we have been having a downgrade in software in general, and AI just amplifies how badly you can build it. The real problem is people who just don't care about the craft pushing out human slop, whether it's because the business says "we can come back to that, don't worry" or whatever. At least with AI, coming back to something happens right here and right now, not never, or when it causes a production-grade issue.
> The real concern is for generations of learners who are being robbed of the opportunity to acquire the expertise to objectively discern what is slop and what is not.
How do new developers build the skills that seniors accumulated over time? I see my seniors having more success at vibe coding than I do. How can I short-circuit, for myself, the time they put in?
Regardless, knowing the syntax of a programming language or remembering some library's API is a dead business.
I, for one, am quite happy to outsource this kind of simple memorisation to a machine. Maybe it's the thin end of the slippery slope? It doesn't FEEL like it is, but...
>The cost of just trying things out is so exponentially high that the significant majority of ideas are simply never tried out.
But dumping code into files was never the hard part even then. It's understanding the subtleties of the implementation and the problem domain as you work with it and build experience with it.
>codebase with good quality 10,000 lines of code indicated significant time, effort, focus, patience, expertise, and often, skills like project management that went into it. Human traits.
>Now, LLMs can not only one-shot generate that in seconds
It will also one-shot hundreds more without the "good quality" qualifier of the human-made example. In either case, you're slowly refining an idea of what an implementation might be into one that is appropriate.
code is cheap, show me the prompt
Lots of words to say that “now” communicating in regular human language is important.
What soft-skill buzzword will be the next one as the capital owners take more of the supposed productivity profits?
Long blog posts are cheap. Show me the prompt.
OK, fuck it, show me the demo (without staging it). Show me the result.
Okay, I was writing a comment to simon (I elaborated some there, but I wanted this to be something catchy that shows how I feel, and something people might discuss too).
Both Code and talk are cheap. Show me the trust. Show me how I can trust you. Show me your authenticity. Show me your passion.
Code used to be the sign of authenticity. That is what's changing. You can no longer assume that, say, large amounts of code are authentic, which previously used to be the case (for the most part).
I have been shouting into the void about this many times, but trust seems to be the most important factor.
Essentially, I'm speaking from a consumer perspective: suppose you write AI-generated code and deploy it, and suppose you talked to the AI or around it. Now I can do the same and create a project that is sometimes (mostly?) more customized to my needs, for free or very cheap.
So you have to justify why you are charging me. I feel that's only possible if there is something of additional value: trust. I trust the decisions you make, and personally I trust people and decisions that take me and my ideas into account; essentially, not ripping me off while actively helping. I don't know how to explain it, but the thing I hate most is the feeling of getting ripped off. A sustainable business that is open and transparent about the whole deal, about what it gets and what I get, earns my respect and my trust. Quite frankly, I am not seeing many people do that, but hopefully that changes.
I am curious what you guys on HN think about this, and what trust means to you in this (new?) ever-changing world.
Like, y'know, I feel like everything changes all the time, but at the same time nothing changes. We are still humans, we will always be humans, and we are driven by our human instincts. Perhaps the community I envision is a more tight-knit online community, not complete mega-sellers.
Thoughts?
Please no. Talk is cheap.
I hate this trend of using adjectives to describe systems.
Fast. Secure. Sandboxed. Minimal. Reliable. Robust. Production-grade. AI-ready. Lets you _____. Enables you to _____.
But I somewhat agree: code is essentially free; you can shit out infinite amounts of it. Unless it's good, in which case show the code after all. If your code is shit, show the program. If your program is also shit, your code is worse, but if you're still pursuing an interesting idea (in your eyes), show the prompt instead of the generated slop. Or, even better, communicate an elaborated version of the prompt.
>One can no longer know whether such a repository was “vibe”
This is absurd, simply false. People can spot INSTANTLY when code is good; see: https://news.ycombinator.com/item?id=46753708
Uhh... how about show me both?
I think that's always been true. The ideas and reasoning process matter. So does the end product. If you produced it with an LLM and it sucks, it still sucks.