I've written code for almost 30 years, and for the last 4 years I've gradually used AI more and more: starting with the GitHub Copilot beta, then ChatGPT, Cursor, Windsurf, Claude, Gemini, Jules, Codex. Now I mostly work with Claude, and I don't write any code myself. Even configuring servers is easier with Claude. I still understand how everything works, but I've changed how I work so I can do a lot more, cover a lot more, and rely less on people.
It isn't much different from how it works with a team. You have an architect who understands the broader landscape, you have developers who implement certain subsystems, you have a testing strategy, you have communication, teaching, management. The only difference now is that I can do all this with a team of LLMs/agents, while I focus on the leadership stuff: docs, designs, tests, direction, vision.
I do miss coding, but it just isn't worth it anymore.
> I still understand how everything works,
That's partly an illusion. Try doing everything manually. After only using inline suggestions for six months a few years ago, I've noticed that my skills have gotten way worse. I became way slower. You have to constantly exercise your brain.
This reminds me of people who watch dozens of video courses about programming but can't code anything when it comes to a real job. They have an illusion of understanding how to code.
For AI companies, that's a good thing. People's skills can atrophy to the point that they can't code without LLMs.
I would suggest practicing it from time to time. It helps with code review and keeping the codebase at a decent level. We just can't afford to vibecode important software.
LLMs produce average code, and when you see it all day long, you get used to it. After getting used to it, you start to merge bad code because suddenly it looks good to you.
> That's partly an illusion. Try doing everything manually. After only using inline suggestions for six months a few years ago, I've noticed that my skills have gotten way worse. I became way slower. You have to constantly exercise your brain.
YMMV, but I'm not seeing this at all. You might get foggy around things like the particular syntax for some advanced features, but I'll never forget what a for loop is, how binary search works, or how to analyze time complexity. That's just not how human cognition works, assuming you had solid understanding before.
I still do puzzles like Advent of Code or problems from competitive programming from time to time because I don't want to "lose it," but even if you're doing something interesting, a lot of practical programming boils down to the digital equivalent of "file this paper into this filing cabinet": mind-numbingly boring, forgettable code that still has to be written to a reasonable standard of quality because otherwise everything collapses.
This is a consequence of introducing LLMs into software development. If you imagine it as a pyramid, with the easiest, most frequent tasks at the bottom and the hardest, rarest challenges at the top, LLMs can definitely help automate the base of that pyramid, leaving the human with a harder job, because they now statistically encounter harder tasks more often.
If this is the price to pay to unlock this productivity boost, so be it but let’s keep in mind that:
- we need to be more careful not to burn out, since our job has become de facto harder (if done at maximum potential);
- we always need a way to check and verify what LLMs are doing even on the easiest tasks, because they can fail even there, if rarely (...but we had to do this anyway with Junior devs, didn't we?)
Using Agile methodology with agents actually works pretty well in my experience. We do sprints and then code reviews, testing and revision, optimization. During code review, I inspect everything the agents created and make corrections and then roll the corrected patterns into the training documentation for the agents so they learn and don't make the same mistakes.
> I do miss coding, but it just isn't worth it anymore.
This pretty much sums up my current mood with AI. I also like to think, but it just isn't worth it anymore as a SE at bigCorp. Just ask AI to do it and think for you; the result only has to be "good enough" (=> works, passes tests). Makes sense business-wise, but it breaks me, personally.
> even configuring servers is easier with Claude
To what extent is Claude configuring these servers? Is this baremetal deployment with OS configuration and service management? Or is it abstracted by defining Terraform files to use pre-created images offered by a hosting service?
I've seen this at a few orgs I've visited, where the seniors have leaned into LLM programming more than the juniors for these reasons.
I don't think the split is along seniority lines. Many juniors have adopted LLMs even faster. In many quarters it has also become a kind of political issue where "all the people I hate love LLMs so I must hate them."
Except your team is full of occasionally insane "people" who hallucinate, lie, and cover things up.
We are trading the long-term benefits of truth and correctness for the short-term benefits of immediate productivity and money. It's like how some cultures have valued cheating and quick fixes because it's "not worth it" to do things correctly. The damage from this will continue to compound and bubble up.
I agree. The further I have progressed into my career the more I have been focused on the stability, maintainability and "supportability" of the products I work on. Going slower in order to progress faster in the long run. I feel like everyone is disregarding the importance of that at the moment and I feel quite sad about it.
Not only that, there’s this immense drive for “productivity” so they have more time to… do more work. It’s insanity.
I agree with you, but considering the state of modern software, I think the values "truth and correctness" have been abandoned by most developers a long time ago.
Be that as it may, we shouldn’t be striving to accelerate the decline, and be recruiting even more people who never learned those values.

It’s the Eternal September of software (lack of) quality.
I have not found that to be true on a personal level, but in fairness it does seem to be a widely reported problem. At its core, I think it is an issue of alignment. That is something different from skill.
> Except your team is full of occasionally insane "people" who hallucinate, lie, and cover things up.

Wait... are we talking about LLMs or humans here?

Humans are accountable; an LLM subscription is not.

The humans operating the LLM are accountable.
That is the point. It is nonsense to delegate your responsibility to something that is neither accountable nor reliable if you care about not tanking your reputation.
This is a fair argument but it’s rapidly becoming a non-argument.
LLMs have come a long way since ChatGPT 4.
The idea that they’ll always value quick answers, and always be prone to hallucination seems short-sighted, given how much the technology has advanced.
I’ve seen Claude do iterative problem solving, spot bad architectural patterns in human written code, and solve very complex challenges across multiple services.
All of this capability emerging from a company (Anthropic) that’s just five years old. Imagine what Claude will be capable of in 2030.
> The idea that they’ll always value quick answers, and always be prone to hallucination seems short-sighted, given how much the technology has advanced.
It’s not short-sighted: hallucinations still happen all the time with the current models. Maybe not as much if you’re only asking it to do the umpteenth React template or whatever that should’ve already been a snippet, but if you’re doing anything interesting with low-level APIs, they still make shit up constantly.
> All of this capability emerging from a company (Anthropic) that’s just five years old. Imagine what Claude will be capable of in 2030.
I don't believe VC-backed companies see monotonic user-facing improvement as a general rule. The nature of VC means you have to do a lot of unmaintainable cool things for cheap, and then slowly heat the water to boil. See google, reddit, facebook, etc...
For all we know, Claude today is the best it will ever be.
The current models had lots and lots of hand-written code to train on. Now Stack Overflow is dead and GitHub is getting filled with AI-generated slop, so one begins to wonder whether further training will start to show diminishing returns or perhaps even regressions. I am at least a little bit skeptical of any claim that AI will continue to improve at the rate it has thus far.
If you don't really understand how the LLMs of today were made possible, it is really easy to fall into the trap of thinking that perpetual progress is just a matter of time and compute.
Imagine somebody writes a blog post "why I bike to work". They detail that they love it, the fresh air, nature experience biking through a forest, yes sometimes it's raining but that's just part of the experience, and they get fit along the way. You respond with "well I take the car, it's just easier". Well, good for you, but not engaging with what they wrote.
The difference is that everyone knows it’s faster to take the car, but with the bike you get to exercise your muscles. But imagine it was 1920, when cars were still up for debate, and the post was “why I ride my horse to work”. It’s still a common argument whether you’ll get better results coding manually or using AI.
> It’s still a common argument whether you’ll get better results coding manually or using AI.
Except the post has nothing to do with “better results” of the generated output, it concerns itself with the effect it has on the user’s learning. That’s the theme which is relevant to the discussion.
And we already know LLMs impact your learning. How could they not? If you don’t use your brain for a task, it gets worse at that task. We’ve known that, with studies, since before LLMs.
Did you read the post yourself? It doesn’t sound like it. It is composed of the title and three mystical-sounding quotes. How is one supposed to engage with this? By doing literary critique? A counterpoint to the statement “I don’t use LLMs” would probably count as valid engagement in any circumstance, but especially in this one.
I did. The three quotes clearly express a shared sentiment for enjoyment of building and learning while doing so. That's certainly something one can engage with by providing a counterpoint. But just saying "that's not what I do" isn't one.
The original poster “expresses a shared sentiment” by posting three quotes, but the poster you replied to, who offers a fairly detailed account of the value LLMs bring to their daily work life, and how they feel about it, does not. OK.
I may start using LLMs to filter out these kinds of posts.
At this point it's worth considering a permanent, pinned "HN Flamewar: Will LLMs turn you into the next Ken Thompson or are you just a poser who can't write code" thread. We're having this same discussion, constantly, on 5 different threads on the frontpage.
Right, we do seem to have hit diminishing returns on dueling "I have seen the light" and "I haven't fallen for it" blogposts based on personal experience and the author's hunches about where things are going, and we definitely don't need to restart the same discussion from 0 every time another one lands. One thing which would be very interesting at this point is some actual software engineering research measuring the actual, not just user-perceived, impacts.
We don't do measurable metrics to evaluate our processes, ever. We pick up a trend and go at it for 10 years, until we have a 1,000,000 line monstrosity that's impossible to work on.
Unfortunately, people need to experience a 1-million-line codebase in a dynamic language to figure out that types are actually pretty nice, and they need to write getters and setters for every field for a few years to figure out that OOP is stupid, and they need to do 10 HTTP requests for something that could be 10 function calls to figure out that microservices are stupid.
In none of these trends did the industry pause to evaluate whether what was being written was completely idiotic; it's only with a few decades of hindsight, after a lot of money is lost, that we learn the lesson.
I don't see the point in supporting the hoovering up of anything anyone has ever written online, without attribution, just so I don't get to do the thing I actually like doing: programming.
My mental model is that coding by hand is similar to horseback riding, sail boating, etc. These skills are still enjoyed by people and in some circumstances they are invaluable.
The quote from Douglas Adams is perfectly consistent with using AI for programming. The difficulty of programming, indeed the whole point, is to understand the problem; it's not typing if-then-else cases for the millionth time.
Explaining the problem to an LLM and having it ask pointed questions is helpful IMHO, as well as being able to iterate fast (output new versions fast).
As an example, I'm currently making simple Windows utilities with the help of AI. Parsing config files in C is something the AI does perfectly. But an interesting part of the process is deciding what should go into a config file or not, what the best defaults are, and what should not be configurable: questions that don't have a perfect answer and that can only be settled by using each program for weeks, on different machines and in different contexts.
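For a sense of scale, the config parsing being delegated here is often little more than the following; this is an illustrative sketch, not the commenter's actual code, and the names (`parse_config`, `MAX_LINE`) are hypothetical:

```c
/* Minimal key=value config parser sketch. Skips blank lines and
 * '#' comments, ignores malformed lines, and hands each pair to a
 * caller-supplied handler. Names here are illustrative only. */
#include <stdio.h>
#include <string.h>

#define MAX_LINE 256

/* Returns 0 on success, -1 if the file cannot be opened. */
int parse_config(const char *path,
                 void (*handler)(const char *key, const char *value)) {
    FILE *f = fopen(path, "r");
    if (!f) return -1;

    char line[MAX_LINE];
    while (fgets(line, sizeof line, f)) {
        line[strcspn(line, "\r\n")] = '\0';   /* strip trailing newline */
        if (line[0] == '\0' || line[0] == '#') continue;
        char *eq = strchr(line, '=');
        if (!eq) continue;                    /* ignore malformed lines */
        *eq = '\0';                           /* split key from value */
        handler(line, eq + 1);
    }
    fclose(f);
    return 0;
}
```

The interesting decisions the comment points at (which keys exist, what their defaults are) live entirely in the caller's `handler`, not in this boilerplate.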
> The difficulty of programming, indeed the whole point, is to understand the problem
I'd dispute "the whole point" - there's a whole bunch of problems I can understand but would struggle to implement effectively in code (which is another big point - there's little use in a solution that takes, e.g., two months to calculate last week's numbers when your revenue/profit/planning depends on those numbers.)
At a minimum, for me, the difficulties of programming are many stepped: understanding the problem -> converting that understanding to algorithms/whatnot -> implementing that understanding -> making it efficient (if required) -> verifying the solution.
Trying to boil it down to "ONE COOL TRICK!" that justifies vibe-coding is daft.
[There's also a whole bunch of things I can implement but don't really understand (business logic, sales/tax rules, that kind of thing) but that's why we have project managers, domain experts, etc.]
I mostly use AI as an assistant, not as a replacement. I actually enjoy the process of programming and learning new things along the way. I’m not really interested in outsourcing that to an LLM just to save a few minutes.
I don’t think learning and understanding is hard-coupled to performing all low level steps yourself. The LLM can be a developer, sure. But it can also take on the role of rubber duck, architect, teacher or pupil.
Have a large LLM-written change set that works but that you’re not sure you fully understand? Make the coding agent quiz you on the design and implementation decisions. This can be a lot more engaging than trying to do a normal code review. And you might even learn something from it. Probably not the same amount as if you did this yourself fully. But that’s just a question of how much effort you want to invest in the understanding?
> By the time you’ve sorted out a complicated idea into little steps that even a stupid machine can deal with, you’ve certainly learned something about it yourself.
I love these quotes. I got a much deeper, more elegant understanding of the grammar of a human language as I wrote a phrase generator and parser for it. Writing and refactoring it gave me an understanding of how the grammar works. (And LLMs still confidently fail at really basic tasks I ask them for in this language.)
This morning while sitting on the shitter, claude wrote me a complete plugbox interface for wiring together A/V filters, rendered with libASS subtitles to be embedded in an mpv video player.
I genuinely don’t understand how anyone (with a technical background) can see LLMs as anything more than fancy autocomplete. If you know anything about NNs and about average code quality, you know that LLMs will never be able to generate high-quality code.
I’m ready to get downvoted again for my takes, but as a person who writes and trains DL models, I will die on this hill: people need to produce high-quality data. It can be code, it can be art, but we can’t rely on those models and trust the things they provide.
The new bottleneck isn't writing code, it's testing. You're right that you can't blindly trust the output of an LLM, but you can trust the testing regime to ensure a certain standard has been met. In hindsight this is actually sort of obvious; the more things change, the more they stay the same, etc.
Well, it’s not obvious that that is true. If you ask an LLM to write tests, it will generate versions that the code passes, which doesn’t guarantee good code. If you write the tests yourself and just pray for a great LLM pull, it’s easier to just write the code yourself, in my humble opinion.
That's a useless approach, as you point out, but it doesn't mean there isn't a valid testing regime to be explored and upheld. Manual testing is going to be a lot more important; I see QA teams/roles becoming very valuable assets in the coming years.
Because often the problems that these people are working on tend to be trivial. LLMs are excellent for making the millionth CRUD backend server that talks to SQLite and a glorified todo list React frontend. In fact, it's stupid to do it any other way now.
I've gone the other direction completely. I've run Claude Code basically unsupervised on my codebase for months now, and honestly I write way less by hand. I still understand everything it does, but I spend time on what actually matters: the architecture, the decisions. For me the fun part was always shipping things, not the syntax.
Coding without AI will likely take on the nature of leisure activities like cycling, jogging, horseback riding, or swimming.
The invention of cars, trains, and ships didn't eliminate them. It's clear the latter are overwhelmingly more efficient, while the former now remain in the realm of hobbies or exercise.
I also deliberately avoid using AI for some small projects and code them myself, but I consider this purely a hobby now, not work.
As the original author pointed out, the advice to jog or ride a bike because driving all the time is bad for your health is sound, but the Red Flag Act has proven to be a foolish endeavor. I believe the same phenomenon will occur.
The point I was trying to make is that whether you should use AI for coding depends on the scale and nature of the task.
To continue the original analogy, even if it's not leisure, a bicycle is a practical choice for short-distance travel. Of course, a car doesn't perfectly replace a bicycle. But would that still be true for distances of tens or hundreds of kilometers?
And this is just an analogy; if you don't like cars, an electric bike, a scooter, or something similar is fine.
> if you don't like cars, an electric bike, a scooter, or something similar is fine.
Assuming that society hasn't been stroaded into artificially favoring cars, to the point where other options become effectively removed, even if they would otherwise have been better-suited to the use case.
Seeing a lot of "ok boomer" reactions to posts like this, and honestly I think I kind of agree - but more accurately the author hasn't considered the current landscape properly.
Grady Booch (co-creator of UML) has this to say about AI: this is a shift of the abstraction of software engineering up a level. It's very similar to when we moved from programming in assembly to structured languages, which abstracted away the machine. Now we're abstracting away the code itself.
That means specs and architectural understanding are now the locus of work - which is exactly what Neil is claiming to be trying to preserve. I mean, yeah you can give that up to the AI as well but then you just get vibecoded garbage with huge security/functionality holes.
I don't either. I'm genuinely considering registering an NGO dedicated to anti-slop. I tried AI and it failed on all accounts: bugs, edge cases never covered, horrible security, slow and overcomplicated. The reason people keep saying that it works is just the perception they had of programming: a lot of people were led to believe that anyone can be a programmer. Much like everyone believes they can be an artist, and spoilers: that's not true. I am saying this as the child of two artists. I am incapable of creating art, despite numerous swings in that direction when I was a child; it was just not for me. People looking from the outside saw the tons of apps pouring out over the years, some making billions, and thought "well, if those losers can do it, so can I". A 20-hour course on web development did not cut it, even though the hiring spree around COVID made many think that it did, rather than attributing it to the instant rise in demand for online services. But that, for better or worse, did not last. So the alternative came in the form of AI slop, and now there is an active generation in their mid-20s thinking that seemingly functioning slop and stable software are the same thing, completely brushing off a century of collective knowledge in what we know as computer science.
The metric became lines of code, although those of us who started coding as children, when MySpace was a thing and GoTo was the best-performing search engine, are well aware that lines of code is the stupidest metric you can come up with. But slop machines produce so much of it, it's easy to see why many people are like "see? see this? it works! And you're gonna be doing this for 2 days like a caveman". Gladly, because two data pipelines that do the exact same thing take 4 days to run on slop code, whereas my caveman approach takes single-digit hours and does not produce several billion rows of unusable garbage.
Not to mention the countless times someone has asked me to help them when they are stuck, and a simple question such as "where do you define the path to the output directory?" leads to 10 minutes of scrolling in a project that contains a total of 10,000 lines of code.
The good news for us mortals is that this approach is starting to bite people back, and the companies that manage to survive the inevitable head-on collision will have to dig deep into their pockets to get people to clean up the mess.
Gemini parsed a 5,000-line assembly program. And it understood everything.
I wanted to port it from 32-bit MS-DOS to 64-bit Linux. But it realized that the segmented memory model cannot be reproduced in a flat 64-bit address space without massive changes that break everything else.
It was willing to construct a new program with seemingly the same functionality, but the assembly code was so incomprehensible that the whole project was useless as a learning tool. And a C version would have been faster anyway.
Sorry to say, but less talented humans like myself are already totally useless at this.
I am really finding this. By the time I've specced out a feature properly for an LLM, I could have written most of it quicker myself. But I often find that with jobs I want to give to other people too, so maybe I over-specify?
There are some tasks, though, where it's pretty clear what you want: boring jobs that are totally not worth speccing in detail and that an LLM will blaze through.
Things like:
- add oauth support to this API
- add a language switcher in this menu, an API endpoint, save it to the UserSettings table
- make a 404 page
> But I often find that with jobs I want to give to other people, so maybe I over specify?
The difference is that with other people, you are training somebody else in your team who will eventually internalize what you taught them and then be able to carry the philosophy forward. Even if it took exactly the same amount of time for you to explain (+ code review etc), it's a clear net benefit in the long run. Not so with an LLM. There it's just lost time.
Well, I'm still using my brain from morning to evening, but I'm certainly using it differently.
This will without a doubt become a problem if the whole AI thing somehow collapses or becomes very expensive!
But it’s probably the correct adaptation if not.
I have a hard time using languages I know without an LSP, when all I've been doing is relying on the LSP and its suggestions.
I can't imagine how it is for people that try to write code manually after years of heavy LLM usage.
The GP seems to run a decentralized AI hosting company built on top of a crypto chain.
Can you get any faddier than that? Of course they love AI.
I could write the same comment myself. Also >30 years of experience.
I actually think *more* than I used to, because I only get the hardest problems to solve. I mostly work on architectural documents these days.
> anyway with Junior devs...
A junior dev is accountable, but an LLM subscription is not.
> I only get the hardest problems to solve.
So do you review all that code your LLM generates for you?
Just curious. What stuff did you make before the LLMs, with regular coding?
Me: My laziness knows no bounds.

> These are the depths of my laziness and I have yet to hit the ground.
I only hope that when you do, you don’t take anyone else with you.
It’s one thing to be careless and delete all your own email; quite another to be careless and screw the lives of people using something you worked on and who had no idea you were YOLOing with their data.
Edited my comment before your response. But yeah, lighten up, it’s a joke! I’m not that lazy.
The only thing I do that I’d consider remotely lazy is put my API keys in my AGENTS.md so I don’t have to keep pasting them in my chat.
> lighten up, it’s a joke! I’m not that lazy.
Maybe you aren’t, but there are definitely people who are and do exactly what you described, including senior staff at companies like Meta and Microsoft, so the point stands.
Fair.
I have not found that to be true on a personal level, but in fairness it does seem to be a widely reported problem. At its core, I think it is an issue of alignment. That is something different than skill.
> Except your team is full of occasionally insane "people" who hallucinate, lie, and cover things up.
Wait.. are we talking about LLMs or humans here?
Humans are accountable, an LLM subscription is not..
The humans operating the LLM are accountable.
That is the point. It is nonsense to delegate your responsibility to something that is neither accountable nor reliable if you care about not tanking your reputation..
This is a fair argument but it’s rapidly becoming a non-argument.
LLMs have come a long way since ChatGPT 4.
The idea that they’ll always value quick answers, and always be prone to hallucination seems short-sighted, given how much the technology has advanced.
I’ve seen Claude do iterative problem solving, spot bad architectural patterns in human written code, and solve very complex challenges across multiple services.
All of this capability emerging from a company (Anthropic) that’s just five years old. Imagine what Claude will be capable of in 2030.
> The idea that they’ll always value quick answers, and always be prone to hallucination seems short-sighted, given how much the technology has advanced.
It’s not short-sighted; hallucinations still happen all the time with the current models. Maybe not as much if you’re only asking it for the umpteenth React template or whatever that should’ve already been a snippet, but if you’re doing anything interesting with low-level APIs, they still make shit up constantly.
> All of this capability emerging from a company (Anthropic) that’s just five years old. Imagine what Claude will be capable of in 2030.
I don't believe VC-backed companies see monotonic user-facing improvement as a general rule. The nature of VC means you have to do a lot of unmaintainable cool things for cheap, and then slowly heat the water to boil. See Google, Reddit, Facebook, etc...
For all we know, Claude today is the best it will ever be.
The current models had lots and lots of hand written code to train on. Now stackoverflow is dead and github is getting filled with AI generated slop so one begins to wonder whether further training will start to show diminishing returns or perhaps even regressions. I am at least a little bit skeptical of any claim that AI will continue to improve at the rate it has thus far.
If you don't really understand how LLMs of today are made possible, it is really easy to fall into the trap of thinking that it is just a matter of time and compute to attain perpetual progress..
Sorry, good for you, but how is this relevant?
Imagine somebody writes a blog post "why I bike to work". They detail that they love it, the fresh air, nature experience biking through a forest, yes sometimes it's raining but that's just part of the experience, and they get fit along the way. You respond with "well I take the car, it's just easier". Well, good for you, but not engaging with what they wrote.
The difference is that everyone knows it’s faster to take the car, but biking lets you exercise your muscles. Now imagine it was 1920, when cars were still up for debate, and the post was “why I ride my horse to work”. It’s still a common argument whether you’ll get better results coding manually or using AI.
> It’s still a common argument whether you’ll get better results coding manually or using AI.
Except the post has nothing to do with “better results” of the generated output, it concerns itself with the effect it has on the user’s learning. That’s the theme which is relevant to the discussion.
And we already know LLMs impact your learning. How could they not? If you don’t use your brain for a task, it gets worse at that task. We’ve known that, with studies, since before LLMs.
It boggles my mind how AI discussion is so abrasive that people get their jimmies rustled over just about anything in here.
Ironically your comment looks AI written with that analogy.
Roncesvalles' law: Bad posts have bad comments.
Welcome to Hacker News!
Did you read the post yourself? It doesn’t sound like it. It is composed of the title and three mystical-sounding quotes. How is one supposed to engage with this? Doing literary critique? A counter point to the statement “I don’t use LLMs” would probably count as valid engagement in any circumstance but especially in this one.
I did. The three quotes clearly express a shared sentiment for enjoyment of building and learning while doing so. That's certainly something one can engage with by providing a counterpoint. But just saying "that's not what I do" isn't one.
The original poster “expresses a shared sentiment” by posting three quotes, but the poster you replied to, who offers a fairly detailed account of the value LLMs bring to their daily work life, and how they feel about it, does not. OK.
I like this perspective.
"Man spends hundreds of dollars a month on API tokens, claims coding isn't worth it anymore."
Onion articles really write themselves these days. I for one would still rather keep the money and write 25% of it myself.
I may start using LLMs to filter out these kinds of posts.
At this point it's worth considering a permanent, pinned "HN Flamewar: Will LLMs turn you into the next Ken Thompson or are you just a poser who can't write code" thread. We're having this same discussion, constantly, on 5 different threads on the frontpage.
Right, we do seem to have hit diminishing returns on dueling "I have seen the light" and "I haven't fallen for it" blogposts based on personal experience and the author's hunches about where things are going, and we definitely don't need to restart the same discussion from 0 every time another one lands. One thing which would be very interesting at this point is some actual software engineering research measuring the actual, not just user-perceived, impacts.
We don't do measurable metrics to evaluate our processes, ever. We pick up a trend and go at it for 10 years, until we have a 1,000,000 line monstrosity that's impossible to work on.
Unfortunately people need to experience a 1-million-line codebase in a dynamic language to figure out that types are actually pretty nice, and they need to write getters and setters for every field for a few years to figure out OOP is stupid, and they need to do 10 HTTP requests for something that could be 10 function calls to figure out microservices are stupid.
In none of these trends did the industry pause to evaluate if what's being written is completely idiotic, it's only with a few decades of hindsight, after a lot of money is lost that we learn the lesson.
I've seen a page offering the latest AI news from HN and my reaction was basically yours.
I wish I could have an HN frontpage with everything but AI news, both positive and negative.
Ken would just write awk and rc scripts to define half of the code boilerplate.
I don't see the point in supporting the hoovering up of anything anyone has ever written online, without attribution, just so I don't get to do the thing I actually like doing: programming.
Claude has helped me learn that the thing I enjoyed was actually delivering good software, as opposed to crafting syntax.
If people enjoy coding by hand: GREAT DO IT!!!
My mental model is that coding by hand is similar to horseback riding, sail boating, etc. These skills are still enjoyed by people and in some circumstances they are invaluable.
The quote from Douglas Adams is perfectly consistent with using AI for programming. The difficulty of programming, indeed the whole point, is to understand the problem; it's not typing if-then-else cases for the millionth time.
Explaining the problem to an LLM and having it ask pointed questions is helpful IMHO, as well as being able to iterate fast (output new versions fast).
As an example, I'm currently making simple Windows utilities with the help of AI. Parsing config files in C is something the AI does perfectly. But an interesting part of the process is deciding what should go into a config file and what shouldn't, what the best defaults are, and what should not be configurable: questions that don't have a perfect answer and that can only be solved by using each program for weeks, on different machines and in different contexts.
> The difficulty of programming, indeed the whole point, is to understand the problem
I'd dispute "the whole point" - there's a whole bunch of problems I can understand but would struggle to implement effectively in code (which is another big point - there's little use in a solution that takes, e.g., two months to calculate last week's numbers when your revenue/profit/planning depends on those numbers.)
At a minimum, for me, the difficulties of programming are many stepped: understanding the problem -> converting that understanding to algorithms/whatnot -> implementing that understanding -> making it efficient (if required) -> verifying the solution.
Trying to boil it down to "ONE COOL TRICK!" that justifies vibe-coding is daft.
[There's also a whole bunch of things I can implement but don't really understand (business logic, sales/tax rules, that kind of thing) but that's why we have project managers, domain experts, etc.]
> is to understand the problem; it's not typing if-then-else cases for the millionth time.
Edit macros and awk+grep solved that.
I mostly use AI as an assistant, not as a replacement. I actually enjoy the process of programming and learning new things along the way. I’m not really interested in outsourcing that to an LLM just to save a few minutes.
I don’t think learning and understanding is hard-coupled to performing all low level steps yourself. The LLM can be a developer, sure. But it can also take on the role of rubber duck, architect, teacher or pupil.
Have a large LLM-written change set that works but that you’re not sure you fully understand? Make the coding agent quiz you on the design and implementation decisions. This can be a lot more engaging than trying to do a normal code review. And you might even learn something from it. Probably not the same amount as if you did this yourself fully. But that’s just a question of how much effort you want to invest in the understanding?
> By the time you’ve sorted out a complicated idea into little steps that even a stupid machine can deal with, you’ve certainly learned something about it yourself.
I love these quotes. I got a much deeper, more elegant understanding of the grammar of a human language as I wrote a phrase generator and parser for it. Writing and refactoring it gave me an understanding of how the grammar works. (And LLMs still confidently fail at really basic tasks I ask them for in this language.)
This morning while sitting on the shitter, claude wrote me a complete plugbox interface for wiring together A/V filters, rendered with libASS subtitles to be embedded in an mpv video player.
A large issue is that I am not going to give up my private source code/IP to be trained on. As an individual, not a billion dollar enterprise.
Ironically, the author doesn't realize that not everyone wants to learn everything.
There is nothing wrong whatsoever with just getting things done.
I genuinely don’t understand how anyone (with a technical background) can see LLMs as anything more than fancy autocomplete. If you know anything about NNs and about average code quality, you know that LLMs will never be able to generate high-quality code.
I’m ready to get downvoted again for my takes, but as a person who writes and trains DL models, I will die on this hill: “people need to produce high-quality data”. It can be code, it can be art, but we can’t rely on those models and trust the things they provide.
The new bottleneck isn't writing code, it's testing. You're right that you can't blindly trust the output of an LLM, but you can trust a testing regime to ensure a certain standard has been met. In hindsight this is actually sort of obvious; the more things change, the more they stay the same, etc.
Well, it’s not obvious that that’s true. If you ask an LLM to write tests, it will generate versions that the code passes, which doesn’t guarantee good code. If you write the tests yourself and just pray for a great LLM pull, it’s easier to just write the code yourself, in my humble opinion.
That's a useless approach, as you point out, but it doesn't mean there isn't a valid testing regime to be explored and upheld. Manual testing is going to be a lot more important; I see QA teams/roles becoming very valuable assets in the coming years.
Because often the problems that these people are working on tend to be trivial. LLMs are excellent for making the millionth CRUD backend server that talks to SQLite and a glorified todo list React frontend. In fact, it's stupid to do it any other way now.
I've gone the other direction completely. I've run Claude Code basically unsupervised on my codebase for months now, and honestly I write way less by hand. I still understand everything it does, but I spend time on what actually matters: the architecture, the decisions. For me the fun part was always shipping things, not the syntax.
Coding without AI will likely take on the nature of leisure activities like cycling, jogging, horseback riding, or swimming. The invention of cars, trains, and ships didn't eliminate them. It's clear the latter are overwhelmingly more efficient, while the former now remain in the realm of hobbies or exercise. I also deliberately avoid using AI for some small projects and code them myself, but I consider this purely a hobby now, not work.
As the original author pointed out, the advice to jog or ride a bike because driving all the time is bad for your health is sound, but the Red Flag Act has proven to be a foolish endeavor. I believe the same phenomenon will occur.
Cycling is not just a leisure activity, and cars are not a full bike replacement.
The point I was trying to make is that whether you should use AI for coding depends on the scale and nature of the task. To continue the original analogy, even if it's not leisure, a bicycle is a practical choice for short-distance travel. Of course, a car doesn't perfectly replace a bicycle. But would that still be true for distances of tens or hundreds of kilometers? And this is just an analogy; if you don't like cars, an electric bike, a scooter, or something similar is fine.
>if you don't like cars, an electric bike, a scooter, or something similar is fine.
Assuming that society hasn't been stroaded into artificially favoring cars, to the point where other options become effectively removed, even if they would otherwise have been better-suited to the use case.
Seeing a lot of "ok boomer" reactions to posts like this, and honestly I think I kind of agree - but more accurately the author hasn't considered the current landscape properly.
Grady Booch (co-creator of UML) has this to say about AI: this is a shift of the abstraction of software engineering up a level. It's very similar to when we moved from programming in assembly to structured languages, which abstracted away the machine. Now we're abstracting away the code itself.
That means specs and architectural understanding are now the locus of work - which is exactly what Neil is claiming to be trying to preserve. I mean, yeah you can give that up to the AI as well but then you just get vibecoded garbage with huge security/functionality holes.
I don't either. I'm genuinely considering registering an NGO dedicated to anti-slop. I tried AI and it didn't work on all counts: bugs, edge cases are never covered, horrible security, slow and overcomplicated code.

The reason people keep saying it does work comes down to the perception they had of programming. A lot of people were led to believe that anyone can be a programmer, much like everyone believes they can be an artist, and spoilers: that's not true. I am saying this as the child of two artists; I am incapable of creating art, despite numerous swings in that direction when I was a child. It was just not for me. People looking from the outside saw the tons of apps pouring out over the years, some making billions, and thought "well, if those losers can do it, so can I". A 20-hour course on web development did not cut it, even though the hiring spree around COVID made many think that it did, rather than attributing it to the instant rise in demand for online services. But that, for better or worse, did not last. So the alternative came in the form of AI slop, and now there's an active generation in their mid-20s thinking that seemingly functioning slop and stable software are the same thing, completely brushing off a century of collective knowledge in what we know as computer science.
The metric became lines of code, although those of us who started coding as children, when MySpace was a thing and GoTo was the best-performing search engine, are well aware that lines of code is the stupidest metric you can come up with. But slop machines produce so much of it that it's easy to see why many people go "see? see this? It works! And you were gonna spend 2 days doing this like a caveman". Gladly for my point, two data pipelines that do the exact same thing take 4 days to run on slop code, whereas my caveman approach takes single-digit hours and does not produce several billion rows of unusable garbage.
Not to mention the countless times someone has asked me to help them when they're stuck, and a simple question such as "where do you define the path to the output directory?" leads to 10 minutes of scrolling in a project that contains a total of 10,000 lines of code.
The good news for us mortals is that this approach is starting to bite people back, and the companies that manage to survive the inevitable head-on collision will have to dig deep in their pockets to get people to clean up the mess.
Gemini parsed a 5000-line assembly program. And it understood everything.
I wanted to port it from 32-bit MS-DOS to 64-bit Linux. But it realized that the segmented memory model cannot be carried over to a flat 64-bit address space without massive changes that break everything else.
It was willing to construct a new program with seemingly the same functionality, but the assembly code was so incomprehensible that the whole project was useless as a learning tool. And a C version would have been faster anyway.
Sorry to say, but less talented humans like myself are already totally useless at this.
> but the assembly code was so incomprehensible
Wait, I thought you said it understood everything..
What? I mean, it made its own version, but that was so full of incomprehensible squiggles that it was useless as a learning tool.
I just wanted to see what it would look like. Lesson learned.
I am really finding this. By the time I've specced out a feature properly for an LLM, I could have written most of it quicker myself. But I often find that with jobs I want to give to other people too, so maybe I over-specify?
There are some tasks where it's pretty clear what you want, though: boring jobs that are totally not worth speccing well and that an LLM will blaze through.
Things like:
> But I often find that with jobs I want to give to other people, so maybe I over specify?
The difference is that with other people, you are training somebody else in your team who will eventually internalize what you taught them and then be able to carry the philosophy forward. Even if it took exactly the same amount of time for you to explain (+ code review etc), it's a clear net benefit in the long run. Not so with an LLM. There it's just lost time.
If you use plan mode, parallel agents and voice dictation, LLM-powered development becomes much faster and more powerful.
This assumes you always learn something new with every new program you write.
[flagged]
please stop wasting our attention with such comments