> Code reviewing coworkers are rapidly losing their minds as they come to the crushing realization that they are now the first layer of quality control instead of one of the last. Asked to review; forced to pick apart. Calling out freshly added functions that are never called, hallucinated library additions, and obvious runtime or compilation errors. All while the author—who clearly only skimmed their “own” code—is taking no responsibility, going “whoopsie, Claude wrote that. Silly AI, ha-ha.”
LLMs have made Brandolini's law ("The amount of energy needed to refute bullshit is an order of magnitude larger than to produce it") seem like an understatement. When an inexperienced or just inexpert developer can generate thousands of lines of code in minutes, the responsibility for keeping a system correct and sane gets offloaded onto the reviewers who still reason with human intelligence.
As a litmus test, look at a PR's added/removed LoC delta. LLM-written ones are almost entirely additive, whereas good senior engineers often remove as much code as they add.
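If you want to make that litmus test mechanical, here's a minimal sketch in Python (just an illustration; the "main"/"HEAD" refs are assumptions about your branch layout) that sums a branch's added and removed lines from "git diff --numstat":

    import subprocess

    def loc_delta(base="main", head="HEAD"):
        # Sum added/removed lines between two refs via `git diff --numstat`.
        out = subprocess.run(
            ["git", "diff", "--numstat", f"{base}...{head}"],
            capture_output=True, text=True, check=True,
        ).stdout
        added = removed = 0
        for line in out.splitlines():
            a, r, _path = line.split("\t", 2)
            if a != "-":  # binary files report "-" for both counts
                added += int(a)
                removed += int(r)
        return added, removed

    added, removed = loc_delta()
    print(f"+{added} / -{removed}")

A PR where "removed" stays near zero on a mature codebase is at least worth a closer look.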
In my opinion this is another case where people look at it as a technical problem when it's actually a people problem. If someone does it once, they get a stern message about it. If it happens twice, it gets rejected and sent to their manager. Regardless of how you authored a pull request, you are signing off on it with your name. If it's garbage, then you're responsible.
How do you make code review an educational experience for onboarding/teaching if any bad submission is cut down with extreme prejudice?
I am happy to work with a junior engineer who is trying, even if we have to loop on some silly mistakes, and to pick and choose which battles to fight, balancing building confidence against developing good skills.
But I am not happy to have a junior engineer throw LLM output at me, full of the confidence that a sycophantic AI engendered in them, and then have to churn on that. And if you're not in the same office, how do you even hope to sift out which bad parts are which kind?
To mentor requires a mentee. If a junior is not willing to learn (reasoning, coming up with a hypothesis, implementing the concept, and verifying it), then why should a senior bother to teach? As a philosopher once said, a teacher is not meant to give you the solution, but to help you come up with your own.
The problem is leadership buy-in. The person throwing LLM slop at GitHub has great metrics when leadership is looking at Cursor usage, lines of code, and PR counts, while the person slowing down to actually read wtf other people are submitting is now so drowned in slop that they have less time to produce their own work. So the execs see the person complaining as "not keeping up with the times".
If leadership is that inept, then this is likely only one of many problems they are creating for the organization. I would be looking for alternative employment ASAP.
The issue isn't recognizing malign influence within your current organization; it's an issue throughout the entire industry, and I think what we're all afraid of is that it's becoming more inevitable every day, because we're not the ones with the final say. The Luddites essentially failed, after all, because the wider world was not and is not ready for a discussion about quality versus profit.
A poor quality product can only be profitable if no high quality alternative exists (at a similar price point). Every time that's the case, it's an epic opportunity for anybody with the wherewithal to raise some funding and build that high quality alternative themselves. A dysfunctional industry running on AI slop will not be able to keep you from eating their lunch unless they can achieve some sort of regulatory capture, which would be a separate (political) issue.
Regarding your Luddite reference, I think the cost-vs-quality debate was actually the centerpiece of that incident. Would you rather pay $100 for a T-shirt that's only marginally better than one that costs $10? I certainly would not. People are constantly evaluating cost-quality tradeoffs when making purchasing decisions. The exact ratio of the tradeoff matters. There's always a price point at which something starts (or stops) making sense.
Maybe the process should have an actual two-stage pull request. First stage: you have to annotate the request and show some test cases against it. Only then does the next person take a look. Not sure if such a flow is even possible with current tools.
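For what it's worth, you can approximate this today with a required CI check. A minimal sketch in Python (hypothetical: the PR_BODY variable and the section names are made up for illustration; in practice your CI system would inject the PR description) that blocks the second stage until the author has documented the change and its test cases:

    import os
    import sys

    # Hypothetical: assume the CI runner exports the PR description as PR_BODY.
    body = os.environ.get("PR_BODY", "")

    required = ["## What changed", "## Test cases"]
    missing = [s for s in required if s not in body]

    if missing:
        print("Not ready for human review; missing sections: " + ", ".join(missing))
        sys.exit(1)  # a failing required check keeps stage two from starting

    print("Stage one complete: description and test cases are present.")

Mark the check as required in branch protection, and reviewers only get pinged once it passes.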
> All while the author—who clearly only skimmed their “own” code—is taking no responsibility, going “whoopsie, Claude wrote that. Silly AI, ha-ha.”
Now, I don't do code reviews in large teams anymore, but if I did and something like that happened, I'd allow it exactly once; otherwise I'd try to get the person fired. Barring that, I'd probably leave, as that sounds like a horrible experience.
Ya, there's not much you can do when leadership is so terrible. If this kind of workflow is genuinely blessed by management, I would just start using Claude for code reviews too. Then when things break and people want to point fingers at the code reviewer, I'd direct them to Claude. If it's good enough to write code without scrutiny, it's good enough to review code without scrutiny.
This is a broader issue about where we place blame when LLMs are involved. Humans seem to want to parrot the work and take credit when it's correct while deflecting blame when it's wrong. With a few well-placed lawsuits, this paradigm will shift, imho.
I feel like I went through this stage ahead of time, a decade ago, when I was a junior dev: I started my days by first reviewing the work of a senior dev who was churning out code and breaking things at the speed of light (without LLMs), and then leaving a few dozen comments on pull requests from the offshore team. By midday I'd had enough for the day.
I left that company a few years ago, and now I'm invincible. No LLM can scare me!
The problem, rather, is that you still have to stay somewhat agreeable while calling out the bullshit. If you were "socially allowed" to treat colleagues who go
> All while the author—who clearly only skimmed their “own” code—is taking no responsibility, going “whoopsie, Claude wrote that. Silly AI, ha-ha.”
as they really deserve, the problem would disappear really fast.
So the problem you outlined is really social, not the LLMs per se (even though they very often do produce shitty code).
They should get a clear explanation of the problem and of the team expectations the first time it happens.
If it happens a second time? A stern talk from their manager.
A third time? PIP or fired.
Let your manager be the bad guy. That's part of what they're for.
Your manager won't do that? Then your team is broken in a way you can't fix. Appeal to their manager, first, and if that fails put your resume on the street.
> If it happens a second time? A stern talk from their manager.
In my experience, the stern talk would probably go to you, for making the problem visible. The manager wouldn't want their own manager to hear of any problems in the team. Makes them look bad, and probably costs them bonuses.
Happened to me often enough. What you described I would call a lucky exception.
I have noticed Claude's extreme and obtuse reluctance to delete code, even code that it just wrote that I told it is wrong. For example, it might produce a fn:
    fn foo(bar)
And then I say, no, I actually wanted you to "foo with a frobnitz", so now we get:
    fn foo(bar) // Never called
    fn foo_with_frobnitz(bar)
You have two options: burn out because you need to correct every stupid line of code, or... stop giving a damn about code quality and live a happy life while getting paid.
The sane option is to join the cult. Just accept every pull request; git blame won't show your name anyway. And if the CEOs want you to use AI, even better: tell an AI to do your reviews.
> All while the author—who clearly only skimmed their “own” code—is taking no responsibility, going “whoopsie, Claude wrote that. Silly AI, ha-ha.”
After you've made your colleagues upset by submitting crappy code for review, you start to pay attention.
> LLM-written ones are almost entirely additive,
Unless you notice that code has to be removed and instruct the LLM to do so.
I don't think LLMs really change the dynamics here. "Good programmers" will still submit good code, easy for their colleagues to review, whether it was written with the help of an LLM or not.
> After you've made your colleagues upset by submitting crappy code for review, you start to pay attention.
If the only thing keeping you from submitting crappy code is an emotional response from coworkers, you are not a "good programmer", no matter what you instruct your LLM.
I'm working on the second vibe-coded project that's been handed to me. What annoys me, assuming it even runs, is the sheer number of READMEs: I'm not even sure which one to use, or whether any of them still apply.
They're usually verbose and include things like "how to run a virtual env for Python".
I'd say it depends on how coding assistants are used. On autopilot I'd agree: they don't really take the time to reflect on the work they've done before moving on to the next feature in the spec. In a collaborative process it's of course different, since, as you point out, you're flagging things you want implemented in a different way. But I get your point: most PRs you'd flag as AI-generated slop are the ones where someone just ran the assistant on autopilot, was somewhat satisfied with the outcome, and treated the resulting code as a black box.
Ignoring LLMs for a second: some code I write is done in a sort of full-craft, full-diligence mode, where I only commit something when I am very proud of its structure and of every line of code. I know it inside and out, I have reasons for every decision, major or minor, and I don't know of any way to make it better. Not only is the code excellent, I've also produced a person (me) who is an expert in that code.
Most code is not like that. With most code I just want to get something done, so I land quite a bit below that bar. But some things I get to write in that way, and it is very rewarding to do so. It's my favorite code to write by a mile.
Back to LLMs - I find it is both easier than ever and harder than ever to write code in that mode. Easier than ever because, if I can actually get and stay in that mode psychologically, I can get the result I want faster, and the bar is higher. Even though I am able to write MUCH better code than an LLM is, I can write even better code with LLM assistance.
But it is harder than ever to get into that mode and stay in that mode. It is so easy to just skim LLM-generated code, and it looks good and it works. But it's bad code, maybe just a little bit at first, but it gets worse and worse the more you let through. Heck, sometimes it just starts out as not-excellent code, but every time you accept it without enough diligence the next output is worse. And by the time you notice it's often too late, you've slopped yourself, while also failing to produce an expert in the code that's been written.
Within the past 2 months, as I've started to use AI more, I've had this trajectory:
1. only using AI for small things, very impressed by it
2. giving AI bigger tasks and figuring out how to use it well for those bigger tasks
3. full-agentic mode where AI just does its thing and I review the code at the end
4. realising that I still need to think through all the code and that AI is not the shortcut I was hoping it to be (e.g. where I can give it a high-level plan and be reasonably satisfied with the final code)
5. going back to giving AI small tasks
I've found AI very useful for research, proofs of concept, and throwaway code of the "this works, but is completely unacceptable in production" variety. It's work I tend to do anyway before I start tackling the final solution.
Big-picture coding is in my hands, but AI is good at filling in the logic for functions and helping out with other small things.
Thank you, author. This essay made my day. It resonates with my thinking over the last few months. I tried to use AI at work, but most of the time I regrettably scrapped whatever it did and did the work on my own. So many points I agree with. Delegating thinking to AI is the worst thing I can do to my career. AI at best is a mediocre text generator.
It's funny to read people attacking the author with criticism unrelated to the essay's message.
The worst thing for me is that I am actually good at LLM-based coding.
My coworkers who are in love with this new world are producing complete AI slop and still take ages to complete tasks. Meanwhile I can finally play to my strengths, as I actually know software architecture, can ask the LLM to consider important corner cases, and so on.
Plus, I am naturally good at context management. Being neurodivergent has given me decades of practice in working with entities that have a different way of thinking than my own. I have more mechanical empathy for the LLM because I don't confuse it with a human. My coworkers, meanwhile, get super frustrated that the LLM cannot read their minds.
That said, LLMs are getting better. My advantage will not last. And the more AI slop gets produced the more we need LLMs to cope with all the AI slop in our code bases. A vicious cycle. No one will actually know what the code does. Soon my job will mostly consist of praying to the machine gods.
It seems to me that someone like you, seen from the outside (e.g. from a code-reviewing colleague), simply appears to be getting more productive, with no drop in quality. Maybe some stylistic shifts.
I don't think anyone is complaining about that too much.
I wonder how many people there are like you, where we don't get much data. If people don't complain about it, we generally don't hear about it, because they're just quietly moving on with their work.
Not to be confused with the AI hypesters who are loudly touting the benefits with dubious claims, of course (:
I think I also fit into this category. Minor to medium productivity boost and maybe some stylistic evolving, but largely no complaints because it's just another tool I use sometimes.
Oh, first time hearing that term. Thank you, I love it!
Though I don't think this is at play here. Maybe a bit, but seeing how my coworkers prompt, there is an objective difference. I will spend half an hour writing a good prompt and revise the implementation plan with the LLM multiple times before I allow it to even start doing anything, while my coworkers just write "fix this" and wonder why the stupid AI can't read their minds.
I am producing AI slop as well, just hopefully a bit less. Obviously hand crafted code is still much better but my boss wants me to use "AI" so I do as I am told.
Configuring editors, dot files, and dev environments consistently adds value by giving you familiarity with your working environment, honing your skills with your tools, and creating a more productive space tailored to your needs.
Who else becomes the go-to person for modifying build scripts?
The number of people I know who have no idea how to work with Git after decades in the field using it is pretty amazing. It's not helpful for everyone else when you're the one they delegate their merge-conflict bullshit to, because they've never bothered to learn anything about the tools they're using.
How dumbed down does everything need to be? Git has warts for sure, but this whole "ideas guy with no actual understanding of anything" act is how you get trainwrecks. There is no free lunch; you're going to pay one way or another for not understanding the tools of the craft, and not everything can be made ridiculously simple.
It's pretty great if you understand how to do resets, interactive rebases, understand the differences between merges and rebases, keep your commit history fairly clean, and just work with the tool. I haven't had a problem with Git since I spent a day going through the git book something like 10 years ago.
Meanwhile, for reference, this is a discussion about tools that people spend incalculable hours tuning. The number of articles on Hacker News about how people have tuned their LLM setups is... grand, to say the least.
What about any tool, language, library, or codebase that is unnecessarily complex? Should we never bother to put in the effort to learn to use them? It doesn't mean they are without value to us as programmers. For better or worse, the hallmark of many good programmers I've met is a much higher than average tolerance for sitting down and just figuring out how something computer-related works instead of giving up and routing around it.
Maybe Git is too complicated for hobby users, because it has a steep learning curve. But after two weeks of using it you know enough to handle things, so it shouldn't be a problem in any professional environment.
I think the author makes a decent point with regard to "problem solving" and better tools, and how LLMs somehow feel different. Fortran is a better tool, but you can still reproducibly trace things back to assembly code through the compiler.
LLMs feel like a non-deterministic compiler that transforms English into code of some sort.
> Coding is the means to an end, not the end itself.
> That may be fun for you, but it doesn’t add value
I'm not disagreeing with you per se, but those statements are subjective, not an objective truth. Lots of people fundamentally enjoy the process of coding, and would keep doing it even in a hypothetical world with no problems left to solve, or if they had UBI.
Why would someone who likes solving problems choose a very lucrative career path solving problems… hmmm
You can also solve problems as a local handyman but that doesn’t pad the 401K quite as well as a career in software.
I feel like there are a lot of tech-fetishists right now on the "if you don't deeply love to write code then just leave!" train, without somehow realizing that most of us have our jobs because we need to pay bills, not because it's our burning passion.
It's because there are a significant number of us for whom tinkering with and building shit is basically a compulsion. And software development is vastly more available, and quicker to iterate and thus more satisfying, than any other tinkering discipline. It's probably related to whatever drives some people to make art, the only difference being that the market has decided the tinkerers are worth a hell of a lot more.
For evidence towards the compulsion argument, look at the existence of FOSS software. Or videogame modding. Or all the other freely available software in existence. None of that is made by people who made the rational decision of "software development is a lucrative field that will pay me a comfortable salary, thus I should study software development". It's all made by people for whom there is no alternative but to build.
> I feel like there are a lot of tech-fetishists right now on the "if you don't deeply love to write code then just leave!" train, without somehow realizing that most of us have our jobs because we need to pay bills, not because it's our burning passion.
I would claim that I love coding quite a lot. The problem is rather that my bosses and colleagues don't care about what I love about it. What's appreciated is implementing tasks fast with shitty code, rather than treating the fact that tasks are easy to implement and the code is really fast as strong evidence that the abstractions were well chosen.
Thus, I believe that people who do it just for the money have it easier in the "programming industry" than programmers who really love programming; the latter are a big annoyance to managers.
I really do wonder why companies talk all the time about "love for programming" instead of "love for paying the bills" and "love for implementing tasks fast with shitty code", which would get them people who are a much better culture fit for their actual organizational processes.
Very level-headed comment. I'm one of those who sees programming as a means to an end and nothing else.
If I order something to be delivered, I don't care what model of car the delivery company uses. Much less what kind of settings they have for the carburetor needles or what kind of oil they're using. Sure, somebody somewhere might have to care about this.
That's also how people like me see programming. If the code delivers what we need, then great. Leave it be like that. There are more interesting problems to solve, no need to mess with a solution which is working well.
The thing is, most of the time you are indeed buying the car that is going to make the delivery. And it's going to live in your garage. And if you're not careful, one day it will drive itself off a cliff, stall in the middle of a 10-hour drive, or you'll get robbed by individuals hiding in the trunk.
People who realize this care about the oil type and which tires they put on. People who don't, pay for it when that crash does happen and they don't know how to recover: cue the war room, etc.
Even if you're not dogfooding your own software, if you do not take care of it properly, the cost of changes will climb up.
> Even if you're not dogfooding your own software, if you do not take care of it properly, the cost of changes will climb up.
How do you mean? If the software works, then it's done. There is no maintenance, and it will continue working like that for decades. It doesn't have corrosion and moving parts like a car. Businesses make sure not to touch it or the systems it depends on.
> "...without somehow realizing that most of us have our jobs because we need to pay bills..."
Oh, I wouldn't say that. The hacker culture of the 1970s, from which the word "hacker" originated, often poked fun at incurious corporate programmers, and IIRC even Edsger Dijkstra wrote a fair number of acerbic comments about them and their disinterest in the craft and science of computing.
Well, most of them (the hackers from the 70s) probably did do it solely for the love of the game.
We're 50 years past that now. We're in the era of boot camps. I feel semi-confident saying "most of us", meaning the current developer workforce, are here for well-paying jobs.
Don’t get me wrong I like software development! I enjoy my work. And I think I’d probably like it better than most things I’d otherwise be doing.
But what I’ve been getting at is that I enjoy it for the solving problems part. The actual writing of code itself for me just happens to be the best way to enjoy problem solving while making good money that enables a comfortable life.
To put it another way: if being a SWE paid a poverty wage, I would not be living in a trailer doing this for my love of coding. I would go be a different kind of engineer.
You owe your cushy job and big paycheck entirely to those tech-fetishists that came before you.
Secondly, you are very blind if you don’t see that the AI making your job “easier” is close to replacing you entirely, if you don’t also have a deep understanding of the code produced. What’s to stop the Project Manager from vibe coding you out of the loop entirely?
The state of the industry, both short and medium term, is that you want to be the one doing the replacing rather than the one being replaced. Not great, but this is where we are. If you are, say, an SRE, there are myriad companies working hard to eliminate SREs, but they need experts to set shit up so that SREs are not needed. The same thing will cascade to other tech work, some areas faster than others. Career-wise, I think it is wise to position yourself now as one who knows how to set shit up for the "great replacement".
Yes we are rapidly moving towards a time where bullshitting will be more valued than deep understanding and problem solving. Both LLMs and the broader culture are pushing in that direction.
We all owe every part of everything to those who’ve come before us. That goes without saying, really.
> Secondly, you are very blind if you don’t see that the AI making your job “easier” is close to replacing you entirely, if you don’t also have a deep understanding of the code produced.
Brother, don't patronize me. I'm a senior engineer; I'm not yeeting vibe code I don't understand into prod.
I also understand the possibility of all of this potentially devaluing my labor or even wholesale taking my job.
What would you like me to do about that? Is me refusing to use the tools going to change that possibility?
Have yet to hear what else we should be doing about this. The hackernews answer appears to be some combination of petulance + burying head in the sand.
It's more of a funeral: a collective expression of grief over a great, painful loss. An obituary for a glorious, short time in history when it was possible to combine a specific kind of intelligence, creativity, discipline, passion, and values, and be well compensated for it. A time when the ability to solve problems, and solve them well, had value. Not just being better at taking credit than other people.
It was wonderful.
I know you don’t care. So just go to some other forum where you don’t have to endure the whining of us who have lost something that was important to us.
At 47, I am an older guy already. But in my generation, people who went on to be programmers usually started tinkering with code at ~ 11 y.o. (back then on ZX Spectrum and similar cheap beasts available in freshly post-Communist Europe) out of interest and passion, not because of "I want to build a lucrative career".
(Given how massively widespread piracy was back then, programming looked rather like a good way to do hard work for free.)
Money matters, but coders who were drawn into the field purely by money and are personally detached from the substance of the job are an unknown species to me.
"You can also solve problems as a local handyman"
That is NOT the same sort of talent. My fingers are clumsy; my mind is not.
> if handyman work was paying $600/hr your fingers would un-clums themselves reaaaaaaly fast
I don't believe that. When it comes to motoric skills, including dancing etc., I am probably in the lowest quintile of the population.
Of course, I could become somewhat better by spending crazy amounts of time on training, but I would still be non-competitive even in comparison with an average person.
OTOH I am pretty good at writing prose/commentary, even though it is not a particularly lucrative activity, to the degree of being a fairly well-known author in Czechia. My tenth book is just out.
Talents are weird and seem to have a mind of their own. I never planned to become an author, but something inside just wanted out. My first book was published just a few days shy of my 40th birthday, so not a "youthful experiment" by any means.
A bit harsh off a single post. I like solving problems, not just software engineering problems, and I like writing code as a hobby, but I went into this field only for the high salary and benefits.
In fact, I usually hate writing code at my day job, because it's boring stuff 20 out of 26 sprints.
I don't think it is. Labeling passion and love for your work "tech fetishism" is spiritually bankrupt. Mind you, we're generally not talking here about people working in a mine to survive; that's a different story.
But people who do have a choice in their career, doing something they have no love for solely to add more zeros to their bank account? That is the fetish; that is someone who has himself become an automaton. It's no surprise they seem to take no issue with LLMs, because they're already living like one. Like how devoid of curiosity do you have to be to do something half your waking life that you don't appreciate if you're very likely someone who has the freedom to choose?
> Like how devoid of curiosity do you have to be to do something half your waking life that you don't appreciate if you're very likely someone who has the freedom to choose?
Do you understand work-life balance? I get paid to do the job, I satisfy my curiosities in my free-time.
> But people who do have a choice in their career, doing something they have no love for solely to add more zeros to their bank account?
Because I doubt finding a well-paying job that you love is achievable in our society, at least not for most people.
IMO, the real fetishization here is the idea that work is something more than a way to get paid; that's corporate propaganda I'm not falling for.
> Because I doubt finding a well-paying job that you love is achievable in our society,
Which is why I stressed twice, including in the part you chose to quote, that I am talking about people who can achieve that. If you have to take care of your sick grandmother, you don't need to feel addressed.
But if you did have the resources to choose a career, like many people who comment here, and you ended up a software developer completely devoid of passion for the craft, you're living like a Severance character. You don't get to blame the big evil corporations for a lack of dedication to a craft. You don't need to work for one to be a gainfully employed programmer, and even if you do and end up on a dead-end project, you can still love what you do.
This complete indifference to what you produce, this complete alienation from work, voluntarily chosen, is a diseased attitude.
> The point of most jobs in the world is to "solve problems". So why did you pick software over those?
Because in a lot of jobs where you (have to) solve problems, the actual problems to solve are rather "political". So, if you are not good at office politics or you are not a good diplomat, software is often a much better choice.
The honest answer that applies to almost everyone here is that as a kid, they liked playing computer games and heard that the job pays well.
It's interesting, because to become a plumber, you pretty much need a plumber parent or friend to get you interested in the trade and show you the ropes. Meanwhile, software engineering is closer to the universal childhood dream of "I want to become an astronaut" or "I want to be a pop star", except more attainable. It's very commoditized by now, so if you're looking for that old-school hacker ethos, you're going to be disappointed.
I think you're grossly underestimating the number of people here who fell into software development because it's one of the best outlets for "the knack" in existence. Sure, this site is split between the "tech-bro entrepreneur"-types and developers, and there are plenty of developers who got into this for the cash, but in my experience about a quarter of developers (so maybe 10-15% of users on this site) got into this profession due to getting into programming because it fed an innate need to tinker, and then after they spent a ton of time on it discovered that it was the best way to pay the bills available to them.
I got stupidly lucky that one of my hobbies as an avid indoorsman was not only valued by the private sector but also happened to pay well. This career was literally the only thing that saved me from a life of poverty.
You can spend as much time as you want on "configuration of our editor, tinkering with dot files, and dev environments" and otherwise honing your craft; the business machine will still look at you as a cog.
It may seem depressing, but the bright side is that you as an individual are then free to find joy in your work wherever you can, whether it's in delivering high-quality code or just collecting a paycheck.
These are my thoughts exactly. Whenever I use agents to assist me in creating a simple program for myself, I carefully guide them through everything I want created, usually writing pages and pages of detailed plaintext instructions and specifications for the backend of things; I then modify the result and design a user interface.
I very much enjoy the end product, and I also enjoy designing (not necessarily programming) a program that fits my needs, but rarely the implementing, as I have issues focusing.
A chef who sharpens his knives should stop because it doesn't add value
A contractor who prefers a specific brand of tool is wrong because the tool is a means to an end
This is what you sound like. Just because you don't understand the value of a craftsman picking and maintaining their tools doesn't mean the value isn't real.
Yes, but the point of being a chef is the food, not the knives. If there's a better way to prepare food than a knife, but you refuse to change, are you really a chef? Or are you a chef knife enthusiast?
I don't think that's really the point of this post; it's all about how LLMs are destroying our craft (i.e., "I really like using knives!"), not really about whether the food is better.
I think the real problem is that it's actually increasingly difficult to defend the artisanal "no-AI" approach. I say this as a prior staff-level engineer at a big tech company who has spent the last six months growing my SaaS to ~$100k in ARR, and it never could have happened without AI. I like the kind of coding the OP is talking about too, but ultimately I'm getting paid to solve a problem for my customers. Getting too attached to the knives is missing the point.
Call me crazy, but my guess is that it might not have happened without the decade of experience it took you to reach a staff-level engineering position at a big tech company, which gave you the skills required to properly review the AI code you're producing.
A closer analogy would be a chef who chooses to have a robot cut his tomatoes. If the robot did it perfectly every time, I'm sure he would use the robot. If the robot mushed the tomatoes some of the time, would he spend time carefully inspecting each one, or would he just cut them himself?
Even if the robot did it perfectly, you'd still have posts like these lamenting the loss of the craft of cutting tomatoes. And they're not wrong!
I guess I don't understand posts like this IF you think you can do it better without LLMs. I mean, if using AI makes you miserable because you love the craft of programming, AND you think using AI is a net loss, then just...don't use it?
But I think the problem here that all these posts are speaking to is that it's really hard to compete without using AI. And I sympathize, genuinely. But also...are we knife enthusiasts or chefs?
There are chefs, but they are not us. Though it will upset many to hear it, what we are is fast food workers, assembling and reheating prepackaged stuff provided to us. Now a machine threatens to do the assembling and reheating for us, better and faster than we do on average.
The chefs coming up with recipes and food scientists doing the pre-packaging will do fine and are still needed. The people making the fast food machine will also do well for themselves. The rest of us fast food workers, well, not so much...
This is a strawman. The point is that the original poster was going on about knives, forgetting that the final product is the actual thing that matters, not whatever tool is used to create it. In your example, if the food is inferior, then the food is inferior.
Some of you have never been laid off and it shows.
Intrinsic value is great, where achievable. Companies do not care at all about intrinsic value. I take pride in my work and my craft to the extent I am allowed to, but the reality is that those of us who can't adapt to the business's desires will be made obsolete and cut loose, regardless of whatever values we hold.
The issue is that a lot of “programmers” think bike-shedding is the essence of programming. Fifty years ago, they would have been the ones saying that not using punch cards takes away from the art of programming, and then proudly showing off multiple intricate hole punchers they designed for different scenarios.
Good problem solvers... solve problems. The technological environment will never devalue their skills. It’s only those who rest on their laurels who have this issue.
> LLMs seem like a nuke-it-from-orbit solution to the complexities of software. Rather than addressing the actual problems, we reached for something far more complex and nebulous to cure the symptoms.
The author overlooks a core motivation of AI here: to centralize the high-skill high-cost “creative” workers into just the companies that design AIs, so that every other business in the world can fire their creative workers and go back to having industrial cogs that do what they’re told instead of coming up with ‘improvements’ that impact profits. It’s not that the companies are reaching for something complex and nebulous. It’s that companies are being told “AI lets you eject your complex and nebulous creative workers”, which is a vast reduction in nearly everyone’s business complexity. Put in the terms of a classic story, “The Wizard of Oz”, no one bothers to look behind the curtain because everything is easier for them — and if there’s one constant across both people and corporations, it’s the willingness to disregard long-term concerns for short-term improvements so long as someone else has to pay the tradeoff.
When I started programming for Corporate™ back in 1995, it was a wildly different career than what it has become. Say what you want about the lunatics running the asylum, but we liked it that way. Engineering knew their audience, knew the tech stack, knew what was going on in "the industry", and ultimately called the shots.
Your code was your private sandbox. Want to rewrite it every other release? Go for it. Like to put your curly braces on a new line? Like TABs (good for you)? Go for it. It's your code, you own it. (You break it, you fix it.)
No unit tests (we called that parameter checking). No code reviews (well, nothing formal; often, time was spent in co-workers' offices talking over approaches, white-boarding APIs…). Often, if a bug was discovered or known, you just fixed it. There may have been a formal process beginning to take hold, but to the lunatics, it was optional.
You can imagine how management felt — having to essentially just trust the devs to deliver.
In the end management won, of course.
When I am asked if I am sorry that I left Apple, I have to tell people, no. I miss working at Apple in the 90's, but that Apple was never coming back. And I hate to say it, but I suspect the industry itself will never return to those "cowboy coding" days. It was fun while it lasted.
Back when I started in the late 2000s you had much clearer lines around your career path and speciality.
There was a difference between a sysadmin and a programmer. Now I'm expected to be my own sysadmin-ops guy while also delivering features. While I worked on my systems chops for fun on the side, I purposely avoided it at work; I don't usually enjoy how bad vendor documentation, training, etc. can be in the real world of Corporate America.
I started around the same time. No unit tests, but we did have code reviews because of ISO 9001 requirements. That meant printing out the diffs on the laser printer and corralling three people into a meeting room to pore over them and then literally sign off on the change. This was for an RTOS that ran big industrial controls in things like steel plants and offshore oil rigs.
Project management was a 40 foot Gantt chart printed out on laser printer paper and taped to the wall. The sweet sound of waterfall.
> And I hate to say it, but I suspect the industry itself will never return to those "cowboy coding" days. It was fun while it lasted.
I don't think the industry will return to it, but I suspect there will be isolated environments for cowboys. When I was at WhatsApp (2011-2019), we were pretty far on the cowboy side of the spectrum... although I suspect it's different now.
IMHO, what's appropriate depends on how expensive errors are to detect before production, and how expensive errors are when detected after production. I lean into reducing the cost to fix errors rather than trying to detect errors earlier. OTOH, I do try not to make embarrassing errors, so I try to test for things that are reasonable to test for.
Depends. I see the teams around me slowly being corralled like cattle, no longer doing the corralling. My own team is still chiefly cowboys but the writing is on the wall and as we grow younger we lose more and more footing in this battle.
It really is a higher-level language for coding, though. Not as precise as Fortran, but with far more upside. I imagine monks bemoaning the printing press that took away the joy of the perfectly handwritten Bibles they made in solitude.
I also agree with comments on this thread stating that problem solving should be the focus and not the code.
However my view is that our ability to solve problems which require a specific type of deep thought will diminish over time as we allow for AI to do more of this type of thinking.
Purely asking for a feature is not “problem solving”.
I think you can enjoy both aspects - both the problem solving and the craft. There will be people who agree that of course from a rational perspective solving the problem is what matters, but for them personally the "fun" is gone. Generally people that identify themselves as "programmers" as the article does would be the people who enjoy problem solving/tinkering/building.
What if you want to be a better problem solver (in the tech domain)? Where should you focus your efforts? That's what is confusing to me. There is a massive war between the LLM optimists and pessimists. Whenever I personally use LLM tools, they are disappointing albeit still useful. The optimists tell me I should be learning how to prompt better, that I should be spending time learning about agentic patterns. The pessimists tell me that I should be focusing on fundamentals.
> I would love to read a study on why people so readily believe and trust in AI chatbots.
We associate authoritative experts with a) quick and b) broad answers. It's like when we're listening to a radio show and they patch in "Dr. So N. So", an expert in Whatever from Academia Forever U. They seem to know their stuff because a) they don't say "I don't know, let me get back to you after I've looked into that" and b) they can share a breadth of associated validations.
LLMs simulate this experience by giving broadish, confident answers very quickly. We have been trained by life's many experiences to trust these kinds of answers.
I think "Identity Crisis" is a bit over dramatic, but I for the most part agree with the sentiment. I have written something in the same vane, but still different enough that I would love to comment it but its just way more efficient to point to my post. I hope that is OK: https://handmadeoasis.com/ai-and-software-engineering-the-co...
It's the explicitly stated goal of several of the largest companies on the planet which put up a lot of money to try to reach that goal. And the progress over the past few years has been stunning.
I liked your emphasis on individual diversity, and an attendant need to explore, select, adapt, and integrate tooling. With associated self-awareness. Pushing that further, your "categories" seem more like exemplars/prototypes/archetypes/user-stories, helpful discussion points in a high-dimensional space of blended blobs. And as you illustrate, it branches not just on the individual, but also on what they are up to. And not just on work vs hobby, but on context and task.
It'd be neat to have a big user story catalog/map, which tracks what various services are able to help with.
I was a kid in NE43 instead of TFA's Building 26 across the street, with Lisp Machines and 1980s MIT AI's "Programmer's Apprentice" dreams. Years ago I gave up on ever having a "this... doesn't suck" dev env, on being able to "dance code". We've had such a badly crippling research and industrial policy, and profession... "not in my lifetime", I thought. Knock on wood, I'm so happy for this chance at being wrong. And also, for "let's just imagine for a moment, ignoring the utterly absurd resources it would take to create, science education content that wasn't a wretched disaster... what might that look like?" - here too it's LLMs, or no chance at all.
This comes up whenever _anything_ is automated: "this is the end of programming as a career!" I heard this about Rational Rose and Visual Basic in the 90's.
I don't think I'm sticking my head in the sand - an advanced enough intelligence could absolutely take over programming tasks - but I also think that such an intelligence would be able to take over _every_ thought-related task. And that may not be a bad thing! Although the nature of our economy would have to change quite a bit to accommodate it.
I might be wrong: Doug Hofstadter, who is way, way smarter than me, once predicted that no machine would ever beat a human at chess unless it was the type of machine that said "I'm bored of chess now, I would prefer to talk about poetry". Maybe coding can be distilled to a set of heuristics the way chess programs have (I don't think so, but maybe).
Whether we're right or wrong, there's not much we can do about it except continue to learn.
Great read. Unlike past technologies that automated away the dangerous, boring, repetitive, soul-sucking jobs, LLMs are an assault on our thinking.
Social media already reduced our attention spans to that of a goldfish; open offices made any sort of deep, meaningful work impossible.
> Creative puzzle-solving is left to the machines, and we become mere operators disassociated from our craft.
For me, at least, this has not been the case. If I leave the creative puzzle-solving to the machine, it's gonna get creative alright, and create me a mess to clean up. Whether this will be true in the future, hard to say. But, for now, I am happy to let the machines write all the React code I don't feel like writing while I think about other things.
Additionally, as an aside, I already don't think coding is always a craft. I think we want it to be one because it gives us the aura of craftspeople. We want to imagine ourselves bent over a hunk of marble, carving a masterpiece in our own way, in our own time. And for some of us, that is true. Most programmers in human history, though, were already slinging slop before anybody had coined the term. Where is the inherent dignity and human spirit on display in the internal admin tool at a second-tier insurance company? Certainly there is business value there, but it doesn't take a Michelangelo to make something that takes in a PDF and spits out a slightly changed PDF.
Most code is already industrial code, which is precisely the opposite of code as craft. We are dissociated from the code we write; the company owns it, not us, which is by definition the opposite of the craftsman and the craft mode of production. I think AI is putting a finer, sharper point on this, but it was already there and has been since the beginning of the field.
To be honest, I had already reached that identity crisis even before LLMs.
Nowadays many enterprise projects have become placing SaaS products together, via low code/no code integrations.
A SaaS product for the CMS, another one for assets, another for ecommerce and payments, another for sending emails, another for marketing, some edge product for hosting the frontend, finally some no code tools to integrate everything, or some serverless code hosted somewhere.
Welcome to MACH architecture.
Agents now made this even less about programming, as the integrations can be orchestrated via agents, instead of low code/no code/serverless.
I'm in the opposite camp. Programming has never been fun to me, and LLMs are a godsend to deal with all the parts I don't care for. LLMs have accelerated my learning speed and productivity, and believe it or not, programming even started to become fun and engaging!
As an aside, I've been using copilot code review before handing off any of my code to colleagues. It's a bit pedantic, but it generally catches all the most stupid things I've done so that the final code review tends to be pretty smooth.
I hate to suggest that the fix to LLM slop is more LLMs, but in this case it's working for me. My coworkers also seem to appreciate the gesture.
I agree that LLMs are great for a cursory review, but crucially, when you ask copilot to review your code, you actually read and think about everything copilot tells you in the response. The biggest issues arise because people will blindly submit AI-generated code without reading or thinking about it.
This process has been affecting most of the world's workers for the past several centuries. Programming has received special treatment for the last few decades, and it's understandable that HN users would jump to protect their life investment, but that treatment need not continue.
Hand-coding can continue, just like knitting co-exists with machine looms, but it need not ultimately maintain a grip on the software productive process.
It is better to come to terms with this reality sooner rather than later in my opinion.
> This process has been affecting most of the world's workers for the past several centuries.
It has also been responsible for predicting revolutions that failed to materialize. 3D printing would make some kinds of manufacturing obsolete, computers would make about half the world's jobs obsolete, etc. etc.
Hand coding can be the knitting to the loom, or it can be industrialized plastic injection molding to 3D printing. How do you know? That distinction is not a detail--it's the whole point.
It's survivorship bias to only look at horses, cars, calculators, and whatever other real job market shifting technologies occurred in the past and assume that's how it always happens. You have to include all predictions which never panned out.
As human beings we just tend not to do that.
[EDIT: this being Pedantry News let me get ahead of an inevitable reply: 3D printing is used industrially, and it does have tremendous value. It enabled new ways of working, it grew the economy, and in some cases yes it even replaced processes which used to depend on injection molding. But by and large, the original predictions of "out with the old, in with the new" did not pan out. It was not the automobile to the horse and buggy. It was mostly additive, complementary, and turned out to have different use cases. That's the distinction.]
> Hand coding can be the knitting to the loom, or it can be industrialized plastic injection molding to 3D printing. How do you know? That distinction is not a detail--it's the whole point.
One could have made a reasonable remark in the past about how injection molding is dramatically faster than 3D printing (it applies material everywhere, all at once), scales better for large parts, et cetera. This isn't really true for what I'm calling hand-coding.
Obviously nothing about the future can be known for certain... but there are obvious trends that need not stop at software engineering.
I think there is only a very narrow band where LLMs are good enough at producing software that "hand-coding" is genuinely dead but at the same time bad enough that (expensive) humans still need to be paid to be in the loop.
Did an AI write your post or did you "hand write it"?
Code needs to be simple and maintainable and do what it needs to do. Autocomplete wasn't a huge time saver because writing code wasn't the bottleneck then, and it definitely is not the bottleneck now. How much you rely on an LLM won't necessarily change the quality or speed of what you produce. Especially if you pretend you're just doing "superior prompting with no hand-coding involved".
LLMs are awesome but the IDE didn't replace the console text editor, even if it's popular.
> Code needs to be simple and maintainable and do what it needs to do.
And yet, after three decades in the industry, I can tell you this fantasy exists only in snarky HN comments.
> Hand-coding is no longer "the future"?
Hand-coding is 100% not the future; there are already teams that absolutely do not hand-code anything anymore (I help with one of them, which used to have 19 "hand-coders" :) ). The typing will for sure get phased out. It is quite insane that it took "AI" to make people realize how silly and wasteful it is to type characters into IDEs/editors. The sooner you see this clearly, the better it will be for your career.
> How much you rely on an LLM won't necessarily change the quality or speed of what you produce.
If it doesn't, you need to spend more time and learn, and learn, and learn more: 4/6/8 terminals at a time doing all sorts of things for you, etc. etc. :)
I started writing code in basic on a beige box. My first code on windows was a vb6 window that looked like the AOL login screen and used open email relays to send me passwords.
I've written a ton of code in my life and while I've been a successful startup CTO, I've always stayed in IC level roles (I'm in one right now in addition to hobby coding) outside of that, data structures and pipelines, keep it simple, all that stuff that makes a thing work and maintainable.
But here is the thing: writing code isn't my identity. Being a programmer, vim vs. emacs, mechanical keyboards, RTFM noob, pure functions, serverless, leetcode, cargo culting, complexity merchants, resume-driven dev, early semantic-CSS lunacy: these are things outside of me.
I have explored all of these things, had them be part of my life for better or worse, but they aren't who I am.
I am a guy born with a bunch of heart defects who is happy to be here and trying new stuff, I want to explore in space and abstraction through the short slice of time I've got.
I want to figure stuff out and make things and sometimes that's with a keyboard and sometimes that's with a hammer.
I think there are a lot of societal status issues (devs were mostly low social status until The Social Network came out) and personal identity issues.
I've seen it for 40 years: anything tied to a person's identity is basically a thing they can't be honest about, can't update their priors on, can't reason about.
And people who feel secure and appreciated don't give much grace to those who don't, a lot of callous people out there, in the dev community too.
I don't know why people are so fast to narrow the scope of who they are.
Humans emit meaning like stars emit photons.
The natural world would go on without us, but as far as we have empirically observed, we make the maximally complex, multi-modally coherent meaning of the universe.
We are each like a unique write head in the random walk of giving the universe meaning.
There are a ton of issues, from network resilience to maximizing the random meaning-generation walk, where AI and consolidation are extremely dangerous. As far as new stuff in the pipeline goes, I think it's AI and artificial wombs that carry the greatest risk of narrowing the scope of human discovery and unique meaning expansion to a catastrophic point.
But so many of these arguments are just post-hoc rationalizations to poorly justify what at root is this loss of self-identity. We were always in the business of automating jobs out from under people; this is very weak tea and crocodile tears.
The simple fact is, all our tools should allow us to have materially more comfortable and free lives. The AI isn't the problem; it's that devs didn't understand that tech is best when it empowers people to think and connect better and to have more freedom and self-determination with their time.
If that isn't happening, it's not the code's fault; it's the fault of the network architecture of our current human power structures.
Agree, and well said. There are no points for hard work, only results -- this is an extremely liberating principle when taken to the limit and we should be happy to say goodbye to an era of manual software-writing being the norm, even if it costs the ego of some guy who spent the last 20 years being told SWE made him a demi-god.
Some people code to talk and don't want anything said for them. That's okay. Photography and paintings landed in different places with different purposes.
But all of programming isn't the same thing. We just need new names for different types of programmers. I'm sure there were farmers who lamented the advent of machines because of how it threatened their identity, their connection to the land, and so on...
but I want to personally thank the farmers who just got after growing food for the rest of us.
It's honestly not that deep. If AI increases productivity, we should accept it. If it doesn't, then the hype will eventually fade out. In any case, having attachment to the craft is a bit cringe. Technological progress trumps any emotional attachment.
I think in a few years we will realize that LLMs have impacted our lives in a deeply negative way. The relatively small improvements LLMs bring to my life will be vastly outweighed by the negatives.
If LLM abilities stagnate around the current level it's not even out of the question that LLMs will negatively impact productivity simply because of all of the AI slop we'll have to deal with.
The IT world is waiting for a revolution. Only in order to blame that revolution for the mistakes of a few powerful people.
I would not be surprised if all this revolutionary sentiment is manufactured. That thing about "Luddites" (not a thing that will stick by the way), this nostalgic stuff, all of it.
We need to be much smarter than that and not fall for such obvious traps.
An identity is a target on your back. We don't need one. We don't need to unite around a cause; we're already among the most united kinds of workers there are, and we don't need a galvanizing identity to do it.
John von Neumann famously questioned the value of compilers. Eventually we get the keyboard kids who have dominated computing since the early 70's in some form or another, whether in a forward-thinking way like Dan Ingalls or in an idealistic way like the gcc/Free Software crowd. In parallel to this you have people like Laurel, Sutherland, and Nelson, who live in lateral-thinking land.
The real issue is that we've been in store for a big paradigm shift in how we interact with computers for decades at this point. Sketchpad let us do competent, constraint-based mathematics with images. Video games and the Logo language demonstrate the potential for programming using "kinetics." In the future we won't code with symbols; we'll dance our intent into and through the machine.
OK, but if you can't find out how to use new tools well, how good are you really as a craftsperson?
"We've always done it this way" is the path of calcification, not of a vibrant craft. And there are certainly many ways you can use LLMs to craft better things, without slop and vibecoding.
Programmer isn't a real thing; all these classes of people are made up. The biggest difference between an iPad toddler and Dijkstra is that the toddler is much more efficient at programming.
Sure you can discover things that aren't intuitively obvious and these things may be useful, but that's more scientist than anything to do with programming.
programming + science = computer science
programming + engineering = software engineering
programming + iPad = interactive computing
programming + AI = vibe coding
Don't equate programming with software engineering when they are clearly two distinct things. This article would more accurately be called the software engineers' identity crisis.
Maybe some hobby engineers (programming + craft) might also be feeling this depending on how many external tools they already rely on.
What's really shocking is how many software engineers claim to put in Herculean effort in their code, but ship it on top (or adjacent if you have an API) of "platforms" that could scarcely be less predictable. These platforms have to work very hard to build trust, but it's all meaningless cause users are locked in anyway. When user abuse is rampant people are going to look for deus ex machina and some slimy guy will be there to sell it to them.
The problem I have with this argument is that it actually is English this time.
COBOL and SQL aren't English, they're formal languages with keywords that look like English. LLMs work with informal language in a way that computers have never been able to before.
Say that to the prompt guys and their AGENT.md rules.
Formalism is way easier than whatever these guys are concocting. And true programmer bliss is live programming. Common programming is like writing sheet music and having someone else play it. Live programming is you at the instrument, tweaking each part.
Yes, natural languages are by nature ambiguous. Sometimes it's better to write a specification in code rather than in a natural language (JetBrains MPS, for example).
But in faithful adherence to some kind of uncertainty principle, LLM prompts are also not a programming language, no matter if you turn down the temperature to zero and use a specialized coding model.
They can just use programming languages as their output.
This is also a strength. Formal languages struggle to work with concepts that cannot be precisely defined, which are especially common in the physical world.
e.g. it is difficult to write a traditional program to wash dishes, because how do you formally define a dish? You can only show examples of dishes and not-dishes. This is where informal language and neural networks shine.
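A toy sketch of the contrast (the features, labels, and the scikit-learn choice here are all made up for illustration; the point is only that the learned version never has to define "dish"):

```python
# Formal attempt: every rule invites counterexamples
# (square plates, metal trays, wooden bowls, ...).
def is_dish_by_rules(shape: str, material: str) -> bool:
    return shape == "round" and material in {"ceramic", "glass"}

# Learned attempt: skip the definition, show labeled examples instead.
from sklearn.linear_model import LogisticRegression

features = [[0.9, 0.2], [0.8, 0.3], [0.1, 0.8], [0.2, 0.9]]  # e.g. roundness, height
labels = [1, 1, 0, 0]                                        # 1 = dish, 0 = not-dish

classifier = LogisticRegression().fit(features, labels)
print(classifier.predict([[0.85, 0.25]]))  # -> [1], no definition of "dish" required
```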
The thing is... all those people were right. We no longer need the kinds of people we used to call programmers. There exists a new job, only semi-related, that now goes by the name programmer. I don't know how many of the original programming professionals managed to make the transition to this new profession.
I'm seeing this reaction a lot from younger people (say, roughly under 25). And it's a shame this new suspicion has now translated into a prohibition on the use of dashes.
It's utterly uncommon in the kind of casual writing for which people are using AI; that's why it got noticed. Social media posts, blogs, ...
AI almost certainly picked it up mainly from typeset documents, like PDF papers.
It's also possible that some models have a tokenizing rule for recognizing faked-out em-dashes made of hyphens and turning them into real em-dash tokens.
On my own (long abandoned) blog, about 20% of (public) posts seem to contain an em dash: https://shreevatsa.wordpress.com/?s=%E2%80%94 (going by 4 pages of search results for the em dash vs 21 pages in total).
Ironically, I love using em dashes in my writing, but if I ever have to AI generate an email or summary or something, I will remove it for this exact reason.
That's simply not true, and pointlessly derogatory.
This article does not appear to be AI-written, but use of the em dash is undeniably correlated with AI writing. Your reasoning would only make sense if the em dash existed on keyboards. It's reasonable for even good writers to not know how or not care to do the extra keystrokes to type an em dash when they're just writing a blog post; that doesn't mean they have bad writing skills or don't understand grammar, as you have implied.
> That's simply not true, and pointlessly derogatory.
That same critique should first be aimed at the topmost comment, which has the same problem plus the added guilt of originating (A) a false dichotomy and (B) the derogatory tone that naturally colors later replies.
> It's reasonable for even good writers to not know how or not care
The text is true, but in context there's an implied fallacy: If X is "reasonable", it does not follow that Not-X is unreasonable.
More than enough (reasonable) real humans do add em-dashes when they write. When it comes to a long-form blog post—like this one submitted to HN—it's even more likely than usual!
> the extra keystrokes
Such as alt + numpad 0151 on Windows (0150 gives the en dash), which has served me well when on that platform for... gosh, decades now.
I don't think the character is that uncommon in the output of slightly sophisticated writers, and it is not hard to generate (e.g., on macOS, pressing option-shift-minus produces an em dash).
In fact, on macOS and iOS simply typing two dashes (--) gets autocorrected to an em dash. I used it heavily, which was a bit sloppy since it doesn't also insert the customary hair spaces around the em dash.
Incidentally, I turned this autocorrection off when people started associating em dashes with AI writing. I now leave them as manual double dashes--even less correct than before, but at least people are more likely to read my writing.
That's a silly take. Just because they existed and were proper grammar before AI slop popularized them doesn't mean they're not statistically likely to indicate slop today, depending on the context.
What's sillier is people associating em-dashes with AI slop specifically because they are unsophisticated enough never to have learned how to use them as part of their writing, and assuming everyone else must be as poor of a writer as they are.
It's the literary equivalent of thinking someone must be a "hacker" because they have a Bash terminal open.
You're overthinking it. LLMs exploded the prevalence of em-dashes. That doesn't mean you should assume any instance of an em-dash means LLM content, but it's a reasonable heuristic at the moment.
> That doesn't mean you should assume any instance of an em-dash means LLM content
No, it doesn't. But people are putting that out there, people are getting accused of using AI because they know how to use em dashes properly, and this is dumb.
People have long talked about how reading code is far more important than writing code when working as a professional SWE. LLMs have only increased the relative importance of code review. If you're not doing a detailed code review of every line your LLM generates (just like you should have always been doing while reviewing human-generated code), you're doing a bad job. Sure, it's less fun, but that's not the point. You're a professional.
To me, the most salient point was this:
> Code reviewing coworkers are rapidly losing their minds as they come to the crushing realization that they are now the first layer of quality control instead of one of the last. Asked to review; forced to pick apart. Calling out freshly added functions that are never called, hallucinated library additions, and obvious runtime or compilation errors. All while the author—who clearly only skimmed their “own” code—is taking no responsibility, going “whoopsie, Claude wrote that. Silly AI, ha-ha.”
LLMs have arguably made Brandolini's law ("The amount of energy needed to refute bullshit is an order of magnitude larger than to produce it") an understatement. When an inexperienced or just inexpert developer can generate thousands of lines of code in minutes, the responsibility for keeping a system correct and sane gets offloaded to the reviewers who still know how to reason with human intelligence.
As a litmus test, look at a PR's added/removed LoC delta. LLM-written ones are almost entirely additive, whereas good senior engineers often remove as much code as they add.
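That litmus test is cheap to script; a minimal sketch (assuming `git` on PATH, the PR branch checked out, and `origin/main` as the base; `git diff --numstat` prints added/removed line counts per file):

```python
import subprocess

def loc_delta(base: str = "origin/main") -> tuple[int, int]:
    """Return (added, removed) line totals for the current branch vs. base."""
    out = subprocess.run(
        ["git", "diff", "--numstat", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    added = removed = 0
    for line in out.splitlines():
        a, r, _path = line.split("\t", 2)
        if a != "-":  # binary files report "-" instead of a count
            added += int(a)
            removed += int(r)
    return added, removed

print(loc_delta())  # e.g. (1500, 12) smells like slop; (320, 290) looks like surgery
```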
In my opinion this is another case where people look at it as a technical problem when it's actually a people problem. If someone does it once, they get a stern message about it. If it happens twice, it gets rejected and sent to their manager. Regardless of how you authored a pull request, you are signing off on it with your name. If it's garbage, then you're responsible.
I largely agree with sibling responses.
BUT...
How do you have code review be an educational experience for onboarding/teaching if any bad submission is cut down with extreme prejudice?
I am happy to work with a junior engineer who is trying, even if we have to loop on some silly mistakes, and to pick and choose which battles to fight to balance building confidence with developing good skills.
But I am not happy to have a junior engineer throw LLM stuff at me, puffed up with the confidence that the sycophantic AI engendered in them, and then have to churn on that. And if you're not in the same office, how do you even hope to sift out which bad parts are which kind?
To mentor requires a mentee. If a junior is not willing to learn (reasoning, coming up with a hypothesis, implementing the concept, and verifying it), then why should a senior bother to teach? As a philosopher once said, a teacher is not meant to give you the solution, but to help you come up with your own.
I agree and I’m surprised more people don’t get this. Bad behaviors aren’t suddenly okay because AI makes them easy.
If you are wasting time you may be value negative to a business. If you are value negative over the long run you should be let go.
We’re ultimately here to make money, not just pump out characters into text files.
How do you know the net value add isn’t greater with the AI, even if it requires more code review comments (and angrier coworkers)?
The problem is leadership buy-in. The person throwing LLM slop at GitHub has great metrics when leadership is looking at Cursor usage, lines of code, and PR counts, while the person slowing down to actually read wtf other people are submitting is now so drowned in slop that they have less time to produce on their own. So the execs see the person complaining as "not keeping up with the times".
If leadership is that inept, then this is likely only 1 of many problems they are creating for the organization. I would be looking for alternative employment ASAP.
the issue isn't recognizing malign influence within your current organization... it's an issue throughout the entire industry, and I think what we're all afraid of is that it's becoming more inevitable every day, because we're not the ones who have the final say. the luddites essentially failed, after all, because the wider world was not and is not ready for a discussion about quality versus profit.
A poor quality product can only be profitable if no high quality alternative exists (at a similar price point). Every time that's the case, it's an epic opportunity for anybody with the wherewithal to raise some funding and build that high quality alternative themselves. A dysfunctional industry running on AI slop will not be able to keep you from eating their lunch unless they can achieve some sort of regulatory capture, which would be a separate (political) issue.
Regarding your Luddite reference, I think the cost-vs-quality debate was actually the centerpiece of that incident. Would you rather pay $100 for a T-shirt that's only marginally better than one that costs $10? I certainly would not. People are constantly evaluating cost-quality tradeoffs when making purchasing decisions. The exact ratio of the tradeoff matters. There's always a price point at which something starts (or stops) making sense.
This a million times. If you do this three times, that's grounds for firing. You're literally not doing your job and lying that you are.
It's bizarre to me that people want to blame LLMs instead of the employees themselves.
(With open source projects and slop pull requests, it's another story of course.)
Maybe the process should have actual two-stage pull requests. First stage: you have to comment on your own request and show some test cases against it. Only then does the next person take a look. Not sure if such a flow is even possible with current tools.
Build the PR and run tests against it. Supported by all major CI/CD tools.
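For instance, a minimal GitHub Actions workflow along these lines (the file name and `make` targets are placeholders for whatever your project actually uses):

```yaml
# .github/workflows/pr-checks.yml
name: pr-checks
on: pull_request          # runs automatically on every pull request
jobs:
  build-and-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4   # checks out the PR's merge commit
      - run: make build             # placeholder build step
      - run: make test              # humans only review PRs that pass
```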
The solve is just rejecting the commit with a "clean this up" message as soon as you spot some BS. Trust is earned!
What do you do if the manager enables it?
> whereas good senior engineers often remove as much code as they add
https://www.folklore.org/Negative_2000_Lines_Of_Code.html
"One of my most productive days was throwing away 1000 lines of code." - Ken Thompson
> All while the author—who clearly only skimmed their “own” code—is taking no responsibility, going “whoopsie, Claude wrote that. Silly AI, ha-ha.”
Now I don't do code reviews in large teams anymore, but if I did and something like that happened, I'd allow it exactly once, otherwise I'd try to get the person fired. Barring that, I'd probably leave, as that sounds like a horrible experience.
Ya, there's not much you can do when leadership is so terrible. If this kind of workflow is genuinely blessed by management, I would just start using Claude for code reviews too. Then when things break and people want to point fingers at the code reviewer, I'd direct them to Claude. If it's good enough to write code without scrutiny, it's good enough to review code without scrutiny.
This is a broader issue about where we place blame when LLMs are involved. Humans seem to want to parrot the work and take credit when it's correct while deflecting blame when it's wrong. With a few well-placed lawsuits, this paradigm will shift, imho.
I feel like I went through this stage ahead of time, a decade ago, when I was junior dev, and was starting my days by: first reviewing the work of a senior dev who was cramming out code and breaking things at the speed of light (without LLMs); and then leaving a few dozen comments on pull requests of the offshore team. By midday I had enough for the day.
Now that I'm no longer at that company since a few years ago, I'm invincible. No LLM can scare me!
The problem rather is that you still have to stay somewhat agreeable while calling out the bullshit. If you were "socially allowed" to treat colleagues like
> All while the author—who clearly only skimmed their “own” code—is taking no responsibility, going “whoopsie, Claude wrote that. Silly AI, ha-ha.”
as they really deserve, the problem would disappear really fast.
So the problem that you outlined is rather social, and not the LLMs per se (even though they very often do produce shitty code).
They should get a clear explanation of the problem and of the team expectations the first time it happens.
If it happens a second time? A stern talk from their manager.
A third time? PIP or fired.
Let your manager be the bad guy. That's part of what they're for.
Your manager won't do that? Then your team is broken in a way you can't fix. Appeal to their manager, first, and if that fails put your resume on the street.
> If it happens a second time? A stern talk from their manager.
In my experience, the stern talk would probably go to you, for making the problem visible. The manager wouldn't want their manager to hear of any problems in the team; it makes them look bad, and probably costs them bonuses.
Happened to me often enough. What you described I would call a lucky exception.
> Let your manager be the bad guy. That's part of what they're for.
> Your manager won't do that? Then your team is broken in a way you can't fix.
If you apply this standard, then most teams are broken.
"A big enough system is always failing somewhere" - can't remember who said it
> LLM-written ones are almost entirely additive
I have noticed Claude's extreme and obtuse reluctance to delete code, even code that it just wrote and that I've told it is wrong. For example, it might produce a fn for "foo"; then, when I say that no, I actually wanted it to "foo with a frobnitz", it appends a second function rather than rewriting the first. The code samples in this comment were lost, so the sketch below is a hypothetical reconstruction of the pattern, with made-up names:
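```python
# Round 1: asked for a plain "foo", it produces something like:
def foo(items):
    return [item.strip().lower() for item in items]

# Round 2: told "no, I wanted to foo with a frobnitz", it appends a
# new function instead of rewriting the first...
def foo_with_frobnitz(items, frobnitz):
    return [frobnitz(item.strip().lower()) for item in items]

# ...while `foo` lingers on: never called again, dead weight it will
# not remove unless explicitly ordered to.
```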
You have two options: burn out because you need to correct every stupid line of code, or... start to not give a damn about the quality of the code and live a happy life while getting paid.
The sane option is to join the cult. Just accept every pull request. Git blame won't show your name anyways. If CEOs want you to use AI, then tell AIs to do your review, even better.
> All while the author—who clearly only skimmed their “own” code—is taking no responsibility, going “whoopsie, Claude wrote that. Silly AI, ha-ha.”
After you've made your colleagues upset by submitting crappy code for review, you start to pay attention.
> LLM-written ones are almost entirely additive,
Unless you notice that code has to be removed and instruct the LLM to do so.
I don't think LLMs really change the dynamics here. "Good programmers" will still submit good code, easy for their colleagues to review, whether it was written with the help of an LLM or not.
>After you've made your colleagues upset by submitting crappy code for review, you start to pay attention.
If the only thing keeping you from submitting crappy code is an emotional response from coworkers, you are not a "good programmer", no matter what you instruct your LLM.
I'm working on the second project handed to me that was vibe-coded. What annoys me, assuming it even runs, is the high number of READMEs; I'm not even sure which one to use, or whether any of them still apply.
They are usually verbose and include things like "how to run a virtual env for Python".
I'd say it depends on how coding assistants are used. When they're on autopilot I'd agree, as they don't really take the time to reflect on the work they've done before moving on to the next feature of the spec. In a collaborative process it's of course different, since you are pointing out things you want implemented in a different way. But I get your point: most PRs you'd flag as AI-generated slop are the ones where someone just ran the assistant on autopilot, was somewhat satisfied with the outcome, and treated the resulting code as a black box.
Ignoring LLMs for a second: some code I write is done in a sort of full-craft, full-diligence mode, where I only commit something when I am very proud of its structure and of every line of code. I know it inside and out, I have reasons for every decision, major or minor, and I don't know of any ways to make it better. Not only is the code excellent, I've also produced a person (me) who is an expert in that code.
Most code is not like that. Most code I want to get something done, and so I achieve something quite a bit below that bar. But some things I get to write in that way, and it is very rewarding to do so. It's my favorite code to write by a mile.
Back to LLMs - I find it is both easier than ever and harder than ever to write code in that mode. Easier than ever because, if I can actually get and stay in that mode psychologically, I can get the result I want faster, and the bar is higher. Even though I am able to write MUCH better code than an LLM is, I can write even better code with LLM assistance.
But it is harder than ever to get into that mode and stay in that mode. It is so easy to just skim LLM-generated code, and it looks good and it works. But it's bad code, maybe just a little bit at first, but it gets worse and worse the more you let through. Heck, sometimes it just starts out as not-excellent code, but every time you accept it without enough diligence the next output is worse. And by the time you notice it's often too late, you've slopped yourself, while also failing to produce an expert in the code that's been written.
Within the past 2 months, as I've started to use AI more, I've had this trajectory:
I've found AI is very useful for research, proof-of-concepts, and throwaway code of the "this works, but is completely unacceptable in production" kind. It's work I tend to do anyway before I start tackling the final solution.
Big-picture coding is in my hands, but AI is good at filling in the logic for functions and helping out with other small things.
Thank you, author. This essay made my day. It resonates with my thinking over the last few months. I tried to use AI at work, but most of the time I regretfully scrapped whatever it did and did the work on my own. There are so many points here I agree with. Delegating thinking to AI is the worst thing I can do to my career. AI is, at best, a mediocre text generator.
So funny to read how people attack the author with criticism unrelated to the essay's message.
The worst thing for me is that I am actually good at LLM-based coding.
My coworkers who are in love with this new world are producing complete AI slop and still take ages to complete tasks. Meanwhile, I can finally play to my strengths, as I actually know software architecture, can ask the LLM to consider important corner cases, and so on.
Plus, I am naturally good at context management. Being neurodivergent has given me decades of practice in working with entities that have a different way of thinking than my own. I have more mechanical empathy for the LLM because I don't confuse it for a human. My coworkers, meanwhile, get super frustrated that the LLM cannot read their minds.
That said, LLMs are getting better. My advantage will not last. And the more AI slop gets produced the more we need LLMs to cope with all the AI slop in our code bases. A vicious cycle. No one will actually know what the code does. Soon my job will mostly consist of praying to the machine gods.
It seems to me that someone like you, seen from the outside (e.g. from a code-reviewing colleague), simply appears to be getting more productive, with no drop in quality. Maybe some stylistic shifts.
I don't think anyone is complaining about that too much. I wonder how many people there are like you, where we don't get much data. If people don't complain about it, we generally don't hear about it, because they're just quietly moving on with their work.
Not to be confused with the AI hypesters who are loudly touting the benefits with dubious claims, of course (:
I think I also fit into this category. Minor to medium productivity boost and maybe some stylistic evolving, but largely no complaints because it's just another tool I use sometimes.
A Russell conjugation: my LLM-based coding output, your Claude throwaway code, his complete AI slop.
Oh, first time hearing that term. Thank you, I love it!
Though I don't think this is at play here. Maybe a bit, but seeing how my coworkers prompt, there is an objective difference. I will spend half an hour writing a good prompt and revise the implementation plan with the LLM multiple times before I allow it to even start doing anything, while my coworkers just write "fix this" and wonder why the stupid AI can't read their minds.
I am producing AI slop as well, just hopefully a bit less. Obviously hand crafted code is still much better but my boss wants me to use "AI" so I do as I am told.
> One could only wonder why they became a programmer in the first place, given their seeming disinterest in coding.
To solve problems. Coding is the means to an end, not the end itself.
> careful configuration of our editor, tinkering with dot files, and dev environments
That may be fun for you, but it doesn’t add value. It’s accidental complexity that I am happy to delegate.
Configuring editors, dot files, and dev environments consistently adds value by giving you familiarity with your working environment, honing your skills with your tools, and creating a more productive space tailored to your needs.
Who else becomes the go-to person for modifying build scripts?
The number of people I know who have no idea how to work with Git after decades in the field using it is pretty amazing. It's not helpful for everyone else when you're the one they're delegating their merge-conflict bullshit to because they've never bothered to learn anything about the tools they're using.
Have you considered that the problem is with Git and not the users?
How dumbed down does everything need to be? Git has warts for sure, but this whole "ideas guy with no actual understanding of anything" approach is how you get trainwrecks. There is no free lunch; you're going to pay one way or another for not understanding the tools of the craft, and not everything can be made ridiculously simple.
But it's not that people don't grasp the concept of merge conflicts, it's just that the UX of git is bad.
It's pretty great if you understand how to do resets, interactive rebases, understand the differences between merges and rebases, keep your commit history fairly clean, and just work with the tool. I haven't had a problem with Git since I spent a day going through the git book something like 10 years ago.
Meanwhile this is in a discussion about tools which people spend incalculable amounts of hours tuning, for reference. The number of articles on Hacker News about how people have tuned their LLM setups is... grand to say the least.
What about any tool, language, library, or codebase that is unnecessarily complex? Should we never bother to put in the effort to learn to use them? It doesn't mean they are without value to us as programmers. For better or worse, the hallmark of many good programmers I've met is a much higher than average tolerance for sitting down and just figuring out how something computer-related works instead of giving up and routing around it.
Nah.
Maybe Git is too complicated for hobby users, because it has a steep learning curve. But after two weeks of using it you know enough to handle things, so it shouldn't be a problem in any professional environment.
Careful with the “doesn’t add value” talk. If you follow it far enough to its logical end, you get to “Existence doesn’t add value”
That’s the point lol.
I think the author makes a decent point with regard to "problem solving" and better tools, and how LLMs somehow feel different. Fortran is a better tool, but you can still reproducibly trace things back to assembly code through the compiler.
LLMs feel like a non-deterministic compiler that transforms English into code of some sort.
> Coding is the means to an end, not the end itself.
> That may be fun for you, but it doesn't add value
I'm not disagreeing with you per se, but those statements are subjective, not an objective truth. Lots of people fundamentally enjoy the process of coding, and would keep doing it even in a hypothetical world with no problems left to solve, or if they had UBI.
The point of most jobs in the world is to "solve problems". So why did you pick software over those?
Why would someone who likes solving problems choose a very lucrative career path solving problems… hmmm
You can also solve problems as a local handyman but that doesn’t pad the 401K quite as well as a career in software.
I feel like there are a lot of tech-fetishists right now on the "if you don't deeply love to write code then just leave!" train, without somehow realizing that most of us have our jobs because we need to pay bills, not because it's our burning passion.
It's because there is a significant number of us for whom tinkering with and building shit is basically a compulsion. And software development is vastly more available, and quicker to iterate and thus more satisfying, than any other tinkering discipline. It's probably related to whatever drives some people to make art, the only difference being that the market has decided the tinkerers are worth a hell of a lot more.
For evidence towards the compulsion argument, look at the existence of FOSS software. Or videogame modding. Or all the other freely available software in existence. None of that is made by people who made the rational decision of "software development is a lucrative field that will pay me a comfortable salary, thus I should study software development". It's all made by people for whom there is no alternative but to build.
> I feel like there are a lot of tech-fetishists right now on the "if you don't deeply love to write code then just leave!" train, without somehow realizing that most of us have our jobs because we need to pay bills, not because it's our burning passion.
I would claim that I love coding quite a lot. The problem is rather that my bosses and colleagues don't care about what I love about it. Implementing tasks fast with shitty code is what gets appreciated; treating the fact that tasks are easy to implement and the code is really fast as strong evidence that the abstractions were well-chosen is not.
Thus, I believe that people who just do it for the money have it easier in the "programming industry" than programmers who really love programming, who are a big annoyance to managers.
I therefore really wonder why companies talk all the time about "love for programming" instead of "love for paying the bills" and "love for implementing tasks fast with shitty code", which would give them people who are a much better culture fit for their real organizational processes.
Very level-headed comment. I'm one of those who sees programming as a means to an end and nothing else.
If I order something to be delivered, I don't care what model of car the delivery company uses. Much less what kind of settings they have for the carburetor needles or what kind of oil they're using. Sure, somebody somewhere might have to care about this.
That's also how people like me see programming. If the code delivers what we need, then great. Leave it be like that. There are more interesting problems to solve, no need to mess with a solution which is working well.
The thing is, most times you are indeed buying the car that is going to make the delivery. And it's going to live in your garage. And if you're not careful, one day it will drive itself off a cliff, stall in the middle of a 10-hour drive, or you'll get robbed by individuals hiding in the trunk.
People who realize this care about their oil type and what tires they put on. People who do not will pay the price later, when that crash does happen and they don't know how to recover; cue the war room, etc...
Even if you're not dogfooding your own software, if you do not take care of it properly, the cost of changes will climb up.
> Even if you're not dogfooding your own software, if you do not take care of it properly, the cost of changes will climb up.
How do you mean? If the software works, then it's done. There is no maintenance and it will continue working like that for decades. It doesn't have corrosion and moving parts like a car. Businesses make sure not to touch it or the systems it is depending on.
> "...without somehow realizing that most of us have our jobs because we need to pay bills..."
Oh, I wouldn't say that. The hacker culture of the 1970s from which the word hacker originated often poked fun at incurious corporate programmers and IIRC even Edsger Dijkstra wrote a fair bit of acerbic comments about them and their disinterest in the craft and science of computing.
Well, most of them (the hackers from the 70s) probably did do it solely for the love of the game.
We’re 50 years past that now. We’re in the era of boot camps. I feel semi confident saying “most of us” meaning the current developer work force are here for well paying jobs.
Don’t get me wrong I like software development! I enjoy my work. And I think I’d probably like it better than most things I’d otherwise be doing.
But what I’ve been getting at is that I enjoy it for the solving problems part. The actual writing of code itself for me just happens to be the best way to enjoy problem solving while making good money that enables a comfortable life.
To put it another way: if being a SWE paid a poverty wage, I would not be living in a trailer doing this for my love of coding. I would go be a different kind of engineer.
You owe your cushy job and big paycheck entirely to those tech-fetishists that came before you.
Secondly, you are very blind if you don’t see that the AI making your job “easier” is close to replacing you entirely, if you don’t also have a deep understanding of the code produced. What’s to stop the Project Manager from vibe coding you out of the loop entirely?
The state of the industry, both short and medium term, is that you want to be the one doing the replacing rather than the one being replaced. Not great, but this is where we are. If you are, say, an SRE, there are myriad companies working hard to eliminate SREs, but they need experts to set shit up so that SREs are not needed. The same thing will cascade to other tech work, some faster than others. Career-wise, I think it is wise now to position yourself as one who knows how to set shit up for the "great replacement".
Yes we are rapidly moving towards a time where bullshitting will be more valued than deep understanding and problem solving. Both LLMs and the broader culture are pushing in that direction.
We all owe every part of everything to those who’ve come before us. That goes without saying, really.
> Secondly, you are very blind if you don’t see that the AI making your job “easier” is close to replacing you entirely, if you don’t also have a deep understanding of the code produced.
Brother don’t patronize me. I’m a senior engineer I’m not yeeting vibe code I don’t understand into prod.
I also understand the possibility of all of this potentially devaluing my labor or even wholesale taking my job.
What would you like me to do about that? Is me refusing to use the tools going to change that possibility?
Have yet to hear what else we should be doing about this. The hackernews answer appears to be some combination of petulance + burying head in the sand.
It’s simpler than that.
It’s more of a funeral, collective expression of grievance of a great, painful loss. An obituary for a glorious, short time in history where it was possible to combine a specific kind of intelligence, creativity, discipline, passion and values and be well compensated for it. A time when the ability to solve problems and solve them well had value. Not just being better at taking credit than other people.
It was wonderful.
I know you don’t care. So just go to some other forum where you don’t have to endure the whining of us who have lost something that was important to us.
It’s not about not caring. It’s about accepting reality.
I get it, but fundamentally this is a forum discussing technology, and AI is part of that. Especially as it relates to software engineering.
I come here to learn, discuss, and frankly, to hang onto a good life as long as I can have it.
The collective whinging in every AI topic is both annoying and self-defeating.
At 47, I am an older guy already. But in my generation, people who went on to be programmers usually started tinkering with code at ~ 11 y.o. (back then on ZX Spectrum and similar cheap beasts available in freshly post-Communist Europe) out of interest and passion, not because of "I want to build a lucrative career".
(Given how massively widespread piracy was back then, programming looked rather like a good way to do hard work for free.)
Money matters, but coders who were drawn into the field purely by money and are personally detached from the substance of the job are an unknown species to me.
"You can also solve problems as a local handyman"
That is NOT the same sort of talent. My fingers are clumsy; my mind is not.
Hard agree, I am 51 and all of this resonates true with me except…
> That is NOT the same sort of talent. My fingers are clumsy; my mind is not.
if handyman work was paying $600/hr your fingers would un-clums themselves reaaaaaaly fast :)
Handyman work can pay very very well for those who are good at it
> if handyman work was paying $600/hr your fingers would un-clums themselves reaaaaaaly fast
I don't believe that. When it comes to motoric skills, including dancing etc., I am probably in the lowest quintile of the population.
Of course, I could become somewhat better by spending crazy amounts of time on training, but I would still be non-competitive even in comparison with an average person.
OTOH I am pretty good at writing prose/commentary, even though it is not a particularly lucrative activity, to the degree of being a fairly well-known author in Czechia. My tenth book is just out.
Talents are weird and seem to have mind of their own. I never planned to become an author, but something inside just wanted out. My first book was published just a few days shy of my 40th birthday, so not a "youthful experiment" by any means.
Just chiming in to say that in this — my era of AI Anxiety — it’s pretty cool you found something new and interesting to apply your talents to at 40.
It feels like we’re all going to have to have a reinvention or two ahead of us.
Sounds like a mediocre developer. No respect for people like you.
It’s a good thing I haven’t needed your respect so far to have a pretty successful career as a software engineer.
A bit harsh off a single post. I like solving problems, not just software engineering problems, and I like writing code as a hobby, but I went into this field only for the high salary and benefits.
In fact, I usually hate writing code at my day job because it is boring work 20 out of 26 sprints.
>A bit harsh off a single post.
I don't think it is. Labeling passion and love for your work "tech fetishism" is spiritually bankrupt. Mind you, we're in general here not talking about people working in a mine to survive, which is a different story.
But people who do have a choice in their career, doing something they have no love for solely to add more zeros to their bank account? That is the fetish; that is someone who has himself become an automaton. It's no surprise they seem to take no issue with LLMs, because they're already living like one. Like how devoid of curiosity do you have to be to do something half your waking life that you don't appreciate if you're very likely someone who has the freedom to choose?
It definitely is. You are taking it way too far with your criticisms.
> Like how devoid of curiosity do you have to be to do something half your waking life that you don't appreciate if you're very likely someone who has the freedom to choose?
Do you understand work-life balance? I get paid to do the job, I satisfy my curiosities in my free-time.
> But people who do have a choice in their career, doing something they have no love for solely to add more zeros to their bank account?
Because I doubt finding a well paying job that you love is something that is achievable in our society, at least not for most people.
IMO, the real fetishization here is "work is something more than a way to get paid" that's a corporate propaganda I'm not falling for.
>Because I doubt finding a well paying job that you love is something that is achievable in our society,
Which is why I stressed twice, including in the part you chose to quote, that I am talking about people who can achieve that. If you have to take care of your sick grandmother, you don't need to feel addressed.
But if you did have the resources to choose a career, like many people who comment here, and you ended up a software developer completely devoid of passion for the craft, you're living like a Severance character. You don't get to blame the big evil corporations for a lack of dedication to a craft. You don't need to work for one to be a gainfully employed programmer, and even if you do and end up on a deadbeat project, you can still love what you do.
This complete indifference to what you produce, this complete alienation from work, voluntarily chosen, is a diseased attitude.
> The point of most jobs in the world is to "solve problems". So why did you pick software over those?
Because in a lot of jobs where you (have to) solve problems, the actual problems to solve are rather "political". So, if you are not good at office politics or you are not a good diplomat, software is often a much better choice.
The honest answer that applies to almost everyone here is that as a kid, they liked playing computer games and heard that the job pays well.
It's interesting, because to become a plumber, you pretty much need a plumber parent or a friend to get you interested in the trade and show you the ropes. Meanwhile, software engineering is closer to the universal childhood dream of "I want to become an astronaut" or "I want to be a pop star", except more attainable. It's very commoditized by now, so if you're looking for that old-school hacker ethos, you're going to be disappointed.
I think you're grossly underestimating the number of people here who fell into software development because it's one of the best outlets for "the knack" in existence. Sure, this site is split between the "tech-bro entrepreneur"-types and developers, and there are plenty of developers who got into this for the cash, but in my experience about a quarter of developers (so maybe 10-15% of users on this site) got into this profession due to getting into programming because it fed an innate need to tinker, and then after they spent a ton of time on it discovered that it was the best way to pay the bills available to them.
I got stupidly lucky that one of my hobbies as an avid indoorsman was not only valued by the private sector but also happened to pay well. This career was literally the only thing that saved me from a life of poverty.
Yep, and the younger people like us growing up now are just fucked.
Don’t worry, once you’re no longer needed you’ll get to experience that life of poverty you missed out.
Nah, I've reached the point where I'll be just fine. Don't worry about me.
> but it doesn’t add value
Sad to see people willingly reduce themselves to cogs inside the business machine.
You can spend as much time as you want on "configuration of our editor, tinkering with dot files, and dev environments" and otherwise honing your craft; the business machine will still look at you as a cog.
May seem depressing, but the bright side is that you as an individual are then free to find joy in your work wherever you can find it... whether its in delivering high-quality code, or just collecting a paycheck.
> To solve problems. Coding is the means to an end, not the end itself.
solving problems is an outcome of programming, not the purpose of programming
These are my thoughts exactly. Whenever I use agents to assist me in creating a simple program for myself, I carefully guide them through everything I want created, usually writing pages and pages of detailed plaintext instructions and specifications for the backend of things; I then modify the result and design a user interface.
I very much enjoy the end product and I also enjoy designing (not necessarily programming) a program that fits my needs, but rarely implementing, as I have issues focusing on things.
A chef who sharpens his knives should stop because it doesn't add value
A contractor who prefers a specific brand of tool is wrong because the tool is a means to an end
This is what you sound like. Just because you don't understand the value of a craftsman picking and maintaining their tools doesn't mean the value isn't real.
Yes, but the point of being a chef is the food, not the knives. If there's a better way to prepare food than a knife, but you refuse to change, are you really a chef? Or are you a chef knife enthusiast?
Ok then throw a frozen meal from the supermarket into the microwave and be done with it.
Outcome is really the same, right? Why waste all that effort on a deep understanding of how to prepare food?
The point is, a lot of us aren't convinced reviewing 8 meals made by agents in parallel _is_ producing better food.
And it also seems exceedingly wasteful to boot.
I don't think that's really the point of this post; it's all about how LLMs are destroying our craft (ie, "I really like using knives!"), not really about whether the food is better.
I think the real problem is that it's actually increasingly difficult to defend the artisanal "no-AI" approach. I say this as a prior staff-level engineer at a big tech company who has spent the last six months growing my SaaS to ~$100k in ARR, and it never could have happened without AI. I like the kind of coding the OP is talking about too, but ultimately I'm getting paid to solve a problem for my customers. Getting too attached to the knives is missing the point.
Call me crazy, but my guess is that it may not have happened without the decade of experience it took you to reach a staff-level engineering position at a big tech company, experience that gave you the skills required to properly review the AI code you're producing.
Totally true. But that's also a different point than "But I love using my knives!"
A closer analogy would be a chef who chooses to have a robot cut his tomatoes. If the robot did it perfect every time I'm sure he would use the robot. If the robot mushed the tomatoes some of the time, would he spend time carefully inspecting the tomatoes? or would he just cut them himself?
Even if the robot did it perfectly, you'd still have posts like these lamenting the loss of the craft of cutting tomatoes. And they're not wrong!
I guess I don't understand posts like this IF you think you can do it better without LLMs. I mean, if using AI makes you miserable because you love the craft of programming, AND you think using AI is a net loss, then just...don't use it?
But I think the problem here that all these posts are speaking to is that it's really hard to compete without using AI. And I sympathize, genuinely. But also...are we knife enthusiasts or chefs?
There are chefs but they are not us. Though it will upset many to hear it, what we are is fast food workers, assembling and reheating prepackaged stuff provided to us. Now a machine threatens to do the assembling and reheating for us, better and faster than we on average do.
The chefs coming up with recipes and food scientists doing the pre-packaging will do fine and are still needed. The people making the fast food machine will also do well for themselves. The rest of us fast food workers, well, not so much...
Absolutely true. In my case, I'm trying to run a restaurant, so this is all excellent news for me.
You'll be doing fine too, just doing other work.
And you can see it coming so there is plenty of time to prepare.
> The point of being a chef is the food, not the knives
They will never be able to understand this, unfortunately.
But what if the New Way to prepare food was to put a box into a microwave, wait 60 seconds, then hand it to the customer?
Sure the customer still gets fed but it's a far inferior product... And is that chef really cheffing?
This is a strawman. The point is that the original poster was going on about knives, forgetting that the final product is the actual thing that matters, not whatever tool is used to create it. In your example, if the food is inferior, then the food is inferior.
If that's your analogy, then shouldn't you be able to dominate the market by not using AI?
> coding is the means to an end
...
> doesn't add value
What about intrinsic value? So many programmers on HN seem to just want to be MBAs in their heart of hearts
Some of you have never been laid off and it shows.
Intrinsic value is great, where achievable. Companies do not care at all about intrinsic value. I take pride in my work and my craft to the extent I am allowed to, but the reality is that those of us who can't adapt to the business's desires will be made obsolete and cut loose, regardless of whatever values we hold.
I got a few paragraphs into this piece before rolling my eyes and putting it down.
I consider myself an engineer — a problem solver. Like you said, code is just the means to solve the problems put before me.
I’m just as content if solving the problem turns out to be a process change or user education instead of a code commit.
I have no fetish for my terminal window or IDE.
The issue is that a lot of “programmers” think bike-shedding is the essence of programming. Fifty years ago, they would have been the ones saying that not using punch cards takes away from the art of programming, and then proudly showing off multiple intricate hole punchers they designed for different scenarios.
Good problem solvers... solve problems. The technological environment will never devalue their skills. It’s only those who rest on their laurels who have this issue.
> LLMs seem like a nuke-it-from-orbit solution to the complexities of software. Rather than addressing the actual problems, we reached for something far more complex and nebulous to cure the symptoms.
The author overlooks a core motivation of AI here: to centralize the high-skill high-cost “creative” workers into just the companies that design AIs, so that every other business in the world can fire their creative workers and go back to having industrial cogs that do what they’re told instead of coming up with ‘improvements’ that impact profits. It’s not that the companies are reaching for something complex and nebulous. It’s that companies are being told “AI lets you eject your complex and nebulous creative workers”, which is a vast reduction in nearly everyone’s business complexity. Put in the terms of a classic story, “The Wizard of Oz”, no one bothers to look behind the curtain because everything is easier for them — and if there’s one constant across both people and corporations, it’s the willingness to disregard long-term concerns for short-term improvements so long as someone else has to pay the tradeoff.
I found this article really interesting. This is pretty much exactly how I feel about LLM programming.
I really enjoy programming and like the author said, it's my hobby.
On some level I kind of resent the fact that I don't really get to do my hobby for work any more. It's something fundamentally different now.
Full disclosure: I am old.
When I started programming for Corporate™ back in 1995, it was a wildly different career from what it has become. Say what you want about the lunatics running the asylum, but we liked it that way. Engineering knew their audience, knew the tech stack, knew what was going on in "the industry", and ultimately called the shots.
Your code was your private sandbox. Want to rewrite it every other release? Go for it. Like to put your curly braces on a new line? Like TABs (good for you)? Go for it. It's your code, you own it. (You break it, you fix it.)
No unit tests (we called that parameter checking). No code reviews; well, nothing formal. Often, time was spent in co-workers' offices talking over approaches and white-boarding APIs. Often, if a bug was discovered or known, you just fixed it. There may have been a formal process beginning, but to the lunatics, that was optional.
You can imagine how management felt — having to essentially just trust the devs to deliver.
In the end management won, of course.
When I am asked if I am sorry that I left Apple, I have to tell people, no. I miss working at Apple in the 90's, but that Apple was never coming back. And I hate to say it, but I suspect the industry itself will never return to those "cowboy coding" days. It was fun while it lasted.
Back when I started in the late 2000s you had much clearer lines around your career path and speciality.
There was a difference between a sysadmin and a programmer. Now, I'm expected to be my own sysadmin-ops guy while also delivering features. While I worked on my systems chops for fun on the side, I purposely avoided it on the work side; I don't usually enjoy how bad vendor documentation, training, etc. can be in the real world of Corporate America.
I started around the same time. No unit tests, but we did have code reviews because of ISO 9001 requirements. That meant printing out the diffs on the laser printer and corralling 3 people into a meeting room to pore over them and then have them literally sign off on the change. This was for an RTOS that ran big industrial controls in things like steel plants and offshore oil rigs.
Project management was a 40 foot Gantt chart printed out on laser printer paper and taped to the wall. The sweet sound of waterfall.
> And I hate to say it, but I suspect the industry itself will never return to those "cowboy coding" days. It was fun while it lasted.
I don't think the industry will return to it, but I suspect there will be isolated environments for cowboys. When I was at WhatsApp (2011-2019), we were pretty far on the cowboy side of the spectrum... although I suspect it's different now.
IMHO, what's appropriate depends on how expensive errors are to detect before production, and how expensive errors are when detected after production. I lean into reducing the cost to fix errors rather than trying to detect errors earlier. OTOH, I do try not to make embarrassing errors, so I try to test for things that are reasonable to test for.
100% agreed. It’s just full of business assholes and vibe coder script kiddies now. Everything has turned to shit.
Try game dev. It's still like that today.
Depends. I see the teams around me slowly being corralled like cattle, no longer doing the corralling. My own team is still chiefly cowboys but the writing is on the wall and as we grow younger we lose more and more footing in this battle.
It really is a higher-level language for coding, though. Not as precise as Fortran, but with far more upside. I imagine monks bemoaning the printing press that took away the joy of the perfectly handwritten bibles they made in solitude.
I absolutely loved this piece.
I also agree with comments on this thread stating that problem solving should be the focus and not the code.
However my view is that our ability to solve problems which require a specific type of deep thought will diminish over time as we allow for AI to do more of this type of thinking.
Purely asking for a feature is not “problem solving”.
I think you can enjoy both aspects: the problem solving and the craft. There will be people who agree that, from a rational perspective, solving the problem is of course what matters, but for whom personally the "fun" is gone. Generally, the people who identify themselves as "programmers," as the article does, are the people who enjoy problem solving/tinkering/building.
What if you want to be a better problem solver (in the tech domain)? Where should you focus your efforts? That's what is confusing to me. There is a massive war between the LLM optimists and pessimists. Whenever I personally use LLM tools, they are disappointing albeit still useful. The optimists tell me I should be learning how to prompt better, that I should be spending time learning about agentic patterns. The pessimists tell me that I should be focusing on fundamentals.
> I would love to read a study on why people so readily believe and trust in AI chatbots.
We associate expert authority with a) quick and b) broad answers. It's like when we're listening to a radio show and they patch in "Dr. So N. So," an expert in Whatever from Academia Forever U. They seem to know their stuff because a) they don't say "I don't know, let me get back to you after I've looked into that" and b) they can share a breadth of associated validations.
LLMs simulate this experience by giving broadish, confident answers very quickly. We have been trained by life's many experiences to trust these types of answers.
I think "Identity Crisis" is a bit over dramatic, but I for the most part agree with the sentiment. I have written something in the same vane, but still different enough that I would love to comment it but its just way more efficient to point to my post. I hope that is OK: https://handmadeoasis.com/ai-and-software-engineering-the-co...
I can think of few truer identity crises than having a craft you have spent years honing and perfecting automated away.
I fully agree with that statement, but I don't agree with the premise that that is what's happening currently.
It's the explicitly stated goal of several of the largest companies on the planet which put up a lot of money to try to reach that goal. And the progress over the past few years has been stunning.
I liked your emphasis on individual diversity, and an attendant need to explore, select, adapt, and integrate tooling. With associated self-awareness. Pushing that further, your "categories" seem more like exemplars/prototypes/archetypes/user-stories, helpful discussion points in a high-dimensional space of blended blobs. And as you illustrate, it branches not just on the individual, but also on what they are up to. And not just on work vs hobby, but on context and task.
It'd be neat to have a big user story catalog/map, which tracks what various services are able to help with.
I was a kid in NE43 instead of TFA's Building 26 across the street - with Lisp Machines and 1980s MIT AI's "Programmer's Apprentice" dreams. I years ago gave up on ever having a "this... doesn't suck" dev env, on being able to "dance code". We've had such a badly crippling research and industrial policy, and profession... "not in my lifetime" I thought. Knock on wood, I'm so happy for this chance at being wrong. And also, for "let's just imagine for a moment, ignoring the utterly absurd resources it would take to create, science education content that wasn't a wretched disaster... what might that look like?" - here too it's LLMs, or no chance at all.
That is actually a great idea, and I agree it would be very useful to have such a catalog/map!
I wonder, though, if the space is mature enough for such a map, or if it would become too generic to say anything meaningful.
Hi OP. "Conform or be cast out", ha. I read your article, then right after got an email announcing Rush tickets going on sale. Must be a sign I should go.
I forwarded your article to my son the dev, since your post captured the magic of being a programmer so well.
And yes Levy’s book Hackers is most excellent.
Subdivisions is my favourite song of all time and I thought about Rush as well while reading that line.
Getting rid of the programmer has always been the wet dream of managers, and LLMs are being sold as the solution.
Maybe it is
This comes up whenever _anything_ is automated: "this is the end of programming as a career!" I heard this about Rational Rose in the 90's, and Visual Basic in the 80's.
I don't think I'm sticking my head in the sand - an advanced enough intelligence could absolutely take over programming tasks - but I also think that such an intelligence would be able to take over _every_ thought-related task. And that may not be a bad thing! Although the nature of our economy would have to change quite a bit to accommodate it.
I might be wrong: Doug Hofstadter, who is way, way smarter than me, once predicted that no machine would ever beat a human at chess unless it was the type of machine that said "I'm bored of chess now, I would prefer to talk about poetry". Maybe coding can be distilled to a set of heuristics the way chess programs have (I don't think so, but maybe).
Whether we're right or wrong, there's not much we can do about it except continue to learn.
Visual Basic didn't exist in the 80's. First release was 1991.
Thanks for reminding me about Rational Rose though! That was a nostalgia trip
Great read. Unlike technologies of the past that automated away the dangerous/boring/repetitive/soul-sucking jobs, LLMs are an assault on our thinking.
Social media already reduced our attention spans to that of goldfish, open offices made any sort of deep meaningful work impossible.
I hope this madness dies before it devours us.
Probably too late.
> Creative puzzle-solving is left to the machines, and we become mere operators disassociated from our craft.
For me, at least, this has not been the case. If I leave the creative puzzle-solving to the machine, it's gonna get creative alright, and create me a mess to clean up. Whether this will be true in the future, hard to say. But, for now, I am happy to let the machines write all the React code I don't feel like writing while I think about other things.
Additionally, as an aside, I already don't think coding is always a craft. I think we want it to be one because it gives us the aura of craftspeople. We want to imagine ourselves as bent over a hunk of marble, carving a masterpiece in our own way, in our own time. And for some of us, that is true. For most programmers in human history, though, they were already slinging slop before anybody had coined the term. Where is the inherent dignity and human spirit on display in the internal admin tool at a second-tier insurance company? Certainly, there is business value there, but it doesn't require a Michelangelo to make something that takes in a PDF and spits out a slightly changed PDF.
Most code is already industrial code, which is precisely the opposite of code as craft. We are dissociated from the code we write; the company owns it, not us, which is by definition the opposite of a craftsman and the craft mode of production. I think AI is putting a finer, sharper point on this, but it was already there and has been since the beginning of the field.
To be honest I already reached that identity crisis even before LLMs.
Nowadays many enterprise projects have become placing SaaS products together, via low code/no code integrations.
A SaaS product for the CMS, another one for assets, another for ecommerce and payments, another for sending emails, another for marketing, some edge product for hosting the frontend, finally some no code tools to integrate everything, or some serverless code hosted somewhere.
Welcome to MACH architecture.
Agents now made this even less about programming, as the integrations can be orchestrated via agents, instead of low code/no code/serverless.
I'm in the opposite camp. Programming has never been fun to me, and LLMs are a godsend to deal with all the parts I don't care for. LLMs have accelerated my learning speed and productivity, and believe it or not, programming even started to become fun and engaging!
I will never, ever go back to the time before.
As an aside, I've been using copilot code review before handing off any of my code to colleagues. It's a bit pedantic, but it generally catches all the most stupid things I've done so that the final code review tends to be pretty smooth.
I hate to suggest that the fix to LLM slop is more LLMs, but in this case it's working for me. My coworkers also seem to appreciate the gesture.
I agree that LLMs are great for a cursory review, but crucially, when you ask copilot to review your code, you actually read and think about everything copilot tells you in the response. The biggest issues arise because people will blindly submit AI-generated code without reading or thinking about it.
It’s a fascinating idea, though, to invert the process and have devs develop and LLMs do the code reviews. It might be more productive in the long run.
This process has been affecting most of the world's workers for the past several centuries. Programming has received special treatment for the last few decades, and it's understandable that HN users would jump to protect their life investment, but that special treatment need not continue.
Hand-coding can continue, just like knitting co-exists with machine looms, but it need not ultimately maintain a grip on the software productive process.
It is better to come to terms with this reality sooner rather than later in my opinion.
> This process has been affecting most of the world's workers for the past several centuries.
It has also been responsible for predicting revolutions which then failed to materialize. 3D printing would make some kinds of manufacturing obsolete, computers would make about half the world's jobs obsolete, etc. etc.
Hand coding can be the knitting to the loom, or it can be industrialized plastic injection molding to 3D printing. How do you know? That distinction is not a detail--it's the whole point.
It's survivorship bias to only look at horses, cars, calculators, and whatever other real job market shifting technologies occurred in the past and assume that's how it always happens. You have to include all predictions which never panned out.
As human beings we just tend not to do that.
[EDIT: this being Pedantry News let me get ahead of an inevitable reply: 3D printing is used industrially, and it does have tremendous value. It enabled new ways of working, it grew the economy, and in some cases yes it even replaced processes which used to depend on injection molding. But by and large, the original predictions of "out with the old, in with the new" did not pan out. It was not the automobile to the horse and buggy. It was mostly additive, complementary, and turned out to have different use cases. That's the distinction.]
> Hand coding can be the knitting to the loom, or it can be industrialized plastic injection molding to 3D printing. How do you know? That distinction is not a detail--it's the whole point.
One could have made a reasonable remark in the past about how injection molding is dramatically faster than 3D printing (it applies material everywhere, all at once), scales better for large parts, et cetera. This isn't really true for what I'm calling hand-coding.
Obviously nothing about the future can be known for certain... but there are obvious trends that need not stop at software engineering.
The trend of mistaken hype predictions indeed won't stop at software engineering.
How would you formulate this verifiably? Wanna take it to longbets.org?
I think there is only a very narrow band where LLMs are good enough at producing software that "hand-coding" is genuinely dead but at the same time bad enough that (expensive) humans still need to be paid to be in the loop.
This is so funny to me.
Hand-coding is no longer "the future"?
Did an AI write your post or did you "hand write it"?
Code needs to be simple and maintainable and do what it needs to do. Autocomplete wasn't a huge time saver because writing code wasn't the bottleneck then, and it definitely is not the bottleneck now. How much you rely on an LLM won't necessarily change the quality or speed of what you produce. Especially if you pretend you're just doing "superior prompting with no hand coding involved".
LLMs are awesome but the IDE didn't replace the console text editor, even if it's popular.
> Code needs to be simple and maintainable and do what it needs to do.
And yet after 3 decades in the industry I can tell you this fantasy exists only on snarky HN comments.
> Hand-coding is no longer "the future"?
hand-coding is 100% not the future; there are teams already that absolutely do not hand-code anything anymore (I help with one of them that used to have 19 "hand-coders" :) ). The typing for sure will get phased out. It is quite insane that it took "AI" to make people realize how silly and wasteful it is to type characters into IDEs/editors. The sooner you see this clearly, the better it will be for your career.
> How much you rely on an LLM won't necessarily change the quality or speed of what you produce.
if it doesn't you need to spend more time and learn and learn and learn more. 4/6/8 terminals at a time doing all various things for you etc etc :)
I started writing code in BASIC on a beige box. My first code on Windows was a VB6 window that looked like the AOL login screen and used open email relays to send me passwords.
I've written a ton of code in my life, and while I've been a successful startup CTO, outside of that I've always stayed in IC-level roles (I'm in one right now, in addition to hobby coding): data structures and pipelines, keep it simple, all that stuff that makes a thing work and stay maintainable.
But here is the thing: writing code isn't my identity. Being a programmer, vim vs emacs, mechanical keyboards, RTFM noob, pure functions, serverless, leetcode, cargo culting, complexity merchants, resume-driven dev, early semantic CSS lunacy: these are things outside of me.
I have explored all of these things, had them be part of my life for better or worse, but they aren't who I am.
I am a guy born with a bunch of heart defects who is happy to be here and trying new stuff, I want to explore in space and abstraction through the short slice of time I've got.
I want to figure stuff out and make things and sometimes that's with a keyboard and sometimes that's with a hammer.
I think there are a lot of societal status issues (devs were mostly low social status until The Social Network came out) and personal identity issues.
I've seen that for 40 years: anything tied to a person's identity is basically a thing they can't be honest about, can't update their priors on, can't reason about.
And people who feel secure and appreciated don't give much grace to those who don't, a lot of callous people out there, in the dev community too.
I don't know why people are so fast to narrow the scope of who they are.
Humans emit meaning like stars emit photons.
The natural world would go on without us, but as far as we have empirically observed we make the maximally complex, multi modally coherent meaning of the universe.
We are each like a unique write head in the random walk of giving the universe meaning.
There are a ton of issues, from network resilience to maximizing the random meaning-generation walk, where AI and consolidation are extremely dangerous. I think, as far as new stuff in the pipeline goes, AI and artificial wombs carry the greatest risks of narrowing the scope of human discovery and unique meaning expansion to a catastrophic point.
But so many of these arguments are just post-hoc rationalizations to poorly justify what at root is this loss of self identity, we were always in the business of automating jobs out from under people, this is very weak tea and crocodile tears.
The simple fact is, all our tools should allow us to have materially more comfortable and free lives. The AI isn't the problem; it's the fact that devs didn't understand that tech is best when empowering people to think and connect better and have more freedom and self-determination with their time.
If that isn't happening, it's not the code's fault; it's the fault of the network architecture of our current human power structures.
Agree, and well said. There are no points for hard work, only results -- this is an extremely liberating principle when taken to the limit and we should be happy to say goodbye to an era of manual software-writing being the norm, even if it costs the ego of some guy who spent the last 20 years being told SWE made him a demi-god.
Some people code to talk and don't want anything said for them. That's okay. Photography and paintings landed in different places with different purposes.
But all of Programming isn't the same thing. We just need new names for different types of programmers. I'm sure there were farmers that lamented the advent of machines because of how it threatened their identity, their connection to the land, etc....
but I want to personally thank the farmers who just got after growing food for the rest of us.
It's honestly not that deep. If AI increases productivity, we should accept it. If it doesn't, then the hype will eventually fade out. In any case, having attachment to the craft is a bit cringe. Technological progress trumps any emotional attachment.
I think in a few years, we will realize that LLMs have impacted our lives in a deeply negative way. The relatively small improvements LLMs bring to my life will be vastly outweighed by the negatives.
If LLM abilities stagnate around the current level it's not even out of the question that LLMs will negatively impact productivity simply because of all of the AI slop we'll have to deal with.
> Creative puzzle-solving is left to the machines, and we become mere operators disassociated from our craft.
You could say that about programming languages in general. "Why are we leaving all the direct binary programming for the compilers?"
Truly, the ideas in this essay are reflected in this comment section.
It's like that trope of the little angel and demon sitting on the protagonist's shoulders.
"I can get more work done"
"But it's not proper work"
"Sometimes it doesn't matter if it's proper work, not everything is important"
"But you won't learn the tools"
"Tools are incidental"
"I feel like I'm not close to the craft"
"Your colleagues weren't really reading your PRs anyway"
"This isn't just another tool"
"This is just another tool"
And so on forever.
I'm starting to think that if you don't have both these opposing views swirling around in your mind, you haven't thought enough about it.
I believe this sentiment to be a mistake.
The IT world is waiting for a revolution. Only in order to blame that revolution for the mistakes of a few powerful people.
I would not be surprised if all this revolutionary sentiment is manufactured. That thing about "Luddites" (not a thing that will stick by the way), this nostalgic stuff, all of it.
We need to be much smarter than that and not fall for such obvious traps.
An identity is a target on your back. We don't need one. We don't need to unite to a cause, we're already amongst one of the most united kinds of workers there is, and we don't need a galvanizing identity to do it.
John von Neumann famously questioned the value of compilers. Eventually we get the keyboard kids who have dominated computing since the early 70's in some form or another, whether in a forward-thinking way like Dan Ingalls or in an idealistic way like the gcc/Free Software crowd. In parallel to this you have people like Laurel, Sutherland, and Nelson, who live in lateral-thinking land.
The real issue is that we've been in store for a big paradigm shift in how we interact with computers for decades at this point. Sketchpad let us do competent, constraints-based mathematics with images. Video games and the Logo language demonstrate the potential for programming using "kinetics." In the future we won't code with symbols; we'll dance our intent into and through the machine.
https://www.youtube.com/watch?v=6orsmFndx_o http://www.squeakland.org/tutorials/ https://vimeo.com/27344103
OK, but if you can't find out how to use new tools well, how good are you really as a craftsperson?
"We've always done it this way" is the path of calcification, not of a vibrant craft. And there are certainly many ways you can use LLMs to craft better things, without slop and vibecoding.
Programmer isn't a real thing; all these classes of people are made up. The biggest difference between an iPad Toddler and Dijkstra is that the toddler is much more efficient at programming.
Sure you can discover things that aren't intuitively obvious, and these things may be useful, but that's more scientist than anything to do with programming.

    programming + science = computer science
    programming + engineering = software engineering
    programming + iPad = interactive computing
    programming + AI = vibe coding

Don't equate programming with software engineering when they are clearly two distinct things. This article would more accurately be called the software engineers' identity crisis. Maybe some hobby engineers (programming + craft) might also be feeling this, depending on how many external tools they already rely on.

What's really shocking is how many software engineers claim to put in Herculean effort on their code, but ship it on top of (or adjacent to, if you have an API) "platforms" that could scarcely be less predictable. These platforms have to work very hard to build trust, but it's all meaningless because users are locked in anyway. When user abuse is rampant, people are going to look for a deus ex machina, and some slimy guy will be there to sell it to them.
When COBOL was born, some people said, "It's English! We won't need programmers anymore!"
When SQL was born, some people said, "It's English! We won't need programmers anymore!"
Now we have AI prompting, and some people are saying, "It's English! We won't need programmers anymore!"
Really?
The problem I have with this argument is that it actually is English this time.
COBOL and SQL aren't English, they're formal languages with keywords that look like English. LLMs work with informal language in a way that computers have never been able to before.
Say that to the prompt guys and their AGENT.md rules.
Formalism is way easier than whatever these guys are concocting. And true programmer bliss is live programming. Common programming is like writing sheet music and having someone else play it. Live programming is you at the instrument, tweaking each part.
Yes, natural languages are by nature ambiguous. Sometimes it's better to write a specification in code rather than in a natural language (JetBrains MPS, for example).
This is true.
But in faithful adherence to some kind of uncertainty principle, LLM prompts are also not a programming language, no matter if you turn down the temperature to zero and use a specialized coding model.
They can just use programming languages as their output.
On the other hand, the problem is exactly that it’s not a formal language.
This is also a strength. Formal languages struggle to work with concepts that cannot be precisely defined, which are especially common in the physical world.
e.g. it is difficult to write a traditional program to wash dishes, because how do you formally define a dish? You can only show examples of dishes and not-dishes. This is where informal language and neural networks shine.
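To put a toy sketch on that point (hypothetical names; assuming scikit-learn-style tooling, not any particular real system): the formal predicate is the part nobody can write down, while the learned version only needs labeled examples.

    # Toy contrast, not a real system. The rule-based predicate is the
    # part that can't actually be written.
    def is_dish_formal(obj) -> bool:
        # What rule captures "dish"? Shape? Material? Function? Every
        # definition admits counterexamples (a frisbee, a hubcap...),
        # so this function stays unwritable.
        raise NotImplementedError("no formal definition of 'dish'")

    # The statistical route sidesteps the definition entirely: show
    # examples and counterexamples, then fit a classifier.
    from sklearn.linear_model import LogisticRegression

    def train_is_dish(features, labels):
        # features: e.g. image embeddings; labels: 1 = dish, 0 = not
        clf = LogisticRegression(max_iter=1000)
        clf.fit(features, labels)
        return clf  # clf.predict(x) now decides dish-ness by example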
I can't agree more.
Every time they have been closer to being right.
The thing is... all those people were right. We no longer need the kinds of people we used to call programmers. There exists a new job, only semi-related, that now goes by the name programmer. I don't know how many of the original programming professionals managed to make the transition to this new profession.
I am a programmer. I don’t think LLMs will replace/wipe out software engineers.
The author sounds like a scribe meditating on the arrival of the printing press.
Whenever I see an em dash (—), I suspect the entire text was written by an AI.
The article itself is very skeptical of AI, so I highly doubt that's the case.
Also in the footer: "Everything on this website—emdash and all—is created by a human."
I'm seeing this reaction a lot from younger people (say, roughly under 25). And it's a shame this new suspicion has now translated into a prohibition on the use of dashes.
I use three hyphens. In my case, I picked it up from Knuth's TeX many years ago; it's a lexical notation which typesets to a proper em dash.
Three hyphens---it looks good! When I use three hyphens, it's like I dropped three fast rounds out of a magazine. It demands attention.
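For anyone curious, a minimal sketch of the convention (this is standard TeX ligature behavior, nothing project-specific):

    % In TeX/LaTeX source, runs of hyphens are ligatures:
    %   --   typesets as an en dash (ranges, e.g. pages 10--12)
    %   ---  typesets as an em dash
    \documentclass{article}
    \begin{document}
    See pages 10--12.               % en dash
    Three hyphens---it looks good!  % em dash
    \end{document}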
It's comical too, because the only reason AI uses em dashes is that they were so common before AI.
It's utterly uncommon in the kind of casual writing for which people are using AI; that's why it got noticed. Social media posts, blogs, ...
AI almost certainly picked it up mainly from typeset documents, like PDF papers.
It's also possible that some models have a tokenizing rule for recognizing faked-out em-dashes made of hyphens and turning them into real em-dash tokens.
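(That's purely speculative about any real model, to be clear, but the kind of normalization rule being imagined is trivial to write; a sketch:)

    import re

    def normalize_dashes(text: str) -> str:
        # Hypothetical pre-tokenization cleanup: collapse runs of two
        # or more hyphens between words into a true em dash (U+2014).
        return re.sub(r"(?<=\w)\s*-{2,}\s*(?=\w)", "\u2014", text)

    print(normalize_dashes("Three hyphens---it looks good!"))
    # -> Three hyphens\u2014it looks good!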
Not uncommon even on Hacker News: https://news.ycombinator.com/item?id=45071722
On my own (long abandoned) blog, about 20% of (public) posts seem to contain an em dash: https://shreevatsa.wordpress.com/?s=%E2%80%94 (going by 4 pages of search results for the em dash vs 21 pages in total).
Maybe because the em dash is not on most people's keyboards? It is not about the dash, but about the long em dash.
Ironically, I love using em dashes in my writing, but if I ever have to AI generate an email or summary or something, I will remove it for this exact reason.
Whenever I see these takes, I'm thinking of Idiocracy - a world built on very simple rules, like yours.
I published a book once (way before LLMs came along). My publisher insisted that I replace parenthetical inserts with em dashes. Humans do use them.
That says more about your lack of writing skills and understanding of grammar than AI.
That's simply not true, and pointlessly derogatory.
This article does not appear to be AI-written, but use of the emdash is undeniably correlated with AI writing. Your reasoning would only make sense if the emdash existed on keyboards. It's reasonable for even good writers to not know how or not care to do the extra keystrokes to type an emdash when they're just writing a blog post - that doesn't mean they have bad writing skills or don't understand grammar, as you have implied.
> That's simply not true, and pointlessly derogatory.
That same critique should first be aimed at the topmost comment, which has the same problem plus the added guilt of originating (A) a false dichotomy and (B) the derogatory tone that naturally colors later replies.
> It's reasonable for even good writers to not know how or not care
The text is true, but in context there's an implied fallacy: If X is "reasonable", it does not follow that Not-X is unreasonable.
More than enough (reasonable) real humans do add em-dashes when they write. When it comes to a long-form blog post—like this one submitted to HN—it's even more likely than usual!
> the extra keystrokes
Such as alt + numpad 0151 on Windows (0150 gives the shorter en dash), which has served me well when on that platform for... gosh, decades now.
> use of the emdash is undeniably correlated with AI writing
Where do you think the training data came from?
Pressing "-" and a space gets replaced by an emdash to me in LibreOffice. No extra keystrokes required.
I don't think the character is that uncommon in the output of slightly-sophisticated writers and is not hard to generate (e.g., on macOS pressing option-shift-minus generates an em-dash).
In fact, on macOS and iOS simply typing two dashes (--) gets autocorrected to an em dash. I used it heavily, which was a bit sloppy since it doesn't also insert the customary hair spaces around the em dash.
Incidentally, I turned this autocorrection off when people started associating em dashes with AI writing. I now leave them as manual double dashes--even less correct than before, but at least people are more likely to read my writing.
That's a silly take, just because they existed and were proper grammar before AI slop popularized them doesn't mean they're not statistically likely to indicate slop today, depending on the context.
What's sillier is people associating em-dashes with AI slop specifically because they are unsophisticated enough never to have learned how to use them as part of their writing, and assuming everyone else must be as poor of a writer as they are.
It's the literary equivalent of thinking someone must be a "hacker" because they have a Bash terminal open.
You're overthinking it. LLMs exploded the prevalence of em-dashes. That doesn't mean you should assume any instance of an em-dash means LLM content, but it's a reasonable heuristic at the moment.
> That doesn't mean you should assume any instance of an em-dash means LLM content
No, it doesn't. But people are putting that out there, people are getting accused of using AI because they know how to use em dashes properly, and this is dumb.
Referring to an orthographic construct as grammar is not a good indication that you understand what grammar is.
People have long talked about how reading code is far more important than writing code when working as a professional SWE. LLMs have only increased the relative importance of code review. If you're not doing a detailed code review of every line your LLM generates (just like you should have always been doing while reviewing human-generated code), you're doing a bad job. Sure, it's less fun, but that's not the point. You're a professional.