> I find it particularly disillusioning to realize how deep the LLM brainworm is able to eat itself even into progressive hacker circles.
That's the thing, hacker circles didn't always have this 'progressive' luddite mentality. This is the culture that replaced hacker culture.
I don't like AI, generally. I am skeptical of corporate influence, I doubt AI 2027 and so-called 'AGI'. I'm certain we'll be "five years away" from superintelligence for the forseeable future. All that said, the actual workday is absolutely filled with busy work that no one really wants to do, and the refusal of a loud minority to engage with that fact is what's leading to this. It's why people can't post a meme, quote, article, whatever could be interpreted (very often, falsely) as AI-generated in a public channel, or ask a chatbot to explain a hand-drawn image without the off chance that they get an earful from one of these 'progressive' people. These people bring way more toxicity to daily life than who they wage their campaigns against.
> This is the culture that replaced hacker culture.
Somewhere along the lines of "everybody can code," we threw out the values and aesthetics that attracted people in the first place. What began as a rejection of externally imposed values devolved into a mouthpiece of the current powers and principalities.
This is evidenced by the new set of hacker values being almost purely performative when compared against the old set. The tension between money and what you make has been boiled away completely. We lean much more heavily on where someone has worked ("ex-Google") vs their tech chops, which (like management), have given up on trying to actually evaluate. We routinely devalue craftsmanship because it doesn't bow down to almighty Business Impact.
We sold out the culture, which paved the way for it to be hollowed out by LLMs.
There is a way out: we need to create a culture that values craftmanship and dignifies work done by developers. We need to talk seriously and plainly about the spiritual and existential damage done by LLMs. We need to stop being complicit in propagating that noxious cloud of inevitability and nihilism that is choking our culture. We need to call out the bullshit and extended psyops ("all software jobs are going away!") that have gone on for the past 2-3 years, and mock it ruthlessly: despite hundreds of billions of dollars, it hasn't fully delivered on its promises, and investors are starting to be a bit skeptical.
"There is a way out: we need to create a culture that values craftmanship and dignifies work done by developers. We need to talk seriously and plainly about the spiritual and existential damage done by LLMs"
This is the exact sentiment when said about some other profession or craft, countless people elsewhere and on HN have noted that it's neither productive not wise to be so precious about a task that evolved as a necessity into the ritualized, reified, pedestal-putting that prevents progress. It conflates process with every single other thing about whatever is being spoken about.
Also: Complaining that a new technology bottlenecked by lack of infrastructure, pushback from people with your mindset, poorly understood in its best use because the people who aren't of your mindset are still figuring out and creating the basic tooling we currently lack?
That is a failure of basic observation. A failure to see the thing you don't like because you don't like and decide not to look. Will you like it if you look? I don't know, sounds like your mind is made up, or you might find good reasons why you should maintain your stance. In the later case, you'd be able to make a solid contribution to the discussion.
I'm firmly in the “don't want to use it; if you want to, feel free, but stop nagging me to” camp.
Oh, and the “I'm not accepting 'the AI did it' as an excuse for failures” camp. Just like outsourcing to other humans: you chose the tool(s), you are responsible for verifying the output.
I got into programming and kicking infrastructure because I'm the sort of sad git who likes the details, and I'm not about to let some automaton steal my fun and turn me into its glorified QA service!
I'd rather go serve tables or stack shelves, heck I've been saying I need a good long sabbatical from tech for a few years now… And before people chime in with “but that would mean dropping back to minimum wage”: if LLMs mean almost everybody can program, then programming will pretty soon be a minimum wage job anyway, and I'll just be choosing how I earn that minimum (and perhaps reclaiming tinkering with tech as the hobby it was when I was far younger).
Now this, putting aside my thoughts above, i find a compelling argument. You just don’t want to. I think that should go along with a reasonable understanding of what a person is choosing to not use, but I’ll presume you have that.
Then? Sure, the frustrating part is to see someone making that choice tell other people that theirs is invalid, especially when we don’t know what the scene will look like when the dust settles.
There’s no reason to think there wouldn’t be room for “pure code” folks. I use the camera comparison— I fully recognize it doesn’t map in all respect to this. But the idea that painters should have given up paint?
There were in fact people at the time who said, “Painting is dead!”. Gustav Flaubert, famous author, said painting was obsolete. Paul Delaroche Actually said it was dead. Idiots. Amazingly talented and accomplished, but short sighted, idiots. Well like be laughing at some amazing and talented people making such statement about code today in the same light.
Code as art? Well, two things: 1) LLM’s have tremendous difficulty parsing very dense syntax, and then addressing the different pieces and branching ideas. Even now. I’m guessing this transfers to code that must be compact, embedded, and optimized to a precision such that sufficient training data, generalizable to the task with all the different architectures of microcontrollers and embedded systems… not yet. My recommendation to coders who want to look for areas where AI will be unsuitable? There’s plenty of room at the bottom. Career has never taken me there, but the most fun I’ve had coding has been homebrew microcontrollers.
2) code as art. Not code to produce art, or not something separable from the code that created it. Think Thing minor things from the past like the obfuscated C challenges. Much of that older hacker ethos is fundamentally an artistic mindset. Art has a business model, some enterprising person aught to crack the code of coding code into a recognized art form where aesthetic is the utility.
I don’t even mean the visual code, but that is viable: Don’t many coders enjoy the visual aesthetic of source code, neatly formatted, colored to perfect contrasts between types etc? I doubt that’s the limit of what could be visually interesting, something that still runs. Small audience for it sure— same with most art.
Doesn’t matter, I doubt that will be something masses of coders turn to, but my point is simply that there are options there are options that involve continuing the “craft” aspects you enjoy, whether my napkin doodle of an idea above holds or not. The option, for many, may simply not include keeping the current trajectory of their career. Things change: not many professional coders that began at 20 in 1990 have been able— or willing— to stay in the narrow area they began in. I knew some as a kid that I still know, some that managed to stay on that same path. He’s a true craftsman at COBOL. When I was a bit older in one of my first jobs he helped me learn my way around a legacy VMS cluster. Such things persist, reduced in proportion to the rest is all. But that is an aspect of what’s happening today.
"progress" is doing a lot of work here. Progress in what sense, and for whom? The jury is still out on whether LLMs even increase productivity (which is not the same as progress), and I say this as a user of LLMs.
Man, there is something true in what he is saying though. Can't you see it? I like the idea of some of this technology. I think its cool you can use natural language to create things. I think there is real potential in using these tools in certain context, but the way in which these tools got introduced, no transparency, how its being used to shape thought, the over-reliance on it and how its use to take away our humanity is a real concern.
If this tech was designed in an open way and not put under paywalls and used to develop models that are being used to take away peoples power, maybe I'd think differently. But right now its being promoted by the worst of the worst, and nobody is talking about that.
Responding to and enumerating, in this case, the viewpoint of someone. It's the general process by which discussions take place and progress.
If the thread were about 1) the current problems and approaches AI alignment, 2) the poorly understood mechanisms of hallucination, 3a) the mindset the doesn't see the conflict whey they say "don't anthropomorphize" but runs off to create a pavlovian playground in post-training, 3b) the mindsets that do much the reverse and how both these are dangerous and harmful, 4) the poorly understood trade off of sparse inference optimizations. But it's not, so I hold those in reserve.
> we need to create a culture that values craftmanship and dignifies work done by developers.
Mostly I agree with you. But there's a large group of people who are way too contemptuous of craftsmen using AI. We need to push back against this arrogant attitude. Just as we shouldn't be contemptuous of a woodworking craftsman using a table saw.
I've been programming for 20 years and GPT-4 (the one from early 2023) does it better than me.
I'm the guy other programmers I know ask for advice.
I think your metaphor might be a little uncharitable :)
For straightforward stuff, they can handle it.
For stuff that isn't straightforward, they've been trained on pattern matching some nontrivial subset of all human writing. So chances are they'll say, "oh, in this situation you need an X!", because the long tail is, mostly, where they grew up.
--
To really drive the point home... it's easy to laugh at the AI clocks.[0] But I invite you, dear reader, to give it a try! Try making one of those clocks! Measure how long it takes you, how many bugs you write. And how well you'd do it if you only had one shot, and/or weren't allowed to look at the output! (Nor Google anything, for that matter...)
I have tried it, and it was a humbling experience.
Now tell the AI to distill a bunch of user goals into a living system which has to evolve over time, integrate with other systems, etc etc. And deliver and support that system
I use Claude code every day and it is a slam dunk for situations like the one above, fiddly UIs and the like. Seriously , some of the best money I spend. But it is not good at more abstract stuff. Still a massive time saver for me and does effectively do a lot of work that would have gotten farmed out to junior engineers.
Maybe this will change in a few years and I'll have to become a potato farmer. I'm not going to get into predictions. But to act like it can do what an engineer with 20 years of experience can do means the AI brain worm got you or it says something about your abilities.
right, but this is akin to arguing why the table saw also does not do x/y/z — I don't know why we only complain about AI and how it does NOT do everything well yet.
Maybe it's expectations set by all the AI companies, idk, but this kind of mentality seems very particular to AI products and nothing else.
I'm OK pondering the right use for the tool for as long as it'll take for the dust to settle. And I'm OK too trying some of it myself. What I resent is the pervasive request/pressure to use it everywhere right now, or 'be left out'.
My biggest gripe with the hype, as there's so much talk of craftmanship here, is: most programmers I've met hate doing code reviews and a good proportion prefer rewriting to reading and understanding other people's code. Now suddenly everyone is to be a prompter and astute reviewer of a flood of code they didn't write and now that you have the tool you should be faster faster faster or there's a problem with you.
I'm not complaining about it, I said in my post that it's a huge time saver. It's here to stay, and that's pretty clear to see. It has mostly automated away the need for junior engineers, which just 5 years ago would have been a very unexpected outcome, but it's kind of the reality now.
All that being said:
There's a segment of the software eng population that has their heads in the sand about it and the argument basically boils down to "AI bad". Those people are in trouble because they are also the people who insist on a whole committee meeting and trail of design documents to change the color of a button on a website that sells shoes. Most of their actual hard skills are pretty easy to outsource to an AI.
There's also a techbro segment of the population, who are selling snake oil about AGI being imminent, so fire your whole team and hire me in order to outsource your entire product to an army of AI agents. Their thoughts basically boil down to "I'm a grifter, and I smell money". Nevermind the fact that the outcome of such a program would be a smoldering tire fire, they'll be onto the next grift by then.
As with literally everything, there are loud, crazy people on either side and the truth is in the middle somewhere.
AI doesn’t program better than me yet. It can do some things better than me and I use it for that but it has no taste and is way too willing to write a ton of code. What is great about it compared to an actual junior is if i find out it did something stupid it will redo the work super fast and without getting sad
Too willing to write a ton of code - this is absolutely one of the things that drives me nuts. I ask it to write me a stub implementation and it goes and makes up all the details of how it works, 99% of which is totally wrong. I tell it to rename a file and add a single header line, and it does that - but throws away everything after line 400. Just unreliable and headache-inducing.
That's because there's nothing "craftsman" about using AI to do stuff for you. Someone who uses AI to write their programs isn't the equivalent of a carpenter using a table saw, they are the equivalent of a carpenter who subcontracts the piece out to someone else. And we wouldn't show respect to the latter person either.
But you wouldn't call them a craftsperson because they didn't do any craft other than "be a manager". Reviewing work is not on the same plane as actually creating something.
Simply put most industries started moving away from craftsmanship starting in the late 1700s to the mid 1900s. Craftsmanship does make a few nice things but it doesn't scale. Mass production lead to most people actually having stuff and the general condition of humanity improving greatly.
Software did kind of get a cheat code here though, we can 'craft' software and then endlessly copy it without the restrictions of physical objects. With all that said, software is rarely crafted well anyway. HN has an air about it that software developers are the craftsman of their gilded age, but most software projects fail terribly and waste huge amounts of money.
Does Steve Jobs deserve any respect for building the iPhone then? What is this "actually creating"? I'm sure he wasn't the one to do any of the "actually creating" and yet, there's no doubt in my mind that he deserves credit for the iPhone existing and changing the world.
Nothing craftsman? The detail required to setup a complex image gen pipeline to produce something the has the consistent style, composition, placement, etc, and quite a bit more-- for things that will go into production and need a repeatable pipeline-- it's huge. Take every bit as much creative vision.
Taking just images, consider AI merely a different image capture mechanism, like the camera is vs. painting. (You could copy.paste many critiques about this sort of ai and just replace it with "camera") Sure it's more accessible to a non professional, in AI's case much more so than cameras wear to years of learning painting. But there's a world of difference between what most people do in a prompt online and how professionals integrating it into their workflow are doing. Are such things "art"? That's not a productive question, mostly, but there's this: when it occurs, it has every bit as much intention, purpose and from a human behind it as that which people complain is lacking, but are referring to the one-shot prompt process in their mind when they do.
I'm no fan of "AI" but I think it could be argued that if we're sticking to the metaphor, the carpenter can pick up the phone and subcontract out work to the lowest bidder, but perhaps that "work" doesn't actually require high craftsmanship. Or we could make the comparison that developers building systems of parts need to know how they all fit together, not that they built each part themselves, i.e., the carpenter can buy precut lumber rather than having to cut it all out of a huge trunk themselves.
I'm not implying a hierarchy of value or status here, btw. And the point about difficulty is interesting too. I did manual labor and it was much harder than programming, as you might expect!
You can certainly outsource "up", in terms of skill. That's just how business works, and life... I called a plumber not so long ago! And almost everyone outsources their health...
It's very telling when someone invokes this comparison..I see it fairly often. It implies there is this hirearchy of skill/talent between the "architect" and the "bricklayer" such that any architect could be a bricklayer but a bricklayer couldn't be an architect. The conceit is telling.
Almost every bit of work I've hired people to do has been through an intermediary of some sort. Usually one with "contractor" or "engineer" as a title. They are the ones who can organize others, have connections, understand talent, plan and keep schedules, recognize quality, and can identify and troubleshoot problems. They may also be craftsmen, or have once been, but the work is not necessarily their craft. If you want anything project-scoped, you have a team, there is someone in a leadership role (even if informally), someone handling the money, etc. Craftsmanship may or may not happen within that framework, they are somewhat orthogonal concerns, and I don't see any reason to disrespect the people that make room for it to happen.
Of course you can also get useless intermediaries, which may be more akin to vibe coding. Not entirely without merit, but the human in the loop is providing questionable value. I think this is the exception rather than the norm.
a) Nothing about letting AI do grunt work for you is "not being a craftsman".
b) Things are subcontracted all the time. We don't usually disrespect people for that.
No, the idea is that such a CNC saw shouldn't need an operator at all. To the extent it still does, the operator doesn't even need to be in the same town, much less the same building.
Good or bad, converting craft work to production work is not making the craft worker more productive, it's eliminating the craft worker.
The unskilled operator's position is also precarious, as you point out, but while it lasts, it's a different and (arguably) less satisfying form of work.
The LLM is not a table saw that makes a carpenter faster, it's an automatic machine that makes an owner's capital more efficient.
>Somewhere along the lines of "everybody can code," we threw out the values and aesthetics that attracted people in the first place.
At some point people started universally accepting the idea that any sort of gatekeeping was a bad thing.I think by now people are starting to realize that this was a flawed idea. (at best, gatekeeping is not a pure negative; it's situational) But, despite coming to realize this I think parts of our culture still maintain this as a default value. "If more people can code, that's a _good_ thing!" Are we 100% sure that's true? Are there _no_ downsides? Even if it's a net positive, we should be able to have some discussion about the downsides as well.
Your point is that the hacker ethos involved ... Fewer people being excited about programming? I don't think we experienced this on the same planet.
Web 1.0 was full of weirdos doing cool weird stuff for the pure joy of discovery. That's the ethos we need back, and it's not incompatible with AI. The wrong turn we took was letting business overtake joy. That's a decision we can undo today by opting out of that whole ecosystem.
This is because in Web 1.0 times, only weird hacker types were capable of using the internet effectively. Normies (and weirdos who were weird in ways not related to familiarity with and interest in personal computer technology) were simply not using the internet in earnest, because it wasn't effective for their needs yet. Then people made that happen and now everyone is online, including boring normies with boring interests.
If you want a space where weird hacker values and doing stuff for the pure joy of discovery reign, gatekeep harder.
I think that the ratio of weirdos doing stuff remained constant through the population, it's just that the whole population is now on the web, so they are harder to find.
Not to mention 20 years ago I personally (and probably others my age) had much more time to care about random weird stuff.
So, I am skeptical without some actual analysis or numbers that things really are so bad.
There's a mountain of software work you can do that doesn't involve participating in this rat race. There's nothing that says you need to make 500k and live in silicon valley. It's possible to be perfectly happy working integrating industrial control systems in a sleepy mountain town where cost of living is practically nothing. I am well qualified to make that statement.
We do not need to do things no one needs. We do not need a million differen webshops, and the next CRUD application.
We need a system which allows the earth resources being used as efficient and fair as possible.
Then we can again start apprechiating real craftmanship but not for critical things and not because we need to feed ourselves but because we want to do it.
Each time someone says "we" without asking me I find it at least insulting. With this mindset the next step might be to tell me what I need, without considering my opinion.
Yes, the current system seems flawed, but is the best we came up with and is not fixed either, it is slowly evolving.
Yes, some resources are finite (energy from the sun seems quite plenty though), but don't think we will be ever able to define "fair". I would be glad with "do not destroy something completely and irremediably".
> We do not need a million differen webshops, and the next CRUD application.
The thing about capitalism is that unecessary webshop isn't getting any customers if it's truly unecessary, and will soon be out of business. We can appreciate Ghostty, because why? Because the guy writing it is independently wealthy and can fly jets around for fun, and has deigned to grace us with his coding gifts once again? Don't get me wrong, it's a nice piece of software, but I don't know that system's any better.
"We routinely devalue craftsmanship because it doesn't bow down to almighty Business Impact."
I actually disagree with this pretty fundamentally. I've never seen hacker culture as defined by "craftsmanship" so much as about getting things done. When I think of our culture historically, it's cleverness, quick thinking, building out quick and dirty prototypes in weekend "hackathons", startup culture that cuts corners to get an MVP product out there. I mean, look at your URL bar: do you think YC companies are prioritizing artisanal lines of code?
We didn't trade craftsmanship for "Business Impact". The latter just aligns well with our culture of Getting Shit Done. Whether it's for play (look at the jank folks bring out to the playa that's "good enough") or business, the ethos is the same.
If anything, I feel like there has been more of an attempt to erase/sideline our actual culture by folks like y'all as a backlash against AI. But frankly, while a lot of us scruffy hacker types might have some concerns about AI, we also see a valuable tool that helps us move faster sometimes. And if there's a good tool that gets a thing done in a way that I deem satisfactory, I'm not going to let someone's political treatise get in my way. I'm busy building.
YES. The "This is evidenced by the new set of hacker values being almost purely performative" is so incredibly true. I went to a privacy event about Web3, and the event organisers hired a photographer who took photos of everyone (no "no photo" stickers available), and they even flew a drone above our heads to take overarching videos of everyone :D I guess "privacy" should have been in quotes. All the values and aesthetics of the original set of people who actually cared about privacy (and were attracted to it) has been evaporated. All that remained are the hype. It was wild.
I realized recently that if you want to talk about interesting topics with smart people, if you expect things like critical thinking and nuanced discussion, you're currently much better off talking literature or philosophy than anything related to tech. I mean, everyone knows that discussing politics/economics is rather hopelessly polarized, everyone has their grievances or their superstitions or injuries that they cannot really put aside. But this is a pretty new thing that discussing software/engineering on merits is almost impossible.
Yes, I know about the language / IDE / OS wars that software folks have indulged in before. But the reflexive shallow pro/anti takes on AI are way more extreme and are there even in otherwise serious people. And in general anti-intellectual sentiment, mindless follow-the-leader, and proudly ignorant stances on many topics are just out of control everywhere and curiosity seems to be dead or dying.
You can tell it's definitely tangled up with money though and this remains a good filter for real curiosity. Math that's not maybe related to ML is something HN is guaranteed to shit on. No one knows how to have a philosophy startup yet (WeWork and other culty scams notwithstanding!). Authors, readers, novels, and poetry aren't moving stock markets. So at least for now there's somewhere left for the intellectually curious to retreat
I don't really see it any different than the Windows/Unix, Windows/Mac, etc, flame wars that boiled even amongst those with no professional stake it in for decades. Those were otherwise serious people too, parroting meaningless numbers and claims that didn't actually make much of a difference to them.
If anything, the AI takes are more much more meaningful. A Mac/PC flame war online was never going to significantly affect your career. A manager who either is all-in on AI or all-out on it can.
OS and IDE wars are something people take pretty seriously in their teens and very early careers, and eventually become more agnostic about after they realize it's not going to be the end-all predictor of coworker code quality. It predicts something for sure, but not strictly skill-level.
Language-preference wars stick around until mid-career for some, and again it predicts something. But still, serious people are not likely to get bogged down in pointless arguments about nearly equivalent alternatives at least (yaml vs json; python vs ruby).
Shallow takes on AI (whether they are pro or anti) are definitely higher stakes than all this, bad decisions could be more lasting and more damaging. But the real difference to my mind is.. AI "influencers" (again, pro or anti) are a very real thing in a way that doesn't happen with OS / language discussions. People listen, they want confirmation of biases.
I mean there's always advocates and pundits doing motivated reasoning, but usually it's corporate or individuals with clear vested interests that are trying to short-circuit inquiry and critical thinking. It's new that so many would-be practitioners in the field are eager to sabotage and colonize themselves, and forcing a situation where honest evaluations and merit-based discussion of engineering realities are impossible
This is classically framed as philosophy vs sophistry. The truth is that both are necessary, but only one makes money. When your entire culture assigns value with money it's obvious which way the scales will tip.
> But the reflexive shallow pro/anti takes on AI are way more extreme
But this is philosophy (and ethics/morality)
My feelings about AI, about its impact on every aspect of our lives, on the value of human existence and the purpose of the creative process, have less to do with what AI is capable of and more to do with the massive failures of ethics and morality that surround every aspect of its introduction and the people who are involved.
The fast inverse sqrt that John carmack did not. write also does well. I know there's many more. Are you sure that's not just a caricature of Hacker News you've built up in your head?
Hmm. No. Not really. I don't think "hacker" ever much meant this at all; mostly because "hacker" never actually was much connected to "labor for money."
"Going to work" and "being a hacker" were overwhelmingly mutually exclusive. Hacking was what you don't do on company time (in favor of the company.)
This is the fate that befalls any wildly successful subculture: the MOPs start showing up, fascinated by it, and the sociopaths monetize it to get rich. The original geeks who created the scene become increasingly powerless.
I’ve been a “software engineer” or closely adjacent for 30 years. During that time, I’ve worked for small and medium “lifestyle companies”, startups, boring Big Enterprise, $BigTech and over the past 5 years (including my time at $BigTech) worked as a customer facing cloud consultant where I’ve seen every type of organization imaginable and how they work.
No one ever gave a rip about “craftsmanship”. They hire you for one reason - to make them more money than they are paying you for or to save them more money than you are costing them.
As far as me, I haven’t written a single line of code for “enjoyment” since the day I stepped into college. For the next four years it was about getting a degree and for the next 30, it was about exchanging my labor for money to support my addictions to food and shelter - that’s the transaction.
I don’t dislike coding or dread my job. But at the end of the day (and at the beginning of the day) I’ve found plenty of things I enjoy that don’t involve computers - working out, teaching fitness classes part time, running, spending time with family and friends, traveling, etc.
If an LLM helps me exchange my labor for money more efficiently, I’m going to use it just like I graduated from writing everything in assembly in 1987 on my Apple //e to using a C compiler or even for awhile using Visual Basic 6.
Right now its just a tool you can use or not and if you are smart enough, you figure out very quickly when to use a tool for efficency and when not.
I do not vibe code my core architecture because i control it and know it very well. I vibe code some webui i don't care about or a hobby idea in 1-4h on a weekend because otherwise it would take me 2 full weekends.
I fix emails, i get feedback etc.
When I do experiemnts with vibe coding, i'm very aware what i'm doing.
Nonetheless, its 2025. Alone 2026 we will add so much more compute and the progress we see is just crazy fast. In a few month there will be the next version of claude, gpt, gemini and co.
And this progress will not stop tomorrow. We don't know yet how fast it will progress and when it will be suddenly a lot better then we are.
Additionally you do need to learn how to use these tools. I learned through vibe coding that i have to specify specific things i just assume the smart LLM will do right without me telling for example.
Now i'm thinking about doing an experiemnt were i record everything about a small project i want to do, to then subscribe it into text and then feeding it into an llm to strucuture it and then build me that thing. I could walk around outside with a headset to do so and it would be a fun experiemnt how it would feel like.
I can imagine myself having some non intrusive AR Google and the ai sometimes shows me results and i basically just give feedback .
Well I have personally tested it on the green field projects I mostly work on and it does the grunt work of IAC (Terraform) and even did a decently complicated API with some detailed instructions like I would give another developer.
I’ve done literally dozens of short term quick turn around POCs from doing the full stack from an empty AWS account to “DevOps” to the software development -> training customers how to fish and showing them the concepts -> move on to next projects between working at AWS ProServe and now a third party consulting company. I’m familiar with the level of effort for these types of projects. I know how many fewer man hours it takes me now.
I have avoided front end work for well over a decade. I had to modify the front end part of the project we released to the customer that another developer did to remove all of the company specific stuff to make it generic so I could put it in our internal repo. I didn’t touch one line of front end code to make the decently extensive modifications, honestly I didn’t even look at the front end changes. I just made sure it worked as expected.
If you are “consulting” on an hourly rate, you’re doing it wrong. The company and I get paid for delivering projects not the number of hours we work. A smaller project may just say they have me for 6 weeks with known deliverable. I’m rarely working 40 hours a week.
When I did do one short term project independently, I gave them the amount I was going to charge for the project based on the requirements.
All consulting companies - including the division at AWS - always eventually expand to the staff augmentation model where you assign warm bodies and the client assigns the work. I have always refused to touch that kind of work with a ten foot pole.
All of my consulting work has been working full time and salaries for either the consulting division of AWS where I got the same structured 4 year base + RSUs as every other employee or now making the same amount (with a lot less stress and better benefits) in cash.
I’m working much less now than I ever have in my life partially because I’m getting paid for my expertise and not for how much code I can pump out.
I am working fewer hours. I at most work 4 hours a day unless it’s a meeting heavy day. I haven’t typed a line of code in the last 8 months yet I’ve produced just as much work as I did before LLMs.
I really agree with your point. I think that this forum being hackernews and all though lends itself to a slightly different kind of tech person. Who really values for themselves and their team, the art of getting stuck in with a deeply technical problem and being able to overcome it.
You really think that people at BigTech are doing it for the “enjoyment” and not for the $250K+ they are making 3 years out of college? From my n=1 experience, they are doing it for the pay + RSUs.
If you see what it takes to get ahead in large corporations, it’s not about those who are “passionate”, it’s about people who know how to play the game.
If you look at the dumb AI companies that YC is funding, those “entrepreneurs” aren’t doing 996 because they enjoy it. They are looking for the big exit.
How many of them do you think started their companies out of “passion”?
Some of the ones I spotted checked had a couple of non technical founders looking for a “founding engineer” that they could underpay with the promise of “equity” that would probably be worthless.
I'm not disagreeing with the fact that there's a shit ton of founders out there looking for a quick pay day (I'd guess the majority fall into that category). Just pointing out there are exceptions, and the exceptions can be quite successful.
We need to talk seriously and plainly about the spiritual and existential damage done by LLMs.
I'm tempted to say "You're not helping," as my eyes roll back in their sockets far enough to hurt. But I can also understand how threatening LLMs must appear to programmers, writers, and artists who aren't very good at their jobs.
The question about why you should care about others and not just yourself has literature stretching back thousands of years. Maybe start with one of the major world religions?
Have you seen the latest AI slop in game design lately, destroying human creativity?
Have you seen how this tech is being used to control narratives to subjugate populations to the will of authoritarian governments?
This shit is real. We are slowly sliding into a world where every aspect of our lives are going to be dictated by people in power with tools that can shape the future by manipulating what people think about.
If you don't care that the world is burning to the ground, good luck with that. Im not saying the tech is necessarily bad, its the way in which we are allowing it to be used. There has to be controls is place to steer this tech in the right direction or we are heading for a world I don't want to be apart of.
> All that said, the actual workday is absolutely filled with busy work that no one really wants to do, and the refusal of a loud minority to engage with that fact is what's leading to this.
The attitude and push back from this loud minority has always been weird to me. Ever since I got my hands on my first computer as a kid, I've been outsourcing parts of my brain to computing so that I can focus on more interesting things. I no longer have to remember phone numbers, I no longer have to carry a paper notepad, my bookshelf full of reference books that constantly needed to be refreshed became a Google search away instead. Intellisense/code completion meant I didn't have to waste time memorizing every specific syntax and keyword. Hell, IDEs have been generating code for a long time. I was using Visual Studio to automatically generate model classes from my database schema for as long as I can remember, and even generating CRUD pages.
The opportunity to outsource even more of the 'busywork' is great. Isn't this was technology is supposed to do? Automate away the boring stuff?
The only reasoning I can think of is that the most vocal opponents work in careers where that same busywork is actually most of their job, and so they are naturally worried about their future.
> Ever since I got my hands on my first computer as a kid, I've been outsourcing parts of my brain to computing so that I can focus on more interesting things. I no longer have to remember phone numbers, I no longer have to carry a paper notepad, my bookshelf full of reference books that constantly needed to be refreshed became a Google search away instead. Intellisense/code completion meant I didn't have to waste time memorizing every specific syntax and keyword. Hell, IDEs have been generating code for a long time. I was using Visual Studio to automatically generate model classes from my database schema for as long as I can remember, and even generating CRUD pages.
I absolutely agree with you, but I do think there's a difference in kind between a deterministic automation you can learn to use and get better at, and a semi-random coding agent.
The thing I'm really struggling with is that unlike e.g. code completion, there doesn't seem to be a clear class of tasks that LLMs are good at vs bad at. So until the LLMs can do everything, how do I keep myself in the loop enough that I'll have the requisite knowledge to step in when the LLM fails?
You mention how technology means we no longer have to remember phone numbers. But what if all digital contact lists had a very low chance of randomly deleting individual contacts over time? Do you keep memorizing phone numbers? I'm not sure!
Thank you for expressing well what I was thinking. I derive intense joy from coding. Like you my over my 40 year career I've been exploiting more and more ways to outsource work to computers. The space of software is so vast that I've never worried for a second that I'd not have work to do. Coding is a means to solving interesting problems. It is not an end in itself.
When you off load that stuff to a computer, you loose cognitive abilities. Heck Im even being careful how much I use mapping tools now because I want to know where I am going and how I get there.
FYI: I do not work for any corporations, I provide technical services directly to the public. So there is really concerns about this tech by everyday people that do not have a stake in keeping a job.
What are the "interesting parts" is hard to quantify because my interests vary, so even if a machine can do those parts better than me, doesn't necessarily mean I'll use the machine.
The arts is a good example. I still enjoy analog photography & darkroom techniques. Digital can (arguably) do it better, faster, and cheaper. Doesn't change the hobby for me.
But, at least the option is there. Should I need to shoot a wedding, or some family photos for pay, I don't bust out my 35mm range finder and shoot film. I bring my R6, and send the photos through ImagenAI to edit.
In that way, the interesting parts are whatever I feel like doing myself, for my own personal enjoyment.
Just the other day I used AI to help me make a macOS utility to have a live wallpaper from an mp4. Didn't feel like paying for any of the existing "live wallpaper" apps. Probably a side project I would never have done otherwise. Almost one shot it outside of a use-after-free bug I had to fix myself, which ended up being quite enjoyable. In that instance, the interesting part was in the finding a problem and fixing it, while I got to outsource 90% of the rest of the work.
I'm rambling now, but the TL;DR is I'm more so excited about having the option to outsource portions of something rather than always outsourcing. Sometimes all you need is a cheap piece of mass produced crap, and other times you want to spend more money (or more time) making it yourself, or buying handmade from an expert craftsman.
This was very insightful. It made me think about how "hacker culture" has changed.
I'm middle-aged. 30 years ago, hacker culture as I experienced it was about making cool stuff. It was also about the identity -- hackers were geeks. Intelligent, and a little (or a lot) different from the rest of society.
Generally speaking, hackers could not avoid writing code. Whether it was shell scripts or HTML or Javascript or full-blown 3D graphics engines. To a large extent, coding became the distinguishing feature of "hackers" in terms of identity.
Nearly anybody could install Linux or build a PC, but writing nontrivial code took a much larger level of commitment.
There are legitimate functional and ethical concerns about AI. But I think a lot of "hackers" are in HUGE amounts of denial about how much of their opposition to AI springs from having their identities threatened.
> opposition to AI springs from having their identities threatened.
I think there's definitely some truth to this. I saw similar pushback from the "learn to code" and coding bootcamp era, and you still frequently see it in Linux communities where anytime the prospect of more "normies" using Linux comes up, a not insignificant part of the community is actively hostile to that happening.
The attitude goes all the way back to eternal september.
And it's "the bootcamp era" rather than the new normal because it didn't work out as well as advertised. Because of the issues highlighted in that pushback.
Well there are a lot of us very clear that our identities are being threatened and scared shitless we will lose the ability to pay our rent or buy food because of it.
As somebody currently navigating the brutal job market, I'm scared shitless about that too. I have to tell you though, that the historical success rate of railing against "technologies that make labor more efficient" is currently at 0.0000000%.
We've survived and thrived through inflection points like this before, though. So I'm doing my best to have an adapt-or-die mindset.
"computers are taking away human jobs"
"visual basic will eliminate the need for 'real coders'"
"nobody will think any more. they'll 'just google it' instead of actually understanding things"
"SQL is human readable. it's going to reduce the need for engineers" (before my time, admittedly)
"offshoring will larely eliminate US-based software development"
etc.
Ultimately (with the partial exception of offshoring) these became productivity-enhancers that increased the expectations placed on the shoulders of engineers and expanded the profession, not things that replaced the profession. Admittedly, AI feels like our biggest challenge yet. Maybe.
I consider myself progressive and my main issue with the technology is that it was created by stealing from people who have not been compensated in any way.
I wouldn’t blame any artist that is fundamentally against this tech in every way. Good for them.
Every artist and creator of anything learned by engaging with other people's work. I see training AI as basically the same thing. Instead of training an organic mind, it's just training a neural network. If it reproduces works that are too similar to the original, that's obviously an issue, but that's the same as human artists.
This is a bad-faith argument, but even if I were to indulge it: human artists can/do get sued for mimicing the works of others for profit, which AI precisely does. Secondly, many of the works in question have explicit copyright terms that prohibit derivative works. They have built a multi-billion dollar industry on scaled theft. I don't see a more charitable interpretation.
It's "unauthorized use" rather than "stealing", since the original work is not moved anywhere. It's more like using your creative work to train a software system that generates similar-looking, competing works, for pennies, at industrial scale and speed.
Usually "obtaining" is just making a bunch of HTTP requests - which is kind of how the web is designed to work. The "consent" (and perhaps "desired payment" when there is no paywall) issue is the important bit and ultimately boils down to the use case. Is it a human viewing the page, a search engine updating its index, or OpenAI collecting data for training? It is annoying when things like robots.txt are simply ignored, even if they are not legally or technically binding.
The legal situation is unsurprisingly murky at the moment. Copyright law was designed for a different use case, and might not be the right tool or regulatory framework to address GenAI.
But as I think you are suggesting, it may be an example of regulatory entrepreneurship, where (AI) companies try to move forward quickly before laws and regulations catch up with them, while simultaneously trying to influence new laws and regulations in their favor.
[Copyright law itself also has many peculiarities, for example not applying to recipes, game rules, or fashion designs (hence fast fashion, knockoffs, etc.) Does it, or should it, apply to AI training and GenAI services? Time will tell.]
> hacker circles didn't always have this 'progressive' luddite mentality
Richard Stallman has his email printed out on paper for him to read, and he only connects to the internet by using wget to fetch web pages and then has them printed off.
> It's why people can't post a meme, quote, article, whatever could be interpreted (very often, falsely) as AI-generated in a public channel, or ask a chatbot to explain a hand-drawn image without the off chance that they get an earful from one of these 'progressive' people.
It happens, but I think it's pretty uncommon. What's a lot more common is people getting called out for offloading tasks to LLMs in a way that just breaches protocol.
For example, if we're having an argument online and you respond with a chatbot-generated rebuttal to my argument, I'm going to be angry. This is because I'm putting an effort and you're clearly not interested in having that conversation, but you still want to come out ahead for the sake of internet points. Some folks would say it's fair game, but consider the logical conclusion of that pattern: that we both have our chatbots endlessly argue on our behalf. That's pretty stupid, right?
By extension of this, there's plenty of people who use LLMs to "manage" their online footprint: write responses to friends' posts, come up with new content to share, generate memes, produce a cadence of blog posts. Anyone can ask an LLM to do that, so what's the point of generating this content in the first place? It's not yours. It's not you. So what's the game, other than - again - trying to come out on top for internet points?
Another fairly toxic pattern is when people use LLMs to produce work output without the effort to proofread or fact-check it. Over the past year or so, I've gotten so many LLM-generated documents that simply made no sense, and the sender considered their job to be done and left the QA to me.
unfortunately, it will be less and less purely human generated content any more. it will be more and more AI generated or AI assisted content in the future.
We are angry because we grow up in an age that content are generated by human and computer bot are inefficient. however, for newer generation, AI generated content will be a new normal, like how we see people from a big flat box (TV)
I'm looking at code at my tech job right now where someone AI outsourced it, didn't proofread it, and didn't realize a comparison table it built is just running over the same dataset twice causing every comparison to look "identical" even when the data isn't
IDK, to me it looks that hacker culture has always been progressive, it's just definition of what is progressive has changed somewhat.
But hacker culture always sought to empower an individual (especially a smart, tech-savvy individual) against corporations, and rejection of gen AI seems reasonable in this light.
If hacker culture wasn't luddite, it's because of the widespread belief that the new digital technology does empower the individual. It's very hard to believe the same about LLMs, unless your salary depends on it
People assume programmers have the same motivations as luddites but "smashing the autolooms" presumably requires firebombing a whole bunch of datacenters, whereas it's pretty easy to download and run an open-source Chinese autoloom.
I largely agree with this, but at the same time, I empathize with the FA's author. I think it's because LLMs feel categorically different from other technological leaps I've been exited about.
The recent results in LLMs and diffusion models are undeniably, incredibly impressive, even if they're not to the point of being universally useful for real work. However they fill me with a feeling of supreme dissapointment, because each is just this big black box we shoved an unreasonable amount of data into and now the black box is the best image processing/natural language processing system we've ever made, and depending on how you look at it, they're either so unimaginably complex that we'll never understand how they really work, or they're so brain-dead simple that there's nothing to really understand at all. It's like some cruel joke the universe decided to play on people who like to think hard and understand the systems around them.
But think about it: if digital painting were solved not by a machine learning model, but human-readable code, it would be an even more bleak and cruel joke, isn't it?
Interesting that people seem to have this assumption.
"The lesson is considered "bitter" because it is less anthropocentric than many researchers expected and so they have been slow to accept it."
I mean we are so many people on the planet, its easy to feel useless when you know you can get replaced by millions of other humans. How is that different being replaced by a computer?
I was not sure how AGI would come to us, but I assumed there will be AGI in the future.
Weirdest thing for me is mathematics and physics: I assumed that would be such an easy field to find something 'new' through brute force alone, im more shocked that this is only happening now.
I realized with DeepMind and Alphafold that the smartest people with the best tools are in the industry and specificly in the it industry because they are a lot better using tools to help them than normal researchers who struggle writing code.
I think that's going to become like asking a child to read Shakespeare; surely valuable, but requiring a whole parallel text to give modern translation and context.
I think you're missing that a lot of what we call "learning" would be categorized as "busy work" after the fact. If we replace this "busy work" with AI, we are becoming collectively more stupid. Which may be a goal on itself for our AI overlords.
As mr Miyagi said: "Wax on. Wax off."
This may turn out very profitable for the pre-AI generations, as the junior to senior pipeline won't churn seniors at the same rate. But following generations are probably on their way to digital serfdom if we don't act.
> If we replace this "busy work" with AI, we are becoming collectively more stupid.
I've seen this same thing said about Google. "If you outsource your memory to Google searching instead, you won't be able to do anything without and you'll become dumber."
Maybe that did happen, but it didn't seem to result in any meaningful change on the whole. Instead, I got to waste less time memorizing things, or spending time leafing through thousand page reference manuals, to find something.
We've been outsourcing parts of our brains to computers for decades now. That's what got me interested and curious about computers when I got my first machine as a kid (this was back in the late 90s/early 00s). "How can I automate as much of the boring stuff as possible to free myself up for more interesting things."
LLMs are the next evolution of that to an extent, but I also think they do come with some harms and that we haven't really figured out best practices yet. But, I can't help but be excited at the prospect of being able to outsource even more to a computer.
Indeed, this line of reasoning goes all they way back to Socrates who argued that outsourcing your memory to writing would make you stupider [1]:
> For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.
I, for one, am glad we have technologies -- like writing, the internet, Google, and LLMs -- that let us expand the limits of what our minds can do.
Well there's more than just one hacker circle. That was never really the case and it's less and less the case as the earth's technologically-inclined population increases.
Culture is emergent. The more you try to define it, the less it becomes culture and the more it becomes a cult. Instead of focusing on culture I prefer to focus on values. I value craftsmanship, so I'm inclined to appreciate normal coding more than AI-assisted coding, for sure. But there's also a craftsmanship to gluing a bunch of AI technologies together and observing some fantastic output. To willfully ignore that is silly.
The OP's rant comes across as a wistful pining for the days of yore, pinning its demise on capitalists and fascists, as if they had this AI thing planned all along. Focusing on boogeymen isn't going to solve anything. You also can't reverse time by demanding compliance with your values or forming a union. AI is here to stay and we're going to have to figure out how to live with it, like it or not.
I have only experienced the exact opposite - AI tools being forced on employees left and right, and infinite starry eyed fake enthusiasm amongst a rising ocean of slop poisoning all communication and written human knowledge at scale.
I am yet to see issues caused by restrain.
> It's why people can't post a meme, quote, article, whatever could be interpreted (very often, falsely) as AI-generated in a public channel, or ask a chatbot to explain a hand-drawn image without the off chance that they get an earful from one of these 'progressive' people. These people bring way more toxicity to daily life than who they wage their campaigns against.
> This is the culture that replaced hacker culture.
Breathless hustlecore tech industry culture is a place where finance bros have turned programmers into dogs that brag to one another about what a good dog they are. We should reject at every turn the idea that such a culture represents the totality of programming. Programming is so much more than that.
That's because AI-generated memes are lame, not saying that memes are smart, generally speaking, but the AI-generated ones are even lamer. And nothing wrong with being a luddite, to the contrary, in this day and age still thinking that technology is the way forward no matter what is nothing short of criminal.
Ironically, the actual luddites weren't anti-technology at all. Mechanized looms at the time produced low-quality, low-durability cloth at low prices. The luddite pushback was more about the shift from durable to disposable.
It's a message that's actually pretty relevant in an age of AI slop.
They were anti-technology in the sense that they destroyed the machines, because of the machines' negative effects on pay and quality. Maybe you could debate whether they were anti-technology absent its effects, but all technologies have effects. https://en.wikipedia.org/wiki/Luddite
The only thing more insufferable than the "AI do everything and replace everyone" crowd is the "AI is completely useless" crowd. It's useful for some things and useless for others, just like any other tool you'll encounter.
The proposition that AI is completely is trivially nullified. For example, it is provably useful for large-scale cheating on course assignments - a non-trivial task that had previously used human-operated "essay mills" and other services.
Hackers in the '80s were taking apart phone hardware and making free long-distance calls because the phone company didn't deserve its monopoly purely for existing before they were born. Hackers in the '90s were bypassing copyright and wiping the hard drive of machines they cobbled together out of broken machines to install an open source OS on it so that Redmond, WA couldn't dictate their computing experience.
I think there's a direct through-line from hacker circles to modern skepticism of the kind of AI discussed in this article: the kind where rules you don't control determine the behavior of the machine and where most of the training and operation of the largest and most successful systems can, currently, only be accessed via the cloud portals of companies with extremely questionable ethics.
... but I don't expect hackers to be anti-AI indefinitely. I expect them to be sorting out how many old laptops with still-serviceable graphics cards you have to glue together to build a training engine that can produce a domain-specific tool that rivals ChatGPT. If that task proves impossible, then I suspect based on history this may be the one place where hackers end up looking a little 'luddite' as it were.
... because "If the machine cannot be tamed it must be destroyed" is very hacker ethos.
The whole point was to take these things apart, figure out how they work, and make them things we want them to do instead of being bound by arbitrary rules.
Bypassing arbitrary (useless, silly, meaningless, etc) rules has always been a primary motiving factor for some of us :D
I agree. I think this is what happens when a persons transitions from a progressive mindset to a conservative one, but has made being "progressive" a central tenant of their identity.
Progressiveness is forward looking and a proponent of rapid change. So it is natural that LLM's are popular amongst that crowd. Also, progressivism should be accepting of and encouraging the evolution of concepts and social constructs.
In reality, many people define "progressiveness" as "when things I like happen, not when things I don't like happen." When they lose control of the direction of society, they end up just as reactionary and dismissive as the people they claim to oppose.
>AI systems exist to reinforce and strengthen existing structures of power and violence. They are the wet dream of capitalists and fascists.
>Craft, expression and skilled labor is what produces value, and that gives us control over ourselves
To me, that sums up the author's biases. You may value skilled labor, but generally people don't. Nor should they. Demand is what produces value. The later half of the piece falls into a diatribe of "Capitalism Bad".
Just seeing that sentence fragment about "structures of power and violence" told me so much about the author.
Its the sort of language that brings with it a whole host of stereotypes, some of which were immediately confirmed with a little more digging (and others would require way too much effort to confirm, but likely could be).
And yes, this whole "capitalism bad" mentality I see in tech does kinda irk me. Why? Because it was capitalism that gave them the tools to be who they are and the opportunities to do what they do.
> And yes, this whole "capitalism bad" mentality I see in tech does kinda irk me. Why? Because it was capitalism that gave them the tools to be who they are and the opportunities to do what they do.
It's not hard to see why that mentality exists though. That same capitalism also gave rise to the behemoth, abusive monopolies we have today. It gave rise to the over financialization of the sector and declining product quality because you get richer doing stock buybacks and rent-seeking instead of making a better product.
Early hacker culture was also very much not pro-capitalism. The core principle of "Information should be free" itself is a statement against artificial scarcity and anti-proprietary systems, directly opposed to the capitalist ethos of locking up knowledge for profit. The FOSS we use and love rose directly from this culture, which is fundamentally communal, not capitalist.
I'm not ignorant to this fact that it helped us for quite a long time but it also created climate change. Overpopulation.
We are still stuck on planet earth, have not figured out the reason for live or the origin of the universe.
I would prefer a world were we think about using all the resources earth provides sustainable and how to use them the most efficient way for the max amount of human beings. The rest of it we would use to advance society.
I would like to have Post-Scarcity Scientific Humanism
You would need to demonstrate that some other system would have given us all the things you want while avoiding every problem you cite, while not introducing other comparable or worse problems.
Likely progressive, but definitely not luddite [0]. Anti-capitalist for sure.
I struggle with this discourse deeply. With many posters like OP, I align almost completely - unions are good, large megacorps are bad, death to facists etc. It's when we get to the AI issue that I do a bit of a double take.
Right now, AI is almost completely in the hands of a few large corp entities, yes. But once upon a time, so was the internet, so were processing chips, so was software. This is the power of the byte - it shrinks progressively and multiplies infinitely - thus making it inherently diffuse and populist (at the end of the day). It's not the relationship to our cultural standards that causes this - it's baked right into the structure of the underlying system. Computing systems are like sand - you can melt them into a tower of glass, but those are fragile and will inevitably become sand once again. Sand is famously difficult to hold in a tight grasp.
I won't say that we should stop fighting against the entrenchment of powers like OpenAI - fine, that's potentially a worthy fight and if that's what you want to focus on go ahead. However, if you really want to hack the planet, democratize power and distribute control, what you have to be doing is working towards smaller local models, distributed training, and finding an alternative to backprop that can compete without the same functional costs.
We are this close to having a guide in our pocket that can help us understand the machine better. Forget having AI "do the work" for you, it can help you to grok the deeper parts of the system such that you can hack them better - and if we're to come out of this tectonic shift in tech with our heads above water, we absolutely need to create models that cannot be owned by the guy with the $5B datacenter.
Deepseek shows us the glimmer of a way forward. We have to take it. The megacorp AI is already here to stay, and the only panacea is an AI that they cannot control. It all comes down to whether or not you genuinely believe that the way of the hacker can overcome the monolith. I, for one, am a believer.
Not true for the Internet. It was the open system anyone could join and many people were shocked it succeeded over the proprietary networks being developed.
How are unions any better than mega corps? My brother is part of a union and the leaders make millions.
He's pigenholed at the same low pay rate and can't ever get a raise, until everyone in the same role also gets a raise (which will never happen). It traps people, because many union jobs can't or won't innovate, and when they look elsewhere, are underskilled (and stuck).
You mention 'deepseek'. Are you joking? It's owned by the Chinese government..and you claim to hate fascism? Lol?
Big companies only have the power now, because the processing power to run LLMs is expensive. Once there are break throughs, anyone can have the same power in their house.
We have been in a tech slump for awhile now. Large companies will drive innovations for AI that will help everyone.
That's not a union - that's another corporate entity parading as a union. A union, operating as it should, is governed by the workers as a collective and enriches all of them at the same rate.
Deepseek is open source, which is why I mention it. It was made by the Chinese government but it shows a way to create these models at vastly reduced cost and was done with transparent methodology so we can learn from it. I am not saying "the future is Deepseek", I am saying "there are lessons to be learned from Deepseek".
I actually agree with you on the corporate bootstrap argument - I think we ought to be careful, because if they ever figure out how to control the output they will turn off outputs that help develop local models (gotta protect that moat!), but for now I use them myself to study and learn about building locally and I think everyone else ought to get on this train as well. For now, the robust academic discourse is a very very good thing.
Being anti "AI" has nothing to do with being progressive. Historically, hackers have always rejected bloated tools, especially those that are not under their control and that spy on them and build dossiers like ChatGPT.
Hackers have historically derided any website generators or tools like ColdFusion[tm] or VisualStudio[tm] for that matter.
It is relatively new that some corporate owned "open" source developers use things like VSCode and have no issues with all their actions being tracked and surveilled by their corporate masters.
Hackers never had a very cohesive and consistent ideology or moral framework, we heard non stop of the exploits of people funded as part of Cold War military pork projects that got the plug pulled eventually, but some antipathy and mistrust of the powerful and belief in the power of knowledge were recurrent themes nonetheless
So why is it a surprise that hackers mistrust these tools pushed by megacorps, that also sell surveillance to governments, with “suits” promising other “suits” that they’ll be making knowledge obsolete? That people will no longer need to use their brains, that people with knowledge won’t be useful?
It’s not Luddism that people with an ethos of empowering the individual with knowledge are resisting these forces
Just taking what people argue for on its own merits breaks down when your capacity to read whole essays or comments chains is so easily overwhelmed by the speed at which people put out AI slop
How do you even know that the other person read what they supposedly wrote, themselves, and you aren’t just talking to a wall because nobody even meant to say the things you’re analyzing?
Good faith is impossible to practice this way, I think people need to prove that the media was produced in good faith somehow before it can be reasonably analyzed in good faith
It’s the same problem with 9000 slop PRs submitted for code review
I've seen it happen to short, well written articles. Just yesterday there was an article that discussed the authors experiences maintaining his FOSS project after getting a fair number of users, and if course someone in the HN comments claimed it was written by AI, even though there were zero indications it was, and plenty of indications it wasn't.
Someone even argued that you could use prompts to make it look like it wasn't AI, and that this was the best explanation that it didn't look like ai slop.
If we can't respect genuine content creators, why would anyone ever create genuine content?
I get that these people probably think they're resisting AI, but in reality they're doing the opposite: these attacks weighs way heavier on genuine writers than they do on slop-posters.
The blanket bombing of "AI slop!" comments is counterproductive.
It is kind of a self fulfilling prophesy however: keep it up and soon everything really will be written by AI.
> Hackers have historically derided any website generators or tools like ColdFusion[tm] or VisualStudio[tm] for that matter.
A lot of hackers, including the black hat kind, DGAF about your ideological purity. They get things done with the tools that make it easy. The tools they’re familiar with.
Some of the hacker circles I was most familiar with in my younger days primarily used Windows as their OS. They did a lot of reverse engineering using Windows tools. They might have used .NET to write their custom tools because it was familiar and fast. They pulled off some amazing reverse engineering feats.
Yet when I tell people they preferred Windows and not Linux you can tell who’s more focused on ideological purity than actual achievements because eww Windows.
> Please do no co-opt the term "hacker".
Right back at you. To me, hacker is about results, not about enforcing ideological purity about only using the acceptable tools on your computer.
In my experience: The more time someone spends identifying as a hacker, gatekeeping the word, and trying to make it a culture war thing about the tools you use, the less “hacker” like they are. When I think of hacker culture I think about the people who accomplish amazing things regardless of the tools or whether HN finds them ideologically acceptable to use.
Same to me as well. A hacker would "hack out" some tool in a few crazy caffeine fueled nights that would be ridiculed by professional devs who had been working on the problem as a 6 man team for a year. Only the hacker's tool actually worked and saved 8000 man-hours of dev time. Code might be ugly, might use foundational tech everyone sneers at - but the job would be done. Maintaining it left up to the normies to figure out.
It implies deep-level expertise about a specific niche in the space they are hacking on. And it implies "getting shit done" - not making things full of design beauty.
Of course there are different types of hackers everywhere - but that was the "scene" to me back in the day. Teenage kids running circles around the greybeards clucking at the kids doing it wrong.
Same. Back then, and even now, the people who were busy criticizing other people for using the wrong programming language, text editor, or operating system were a different set of people than the ones actually delivering results.
In a way it was like hacker fashion: These people knew what was hot and what was not. They ran the right window manager on the right hardware and had the right text editor and their shell was tricked out. They knew what to sneer at and what to criticize for fashion points. But actually accomplishing things was, and still is, orthogonal to being fashionable.
To wit: my brother has never worked as a developer and has just a limited knowledge of python. In the past few days, he's designed, vibe-coded, and deployed a four-player online chess game, in about four hours of actual work, using Google's Antigravity. I looked at the code when it was partly done, and it was pretty good.
The gatekeepers wouldn't consider him a hacker, but that's kinda what he is now.
Ideological purity is a crutch for those that can't hack it. :)
I love it when the .NET threads show up here, people twist themselves in knots when they read about how the runtime is fantastic and ASP.NET is world class, and you can read between the lines of comments and see that it is very hard for people to believe these things while also knowing that "Micro$oft" made them.
Inevitably when public opinion swells and changes on something (such as VSCode), all the dissonance just melts away, and they were _always_ a fan. Funny how that works.
> hackers have always rejected bloated tools [...] Hackers have historically derided any website generators
Ah yes, true hackers would never, say, build a Debian package...
Managing complexity has always been part of the game. To a very large extent it is the game.
Hate the company selling you a SaaS subscription to the closed-source tool if you want, and push for open-source alternatives, but don't hate the tool, and definitely don't hate the need for the tool.
> Please do no co-opt the term "hacker".
Indeed, please don't. And leave my true scotsman alone while we're at it!
You were proven right three minutes after you posted this. Something happened, I'm not sure what and how. Hacking became reduced to "hacktivism", and technology stopped being the object of interest in those spaces.
> and technology stopped being the object of interest in those spaces.
That happened because technology stopped being fun. When we were kids, seeing Penny communicating with Brain through her watch was neat and cool! Then when it happened in real life, it turned out that it was just a platform to inject you with more advertisements.
The "something" that happened was ads. They poisoned all the fun and interest out of technology.
Where is technology still fun? The places that don't have ads being vomited at you 24/7. At-home CNC (including 3d printing, to some extent) is still fun. Digital music is still fun.
A lot of fun new technology gets shouted down by reactionaries who think everything's a scam.
Here on "hacker news" we get articles like this, meanwhile my brother is having a blast vibe-coding all sorts of stuff. He's building stuff faster than I ever dreamed of when I was a professional developer, and he barely knows Python.
In 2017 I was having great fun building smart contracts, constantly amazed that I was deploying working code to a peer-to-peer network, and I got nothing but vitriol here if I mentioned it.
I expect this to keep happening with any new tech that has the misfortune to get significant hype.
> That happened because technology stopped being fun.
Exactly and I'm sure it was our naivete to think otherwise. As software became more common, it grew, regulations came in, corporate greed took over and "normies" started to use.
As a result, now everything is filled in subscriptions, ads, cookie banners and junk.
Let's also not kid ourselves but an entire generation of "bootcamp" devs joined the industry in the quest of making money. This group never shared any particular interest in technology, software or hardware.
It's not ads, honestly. It's quality. The tool being designed to empower the user. Have you ever seen something encrusted in ads be designed to empower the user? At least, it necessitates reducing the user's power to remove the ads.
But it's fundamentally a correlation, and this observation is important because something can be completely ad-free and yet disempowering and hence unpleasant to use; it's just that vice-versa is rare.
> It's not ads, honestly. It's quality. The tool being designed to empower the user. Have you ever seen something encrusted in ads be designed to empower the user? At least, it necessitates reducing the user's power to remove the ads.
Yes, a number of ad-supported sites are designed to empower the user. Video streaming platforms, for example, give me nearly unlimited freedom to watch what I want when I want. When I was growing up, TV executives picked a small set of videos to make available at 10 am, and if I didn’t want to watch one of those videos I didn’t get to watch anything. It’s not even a tradeoff, TV shows had more frequent and more annoying ads.
No, they wouldn't. On Youtube, for example, videos were consistently trending longer over time, and you used to see frequent explainers (https://www.wired.com/story/youtube-video-extra-long/) on why this was happening and how Youtube benefits from it. Short-form videos are harder to monetize and reduce retention, but users demand them so strongly that most platforms have built a dedicated experience for them to compete with TikTok.
You can. It’s not a hermetic seal, I assume because they live in the same database as normal videos, but if you’re thinking of the separate “shorts” section there’s a triple dot option to turn it off.
The ads are just a symptom. The tsunami of money pouring in was the corrosive force. Funny enough - I remain hopeful on AI as a skill multiplier. I think that’ll be hugely empowering for the real doers with the concrete skill sets to create good software that people actually want to use. I hope we see a new generation of engineer-entrepreneurs that opt to bootstrap over predatory VCs. I’d rather we see a million vibrant small software businesses employing a dozen people over more “unicorns”.
>The "something" that happened was ads. They poisoned all the fun and interest out of technology.
Disagree. Ads hurt, but not as much as technology being invaded by the regular masses who have no inherit interest in tech for the sake of tech. Ads came after this since they needed an audience first.
Once that line was crossed, it all became far less fun for those who were in it for the sheer joy, exploration, and escape from the mundane social expectations wider society has.
It may encompass both "hot takes" to simply say money ruined tech. Once future finance bros realized tech was easier than being an investment banker for the easy life - all hope was lost.
I don't think that just because something becomes accessible to a lot more people that it devalues the experience.
To use the two examples I gave in this thread. Digital music is more accessible than ever before and it's going from strength to strength. While at-home subtractive CNC is still in the realm of deep hobbyists, 3d printing* and CNC cutting/plotting* (Cricut, others) have been accessible and interested by the masses for a decade now and those spaces are thriving!
* Despite the best efforts of some of the sellers of these to lock down and enshittify the platforms. If this continues, this might change and fall into the general tech malaise, and it will be a great loss if that happens.
No. You're both about 50% correct; what's making everything weird is that the things associated with "hacking" transitioned from "completely optional side hobby" to "fundamental basis of the economy, both bullshit and not."
This is why I'm finding most of this discussion very odd.
lol, no. They're people who think faster. Someone who uses vscode will never produce code faster than someone proficient in vim. Someone who clicks through GUI windows will never be able to control their computer as fast as someone with a command prompt.
I'm sure that there are some examples who enjoy it for the interface. I think CRT term/emulator is peak aesthetic. And a few who aren't willing to invest the time to use a gui an a terminal, and they learned the terminal first.
Calling either group a luddite is stupid, but if I was forced to defend one side. Given most people start with a gui because it's so much easier. I'd rather make the argument that those who never progress onto the faster more powerful options deserve the insult of luddite.
> Someone who uses vscode will never produce code faster than someone proficient in vim.
Is this an actually serious/honest take of yours?
I've been using vim for 20 years and, while I've spent almost no time with VS Code, I'd say that a lot of JetBrains' IDEs' built in features have definitely made me faster than I ever was with vim.
Oh wait. No true vim user would come to this conclusion, right?
The take was supposed to be read as slightly hyperbolic. Because while the fastest user of an IDE, has never come close to the fastest I've seen in vim, as you pointed out, thats not really a reasonable comparison either. Here I'm intentionally only considering raw text editing speed, jumping across lines, switching files. If you're including IDE features, when you expect someone in vim to leave vim, you're comparing something that doesn't equate to my strawman.
My larger point was it's absurd to say someone who's faster using [interface] is a luddite because they don't use [other interface] with nearly identical features.
> Oh wait. No true vim user would come to this conclusion, right?
I guess that's fitting insult, given I started with a strawman example too.
edit: I can offer another equally absurd example, (and why I say it's only slightly hyperbolic because the following is true), I can write code much faster using vim, than I can with [IDE], I don't even use tab complete, or anything similar either. I, personally, am able to write better code, faster, when there's nothing but colored text to distract me. Does that make me a luddite? I've tried both, and this fits better for me. Or is it just how comfortable you are with a given interface? Because I know most people can find tab complete useful.
> is absolutely filled with busy work that no one really wants to do
Well, LLMs don't fix that problem.
(They fix the "need to train your classification model on your own data" problem, but none of you care about that, you want the quick sci-fi assistant dopamine hit.)
> That's the thing, hacker circles didn't always have this 'progressive' luddite mentality.
I think, by definition, Luddites or neo-Luddites or whatever you want to call them are reactionaries but I think that's kind of orthogonal to being "progressive." Not sure where progressive comes in.
> All that said, the actual workday is absolutely filled with busy work that no one really wants to do, and the refusal of a loud minority to engage with that fact is what's leading to this.
I think that's maybe part of the problem? We shouldn't try to automate the busy work, we should acknowledge that it doesn't matter and stop doing it. In this regard, AI addresses a symptom but does not cure the underlying illness caused by dysfunctional systems. It just shifts work over so we get to a point where AI generated output is being analyzed by an AI and the only "winner" is Anthropic or Google or whoever you paid for those tokens.
> These people bring way more toxicity to daily life than who they wage their campaigns against.
I don't believe for a second that a gaggle of tumblrinas are more harmful to society than a single Sam Altman, lol.
> And yeah, I get it. We programmers are currently living through the devaluation of our craft, in a way and rate we never anticipated possible.
I'm a programmer, been coding professionally for 10 something years, and coding for myself longer than that.
What are they talking about? What is this "devaluation"? I'm getting paid more than ever for a job I feel like I almost shouldn't get paid for (I'm just having fun), and programmers should be some of the most worry-free individuals on this planet, the job is easy, well-paid, not a lot of health drawbacks if you have a proper setup and relatively easy to find a new job when you need it (granted, the US seems to struggle with that specific point as of late, yet it remains true in the rest of the world).
And now, we're having a huge explosion of tools for developers, to build software that has to be maintained by developers, made by developers for developers.
If anything, it seems like Balmers plea of "Developers, developers, developers" has came true, and if there will be one profession left in 100 year when AI does everything for us (if the vibers are to be believed), then that'd probably be software developers and machine learning experts.
What exactly is being de-valuated for a profession that seems to be continuously growing and been doing so for at least 20 years?
The "devaluation" they mention is just the correction against the absurd ZIRP bump, that lured would-be doctors and lawyers into tech jobs at FAANG and FAANG-alike firms with the promise of upper middle class lifestyles for trivially weaving together API calls and jockeying JIRA tickets. You didn't have to spend years more in grad school, you didn't have to be a diligent engineer. You just had to had to have a knack for standardized tests (Leetcode) and the time to grid some prep.
The compensation and hiring for that kind of inexpert work were completely out of sync with anything sustainable but held up for almost a decade because money was cheap. Now, money is held much more tightly and we stumbled into a tech that can cheaply regurgitate a lot of so the trivial inexpert work, meaning the bottom fell out of these untenable, overpaid jobs.
You and I may not be effected, having charted a different path through the industry and built some kind of professional career foundation, but these kids who were (irresponsibly) promised an easy upper middle class life are still real people with real life plans, who are now finding themselves in a deeply disappointing and disorienting situation. They didn't believe the correction would come, let alone so suddenly, and now they don't know how they're supposed to get themselves back on track for the luxury lifestyle they thought they legitimately earned.
While that is part of the equation it's not at all that simple. If the average business owner wants a custom piece of software for their workflow how are they getting it now? For decades the answer would have been new hires, agencies, consultants, and freelancers. It didn't matter that most software boiled down to a simple CRUD backend and a flashy frontend. There was still a need for developers to create every piece of software.
Now AI makes it unbelievably easy to make those simple but bespoke software packages. The business owner can boot up Lovable and get something that is good enough. The non-software folk generally aren't scrutinizing the software they use. It doesn't matter if the backend is spaghetti code or if there are bugs here and there. If it works well enough then they're happy.
In my opinion that's the unfortunate truth of AI software development. It's dirt cheap, fast, and good enough for most people. Computer's couldn't write software before and now they can. Obviously that is real devaluation, right?
So far, the tools help many programmers write simple code more quickly.
For technically adept professionals who are not programmers, though, we still haven't seen anything really break through the ceiling consistently encountered by previous low-code/no-code tools like FoxPro, Access, Excel, VBA, IFTTT, Zapier, Salesforce etc.
The LLM-based tools for this market work differently than the comparable tools that preceded them over the last 40 years, in that they have a much richer vocabulary of output, but the ceiling that all of these tools encountered in the last has been a human one: most non-programmers don't know how to describe what they need with sufficient detail for anything much beyond a fragile, narrow toy.
Maybe GPT-8 or Gemini 6 or whatever will somehow finally be able to shatter this ceiling, and somebody will make finally make a no-code software builder that devours the market for custom/domain software. But that hasn't happened yet and it's at least as easy to be skeptical as it is to be convinced.
I'm fairly certain that it's happening right now. There is no threshold that LLMs need to "break through" to see adoption. The number of non-technical using them to write software is growing every day.
I was working freelance through late 2023 - mid 2025 and the shift seemed quite obvious to me. Other freelancers, agency managers, etc that I talked to could see it too. The volume of clients, and their expectations, is changing very rapidly in that space.
When I first earned money for coding (circa 20 years ago) it was small e-commerce shop. Today nobody does them, because there's woocomerce, shopify, FB marketplace. All dirt cheap and fast.
It isn't devaluation. It's good - it freed a lot of people to work on more ambitious things.
Nailed it. It's a pendulum and we're swinging back to baseline. We just finished our last big swing (zirp, post COVID dev glut) and are now in full free fall.
I love this post. It really encapsulates a lot of what my take on the situation is as well. It has just been so blatantly obvious that a lot of people have a very protectionist mindset surrounding AI, and a legitimate fear that they are going to be replaced by it.
> What exactly is being de-valuated for a profession
You're probably fine as a more senior dev...for now.
But if I was a junior I'd be very worried about the longevity I can expect as a dev. It's already easier for many/most cases to assign work to a LLM vs handholding a human through it.
Plus as an industry we've been exploiting our employer's lack of information to extract large salaries to produce largely poor quality outputs imo. And as that ignorance moat gets smaller, this becomes harder to pull off.
This is just not happening anywhere around me. I don't know why it keeps getting repeated in every one of these discussions.
Every software engineer I know is using LLM tools, but every team around me is still hiring new developers. Zero firing is happening in any circle near me due to LLMs.
LLMs can not do unsupervised work, period. They do not replace developers. They replace Stack Overflow and Google.
I can tell you where I am seeing it change things for sure, at the early stages. If you wanted to work at a startup I advise or invest in, based on what I'm seeing, it might be more difficult than it was 5 years because there is a slightly different calculus at the early stage. often your go to market and discovery processes seed/pre-seed are either: not working well yet, nonexistent, or decoupled from prod and eng, the goal obviously is over time to bring it all together into a complete system (a business) - as long as I've been around early stage startup there has always been a tension between engineering and growth on budget division, and the dance of how you place resources across them such that they come together well is quite difficult. Now what I'm seeing is: engineering could do with being a bit faster, but too much faster and they're going to be sitting around waiting for the business teams to get their shit together, where as before they would look at hiring a junior, now they will just hire some AI tools, or invest more time in AI scaffolding etc... allowing them to go a little bit faster, but it's understood: not as fast as hiring a jr engineer. I noticed this trend starting in the spring this year, and i've been watching to see if the teams who did this then "graduate" out of it to hiring a jr, so far only one team has hired and it seems they skipped jr and went straight to a more sr dev.
Around 80% of my work is easy while the remaining 20% is very hard. At this stage the hard stuff is far outside the capability of LLM but the easy stuff is very much within its capabilities. I used to hire contractors to help with that 80% work but now I use LLMs instead. It’s far cheaper, better quality, and zero hassle. That’s 3 junior / mid level jobs that are gone now. Since the hard stuff is combinatorial complexity I think by the time LLM is good enough to do that then it’s probably good enough to do just about everything and we’ll be living in an entirely different world.
Exactly this, I lead cloud consulting + app dev projects. Before I would have staffed my projects with at least me leading it and doing the project management + stakeholder meetings and some of the work and bringing a couple of others in to do some of the grunt work. Now with Gen AI even just using ChatGPT and feeding it a lot of context - diagrams I put together, statements of work, etc - I can do it all myself without having to go through the coordination effort of working with two other people.
On the other hand, when I was staffed to lead a project that did have another senior developer who is one level below me, I tried to split up the actual work but it became such a coordination nightmare once we started refining the project because he could just use Claude code and it would make all of the modifications needed for a feature from the front end work, to the backend APIs, to the Terraform and the deployment scripts.
Today's high-end LLMs can do a lot of unsupervised work. Debug iterations are at least junior level. Audio and visual output verification is still very week (i.e. to verify web page layout and component reactivity). Once the visual model is good enough to look at the screen pixels and understand, it will instantly replace junior devs. Currently if you have only text output all new LLMs can iterate flawlessly and solve problems on it. New backend dev from scratch is completely doable with vibe coding now, with some exceptions around race conditions and legacy code comprehension.
> Once the visual model is good enough to look at the screen pixels and understand, it will instantly replace junior devs
Curious if you gave Antigravity a try yet? It auto-launches a browser and you can watch it move the mouse and click around. It's able to review what it sees and iterate or report success according to your specs. It takes screen recordings and saves them as an artifact for you to verify.
I only tried some simple things with it so far but it worked well.
Right, and as a hiring manager, I'm more inclined to hire junior devs since they eventually learn the intricacies of the business, whereas LLMs are limited in that capacity.
I'd rather babysit a junior dev and give them some work to do until they can stand on their own than babysit an LLM indefinitely. That just sounds like more work for me.
Completely agree. I use LLM like I use stackoverflow, except this time i get straight to the answer and no one closes my question and marks it as a duplicate, or stupid.
I dont want it integrated into my IDE, i'd rather just give it the information it needs to get me my result. But yeah, just another google or stackoverflow.
You're mostly right but very few teams are hiring in the grand scheme of things. The job market is not friendly for devs right now (not saying that's related to AI, just a bad market right now)
It's me. I'm the LM having work assigned to me that junior dev used to get. I'm actually just a highly proficient BA who has always almost read code, followed and understood news about software development here and on /. before, but generally avoided writing code out of sheer laziness. It's always been more convenient to find something easier and more lucrative in those moments if decision where I actually considered shifting to coding as my profession.
But here I am now. After filling in for lazy architects above me for 20 years while guiding developers to follow standards and build good habits and learning important lessons from talking to senior devs along the wa, guess what, I can magically do it myself now. The LM is the junior developer that I used to painstakingly explain the design to, and it screws it up half as much as the braindead and uncaring jr Dev used to. Maybe I'm not a typical case, but it shows a hint of where things might be going. This will only get easier as the tools become more capable and mature into something more reliable.
Don't worry about where AI is today, worry about where it will be in 5-10 years. AI is brand new bleeding edge technology right now, and adaption always takes time, especially when the integration with IDEs and such is even more bleeding edge than the underlying AI systems themselves.
And speaking about the future, I wouldn't just worry about it replacing the programmer, I'd worry about it replacing the program. The future we are heading into might be one where the AI is your OS. If you need an app to do something, you can just make it up on the spot, a lot of classic programs will no longer need to exist.
> Don't worry about where AI is today, worry about where it will be in 5-10 years.
And where will it be in 5-10 years?
Because right now, the trajectory looks like "right about where it is today, with maybe some better integrations".
Yes, LLMs experienced a period of explosive growth over the past 5-8 years or so. But then they hit diminishing returns, and they hit them hard. Right now, it looks like a veritable plateau.
If we want the difference between now and 5-10 years from now and the difference between now and 5-10 years ago to look similar, we're going to need a new breakthrough. And those don't come on command.
Right about where it is today with better integrations?
One year is the difference between Sonnet 3.5 and Opus 4.5. We're not hitting diminishing returns yet (mostly because of exponential capex scaling, but still). We're already committed to ~3 years of the current trajectory, which means we can expect similar performance boosts year over year.
The key to keep in mind is that LLMs are a giant bag of capabilities, and just because we hit diminishing returns on one capability, that doesn't say much if anything about your ability to scale other capabilities.
The depreciation schedule is debatable (and that's currently a big issue!). We've been depreciating based on availability of next generation chips rather than useful life, but I've seen 8 year old research clusters with low replacement rates. If we stop spending on infra now, that would still give us an engine well into the next decade.
It's a trope that people say this and then someone points out that while the comment was being drafted another model or product was released that took a substantial step up on problem solving power.
I use LLMs all day every day. There is no plateau. Every generation of models has resulted in substantial gains in capability. The types of tasks (both in complexity and scope) that I can assign to an LLM with high confidence is frankly absurd, and I could not even dream of it eight months ago.
> But if I was a junior I'd be very worried about the longevity I can expect as a dev. It's already easier for many/most cases to assign work to a LLM vs handholding a human through it.
This sounds kind of logical, but really isn't.
In reality you can ASSIGN a task to a junior dev and expect them to eventually complete it, and learn from the experience as well. Sure there'll likely be some interaction between the junior dev and mentor, and this is part of the learning process - something DESIREABLE since it leads to the developer getting better.
In contrast, you really cant "assign" something to an LLM. You can of course try to, and give it some "vibe coding" assignment like "build me a backend component to read the data from the database", but the LLM/agent isn't an autonomous entity that can take ownership of the assignment and be expected to do whatever it takes (e.g. coming back to you and asking for help) to get it done. With todays "AI" technology it's the AI that needs all the handholding, and the person using the AI is the one who has effectively taken the assignment, not the LLM.
Also, given the inability of LLMs to learn on the job, using an LLM as a tool to help get things done is going to be a groundhog day experience of having to micro-manage the process in the same way over and over again each time you use it... time that would have been better invested in helping a junior dev get up to speed and in the future be an independent developer that tasks can indeed be assigned to.
Doesn't matter. First, yes, a modern AI will come back and ask questions. Second, the AI is so much faster at interactions than a human is, that you can use that saved time to glance at its work and redirect it. The AI will come back with 10 prototype attempts in an hour, while a human will take a week for each, with more interupt questions for you about easy things
Sure, LLMs are a useful tool, and fast, but the point is they don't have human level intelligence, can't learn, and are not autonomous outside of an agent that will attempt to complete a narrow task (but with no ownership and guarantee of eventual success).
We'll presumably get there eventually and build "artificial humans", but for now what we've got is LLMs - tools for language task automation.
If you want to ASSIGN a task to something/someone then you need a human or artificial human. For now that means assigning the task to a human, who will in turn use the LLM as a tool. Sure there may be some productivity increase (although some studies have indicated the exact opposite), but ultimately if you want to be able to get more work done in parallel then you need more entities that you can assign tasks do, and for time being that means humans.
> the point is they don't have human level intelligence
> If you want to ASSIGN a task to something/someone then you need a human or artificial human
Maybe you haven't experienced it but a lot of junior devs don't really display that much intelligence. Their operating input is a clean task list, and they take it and convert it into code. It's more like "code entry" ("data entry", but with code).
The person assigning tasks to them is doing the thinking. And they are still responsible for the final output, so if they find a computer better and cheaper at "code entry" than a human well then that's who they're assign it to. As you can see in this thread many are already doing this.
Funny you mention this because Opus 4.5 did this just yesterday. I accidentally gave it a task with conflicting goals, and after working through it for a few minutes it realized what was going on, summarized the conflict and asked me which goal should be prioritized, along with detailed pros and cons of each approach. It’s exactly how I would expect a mid level developer to operate, except much faster and more thorough.
Yes, they continue to get better, but they are not at human level (and jr devs are humans too) yet, and I doubt the next level "AGI" that people like Demis Hassabis are projecting to still be 10 years away will be human level either.
What are your talking about? You seem to live in a parallel universe.
Every single time I tried this or someone of my colleagues, this task failed tremendously hard.
> “…exploiting our employer's lack of information…”
I agree in the sense that those of us who work in for-profit businesses have benefited from employer’s willingness to spend on dev budgets (salaries included)—without having to spend their own _time_ becoming increasingly involved in the work. As “AI” develops it will blur the boundaries of roles and reshape how capital can be invested to deliver results and have impact. And if the power dynamics shift (ie. out of the class of educated programmers to, I dunno, philosophy majors) then you’re in trouble.
I had hired 3 junior/mid lvl devs and paid them to do nothing but study to improve their skills, it was my investment in their future, I had a big project on the horizon that I needed help with. After 6 months I let them go, the improvement was far too slow. Books that should have taken a week to get through were taking 6 weeks. Since then LLM have completely surpassed them. I think it’s reasonable to think that some day, maybe soon, LLMs will surpass me. Like everyone else, I have to the best I can while I can.
But this is an issue of worker you're hiring. I've worked with senior engineers who a) did nothing (as - really not write any thing within the sprint, nor do any other work) b) worked on things they wanted to work on c) did ONLY things that they were assigned in the sprint (= if there were 10 tickets in the sprint and they were assigned 1 of these tickets then they would finish that ticket and not pick up anything else, staying quiet) d) worked only on tickets that have requirements explicitly stated step by step (open file a, change line 89 to be `checkBar` instead of `checkFoo`... - having to write this would take longer than doing the changes yourself as I was really writing in Jira ticket what I wanted the engineer to code, otherwise they would come back with "not enough spec, can't proceed"). All of these cases - senior people!
Sure - LLMs will do what they're told (to a specific value of "do" and "what they're told")
Sure there is a wide spectrum of skills, having worked in FANG and top tier research I have a pretty good idea of the capability at the top of the spectrum. I know I wasn't hiring at that level. I was paying 2x the local market rate (non-US) and pulling from the functional programming talent pool. These were not the top 1% but I think they were easily top 10% and probably in the top 5%.
I use LLMs to build isolated components and I do the work needed to specialize them for my tasks and integrate them together. The LLMs take fewer instructions to do this and handle ambiguity far better. Additionally because of the immediate feedback look on the specs I can try first with a minimally defined spec and interactively refine as needed. It takes me far less work to write specs for LLMs than it does for other devs.
And even if their progress had been faster, now they are a capable developer who can command higher compensation that statistically your company won’t give them and they are going to jump ship anyway.
One didn't even wait, they immediately tried to sub-contract the work out to a third party and make a transition from a consultant to a consultancy company. I had to be clear that they are hired as named person and I very much do care about who does the work.While not FANG comp it was ~2x the market rates, statistically I think they'd have a hard time matching that somewhere else. I think in part because I was offering these rates they got rather excited about the perceived opportunity in being a consultancy company, i.e. the appetite grows with the eating. I'm not sure if it's something that could be solved with more money, I guess in theory with FANG money but it's not like those companies are without their dysfunctions. With LLMs I can solve the same problem with far less money.
Actually it does, if you put those concepts in documentation in your repository…
Those concepts will be in your repository long after that junior dev jumps ship because your company refused to pay him at market rates as he improved so he had to jump ship to make more money - “salary compression” is real and often out of your manager’s control.
Maybe see it less as a junior and replacement for humans. See it more as a tool for you! A tool so you can do stuff you used to delegate/dump to a junior, do now yourself.
Claude gets better as Claude's managers explain concepts to it. It doesn't learn the way a human does. AI is not human. The benefit is that when Claude learns something, it doesn't need to run a MOOC to teach the same things to millions of individuals. Every copy of Claude instantly knows.
You need to hit that thumbs down with the explanation so the model is trained with the penalty applied. Otherwise your explanations are not in the training corpus
I consider the devaluation of the craft to be completely independent from the professional occupation of software.
Programming has been devalued because more people can do it at a basic level with LLM tooling. People that I do not consider smart enough or to have put enough work in to output the things that they have nor do they really understand it themselves.
It is of course the new reality and now we all have to go find new markers/things to judge peoples output by. Thats the devaluation of the craft itself.
For what its worth, this devaluation has happened many times in this field. ASM, Compilers, managed gc languages, the cloud, abstractions have continually opened up the field to people the old timers consider unworthy.
> Programming has been devalued because more people can do it at a basic level with LLM tooling
But just because more people can do something doesn't mean it's devalued, or am I misunderstanding the word? The value of programs remains the same, regardless of who composes them. The availability of computers, the internet and the web seems to have had the opposite effect so far, making entire industries much more valued than they were in the decades before.
Neither do I see ASM, compilers, and all your other examples of devalualing, it seems like it's "nichifying" the industry if anything, which requires more experts, not fewer. The more abstractions we have in reality, the more experts are needed for being able to handle those things.
> programmers should be some of the most worry-free individuals on this planet, the job is easy, well-paid, not a lot of health drawbacks if you have a proper setup and relatively easy to find a new job when you need it
Not in where I live though. Competition is fierce, both in industry and academia, for most posts being saturated and most employees face "HR optimization" in their late 30s. Not to mention working over time, and its physical consequences.
I mean, not anywhere, and the data absolutely annihilates their ridiculous claims. In subsequent posts they've retreated back to "yeah, but someone somewhere has it worse", invalidating this whole absurd thread.
Their comment has little correlation with reality, and seems to be a contrived, self-comforting fiction. Most firms have implemented hiring freezes if not actively downsizing their dev staff. Many extremely experienced devs are finding the market absolutely atrocious, getting zero bites.
And for all of the "well us senior devs are safe" sentiment often seen on here, many shops seem to be more comfortable hiring cheap and eager junior devs and foregoing seniors because LLMs fill in a lot of the "grizzled wisdom". The junior to senior ratio is rapidly increasing, and devs who lived on golden handshakes are suddenly finding their ego bruised and a market where they're fighting for low-pay jobs.
Again, compare this to other professions, don't look at in isolation, and you'll see why you're still (or will have, seems you're a student still) having a much more pleasant life than others.
This is completely irrelevant. The point is that the profession is being devalued, i.e. losing value relative to where it was. If, for example, the US dollar loses value, it's not a "counterargument" to point out that it's still much more valuable than the Zimbabwe dollar.
It isn't though, none of our lives are happening in isolation, even if you don't believe it, there are other humans out there, with real responsibilities outside of computers.
Even if the competition is fierce, do you think it isn't for other professions, or what's the point? Of course a job that is well-paid, has few drawbacks and let you sit indoors in front of computer, probably doing something you enjoy in general, is popular and has competition.
Do other professions expect you to work during personal time? At least blue collar people are done when they get told they're done
I get your viewpoint though, physically exhausting work is probably much worse. I do want to point out that 40 hours has always been above average, and right now its the default
> Do other professions expect you to work during personal time? At least blue collar people are done when they get told they're done
No, and after my first programming job, neither does it happen in development. Make sure you join the right place, have the right boss, and set expectations up front, and you too can surely avoid it if it's important to you :) Usually you can throw in "work/life balance" somehow to gauge how they feel about it.
And yes, plenty of blue collar people are expected to be available during your personal time, for various reasons. Sometimes just quick questions (especially if you're a manager and you're having time off), sometimes emergencies that requires you to head on over to the place. Ask anyone who owned or even just managed a restaurant about that specific thing, and maybe you'll be surprised.
This “compare it to other professions” thing doesn’t really work when those other professions are not the one you actually do. The idea that someone should never be miserable in their job because other more miserable jobs exist is not realistic.
It's a useful thing to look at when you feel like all hope is lost and "wow is so difficult being a programmer" strikes, because it'll make you realize how easy you have it compared to non-programmers/nom-tech people.
Realizing how supposedly “easy” you have it compared to other people is not as encouraging or motivational as you’re implying it is. And how “easy” do you have it if you can’t find a job in your field?
Might be worth investigating why it isn't if so. People stressed about their situation usually find some solace in being helped realize what their position in the world actually is, as everything is always relative, not absolute.
You sound exactly like that turkey from Nassim Taleb's books that came to the conclusion that the purpose of human beings is to make turkeys very happy with lots of food and breeding opportunities. And the turkey's thesis gets validated perfectly every day he wakes up to a delicious fatty meal.
Your comment is hyperbolic fear mongering dressed up in a cutesie story.
Our industry is being disrupted by AI. What industry in history has not been disrupted by technological progression? It's called life. And those that can adapt to life changing will continue to thrive. And those who can't will get left behind. There is no wholesale turkey slaughter.
If you read the grand parent, they seem to be denying a disruption is taking place industry wide. The adage was used to illustrate how complacency is blinded by the very conditions that enable it, and while this is unfalsifiable and not very conducive to discussion, "fear mongering" is a bit rich to levy.
Further:
> Our industry is being disrupted by AI... No wholesale turkey slaughter.
Is an entirely different position than the GP who is essentially betting on AI producing more jobs for hackers, which surely won't be so simple.
Sorry to confuse the thread. I meant to point to the original comment (embedding-shape), but blindly labeled them GP.
We share understanding of their analogy, but differ in the inferred application. I took it as the well fed turkeys are "developers who deny AI will disrupt their industry", not "developers" as a whole.
> And now, we're having a huge explosion of tools for developers, to build software that has to be maintained by developers, made by developers for developers.
What do you think they're building all those datacenters for? Why do you think so much money is pouring into AI companies?
It's not to help make developers more efficient with code assistants.
Traditional computation will be replaced with bots in every aspect of software. The goal is to devalue our labor and replace it with computation performed by machines owned by the wealthy, who can lease this out.
If you can't see this coming you lack both imagination and historical perspective.
Five years ago Claude Code would have been essentially unimaginable. Consider this.
So sure, enjoy your job churning out buggy whips while you can, but you better have a plan B for when the automobiles truly arrive.
I agree with all this, except there is no plan B. What could plan B possibly be when white collar work collapses? You can go into a trade, but who will be hiring the tradespeople?
The companies who now have piles of cash because they eliminated a huge chunk of labor will spend far more on new projects, many of which will require tradesmen.
Economic waves never hit one sector and stop. The waves continues across the entire economy. You can’t think “companies will get rid of huge amounts of labor” and then stop asking questions. You need to then ask “what will companies do with decreased labor costs?” And “what could that investment look like, who will they need to hit to fulfill it?” and then “what will those workers do after their demand increases?” And so on.
Unless they do, or are severely weakened. Consider the net worth of the 1% over the last few decades. Even corrected for inflation, its growth is staggering. The wealth gap is widening, and that wealth came from somewhere.
So yes, when there is an economic boom, investment happens. However, the growth of that top %1 tells me that they've been taking more and more off the top. Sure, some near the bottom may win with the decreased labor costs and whatnot, but my point is less and less do every cycle.
Full disclosure: I'm not an economist. Hell, I probably have a highschool-level of econ knowledge at best, so this should probably be taken as a "common-sense" take on it, which I already know often fails spectacularly when economics is at play. So I'm more than open to be corrected here.
Jeff Bezos has a 233 billion net worth. It's not because Amazon users overpaid by a 233 billion but because his share in Amazon is highly valued by investors.
My own Amazon investment in my pension has also gone up by 10x in the last 10 years, just like Jeff's. Where did the value increase come from?
Is this idea of the stock market good for us? I don't know, but it's paper money until you sell it.
I would look at the secondary consequences of the totaling of white collar labor in the same way. Without the upper-middle-class spending their disposable income, consumer spending shrivels, advertising dollars dry up, and investment in growth no longer makes sense in most industries. It looks like a path to total economic destruction to me.
I think it’s much more likely they’ll be used for mass surveillance purposes. The tech is already there, they just need the compute (and a lot of it).
Most of the economy is making things that aren’t really needed. Why bother keeping that afloat when it’s 90% trinkets for the proles? Once they’ve got the infra to ensure compliance why bother with all the fake work which is the real opium of the masses.
Likewise with experienced devs who find themselves out of work due to the neverending mass layoffs.
There's a huge difference between the perspective of someone currently employed versus that of someone in the market for a role, regardless of experience level. The job market of today is nothing like the job market of 3 years ago. More and more people are finding that out every day.
Based on conversations with peers for the last ~3 years or so, some of retrained to become programmers, this doesn't seem to as absolute as you paint it out to be.
But as mentioned earlier, the situation in the US seems much more dire than elsewhere. People I know who entered the programming profession in South America, Europe and Asia for these last years don't seem to have more troubles than I had when I got started. Yes, it requires work, just like it did before.
Literally the worst job you can find as a programmer today (if you lower you standards and particularly, stay away from cryptocurrency jobs) is 10x better than the non-programmer jobs you can find.
If you don't trust me, give a non-programming job a try for 1 year and then come back and tell me how much more comfy $JOB was :)
> Literally the worst job you can find as a programmer today (if you lower you standards and particularly, stay away from cryptocurrency jobs) is 10x better than the non-programmer jobs you can find.
This is a ridiculous statement. I know plenty of people (that are not developers) that make around the same as I do and enjoy their work as much as I do. Yes, software development is a great field to be in, but there's plenty of others that are just as good.
Huh? I'm not saying there isn't careers out there that are also good, I'm not sure what in my comment made it seem so? Of course there are many great fields out there, wasn't my intention to somehow seem to say software development is the only one.
>>Literally the worst job you can find as a programmer today (if you lower you standards and particularly, stay away from cryptocurrency jobs) is 10x better than the non-programmer jobs you can find.
A lot of non-programmer jobs have a kind of union protection, pension plans and other perks even with health care. That makes a crappy salary and work environment bearable.
There was this VP of HR, in a Indian outsourcing firm, and she something to the effect that Software jobs appear like would pay to the moon, have an employee generate tremendous value for the company and general appeal that only smart people work these jobs. None of this happens with the majority of the people. So after 10-15 years you actually kind of begin to see why some one might want to work a manufacturing job.
Life is long, job guarantee, pensions etc matter far more than 'move fast and break thing' glory as you age.
I was a lot happier in previous non-programming jobs, they just were much worse at paying the bills. If i could make my programming salary doing either of my previous jobs, i would go back in a heartbeat. Hell if i could make even 60% of my programming salary doing those jobs I'd go back.
I enjoy the practice of programming well enough but i do not at all love it as a career. I don't hate it by any means either but it's far from my first choice in terms of career.
Because tech corps overhired[0] when the interest rate was low.
Even after the layoffs, most big tech corps still have more employees today than they did in 2020.
The situation is bad, but the lesson to learn here is that a country should handle a pandemic better than "lowering interest rate to near-zero and increasing government spending." It's just kicking and snowballing the problem to the next four years.
I think it was more sandbagging than snowballing. The pain was spread out, and mostly delayed, which kept the economy moving despite everything.
Remember that most of the economy is actually hidden from the stock market, its most visible metric. Over half the business is privately-owned small businesses, and at the local level forcibly shutting down all but essential-service shops was devastating. Without government spending, it's hard to imagine how most of those business owners and their employees would have survived, let alone their shops.
Yet we had no bread lines, no (increase in) migratory families chasing cash labor markets, and demands on charity organizations were heavy, but not overwhelming.
But you claim "a country should handle a pandemic better..." - what should we have done instead? Criticism is easy.
It seems like most companies are just using AI as a convenient cover for layoffs. If you say: “We enormously over-hired and have to do layoffs.”, your stock tanks. If you instead say that you are laying off the same 20k employees ‘because AI’, your stock pumps for no reason. It’s just framing.
I've always heard this sentiment, but I've also never met one of these newly skilled job applicants who could do anything resembling the job.
I've done a lot of interviews, and inevitably, most of the devs I interview can't pass a trivial interview (like implement fizzbuzz). The ones who can do a decent job are usually folks we have to compete for.
> I'm getting paid more than ever for a job I feel like I almost shouldn't get paid for (I'm just having fun)
In my Big Tech job, I sometimes forget that some people can really enjoy what they do. It seems like you're in a fortunate position of both high pay and high enjoyment. Congratulations! Out of curiosity, what do you work on?
Right now I'm doing consulting for two companies, maybe a couple of hours per week, mostly having downtime and trying to expand on my machine learning knowledge.
But in general, every job I've had has been "high pay and high enjoyment" even when I initially had "shit pay" compared to other programmers, and the product wasn't really fun, I was still programming, an activity I still love.
Compare this to the jobs I did before, where the physical toll makes it impossible to do anything after work as you're exhausted, and even if I got paid more than my first programming job, that your body is literally unable to move once you get home, makes the pay matter less and feel less.
But for a programmer, you can literally sit still all day, have some meetings in a warm office, talk with some people, type some things into a document, sit and think for a while, and in the end of the month you get a paycheck.
If you never worked in another profession, I think you ("The Programmer") don't realize how lucky you are compared to the rest of the world.
It's a good perspective to keep. I've also worked a lot of crappy jobs. Overnights in a grocery store (IIRC, they paid an extra .50/hour to work overnights), fine dining waiter (this one was actually fun, but the partying was too much), on a landscaping crew, etc... I make more money than I ever thought possible growing up. My dad still can't believe I have job 'playing on the computer' all day, though I mostly manage now.
I too have worked in shit jobs. I too appreciate that I am currently in a 70F room of my house, wearing a T-shirt and comfy pants, and able to pet my doggos at will.
I work remote and i hate it, sitting all day is killing me, my 5 minute daily stand-up is nowhere near enough social interaction for a whole day's work. I've been looking for a role better suited to me for over a year, but the market is miserable.
I miss having jobs where at least a lot of the time i was moving around or working directly with other people. More than anything else i miss casual conversation with coworkers (which still happened with excruciating rarity even when i was doing most of my programming in an office).
I'm glad you love programming and find the career ideal. I don't mean to harp or whine, just pointing out your ideals aren't universal even amount programmers.
No, definitely some environments are less ideal, I agree. Personally, I also cannot stand working remote, if I'm working in a high-intensity project I have to work with the team in person, otherwise things just fall apart.
I understand exactly what you mean and agree, seems our ideals agree after all :)
Get a standing desk and a walking treadmill! It’s genuinely changed my life. I can focus easier, I get my steps in, and it feels like I did something that day.
Negativity spreads so much more quickly than positivity online, and I feel as though too many people live in self reinforcing negative comment sections and blog posts than in the real world, which gives them a distorted view.
My opinion is that LLMs are doing nothing but accelerating what's possible with the craft, not eliminating it. If anything, this makes a single developer MORE valuable, because they can now do more with less.
Exactly. The problem is instead of getting a raise because "you can do more now" your colleagues will be laid off. Why pay for 3 devs when the work can be done by 1 now? And we all better hope that actually pans out in whatever legacy codebase we're dealing with.
Now the job market is flooded due to layoffs, further justifying lack of comp adjustment - add inflation, and you have "de-valuing" in direct form.
The job of a programmer is, and has always been, 50% making our job obsolete (through various forms of automation) and 50% ensuring our job security (through various forms of abstraction).
Over the course of my career, probably 2/3rds of the roles I have had (as in my day to day work, not necessarily the title) just no longer exist, because people like me eliminated them. I personally was the last person that had a few of those jobs because I mostly automated them and got promoted and they didn't hire a replacement. It's not that they hired less people though, they just hired more people, paid them more money, and they focused on more valuable work.
The amount of negativity your positive comment has received looks almost overwhelming. I remember HN being a much happier place a few years ago. Perhaps I should take a break from it.
People working in one of the coolest industries on Earth really do not appreciate their lives nowadays.
Across ~10 jobs or so, mostly as a employee of 5-100 person companies, sometimes as a consultant, sometimes as a freelancer, but always with a comfy paycheck compared to any other career, and never as taxing (mental and physical) as the physical labor I did before I was a programmer, and that some of my peers are still doing.
Of course, there is always exceptions, like programmers who need to hike to volcanos to setup sensors and what not, but generally, programmers have one of the most comfortable jobs on the planet today. If you're a programmer, I think it should come relatively easy to acknowledge this.
Software engineering just comes really easily to my brain, somehow. Most of these days is spent designing, architecturing and managing various things, it takes time, but in the end of the day I don't feel like "Ugh I just wanna sleep and die" probably ever. Maybe when we've spent 10+ hours trying to bring back a platform after production downtime, but a regular day? My brain is as fine as ever when I come back home.
Contrast that with working as a out-call nurse, which isn't just physically taxing as you need to actually use your body multiple times per day for various things, but people (especially when you visit them in their homes, seemingly) can be really mean, weird and just draining on you. Not to mention when people get seriously hurt, and you need to be strong when they're screaming of pain, and finally when people die, even strangers, just is really taxing no matter what methods you use for trying to come back from that.
It's just really hard for me to complain about software development and how taxing it can be, when my life experience put me through so much before I even got to be a professional developer.
I've never done anything like road/construction work. But I've done restaurant work, being on my feet for 8+ hours per day... and mentally, it just doesn't compare to software development.
- After a long day of physical labor, I come home and don't want to move.
- After a long day of software development, I come home and don't want to think.
Comfortable and easy, but satisfying? I don't think so. I've had jobs that were objectively worse that I enjoyed more and that were better for my mental health.
Sure, it's mostly comfy and well-paid. But like with physical labor, there are jobs/projects that are easy and not as taxing, and jobs that are harder and more taxing (in this case mentally).
Yes, you'll end up in situations where peers/bosses/clients aren't the most pleasant, but compare that to any customer facing job, you'll quickly be able to shed those moments as countless people face those seldom situations on a daily basis. You can give it a try, work in a call center for a month, and you'll acquire more stress during that month than even the worst managed software project.
When I was younger, I worked doing sales and customer service at a mall. Mostly approaching people and trying to pitch a product. Didn't pay well, was very easy to get into and do, but I don't enjoy that kind of work (and many people don't enjoy programming and would actually hate it) and it was temporary anyway. I still feel like that was much easier, but more boring.
That sounds ideal! I used to be a field roboticist where we would program and deploy robots to Greenland and Antarctica. IMO the fieldwork helped balance the desk work pretty well and was incredibly enjoyable.
My experience and the ones I personally known, been in Western Europe, South America and Asia, and programmers I know have an easier time to find new jobs compared to other professions.
Don't get me wrong, it's a lot harder for new developers to enter the industry compared to a decade ago, even in Western Europe, but it's still way easier compared to the length people I know who aren't programmers or even in tech.
Software to date has been a [Jevons good](https://en.wikipedia.org/wiki/Jevons_paradox). Demand for software has been constrained by the cost efficiency and risk of software projects. Productivity improvements in software engineering have resulted in higher demand for software, not less, because each improvement in productivity unblocks more of the backlog of projects that weren't cost effective before.
There's no law of nature that says this has to continue forever, but it's a trend that's been with us since the birth of the industry. You don't need to look at AI tools or methodoligies or whatever. We have code reuse! Productivity has obviously improved, it's just that there's also an arms race between software products in UI complexity, features, etc.
If you don't keep improving how efficiently you can ship value, your work will indeed be devalued. It could be that the economics shift such that pretty much all programming work gets paid less, it could be that if you're good and diligent you do even better than before. I don't know.
What I do know is that whichever way the economics shake out, it's morally neutral. It sounds like the author of this post leans into a labor theory of value, and if you buy into that, well...You end up with some pretty confused and contradictory ideas. They position software as a "craft" that's valuable in itself. It's nonsense. People have shit to do and things they want. It's up to us to make ourselves useful. This isn't performance art.
It is a part of gaining experience and knowledge though. If you aren't a senior right now, eventually you will be, and one of the expectations will be that you can read and review more novice programmers code and help them improve it, and lend a helping hand when you can. Eventually, all you do will be to review the work others have done after you instructing them to do the thing. Not to mention reading through really great written programs is personally a great joy for me, and almost always learn something new.
But, probably remaining a developer who runs through tickets in JIRA without much care for collaboration could be feasible in some type of companies too.
Then use better software engineering paradigms in how your AI builds projects.
I find the more I specify about all the stuff I thought was hilariously pedantic hyper-analysis when I was in school, the less I have to interpret.
If you use test-driven, well-encapsulated object oriented programming in an idiomatic form for your language/framework, all you really end up needing to review is "are these tests really testing everything they should."
I came here to quote the same quote but with the opposite sentiment. If you look at the history of work, at least in the states, it’s a history of almost continual devaluation and automation. I’ve been assuming that my generation, entering the profession in the 2010s, will be the last where it’s a pathway to an upper middle class life. Just like the factory workers before us automation will come for those who do mostly repetitive tasks. Sure there will be well paid professional software devs in the future just as there are some well paid factory workers who mostly maintain machines. But the scale of the opportunity will be much smaller.
But in the end, we didn't end up with less factories that do more, we ended up with more factories that does more.
Why wouldn't the same happen here? Instead of these programmers jamming out boilerplate 24/7, why are they unable to improve their skill further and move with the rest of the industry, if that's needed? Just like other professions adopt to how society is shaped, why should programming be an exception to that?
And how is the quality of life for those factory workers? It's almost like the craft of making physical things has been devalued even if we're making more physical things than ever.
If you live in a country where worker's health and lives are valued, pretty good. 98% of them are in a union, so they can't get fired from nowhere, they have reliable salary each month, free healthcare (as everyone else in the country) and they can turn off when they come home. Most of them work on rotation, so usually you'd do one week of one station, then one week of another station, and so on, so it doesn't get too repetitive. Lots of quality of life improvements are still happening, even for these workers.
Of course, I won't claim it's glamorous or anything, but the idea that factory workers somehow will disappear tomorrow feels far out there, and I'm generally optimistic about the future.
I think comments like yours should include what salary range, industry, and company size your job entails. The last few years have been absolutely miserable for me at Series A YC startups
Salary range: 400 > 8000 EUR (monthly) over the years (starting job 10 years ago > last full-time salary)
Industry I guess would be "startups" or just "tech", it ranges across holiday related, infrastructure, distributed networks, application development frameworks and some others.
Smallest company I worked at was 4 including the CEO, largest been 300 people. Most of them I joined when it was 5-10 people, and left once they got to around 100.
Western Europe is fine, for seniors as well as newcomers, based on my own experience and friends & acquaintances. Then based on more acquaintances South America and Asia seems OK too. But again, ensure you actually understand the context here.
What does "heinous" actually mean here? I've repeated it before, but I guess one more time can't hurt: I'm not saying it isn't difficult to find a job as a developer today compared to a decade ago, but what I am saying is that it's a thing across all sectors and developers aren't hit by it worse than any other sector. Hiring freezes has been happening in not just technology companies, but across the board.
> I'm getting paid more than ever for a job I feel like I almost shouldn't get paid for (I'm just having fun), and programmers should be some of the most worry-free individuals on this planet, the job is easy
Eh?
I'm happy for you (and envious), because that is not my experience. The job is hard. Agile's constant fortnightly deadlines, a complete lack of respect by the rest of the stakeholders for the work developers do (even more so now because "ai can do that"), changing requirements but an expectation to welcome changing requirements because that is agile, incredibly egotistical assholes that seem to gravitate to engineering manager roles, and a job market that's been dead for a few years now.
No doubt some will comment and say that if I think my job is hard I should compare it to a coal miner in the 1940's. True, but as Neil Young sang: "Though my problems are meaningless, that don't make them go away."
I guess ultimately our perspectives shape how we see current situations.
When I write that, I write that with the history and experience of doing other things. Deadlines, lack of respect from stakeholders, egoists and changing requirements just don't sound so bad when you compare to "Ah yeah resident 41 broke their leg completely and we need to clean up their entire apartment from the pools of blood and pus + work with the ambulance crew to get them to the hospital".
I guess it's kind of a PTSD of sorts or something, as soldiers describe the same thing coming home to a "normal life" after spending time in a battle-zone. Everything just seems so trivial compared to the situations you've faced before.
Again, sucks to be in the US as a programmer today maybe, but this isn't true elsewhere in the world, and especially not if you already have at least some experience.
> Definitely true in western Europe, and finding a job is extremely hard for the vast majority of non expert devs.
I don't know what else to say except that hasn't been my experience personally, nor the experience of my acquaintances who've re-skilled to become programmers these last few years, in Western Europe.
> What are they talking about? What is this "devaluation"? I'm getting paid more than ever for a job I feel like I almost shouldn't get paid for (I'm just having fun)
You do realise your position of luck is not normal, right? This is not how your average Techie 2025 is.
Well, speaking just for central Europe, it is pretty average. Sure, entry-level positions are different story, but anyone with at least few years for work experience can find reasonably payed job fairly quickly.
I don't know what "position of luck" you're talking about, it's been dedicated effort to practice programming and suffer through a lot of shit until I got my first comfy programming job.
And even if I'm experienced now, I still have peers and acquaintances who are getting into the industry, I'm not sitting in my office with my eyes closed exactly.
That’s probably because the definition of ‘average techie’ has been on a rapid downward trajectory for years? You can justify the waste when money is free. Not when you need them to do something.
What is devalued is traditional labor-based ideology. The blog references Marx's theory of alienation. The Marxist labor theory of value, that the value of anything is determined by the labor that creates it, gives the working class moral authority over the owner class. When labor is reduced, the basis of socialist revolution is devalued, as the working class no longer can claim superior contributions to value creation.
If one doesn't subscribe to traditional Marxist ideology, this argument won't land the same way, but elements of these ideas have made their way into popular ideas of value.
Marx addressed exactly this sort of improvement in productivity from automation. He was writing with full hindsight on the industrial revolution after all. I hope coding LLMs give professional computer touchers a wakeup call to develop some sorely lacking class consciousness.
>the capitalist who applies the improved method of production, appropriates to surplus-labour a greater portion of the working day, than the other capitalists in the same trade […] The law of the determination of value by labour-time, a law which brings under its sway the individual capitalist who applies the new method of production, by compelling him to sell his goods under their social value, this same law, acting as a coercive law of competition, forces his competitors to adopt the new method.
I do see a shortage of entry-level positions (number of them, not salaries).
Going through the author's bio ... it seems like he's just not able to provide value in any of the high-paying positions that exist right now; not that he should be, he's just not aligned with it and that's ok.
> What are they talking about? What is this "devaluation"?
I'm not paid enough to clean up shit after an AI. Behind an intern or junior? Sure, I enjoy that because I can tell them how shit works, where they went off the rails, and I can be sure they will not repeat that mistake and be better programmers afterwards.
But an AI? Oh good luck with that and good luck dealing with the "updates" that get forced upon you. Fuck all of that, I'm out.
> I'm not paid enough to clean up shit after an AI.
I enjoy making things work better. I'm lucky in that, because there's always been more brownfield work than greenfield work. I think of it as being an editor, not an author.
Hacking into vibe code with a machete is kinda fun.
The part where writing performant, readable, resilient, extensible, and pleasing code used to actually be a valued part of the craft? I feel like I'm being gaslit after decades of being lectured on how to be a better software developer, only to be told that my craft is pointless, the only thing of value is the output, and that I should be happy spending my day babysitting agents and reviewing AI code slop.
Considering we surely have wildly different experiences and contexts, you could almost say we live on the same planet, but it looks very different to each and one of us :)
> What exactly is being de-valuated
We are being second guessed by any sub organism with little brain, but opposable thumbs, at a rate much greater than before, because now the sub organism can simply ask the LLM to type their arguments for them.
How many times have you received screenshots of an LLM output yesanding whatever bizarre request you already tried to explain and dismiss as not possible/feasible/unnecessary? the sub organism has delegated their thoughts to the LLM and i always find that extremely infuriating, because all i want to do is to shake that organism and cry "why don't you get it? think! THINK! THINK FOR YOURSELF FOR JUST A SECOND"
Also, i enjoy programming. Even typing boring shit as boilerplate because i keep my brain engaged. As much as i type i keep thinking, is this really necessary? and maybe figure out something leaner. LLMs want to deprive me of enjoyment of my work (research, learn) and of my brain. No thanks, no LLM for me. And i don't care whatever garbage it outputs, i'd much prefere if the garbage was your output, or you are useless.
The only use i have for LLMs and diffusion models is to entertain myself with stupid bullshit i come up with that i would find funny. I massively enjoy projects such as https://dumbassideas.com/
Note: Not taking into account the "classic" ML uses, my rant only going to LLMs and the LLM craze. A tool made by grifters, for grifters.
I get that some people want to be intellectually "pure". Artisans crafting high-quality software, made with love, and all that stuff.
But one emerging reality for everyone should be that businesses are swallowing the AI-hype raw. You really need a competent and understanding boss to not be labeled a luddite, because let's be real - LLMs have made everyone more "productive" on paper. Non-coders are churning out small apps in record pace, juniors are looking like savants with the amount of code and tasks they finish, where probably 90% of the code is done by Claude or whatever.
If your org is blindly data/metric driven, it is probably just a mater of time until managers start asking why everyone else is producing so much, while you're slow?
> Non-coders are churning out small apps in record pace, juniors are looking like savants with the amount of code and tasks they finish, where probably 90% of the code is done by Claude or whatever.
Honestly I think you’re swallowing some of the hype here.
I think the biggest advantages of LLMs go to the experienced coders who know how to leverage them in their workflows. That may not even include having the LLM write the code directly.
The non-coders producing apps meme is all over social media, but the real world results aren’t there. All over Twitter there were “build in public” indie non-tech developers using LLMs to write their apps and the hype didn’t match reality. Some people could get minimal apps out the door that kind of talked to a back end, but even those people were running into issues not breaking everything on update or managing software lifecycle.
The top complaint in all of the social circles I have about LLMs is with juniors submitting LLM junk PRs and then blaming the LLM. It’s just not true that juniors are expertly solving tasks with LLMs faster than seniors.
I think LLMs are helpful and anyone senior isn’t learning how to use them to their advantage (which doesn’t mean telling the LLM what to write and hoping for the best) is missing out. I think people swallowing the hype about non-tech people and juniors doing senior work is getting misled about the actual ways to use these tools effectively.
I feel sorry for juniors because they have even less incentive to troubleshoot or learn languages. At the same time, the sheer size of APIs make me relieved that I will never have to remember another command, DSL, or argument list again. Ruby has hundreds of methods, Rails hundreds more, and they constantly change. I'd rather write a prompt saying what I mean than figure out obscure incantations, especially with infrequently used tools like ffmpeg.
> Should I use AI to solve Advent of Code puzzles? No. If you send a friend to the gym on your behalf, would you expect to get stronger? Advent of Code puzzles are designed to be interesting for humans to solve - no consideration is made for whether AI can or cannot solve a puzzle. If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.
I would advocate for Advent of Code in every workplace, but finding interest is rare. No matter how much craft is emphasized, ultimately businesses are concerned with solving problems. Even personally, sometimes I want to solve a problem so I can move on to something more interesting.
It's not just "juniors". It's people who should know better turning out LLM junk outside their actual experience areas because "They are experienced enough to use LLMs".
There are just some things that need lots of extra scrutiny in a system, and the experienced ones know where that is. An LLM rarely seems to, especially for systems of anywhere near real world production size.
I’m a garage coder and the kind of engineer that has a license. I had the capacity with my kids to make a usable application for my work about once every 6 months. Now it’s once a weekend or so. You don’t have to believe it.
In my experience I saw the complete opposite of "juniors looking like savants", there are a few pieces of code made by some juniors and som mid engineers in my company(one also involving a senior) that were clearly made with AI, and they are such a mess that they haven't been touched ever since because it's just impossible to understand, and this wasn't caught in the PR because the size of it was so large that people didn't actually bother reading it.
I did see a few good senior engineers using AI and producing good code, but for junior and mid engineers I have witnessed the complete opposite.
I work on the platform everyone builds on top of. A change here can subtlety break any feature, no matter how distant.
AI just can't cope with this yet. So my team has been told that we are too slow.
Meanwhile, earlier this week we halted a roll out because if a bug introduced by AI, as it worked around a privacy feature by just allow listing the behavior it wanted, instead of changing the code to address to policy. It wasn't caught in review because the file that was changed didn't require my teams review (because we ship more slowly, they removed us as code owners for many files recently).
> It wasn't caught in review because the file that was changed didn't require my teams review (because we ship more slowly, they removed us as code owners for many files recently).
I've lost your fight, but won mine before, you can sell this as risk reduction to your boss. I've never seen eng win this argument on quality grounds. Quality is rarely something that can be understood by company leadership. But having a risk reduction team that moves a bit slower and protects the company from extreme exposures like this, is much harder to cut from the process. "Imagine the law suits missing something like this would cause." and "we don't move slower, we do more than the other teams, the code is more visible, but the elimination of mistakes that will be very expensive legally and reputationally is what we're the best at"
Fuck it - let them reap the consequences. Ideally wait until there's something particularly destructive, then do the post-mortem as publicly as possible - call out the structures and practises that enabled that commit to get into production.
I think LLMs are net helpful if used well, but there's also a big problem with them in workplaces that needs to be called out.
It's really easy to use LLMs to shift work onto other people. If all your coworkers use LLMs and you don't you're gonna get eaten alive. LLMs are unreasonably effective at generating large volumes of stuff that resembles diligent work on the surface.
The other thing is, tools change trade-offs. If you're in a team that's decided to lean into static analysis, and you don't use type checking in your editor, you're getting all the costs and less of the benefits. Or if you're in a team that's decided to go dynamic, writing good types for just your module is mostly a waste of time.
LLMs are like this too. If you're using a very different workflow from everyone else on your team, you're going to end up constantly arguing for different trade-offs, and ultimately you're going to cause a bunch of pointless friction. If you don't want to work the same way as the rest of the team just join a different team, it's really better for everyone.
I'm interested in this. Code review, most egregiously where the "author" neglected to review the LLM output themselves, seems like a clear instance. What are some other examples?
Something that should go in a "survival guide" for devs that still prefer to code themselves.
Well, if you take "review the LLM output" in its most general way, I guess you can class everything under that. But I think it's worth talking about the problem in a bit more detail than that, because someone can easily say "Oh I definitely review the LLM output!" and still be pushing work onto other people.
The fact is that no matter whether we review the LLM output or not, no matter whether we write the code entirely by hand or not, there's always going to be the possibility of errors. So it's not some bright-line thing. If you're relatively lazier and relatively less thoughtful in the way you work, you'll make more errors and more significant errors. You'll look like you're doing the work, but your teammates have to do more to make up for the problems.
Having to work around problems your coworkers introduced is nothing new, but LLMs make it worse in a few ways I think. One is just, that old joke about there being four kinds of people: lazy and stupid, industrious and stupid, smart and lazy, and industrious and smart. It's always been the "industrious and stupid" people that kill you, so LLMs are an obvious problem there.
Second there's what I call the six-fingered hands thing. LLMs make mistakes a human wouldn't, which means the problem won't be in your hypothesis-space when you're debugging.
Third, it's very useful to have unfinished work look unfinished. It lets you know what to expect. If there's voluminous docs and tests and the functionality either doesn't work at all or doesn't even make sense when you think about it, that's going to make you waste time.
Finally, at the most basic level, we expect there to be some sort of plan behind our coworkers' work. We expect that someone's thought about this and that the stuff they're doing is fundamentally going to be responsive to the requirements. If someone's phoning it in with an LLM, problems can stay hidden for a long time.
I'm currently really feeling the pain the side bar stuff. The non "application" code/config.
Scripts, cicd, documentation etc. The stuff that gets a PR but doesn't REALLY get the same level of review because its not really production code. But when you need to go tweak the thing it does a few months or years later... its so dense and undecipherable you spend more time figuring out how the llm wrote the damn thing than doing it all over yourself.
Should you probably review it a little harsher in the moment? sure, but thats not always feasible with things that are at the time "not important" and only later become the root of other things.
I have lost several hours this week to several such occurences.
AI-generated docs, charts, READMEs, TOE diagrams. My company’s Confluence is flooded with half assed documentation from several different dev teams that either loosely matches or doesn’t match at all the behavior or configuration of their apps.
For example they ask to have networking configs put into place and point us at these docs that are not accurate and then they expect that we’ll troubleshoot and figure out what exactly they need. It’s a complete waste of time and insulting to shove off that work onto another team because they couldn’t be fucked to read their own code and write down their requirements accurately.
If I were a CTO or VP these days I think I'd push for a blanket ban on committing docs/readmes/diagrams etc along with the initial work. Teams can push stuff to a `slop/` folder but don't call it docs.
If you push all that stuff at the same time, it's really easy to get away with this soft lie, "job done". They can claim they thought it was okay and it was just an honest mistake there were problems. They can lie about how much work they really did.
READMEs or diagrams that are plans for the functionality are fine. Docs that describe finished functionality are fine. Slop that dresses up unfinished work as finished work just fucks everything up, and the incentives are misaligned so everyone's doing this.
The era of software mass production has begun. With many "devs" just being workers in a production line, pushing buttons, repeating the same task over and over.
The produced products however do not compare in quality to other industry's mass production lines. I wonder how long it takes until this comes all crashing down. Software mostly already is not a high quality product.. with Claude & co it just gets worse.
I think you'll be waiting a while for the "crashing down". I was a kid when manufacturing went off shore and mass production went into overdrive. I remember my parents complaining about how low quality a lot of mass produced things were. Yet for decades most of what we buy is mass produced, comparatively low quality goods. We got used to it, the benefits outweighed the negatives. What we thought mattered didn't in the face of a lot of previously unaffordable goods now broadly available and affordable.
You can still buy high goods made with care when it matters to you, but that's the exception. It will be the same with software. A lot of what we use will be mass produced with AI, and even produced in realtime on the fly (in 5 years maybe?). There will be some things where we'll pay a premium for software crafted with care, but for most it won't matter because of the benefits of rapidly produced software.
We've got a glimpse of this with things like Claude Artifacts. I now have a piece of software quite unique to my needs that simply wouldn't have existed otherwise. I don't care that it's one big js file. It works and it's what I need and I got it pretty much for free. The capability of things like Artifacts will continue to grow and we'll care less and less that it wasn't human produced with care.
While a general "crashing down" probably will not happen I could imagine some differences to other mass produced goods.
Most of our private data lives in clouds now and there are already regular security nightmares of stolen passwords, photos etc. I fear that these incidents will accumulate with more and more AI generated code that is most likely not reviewed or reviewed by another AI.
Also regardless of AI I am more and more skipping cheap products in general and instead buying higher quality things. This way I buy less but what I buy doesn't (hopefully) break after a few years (or months) of use.
I see the same for software. Already before AI we were flooded with trash. I bet we could all delete at least half of the apps on our phones and nothing would be worse than before.
I am not convinced by the rosy future of instant AI-generated software but future will reveal what is to come.
I think one major lesson of the history of the internet is that very few people actually care about privacy in a holistic, structural way. People do not want their nudes, browsing history and STD results to be seen by their boss, but that desire for privacy does not translate to guarding their information from Google, their boss, or the government. And frankly this is actually quite rational overall, because Google is in fact very unlikely to leak this information to your boss, and if they did it would more likely to result in a legal payday rather than any direct social cost.
Hacker news obviously suffers from severe selection bias in this regard, but for the general public I doubt even repeated security breaches of vibe coded apps will move the needle much on the perception of LLM coded apps, which means that they will still sell, which means that it doesn't matter. I doubt even most people will pick up the connection. And frankly, most security breaches have no major consequences anyway, in the grand scheme of things. Perhaps the public conscioussness will harden a bit when it comes to uploading nudes to "CheckYourBodyFat", but the truly disastrous stuff like bank access is mostly behind 2FA layers already.
There's a buge difference between possible and likely.
Maybe I'm pessimistic but I at least feel like there's a world of difference between a practice that encourages bugs and one that allows them through when there is negligence. The accountability problem needs to be addressed before we say it's like self driving cars outperforming humans. On a errors per line basis, I don't think LLMs are on par with humans yet
Knowing your system components’ various error rates and compensating for them has always been the job. This includes both the software itself and the engineers working on it.
The only difference is that there is now a new high-throughput, high-error (at least for now) component editing the software.
Yeah it’s interesting to see if blaming LLMs becomes as acceptable as “caused by a technical fault” to deflect responsibility from what is a programmer’s output.
Perhaps that’s what lead to a decline in accountability and quality.
The decline in accountability has been in progress for decades, so LLMs can obviously not have caused it.
They might of course accelerate it if used unwisely, but the solution to that is arguably to use them wisely, not to completely shun them because "think of the craft and the jobs".
And yes, in some contexts, using them wisely might well mean not using them at all. I'd just be surprised if that were a reasonable default position in many domains in 5-10 years.
Why didn't programmers think of stepping down from their ivory towers and start making small apps which solve small problems? That people and businesses are very happy to pay for?
But no! Programmers seem to only like working on giant scale projects, which only are of interest to huge enterprises, governments, or the open source quagmire of virtualization within virtualization within virtualization.
There's exactly one good invoicing app I've found which is good for freelancers and small businesses. While the amount of potential customers are in the tens of millions. Why aren't there at least 10 good competitors?
My impression is that programmers consider it to be below their dignity to work on simple software which solves real problems and are great for their niche. Instead it has to be big and complicated, enterprise-scale. And if they can't get a job doing that, they will pretend to have a job doing that by spending their time making open source software for enterprise-scale problems.
Instead of earning a very good living by making boutique software for paying users.
I don't think programmers are the issue here. What you describe sounds to me more like the typical product management in a company. Stuff features into the thing until it bursts of bugs and is barely maintainable.
I would love to do something like what you describe. Build a simple but solid and very specialized solution. However I am not sure there is demand or if I have the right ideas for what to do.
You mention invoicing and I think: there must be hundreds of apps for what you describe but maybe I am wrong. What is the one good app you mention? I am curious now :)
There's a whole bunch of apps for invoicing, but if you try them, you'll see that they are excessively complicated. Probably because they want to cover all bases of all use cases. Meaning they aren't great for any use case. Like you say.
The invoicing app in particular I was referring to is Cakedesk. Made by a solo developer who sells it for a fair price. Easy to use and has all the necessary functions. Probably the name and the icon is holding him back, though. As far as I understand, the app is mostly a database and an Electron/Chromium front-end, all local on your computer. Probably very simple and uninteresting for a programmer, but extremely interesting for customers who have a problem to solve.
I'm curious: why don't YOU create this app? 95% of a software business isn't the programming, it's the requirements gathering and marketing and all that other stuff.
Is it beneath YOUR dignity to create this? What an untapped market! You could be king!
Also it's absurd to an incredible degree to believe that any significant portion of programmers, left to their own devices, are eager to make "big, complicated, enterprise-scale" software.
What makes you think that I know how to program? It's not beyond my dignity, it's beyond my skills. The only thing I can do is support boutique programmers with my money as a consumer, and I'm very happy to do that.
But yes, sometimes I have to AI code small things, because there's no other solution.
Many people actually are becoming more productive. I know you're using quotes around productive to insulate yourself from the indignity of admitting that AI actually is useful in specific domains.
> Many people actually are becoming more productive. I know you're using quotes around productive to insulate yourself from the indignity of admitting that AI actually is useful in specific domains.
Equally, my read is you're fixating on the syntax used in their comment to insulate yourself from actually engaging with their idea and point. You refuse to try to understand the parts of the system that negate the surface level popularity, eer productivity gains.
People who enjoy the productivity boost of AI are right, you can absolutely, without question build a house faster with AI.
The people who claim there's not really any reasonable productivity gains from AI are also right, because using AI to build a multistory anything, requires you to waste all that time starting with a house, to then raze it to the ground and rebuild a usable foundation.
yes, "but its useful in specific domains" is technically correct statement, but whataboutism is rarely a useful conversational response.
I had a software engineering job before AI. I still do, but I can write much more code. I avoid AI in more mission-critical domains and areas where it is more important that I understand the details intimately, but a lot of coding is repetitive busywork, looking for "needles in haystacks", porting libraries, etc. which AI makes 10x easier.
My experience with using AI is that it's a glorified stack overflow copy paster. It'll even glue a handful of SO answers together!
But then you run into classic SO problems... Like the first solution doesn't work. Nor the second one. And the third one introduces a completely different coding style. The last one is implemented in pure sh/GNU utils.
One thing it is absolutely amazing at: digesting things that have bad documentation, like openssl C api. Even then you still gotta be on the watch for hallucinations, and audit it very thoroughly.
> If your org is blindly data/metric driven, it is probably just a mater of time until managers start asking why everyone else is producing so much, while you're slow?
It’s a reasonable question, and my response is that I’ve encountered multiple specific examples now of a project being delayed a week because some junior tried to “save” a day by having AI write bad code.
Good managers generally understand the concept of a misleading productivity metric that fails to reflect real value. There’s a reason, after all, why most of us don’t get promoted based on lines of code delivered. I understand why people who don’t trust their managers to get this would round it off to artisanship for its own sake.
Most early stage startups I've been in weren't metric driven. It's impossible when everyone is just working as hard as they can to get it built, to suddenly slow down and start measuring everyone's output.
It's not until later. When it's gotten to a larger size, do you have the resources to be metric driven.
If you stare at your GPS and don’t pay attention to what’s in the real world outside your windshield until you careen off a cliff that would be “blindly” following your GPS. You had data but you didn’t sufficiently hedge against your data being incomplete.
Likewise sticking dogmatically to your metrics while ignoring nuance or the human factor is blindly following your metrics.
> You can’t be data driven and also blind to the data
"Tickets closed" is an amazing data driven & blind to the data metric. You can have someone closing an insane number of tickets, looking amazing on the KPIs, but no one's measuring "Tickets reopened" or "Tickets created for the same issue a day later".
It's really easy to set up awful KPIs and lose all sight of what is actually happening while having data to show your bosses
> You really need a competent and understanding boss to not be labeled a luddite, because let's be real - LLMs have made everyone more "productive" on paper.
I am actually less productive when using LLMs because now I have to read another entities code and be able to judge wether this fits my current business problem or not. If it doesn't, yay refactoring prompts instead of tackling the actual problem.
Also I can write code for free, LLMs coding assistants aren't free.
I can fit business problems amd edge cases into my brain given some time, a LLM is unaware about edge cases, legal requirements, decoupled dependencies, potential refactors or the occasional call of boss asking for something to be sneaked into the code right now.
If my job forced me to use these tools, congrats, I'll update my address to some hut in a forrest eating cold canned ravioli for the rest of my life because I for sure dont wanna work in a world where I am forced to use dystopian big tech machines I cant look into.
> I am actually less productive when using LLMs because now I have to read another entities code and be able to judge wether this fits my current business problem or not.
You don’t have to let the LLM write code for you. They’re very useful as a smart search engine for your code base, a smart refactoring tool, a suggestion generator, and many other ways.
I rarely have LLMs write code for me from scratch that I have to review, but I do give them specific instructions to do what I want to the codebase. They can do it much faster than I can search around the codebase and type out myself.
There are so many ways to make LLMs useful without having them do all the work while you sit back and judge. I think some people are determined to get no value out of the LLM because they feel compelled to be anti-hype, so they’re missing out on all the different little ways they can be used to help. Even just using it as a smarter search engine (in the modes where they can search and find the right sections of right articles or even GitHub issues for you) has been very helpful. But you have to actually learn how to use them.
> If my job forced me to use these tools, congrats, I'll update my address to some hut in a forrest eating cold canned ravioli for the rest of my life because I for sure dont wanna work in a world where I am forced to use dystopian big tech machines I cant look into.
Okay, good luck with your hut in the forest. The rest of us will move on using these tools how we see fit, which for many of us doesn’t actually include this idea where the LLM is the author of the code and you just ask nicely and reject edits until it produces the exact code you want. The tools are useful in many ways and you don’t have to stop writing your own code. In fact, anyone who believes they can have the LLM do all the coding is in for a bad surprise when they realize that specific hype is a lie.
This probably is the issue for me, I am simply not willing to do so. To me the whole AI thing is extremely dystopian so even on a professional level I feel repulsed by it.
We had an AWS and a Cloudflare outage recently, which has shown that maybe it isn't a great idea to rely on a few companies for a single _thing_. Integrating LLMs and using all these tools is just another bridge people depend on at some point.
I want to write software that works, preferably even offline. I want tools that do not spy on me (referring to that new Google editor, forgot the name). Call me once these tools work offline on my 8GB RAM laptop with a crusty CPU and I might put in the effort to learn them.
> We had an AWS and a Cloudflare outage recently, which has shown that maybe it isn't a great idea to rely on a few companies for a single _thing_.
I share that concern about massive, unforced centralization. If there were any evidence for the hypothesis that LLM inference would always remain viable in datacenters only, I'd be extremely concerned about their use too.
But from all I've seen, it seems overwhelmingly likely that we'll have very powerful ones in our phones in at most a few years, and definitely in midrange laptops and above.
> This probably is the issue for me, I am simply not willing to do so.
Thanks for being honest at least. So many HN arguments start as a desire to hate something and then try to bridge that into something that feels like a takedown of the merits of that thing. I think a lot of the HN LLM hate comes from people who simply want to hate LLMs.
> We had an AWS and a Cloudflare outage recently, which has shown that maybe it isn't a great idea to rely on a few companies for a single _thing_. Integrating LLMs and using all these tools is just another bridge people depend on at some point.
For an experienced dev using LLMs as another tool, an LLM outage isn’t a problem. You just continue coding.
It’s on the level of Google going down so you have to use another search engine or try to remember the URL for something yourself.
The main LLM players are also easy to switch between. I jump between Anthropic, Google, and OpenAI almost month to month to try things out. I could have subscriptions to all 3 at the same time and it would still be cheap.
I think this point is overblown. It’s not a true team dependency like when GitHub stop working a few days back.
> I find it particularly disillusioning to realize how deep the LLM brainworm is able to eat itself even into progressive hacker circles.
Anything worth reading beyond this transparent and hopefully unsuccessful appeal to tribalism?
Hackers have always tried out new technologies to see how they work – or break – so why would LLMs be any different?
> the devaluation of our craft, in a way and rate we never anticipated possible. A fate that designers, writers, translators, tailors or book-binders lived through before us
What is it with this perceived right to fulfilling, but also highly paid, employment in software engineering?
Nobody is stopping anyone from doing things by hand that machines can do at 10 times the quality and 100 times the speed.
Some people will even pay for it, but not many. Much will be relegated to unpaid pastime activities, and the associated craftspeople will move on to other activities to pay the bills (unless we achieve post-scarcity first). That's just human progress in a nutshell.
If the underlying problem is that many societies define a person's worth via their employability, that seems like a problem best fixed by restructuring said societies, not by artificially blocking technological progress. "progressive hackers"...
> I personally don’t touch LLMs with a stick. I don’t let them near my brain. Many of my friends share that sentiment.
FTA.
I know tons of people where "tried it out" means they've seen Google's abysmal search summary feature, or merely seen the memes and read news articles about how it's wrong sometimes, and haven't explored any further.
Personally I'm watching people I used to respect start to rely on AI more and more and their skills and knowledge are declining rapidly while their reliance is growing, so I'm really not interested in following that path
They seem just as enthusiastic as many of the pro AI voices here on HN, while the quality of their work declines. It makes me extremely skeptical of anyone who is enthusiastic about AI. It seems to me like it's a delusion machine
I could definitely see that happen. Besides people simply getting out of practice (or never getting any to being with), automation complacency is a real problem.
We'll need to be even more intentional about when to use LLMs than we should arguably already be about any type of automation.
> How do you know their skills and knowledge are declining rapidly
I was describing anecdotally what I have witnessed. Devs that I used to have a reasonably high opinion of struggling to explain or understand the PRs they are making
> Does using an LLM cause one to suddenly forget everything?
I think we can probably agree that when you stop using skills, those skills will atrophy to some extent
Can we also agree that using LLMs to generate code is different from the skill of writing code?
If so, it stands to reason that the more people rely on LLMs to generate things for them, the more their skills of creating those things by hand will atrophy
I don't think it should be very controversial to think that LLMs are making people worse at things
It is also entirely possible that people are becoming better (or faster, anyways. Extremely debatable if faster = better imo) at building software using LLMs while also becoming worse at actually writing code
Various people have been wrong on various predictions in the past, and it seems to me that any implied strong overlap is anecdotal at best and wishful (why?) thinking at worst.
The only really embarrassing behavior is never updating your priors when your predictions are wrong. Also, if you're always right about all your prognoses, you should probably also not be in the HN comments but on a prediction market, on-chain or traditional :)
- crypto was massively hyped and then crashed (although it's more than recovered),
- many grifters chase hypes, and
- there's undeniably an AI hype going on at the moment
doesn't necessarily imply that AI is full of grifters or confirms any adjacent theories (as in, could be true, could be false, but the argument does not hold).
I'm sorry, but the idiocy that was crypto-hype can't be dismissed this easily. It's hard to make a prediction on AI because things are moving so fast and the technology is actually useful, so I wouldn't fault anyone for being wrong in retrospect. But when it comes to NFTs: if you bought into that stuff you are either a sucker or a scammer and in both cases your future opinions can be safely discarded.
> the idiocy that was crypto-hype can't be dismissed this easily.
Maybe so, but would it be possible to not dismiss it elsewhere? I just don't see the causal relation between AI and crypto, other than that both might be completely overhyped, world-changing, or boringly correctly estimated in their respective impact.
> I was surprised how hard many here fell for the NFT thing, too.
Did they? I'm not saying you're wrong but I'd like to see some evidence, because NFTs were always obvious nonsense. I'm sure there were some grifters posting here, and others playing devil's advocate or refuting anti-NFT arguments that somehow went too far, but I'd be genuinely surprised if the general sentiment was not overwhelmingly negative/dismissive.
> AI systems exist to reinforce and strengthen existing structures of power and violence.
Exactly. You can see that with the proliferation of chickenized reverse centaurs[1] in all kinds of jobs. Getting rid of the free-willed human in the loop is the aim now that bosses/stakeholders have seen the light.
If you are a software engineere, you can leverage AI a lot better to write code than anyone else.
The complexity of good code, is still complicated.
which means 1. if software development is really solved, everyone else also gets a huge problem (ceo, cto, accountants, designers, etc. etc.) so we are in the back of the ai doomsday line.
And 2. it allows YOU to leverage AI a lot better which can enable you to create your own product.
In my startup, we leverage AI and we are not worried that another company just does the same thing because even if they do, we know how to write good code and architecture and we are also using AI. So we will always be ahead.
I've seen the argument that computers let us prop up and even scale governmental systems that would have long since collapsed under their own weight if they’d remained manual more than once. I'm not sure I buy it, but computation undoubtedly shapes society.
The author does seem quite keen on computers, but they've been "getting rid of the free-willed human in the loop" for decades. I think there might be some unexamined bias here.
I'm not even saying the core argument's wrong, exactly - clearly, tools build systems ("...and systems kill" - Crass). I guess I'm saying tools are value neutral. Guns don't kill people. So this argument against LLMs is an argument against all tools, unless you can explain how LLMs are a unique category of tool?
(Aside: calling out the lever sounds silly, but I think it's actually a great example. You can't do monumental architecture without levers, and the point in history where we start doing that is also the point where serious surplus extraction kicks in. I don't think that's coincidence).
In my third world country, motorbikes, scooters, etc have exploded in popularity and use in the past decade. Many people riding these things have made the roads much more dangerous for all, but particularly for them. They keep dying by the hundreds per month, not only just due to the fact that they choose to ride them at all, but how they ride them: on busy high speed highways, weaving between lanes all the time, swerving in front of speeding cars, with barely any protective equipment whatsoever. A car crash is frequently very survivable; motorcycle crash, not so much. Even if you survive the initial collision, the probability of another vehicle running you over is very high on a busy highway.
On would think, given the clear evidence for how dangerous these things are, why do people (1) ride them at all on the highway, and (2) in such a dangerous manner? One might excuse (1) by recognizing that many are poor and can't buy a car, and the motorbikes represent economic possibility: for use in courier business, of being able to work much further from home, etc.
But here is the thing about (2), A motorbike wants to be ridden that way. No matter how well the rider recognizes the danger, there is only so much time can pass before the sheer expediency of riding that way overrides any sense of due caution. Where it would be safer to stop or keep to a fixed lane without any sudden movements, the rider thinks of the inconvenience of stopping, does a quick mental comparison it to the (in their minds) the minuscule additional risk, and carries on. Stopping or keeping to a proper lane in a car require far less discipline than doing that on a motorbike.
So this is what people mean when they say tech is not value neutral. The tech can theoretically be used in many ways. But some forms of use are so aligned with the form of the tech that in practice it shapes behavior.
That's a lovely example. But is the dangerous thing the bike, or the infrastructure, or the system that means you're late for work?
I completely get what you're saying. I was thinking of tools in the narrowest possible way - of the tool in isolation (I could use this gun as a doorstop). You're thinking of the tool's interface with its environment (in the real world nobody uses guns as doorstops). I can't deny that's the more useful way to think about tools ("computation undoubtedly shapes society").
there is no safe way to ride a motorbike. even with save infrastructure, all the amount of protection that you can wear, no stress riding away from traffic, a freak accident can still kill you. there is no adequate protection for riding at that speed.
But this is just your own personal value judgment, of which clearly you don't like motorcycles. Not everybody shares the same opinion. I.e. there are plenty of people who ride motorcycles safely and legally, you just never hear about them because they never have any incidents. You have just instilled your own value into the tool, one that is not universally shared, the tool itself is still neutral and can even be seen as a positive by somebody else.
> The author does seem quite keen on computers, but they've been "getting rid of the free-willed human in the loop" for decades. I think there might be some unexamined bias here.
Certainly it's biased. I'm not the author, but to me there's a huge difference between computer/software as a tool, designed and planned, with known deterministic behavior/functionality, then put in the hands of humans, vs automating agency. The former I see as a pretty straightforward expansion of humanity's long-standing relationship with tools, from simple sticks to hand axes to chainsaws. The sort of automation AI-hype seems focused on doesn't have a great parallel in history. We're talking about building a statistical system to replace the human wielding the tool, mostly so that companies don't have to worry about hiring employees. Even if the machine does a terrible job and most of humanity, former workers and current users, all suffer, the bet is that it will be worth the cost savings.
ML is very cool technology, and clearly one of the major frontiers of human progress. At this stage though, I wish the effort on the packaging side was being spent on wrapping the technology in the form of reliable capabilities for humans to call on. Stuff like OCR at the OS level or "separate tracks" buttons in audio editors. The market has decided instead that the majority of our collective effort should go towards automated liability-sinks and replacing jobs with automation that doesn't work reliably.
And the end state doesn't even make sense. If all this capital investment does achieve breakthroughs and creat true AGI, do investors really think they'll see returns? They'll have destroyed the entire concept of an economy. The only way to leverage power at that point would be to try to exercise control over a robot army or something similarly sci-fi and ridiculous.
"Automating agency" it's such a good way to describe what's happening. In the context of your last paragraph, if they succeed in creating AGI, they won't be able to exercise control over a robot army, because the robot army will have as much agency as humans do. So they will have created the very situation they currently find themselves in. Sans an economy.
It’s a good thing that there’s centuries of philosophy on that subject and the general consensus is that no, tools are not “neutral” and do shape the systems they interact with, sometimes against the will of those wielding these tools.
I'm actually thinking of Marshall McLuhan. Maybe you're right, and tools aren't neutral. Does this mean that computation necessitates inequality? That's an uncomfortable conclusion for people who identify as hackers.
I am surprised (and also kind of not) to see this kind of tech hate on HN of all places.
Would you prefer we heat our homes by burning wood, carry water from the nearby spring, and ride horses to visit relatives?
Progress is progress, and has always changed things. Its funny that apparently, "progressive" left-leaning people are actually so conservative at the core.
So far, in my book, the advancements in the last 100 or even more years have mostly always brought us things I wouldn't want to miss these days. But maybe some people would be happier to go back to the dark ages...
> Progress is progress, and has always changed things. Its funny that apparently, "progressive" left-leaning people are actually so conservative at the core.
I am surprised (and also kind of not) to see this lack of critical reflection on HN of all places.
Saying "progress is progress" serves nobody, except those who drive "progress" in directions that benefits them. All you do by saying "has always changed things" is taking "change" at face value, assuming it's something completely out of your control, and to be accepted without any questioning it's source, it's ways or its effects.
> So far, in my book, the advancements in the last 100 or even more years have mostly always brought us things I wouldn't want to miss these days. But maybe some people would be happier to go back to the dark ages...
Amazing depiction of extremes as the only possible outcomes. Either take everything that is thrown at us, or go back into a supposed "dark age" (which, BTW, is nowadays understood to not have been that "dark" at all) . This, again, doesn't help have a proper discussion about the effects of technology and how it comes to be the way it is.
> I am surprised (and also kind of not) to see this lack of critical reflection on HN of all places
I'm not surprised at all anymore.
I constantly feel like the majority of voices on this site are in favor of maximizing their own lives no matter the cost to everyone else. After all, that's the ethos that is dominating the tech industry these days
I know I'm bitter. All I ever wanted was to hang out with cool people working on cool stuff. Where's that website these days? It sure isn't this one
Dark age was dark. Human rights, female! rights, hunger, thirst, no progress at all, hard lifes.
So are you able, realisticly, to stop progress around a whole planet? Tbh. getting an alignment across the planet to slow down or stop AI would be the equivilent of stoping capitalism and actually building a holistic planet for us.
I think ai will force the hand of capitalism but i don't think we will be able to create a star trek universe without getting forced
> Dark age was dark. Human rights, female! rights, hunger, thirst, no progress at all, hard lifes.
There was progress in the Middle Ages, hence the difference between the early and late Middle Ages. Most information was mouth to mouth instead of written down.
"The term employs traditional light-versus-darkness imagery to contrast the era's supposed darkness (ignorance and error) with earlier and later periods of light (knowledge and understanding)."
"Others, however, have used the term to denote the relative scarcity of written records regarding at least the early part of the Middle Ages"
> Would you prefer we heat our homes by burning wood, carry water from the nearby spring, and ride horses to visit relatives?
I'm more surprised that seemingly educated people have such simplistic views as "technology = progress, progress = good hence technology = good". Vaccines and running water are tech, megacorps owned "AI" being weaponised by surveillance obsessed governments is also tech.
If you don't push back on "tech" you're just blindingly accepting whatever someone else decided for you. Keep in mind the benefits of tech since the 80s have mostly been pocketed by the top 10%, the pleb still work as much, retire as old, &c. despite what politicians and technophiles have been saying
Tech enabled the horrors of WWI and II, tech directly enabled the Holocast -- IBM built special computers to help the Nazi's more effectively round up the Jews.
Tech also gave us vaccines and indoor plumbing and the clothes I am wearing.
It's the morals and courage to live by those morals which creates good. Progress is by definition towards a goal. If that goal is say
> to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity
and ensure our basic inherent (not government-given) rights to
> life, liberty, and pursuit of happiness
then all good.
If it is to enrich me at the cost of thee, create a surveillance state that rounds up and kills undesirables at scale, destroys our basic inherent rights, then tech not good
A tool is a tool. These AI critics sound to me like people who have hit their finger with a hammer, and now advocate against using them altogether. Yes, tech has always had two sides. Our "job" as humans is to pick the good parts, and avoid the bad. Nothing new, nothing exceptional.
> A tool is a tool. These AI critics sound to me like people who have hit their finger with a hammer, and now advocate against using them altogether.
Speaking of wonky analogies, have you considered that other people have access to these hammers and are aiming for your head ? And that some people might not want to be hit on the head by a hammer
More lazy analogies... Yes a hammer is a tool, so is a machine gun, a nuke, or the guy with his killdozer. So what are you gonna do? Nothing to see here, discussion closed.
"I personally don’t touch LLMs with a stick. I don’t let them near my brain. Many of my friends share that sentiment."
Any software engineer who shares this sentiment is doing their career a disservice. LLMs have their pitfalls, and I have been skeptical of their capabilities, but nevertheless I have tried them out earnestly. The progress of AI coding assistants over the past year has been remarkable, and now they are a routine part of my workflow. It does take some getting used to, and effectively using an AI coding assistant is a skill in and of itself that is worth mastering.
I feel AI now is good enough to follow the same pattern as with internet usage. The quality ranges from useless to awesome based on how you use it. Blanked statements that “it is terrible and uesless” reveals more about the person than the tech at this point.
I've used AI assitance in coding for a year before I quit. The hardest part was a day when the services where unexpectedly down, and working felt like I was amputated in some way. Nothing works, my usual movement does not produce code. That day I realised these AI integrations take away my knowledge and skill of the matter and is just maximising the easiest and fastest part of software development: writing code.
It’s some mixture of luddites, denial, ignorance, and I don’t know what else.
I’m not sure what these people are NOT seeing. Maybe I’m somehow fortunate with visibility into what AI can do today, and what it will do tomorrow. But I’m not doing anything special. Just paying attention and keeping an open mind.
I’ve been at this for 40 years, working professionally for more than 30. I’ve seen lots.
One pattern I’ve seen repeating is folks who seem to stop leaning at some point. I don’t understand this, because for me learning everyday is what fuels me. And those folks eventually die on the vine, or they become the last few greybeards working on COBOL.
We are alive at a very interesting time in tech. I am excited about that. I am here for it.
it already tells me enough to stay away from using AI tools for coding. and that's just one reason, if i consider all the others, then that's more than enough.
And then there is the moderate position: Don't be the person refusing the use a calculator / PC / mobile phone / AI. Regularly give the new tool a chance and check if improvements are useful for specific tasks. And carry on with your life.
Don't be the person refusing the 4GL/Segway/3D TV/NFT/Metaverse. Regularly give the new tool a chance and check if improvements are useful for specific tasks.
Like, I mean, at a certain point it runs out of chances. If someone can show me compelling quantitive evidence that these things are broadly useful I may reconsider, but until then I see no particular reason to do my own sampling. If and when they are useful, there will be _evidence_ of that.
(In fairness Segways seem to have a weird afterlife in certain cities helping to make tourists more annoying; there are sometimes niche uses for even the most pointless tech fads.)
Like, I mean, at a certain point it runs out of chances. If someone can show me compelling quantitive evidence that these things are broadly useful I may reconsider, but until then I see no particular reason to do my own sampling. If and when they are useful, there will be _evidence_ of that.
My relative came to me to make a small business website for her. She knew I was a "coder". She gave me a logo and what her small business does.
I fed all of it into Vercel v0 and out came a professional looking website that is based on the logo design and the business segment. It was mobile friendly too. I took the website and fed it to ChatGPT and asked it to improve the marketing copy. I fed the suggestions back to v0 to make changes.
My relative was extremely happy with the result.
It took me about 10 minutes to do all of this.
In the past, it probably would have taken me 2 weeks. One week to design, write copy, get feedback. Another week to code it, make it mobile friendly, publish it. Honestly, there is no way I could have done a better job given the time constraint.
I even showed my non-tech relative how to use v0. Since all changes requested to v0 was in english, she had no trouble learning how to use it in one minute.
Okay, I mean if that’s the sort of thing you regularly have to do, cool, it’s useful for that, maybe, I suppose? To be clear I’m not saying LLMs are totally useless.
These things are wicked, and unlike some new garbage javascript framework, it's revolutionary technology that regular people can actually use and benefit from. The mobility they provide is insane.
While that video looks cool from a "Red Bull Video of crazy people doing crazy things" type angle, that looks extremely dangerous for day to day use. You're one pothole or bad road debris away from a year in the hospital at best, or death at worst.
There is something to be said for the protective shell of a vehicle.
lol! I thought this was going to link to some kind of innovative mobility scooter or something. I was still going to say "oh, good; when someone uses the good parts of AI to build something different which is actually useful, I'll be all ears!", because that's all you would really have been advocating for if that was your example.
But - even funnier - the thing is an urbanist tech-bro toy? My days of diminishing the segway's value are certainly coming to a middle.
I mean sure but none of these even claimed to help you do things you were already doing. If your job is writing code none of these help you do that.
That being said the metaverse happened but it just wasn't the metaverse those weird cringy tech libertarians wanted it to be. Online spaces where people hang out are bigger than ever. Segways also happened they just changed form to electric scooters.
Being honest, I don't know what a 4GL is. But the rest of them absolutely DID claim to help me do things I was already doing. And, actually, NFTs and the Metaverse even specifically claimed to be able to help with coding in various different flavors. It was mostly superficial bullshit, but... that's kind of the whole tech for those two things.
In any case, Segways promised to be a revolution to how people travel - something I was already doing and something that the marketing was predicated on.
3DTVs - a "better" way to watch TV, which I had already been doing.
NFTs - (among other things) a financially superior way to bank, which I had already been doing.
Metaverse - a more meaningful way to interact with my team on the internet, which I had already been doing.
A 4GL is a "fourth generation language"; they were going to reduce the need for icky programmers back in the 70s. SQL is the only real survivor, assuming you're willing to accept that it counts at all. "This will make programmers obsolete" is kind of a recurrent form of magic tech; see 4GLs, 5GLs, the likes of Microsoft Access, the early noughties craze for drag-and-drop programming, 'no-code', and so forth. Even _COBOL_ was kind of originally marketed this way.
Sorry you're being downvoted even though you're 100% correct. There are use cases where the poor LLM reliability is as good or better than the alternatives (like search/summarization), but arguing over whether LLMs are reliable is silly. And if you need reliability (or even consistency, maybe) for your use case, LLMs are not the right tool.
You can have this position, but the reality is that the industry is accepting it and moving forward. Whether you’ll embrace some of it and utilize it to improve your workflow, is up to you. But over-exaggerating the problem to this point is kinda funny.
"You exaggerate, and the evidence is PMs are pushing it. PMs can't be wrong, can they?" Somebody really has to know what makes developers tick to write ragebait this good.
I can't even get the most expensive model on Claude to use "ls" correctly, with a fresh context window. That is a command that has been unchanged in linux for decades. You exaggerate how reliable these tools are. They are getting more useless as more customers are added because there is not enough compute.
Honestly, LLMs are about as reliable as the rest of my tools are.
Just yesterday, AirDrop wouldn't work until I restarted my Mac. Google Drive wouldn't sync properly until I restarted it. And a bug in Screen Sharing file transfer used up 20 GB of RAM to transfer a 40 GB file, which used swap space so my hard drive ran out of space.
My regular software breaks constantly. All the time. It's a rare day where everything works as it should.
LLMs have certainly gotten to the point where they seem about as reliable as the rest of the tools I use. I've never seen it say 2+2=5. I'm not going to use it for complicated arithmetic, but that's not what it's for. I'm also not going to ask my calculator to write code for me.
What I want from my tools is autonomy/control. LLMs raise the bar on being at the mercy of the vendor. Anything you can do with an LLM today can silently be removed or enshittified tomorrow, either for revenue or ideological reasons. The forums for Cursor are filled with people complaining about removed features and functional regressions.
Except it's more a case of "my phone won't teleport me to Hawaii sad faec lemme throw it out" than anything else.
There are plenty of people manufacturing their expectations around the capabilities of LLMs inside their heads for some reason. Sure there's marketing; but for individuals susceptible to marketing without engaging some neurons and fact checking, there's already not much hope.
Imagine refusing to drive a car in the 60s because they haven't reach 1kbhp yet. Ahaha.
> Imagine refusing to drive a car in the 60s because they haven't reach 1kbhp yet. Ahaha.
That’s very much a false analogy. In the 60s, cars were very reliable (not as much as today’s cars) but it was already an established transportation vehicle. 60s cars are much closer to todays cars than 2000s computers are to current ones.
It's even worse, because even with an unreliable 60s car you could at least diagnose and repair the damn thing when it breaks (or hire someone to do so). LLMs can be silently, subtly wrong and there's not much you can do to detect it let alone fix it. You're at the mercy of the vendor.
> What I want from my tools is reliability. Which is a spectrum, but LLMs are very much on the lower end.
"reliability" can mean multiple things though. LLM invocations are as reliable (granted you know how program properly) as any other software invocation, if you're seeing crashes you're doing something wrong.
But what you're really talking about is "correctness" I think, in the actual text that's been responded with. And if you're expecting/waiting for that to be 100% "accurate" every time, then yeah, that's not a use case for LLMs, and I don't think anyone is arguing for jamming LLMs in there even today.
Where the LLMs are useful, is where there is no 100% "right or wrong" answer, think summarization, categorization, tagging and so on.
I’m not a native English speaker so I checked on the definition of reliability
the quality of being able to be trusted or believed because of working or behaving well
For a tool, I expect “well” to mean that it does what it’s supposed to do. My linter are reliable when it catches bad patterns I wanted it to catch. My editor is reliable when I can edit code with it and the commands do what they’re supposed to do.
So for generating text, LLMs are very reliable. And they do a decent job at categorizing too. But code is formal language, which means correctness is the end result. A program may be valid and incorrect at the same time.
It’s very easy to write valid code. You only need the grammar of the language. Writing correct code is another matter and the only one that is relevant. No one hire people for knowing a language grammar and verifying syntax. They hire people to produce correct code (and because few businesses actually want to formally verify it, they hire people that can write code with a minimal amount of bugs and able to eliminate those bugs when they surface).
> For a tool, I expect “well” to mean that it does what it’s supposed to do
Ah, then LLMs are actually very reliable by your definition. They're supposed to output semi-random text, and whenever I use them, that's exactly what happens. Except for the times I create my own models and software, I basically never see any cases where the LLM did not output semi-random text.
They're not made for producing "correct code" obviously, because that's a judgement only a human can do, what even is "correct" in that context? Not even us humans can agree what "correct code" is in all contexts, so assuming a machine could do so seems foolish.
I'm a native English speaker. Your understanding and usage of the word "reliability" is correct, and that's the exact word I'd use in this conversation. The GP is playing a pointless semantics game.
It's not semantics, if the definition is "it does what it’s supposed to do" then probably all of the currently deployed LLMs are reliable according to that definition.
That's the crux of the problem. Many proponents of LLMs over promise the capabilities, and then deny the underperformance through semantics. LLMs are "reliable" only if you're talking about the algorithms behind the scene and you ignore the marketing. Going off the marketing they are unreliable, incorrect, and do not do what they're "supposed to do".
But maybe we don't have to stoop down to the lowest level of conversation about LLMs, the "marketing", and instead do what most of us here do best, focus on the technical aspects, how things work, and how we can make them do our bidding in various ways, you know like the OG hacker.
FWIW, I agree LLMs are massively over-sold for the average person, but for someone who can dig into the tech, use it effectively and for what it works for, I feel like there is more interesting stuff we could focus on instead of just a blanket "No and I won't even think about it".
The biggest change in my career was when I got promoted to be a linux sysadmin at a large tech company that was moving to AWS. It was my first sysadmin job and I barely knew what I was doing, but I knew some bash and python. I had a chance to learn how to manage stuff in data centers by logging into servers with ssh and running perl scripts, or I could learn cloudformation because that was what management wanted. Everybody else on my team thought AWS was a fad and refused to touch it, unless absolutely forced to. I wrote a ton of terrible cloudformation and chef cookbooks and got promoted twice times and my salary went from $50,000 a year to $150,000 a year in 3 years after I took a job elsewhere. AFAIK, most of the people on that team got laid off when that whole team was eliminated a few years after I left.
I was once in your camp, thinking there was some sort of middle-ground to be had with the emergence of Generative AI and it's potential as a useful tool to help me do more work in less time, but I suppose the folks who opposed automated industrial machinery back in the day did the same.
The problem is that, historically speaking, you have two choices;
1. Resist as long as you can, risking being labeled a Luddite or whatever.
2. Acquiesce.
Choice 1 is fraught with difficulty, like a dinosaur struggling to breathe as an asteroid came and changed the atmosphere it had developed lungs to use. Choice 2 is a relinquishment of agency, handing over control of the future to the ones pulling the levers on the machine. I suppose there is a rare Choice 3 that only the elite few are able to pick, which is to accelerate the change.
My increased cynicism about technology was not something that I started out with. Growing up as a teen in the late-80's/early-90's, computers were hotly debated as being either a fad that would die out in a few years or something that was going to revolutionize the way we worked and give us more free time to enjoy life. That never happened, obviously. Sure, we get more work done in less time, but most of us still work until we are too broken to continue and we didn't really gain anything by acquiescing. We could have lived just fine without smartphones or laptops (we did, I remember) and all the invasive things that brought with it such as surveillance, brain-hacking advertising and dopamine burnout. The massive structures that came out of all the money and genius that went into our tech became megacorporations that people like William Gibson and others warned us of, exerting a level of control over us that turned us all into batteries for their toys, discarded and replaced as we are used up. It's a little frightening to me, knowing how hyperbolic that used to sound 30 years ago, and yet, here we stand.
Generative AI threatens so much more than just altering the way we work, though. In some cases, its use in tasks might even be welcomed. I've played with Claude Code, every generative model that Poe.com has access to, DeepSeek, ChatGPT, etc...they're all quite fascinating, especially when viewed as I view them; a dark mirror reflecting our own vastly misunderstood minds back to us. But it's a weird place to be in when you start seeing them replace musicians, artists, writers...all things that humanity has developed over many thousands of years as forms of existential expression, individuality, and humanness because there is no question that we feel quite alone in our experience of consciousness. Perhaps that is why we are trying to build a companion.
To me, the dangers are far too clear and present to take any sort of moderate position, which is why I decided to stop participating in its proliferation. We risk losing something that makes us us by handing off our creativity and thinking to this thing that has no cognizance or comprehension of its own existence. We are not ready for AI, and AI is not ready for us, but as the Accelerationists and Broligarchs continue to inject it into literally every bit of tech they can, we have to make a choice; resist or capitulate.
At my age, I'm a bit tired of capitulating, because it seems every time we hand the reigns over to someone who says they know what they are doing, they fuck it up royally for the rest of us.
Maybe the dilemma isn’t whether to “resist” or “acquiesce”, but rather whether to frame technological change as an inherently adversarial and zero sum struggle, versus looking for opportunities to leverage those technologies for greater productivity, comfort, prosperity, etc. Stop pushing against the idea of change. It’s going to happen, and keep happening, forever. Work with it.
And by any metric, the average citizen of a developed country is wildly better off than a century or two ago. All those moments of change in the past that people wrung their hands over ultimately improved our lives, and this probably won’t be any different.
Your profile: Former staff software engineer at big tech co, now focused on my SaaS app, which is solo, bootstrapped, and profitable.
Yep. Makes sense.
> And by any metric
Can you cite one? Just curious. I enjoy when people challenge the idea that the advancement of tech doesn't always result in a better world for all because I grew up in Detroit, where a bunch of car companies decided that automation was better than paying people, moved out and left the city a hollowed out version of itself. Manufacturing has returned, more or less, but now Worker X is responsible for producing Nx10 Widgets in the same amount of time Worker Y had to produce 75 years ago, but still gets paid a barely livable wage because the unchecked force of greed has made it so whatever meager amount of money Worker X makes is siphoned right back out of their hands as soon as the check clears. So, from where I'm standing, your version of "improvement" is a scam, something sold to us with marketing woo and snake oil labels, promising improvement if we just buy in.
The thing is, I don't hate making money. I also don't hate change. Quite the opposite, as I generally encourage it, especially when it means we grow as humans...but that's generally not the focus of what you call "change," is it? Be honest with yourself.
What I hate is the argument that the only way to make it happen is by exploiting people. I have a deep love technology and repair it in my spare time for people to help keep things like computers or dishwashers out of landfills, saving people from having to buy new things in a world that treats technology as increasingly disposable, as though the resources used to create are unlimited. I know quite a bit about what makes it tick, as a result, and I can tell you first hand that there's no reason to have a microphone on a refrigerator, or a mobile app for an oven. But you and people like you will call that change, selling it as somehow making things more convenient while our data is collected, sorted and we spend our days fending of spam phone calls or contemplating if what we said today is tomorrow's thought crime. Heck, I'm old enough to remember when phone line tapping was a big deal that everyone was paranoid about, and three decades later we were convinced to buy listening devices that could track our movements. None of this was necessary for the advancement of humanity, just the engorgement of profits.
So what good came of it all? That you and I can argue on the Internet?
It's just exhausting to read the 1000th post of people saying "If we replace jobs with AI, we will all be having happy times instead of doing boring work." It's like reading a Kindergartner's idea of how the world works.
People need to pay for food. If they are replaced, companies are not going to make up jobs just so they can hire people. They are under no responsibility or incentive to do that.
It's useless explaining that here because half of the shills likely have ulterior reasons to be obtuse about that. On top of that, many software developers are so outside the working class that they don't really have a concept of financial obligation, some refusing to have friends that aren't "high IQ", which is their shorthand for not poor or "losers".
I think the dangers that LLMs pose to the ability of engineers to earn a living is overstated, while at the same time the superpowers that they hand us don't seem to get much discussion. When I was starting out in the 80's I had to prowl dial-up BBSs or order expensive books and manuals to find out how to do something. I once paid IBM $140 for a manual on the VGA interface so I could answer a question. The turn around time on that answer was a week or two. The other day I asked claude something similar to this: "when using github as an OIDC provider for authentication and assumption of an AWS IAM role the JWT token presented during role assumption may have a "context" field. Please list the possible values of this field and the repository events associated with them." I got back a multi-page answer complete with examples.
I'm sure github has documents out there somewhere that explain this, but typing that prompt took me two minutes. I'm able daily to get fast answers to complex questions that in years past would have taken me potentially hours of research. Most of the time these answers are correct, and when they are wrong it still takes less time to generate the correct answer than all that research would have taken before. So I guess my advice is: if you're starting out in this business worry less about LLMs replacing you and more about how to efficiently use that global expert on everything that is sitting on your shoulder. And also realize that code, and the ability to write working code, is a small part of what we do every day.
I’m glad you listed the manual example. Usually when people are solving problems, they’re not asking the kind of super targeted question in you second example. Instead it’s an exploration. You read and target the next concept you need to understand. And if you do have this specific question, you want the surrounding context because you’ll likely have more questions after the first.
So what people do is collecting documentations. Give them a glance (or at least the TOC), the start the process to understand the concepts. Sure you can ask the escape code for setting a terminal title, but will it says that not all terminals support that code? Or that piping does not strip out escape codes? That’s the kind of gotchas you can learn from proper manuals.
> So I guess my advice is: if you're starting out in this business worry less about LLMs replacing you and more about how to efficiently use that global expert on everything that is sitting on your shoulder.
There's a real danger in that they use so many resources though. Both in the physical world (electricity, raw materials, water etc.) as well as in a financial sense.
All the money spent on AI will not go to your other promising idea. There's a real opportunity cost there. I can't imagine that, at this point, good ideas go without funding because they're not AI.
I don't agree. LLMs don't have to completly replace software developers, it is enough to reduce the need for them by 30% or so and the salaries will nosedive making this particular career path unattractive.
I really enjoyed how your words made me _feel._ They encouraged me to "keep fighting the good fight" when it comes to avoiding social media, et. al.
I do Vibe Code occasionally, Claude did a decent job with Terraform and SaltStack recently, but the words ring true in my head about how AI weakens my thinking, especially when it comes to Python or any programming language. Tread carefully indeed. And reading a book does help - I've been tearing through the Dune books after putting them off too long at my brother's recommendation. Very interesting reflections in those books on power/human nature that may apply in some ways to our current predicament.
At any rate, thank you for the thoughtful & eloquent words of caution.
You could make the same argument for any language. It still requires you to think and implement the solution yourself, just at a certain level of abstraction.
I feel like in a sci-fi world with robots, teleportation and holodecks these people would decide to stay at home and hand wash the dishes.
If an amazing world changing technology like LLMs shows up on your doorstep and your response is to ignore it and write blog posts about how you don't care about it then you aren't curious and you aren't really a hacker.
I feel like the hacker response would be to roll your own models and move away from commercial offerings. Stuff like eleuther.ai is pretty inspirational, but that movements seems to have died down a bit. At least we still have a couple companies believing in doing open-weight stuff.
I don't touch dishwashers with a stick. No matter how well they work. I find it particularly disillusioning to realize how deep the dishwasher brainworm is able to eat itself even into progressive cleaning circles.
Edit: Ha I see you edited "empty the dishwasher" to "hand wash the dishes". My thoughts exactly.
> We programmers are currently living through the devaluation of our craft.
Valuation is fundamentally connected to scarcity. 'Devaluation' is just negative spin for making it plentyful.
When cicumstances changed to make something less scarce, one cannot expect to get the same value for it because of past valuation. That is just rent-seeking.
I view current LLMs as new kinds of search engines. Ones where you have to re-verify their responses, but on the other hand can answer long and vague queries.
I really don't see the harm in using them this way that can't also be said about traditional search engines. Search engines already use algorithms, it's just swapping out the algorithm and interface. Search engines can bias our understanding of anything as much as any LLM, assuming you attempt to actually verify information you get from an LLM.
I'm of the opinion that if you think LLMs are bad without exception, you should either question how we use technology at all or question this idea that they are impossible to use responsibly. However I do acknowledge that people criticize LLMs while justifying their usage, and I could just be doing the same thing.
Exactly. Using them to actually “generate content” is a sure fire way to turn your brain into garbage, along with whatever you “produce” - but they do seem to have fulfilled Google’s dream of making the Star Trek computer reality.
Unbelievably stale take. You can criticize the future effects of LLM's on critical thinking skills and cognitive degradation of N number metrics, but this is an incredibly jaded and emotional take on what is a freight train of technology.
"AI systems exist to reinforce and strengthen existing structures of power and violence."
I still can barely believe a human being could write this, though we have all read this sort of sentence countless times. Which "structure of power and violence" replicated itself into the brains of people, making them think like this? Everything "exists to reinforce and strengthen existing structures of power and violence" with these people, and they will not rest until there's anything left to attack and destroy
I recently had to write a simple web app to search through a database, but full-text searching wasn't quite cutting it. The underlying data was too inconsistent and the kind of things people would ask for would mean searching across five or six columns.
Just the job for an AI agent!
So what I did is this - I wrote the app in Django, because it's what I'm familiar with.
Then in the view for the search page, I picked apart the search terms. If they start with "01" it's an old phone number so look in that column, if they start with "03" it's a new phone number so look in that column, if they start with "07" it's a mobile, if it's a letter followed by two digits it's a site code, if it's numeric but doesn't have a 0 at the start it's an internal number, and if it doesn't match anything then see if it exists as a substring in the description column.
There we go. Very fast and natural searching that Does What You Mean (mostly).
No Artificial Intelligence.
All done with Organic Home-grown Brute Force and Ignorance.
I'm really excited that the current AI tools will help lots of people build small and useful projects. Normal people who would otherwise be subject to their OS. Subject to vendor options. Help desk, HR, or finance folks will be able to compose and build tools to help them do their jobs (or hobbies) better. Just like we do.
I think of it like frozen dinners. Frozen dinners are not the same as home cooked meals. There is a place for frozen dinners, fast foods, home cooked meals, and nice restaurants. Plus, many of us spend extra time and money making specialty food that may be as good as anything. Frozen dinners don't take away from that.
I think it's the same for coding and AI use. It might eventually enhance coding overall and help bring an appreciation to what engineers are doing.
Hobby or incidental coders have vastly expanded capabilities. Think of the security guy that needs one program to parse through files for a single project. Those tasks are reasonably attainable today without buying and studying the sed/awk guide. (Of course, we should all do that)
Professionals might also find value using AI tools like they would use a spell checker or auto-complete that can also lookup code specs or refer to other project files for you.
The most amazing and useful software, the software that wows us and moves us or inspires us, is going to be crafted and not vibed. The important software will be guided by the hands of an engineer with care and competence to the end.
So, you want to rebel and stay as organic-minded human? But the what exactly is "being a human"?
The biological senses and abilities were constantly augmented throughput the centuries, pushing the organic human to hide inside deeper layers of what you call as yourself.
What's yourself without your material possessions and social connections? There is no such thing as yourself without these.
Now let's wind back. Why resist just one more layer of augmentation of our senses, mind and physical abilities?
Capacity for intention and will were already driven by augmentations that were knowledge and reasoning. Knowledge was sourced externally and reasoning was developed from externally recorded memory of past. Even the instincts get updated by experiences and knowledge.
I'm not sure if you wrote this with AI, but could you provide examples?
Knowledge is shaped by constraints which inform intention, it doesn't "drive it."
"I want to fly, I intend to fly, I learn how to achieve this by making a plane."
not
"I have plane making knowledge therefore I want and intend to fly"
However, I totally understand that constraints often create a feedback loop where reasoning is reduced to the limitations which confine it.
My Mom has no idea that "her computer" != "windows + hp + etc", and if you were to ask her how to use a computer, she would be intellectually confined to a particular ecosystem.
I argue the same is true for capitalism/dominant culture. If you can't "see" the surface of the thing that is shaping your choices, chances are your capacity for "will" is hindered and constrained.
Going back to this.
> What's yourself without your material possessions and social connections? There is no such thing as yourself without these.
I don't think my very ability to make choices comes from owning stuff and knowing people.
I agree that you are an agent capable of having an intention, but that capability needs inputs from outside. Your knowledge and reasoning doesn't entirely reside inside you. Having ability of intention is like a car engine, waiting for inputs or triggers for action.
And no, I don't need AI for this level of inquiry.
I've tried it multiple times, but even after spending 4 hours on a fresh project I don't feel like I know what the hell is going on anymore.
At that point I'm just guessing what the next prompt is to make it work.
I have no critical knowledge about the codebase that makes me feel like I could fix an edge case without reading the source code line by line (which at that point would probably take longer than 4 hours).
I don't understand how anyone can work like that and have confidence in their code.
> I have no critical knowledge about the codebase that makes me feel like I could fix an edge case without reading the source code line by line (which at that point would probably take longer than 4 hours).
Peter Naur argues that programming is fundamentally an activity of theory building, not just program text production. The code itself is merely the artifact of the real work.
You must not confuse the artifact (the source code) with the mind that produced the artifact. The theory is not contained in the text output of the theory-making process.
The problems of program modification arise from acting on the assumption that programming is just text production; the decay of a program is a result of modifications made by programmers without a proper grasp of the underlying theory. LLMs cannot obtain Naur's Ryleian "theory" because they "ingest the output of work" rather than developing the theory by doing the work.
LLMs may _appear_ to have a theory about a program, but this is an illusion.
To believe that LLMs can write software, one must mistakenly assume that the main activity of the programmer is simply to produce source code, which is (according to Naur) inaccurate.
I agree with this take, but I'm wondering what vibe coders are doing differently?
Are they mainly using certain frameworks that already have a rigid structure, thus allowing LLMs to not worry about code structure/software architecture?
Are they worry-free and just run with it?
Not asking rhetorically, I seriously want to know.
This is one of the most insightful thoughts I've read about the role of LLM's in software development. So much so, indeed, its pertinence would remain pristine after removing all references to LLM's
It's interesting that this is a similar criticism to what was levelled at Ruby on Rails back in the day. I think generating a bunch of code - whether through AI or a "framework" - always has the effect of obscuring the mental model of what's going on. Though at least with Rails there's a consistent output for a given input that can eventually be grokked.
I recently made a few changes to a small personal web app using an LLM. Everything was 100% within my capabilities to pull off. Easily a few levels below the limits of my knowledge. And I’d already written the start of the code by hand. So when I went to AI I could give it small tasks.
Create a React context component, store this in there, and use it in this file. Most of that code is boilerplate.
Poll this API endpoint in this file and populate the context with the result. Only a few lines of code.
Update all API calls to that endpoint with a view into the context.
I can give the AI those steps as a list and go adjust styles on the page to my liking while it works. This isn’t the kind of parallelism I’ve found to be common with LLMs. Often you are stuck on figuring out a solution. In that case AI isn’t much help. But some code is mostly boilerplate. Some is really simple. Just always read through everything it gives you and fix up the issues.
After that sequence of edits I don’t feel any less knowledgeable of the code. I completely comprehend every line and still have the whole app mapped in my head.
Probably the biggest benefit I’ve found is getting over the activation energy of starting something. Sometimes I’d rather polish up AI code than start from a blank file.
For me LLMs have been an incredible relief when it comes to software planning—quickly navigating the paralyzing quantity of choices when it comes to infrastructure, deployment, architecture and so on. Of course, this only highlights how crushingly complex it all is now, and I get a sinking feeling that instead of people solving technical complexity where it needs solving, these tools will be an abstraction layer over ever-rolling balls of mud that no one bothers to clean up anymore.
I learned to code in the late 70s on computers using BASIC, then got into Z80 assembly language. Sure, the games were wrote back then were nothing like today's 10GB, $100M+ multi-year projects, but they were still extremely exciting because expectations were much lower back then.
Anyway, the point I'm getting to was it was glorious to understand what every bit of every register and every I/O register did. There were NO interposing layers of software that you didn't write yourself or didn't understand completely. I even wrote a disassembler for the BASIC ROM and spend many hours studying it so I could take advantage of useful subroutines. People even published books that had that all mapped out for you (something like "Secrets of the TRS-80 ROM Decoded").
Recently I have been helping a couple teenagers in my neighborhood learn Python a couple hours a week. After installing Python and going through the foundational syntax, you bet I had them write many of those same games. Even though it was ASCII monsters chasing their character on the screen, they loved it.
It was similar to this, except it was real-time with a larger playfield:
I'm currently coding a Gameboy (which kinda has a Z80) emulator and it's so much fun! (I'm in my mid-20s for context)
I've never really worked on such a low level, the closest I've gotten before is bytecode - which while satisfying - just isn't as satisfying as having to imagine the binary moving around the CPU and registers (and busses too).
I'm even finding myself looking at computers in a totally different way, it's a similar feeling to learning a declarative, or functional language (coming from a procedural language) - except with this amazing hardware component too.
Hats off to you though, I'm not sure I'd have had the patience to code under those conditions!
Most of this debate misses the real shift. AI isn't replacing programmers, it's replacing the parts of programming that were never craft in the first place. In the future, most people will prompt code they barely understand while a small minority who keep real depth will end up owning the hard problems. If anything collapses the culture, it won't be AI but our willingness to trade mastery for convenience.
In graphics there is the uncanny valley effect: when the object approaches reality the experience degrades. A similar mode holds for AI: the more the agent resembles human thinking, feeling and (in the future) touch, the more distress creates. Because it is not and probably never be real.
Maybe because I came into software not from an interest in software itself but from wanting to build things, I can't relate to the anti-LLM attitude. The danger in becoming a "crafter" rather than a "builder" is you lose the forest for the trees. You become more interested in the craft for the craft's sake than for its ability to get you from point A to point B in the best way.
Not that there's anything wrong with crafting, but for those of us who just care about building things, LLM's are an absolute asset.
These hyper paranoid statements like "I personally don’t touch LLMs with a stick. I don’t let them near my brain", are fairly worrisome for a technical person who claims to have any understanding of AI and undermines the credibility of the critique. There is some truth in here but it's beneath a lot of paranoia that's hard to sift through.
"Hacker" of course, has overwhelmingly mostly lost the plot. Especially here, but elsewhere too.
"Hacker" was a recognition that there existed a crusty old entrenched system (mostly not through any fault of any individual) and that it is good to poke and chip away at it, though exploring the limits of new technology.
Whatever we're doing now here, it's emphatically not that.
I think there might be a culture divide here. That person is very likely from Germany/Berlin based on their attitudes and descriptions and I feel like the hacker/tech scene is very different from bay area vibes.
FAANG is not really a thing here and people are much more tech-luddite, privacy paranoid.
>> No matter how well “AI” works, it has some deeply fundamental problems, that won’t go away with technical progress.
Without an explanation of what they author is calling out as flaws, it is hard to take this article seriously.
I know engineers I respect a ton who have gotten a bunch of productivity upgrades using "AI". My own learning curve has been to see Claude say "okay, these integration tests aren't working. Let me write unit tests instead" and go on when it wasn't able to fix a jest issue.
In general using natural language to feed into AI to generate code to compile to runnable software seems like the long way around to designing a more usable programming language.
It seems that most people preferring natural language over programming languages don't want to learn the required programming language and ending up reinventing their own worse one.
There is a reason why we invented programming languages as an interface to instruct the machine and there is a reason why we don't use natural language.
As a crappy programmer I love AI! Right now I'm focusing on building up my Math knowledge, general CS knowledge and ML knowledge. In the future, knowing how to read code and understanding it may be more important than writing it.
I think its amazing what giant vector matrices can do with a little code.
The thing about reading code and understanding is logical reasoning, which you can do by knowing the semantic of each tokens. But the semantics are not universal. You have the Turing Machine, the lambda calculus, horn clauses, etc… Then there are more abstractions (and new semantics) built on top of those.
Writing code is very easy if you know the solution and the semantics of the coding platform. But knowing the solution is a difficult task, even in a business settings where the difficulty are more communication issues. Knowing the semantics of the coding platform is also a difficult one, because you’ll probably be using others’ code and you’ll face the same communication issue (lack of documentation, erroneous documentation, etc…)
So being good at programming does not really means knowing code. It’s more about knowing how to bypass communication barriers to get the knowledge you need.
AI is not one solution to all the problems in the world. But neither is it worthless. There's a proper balance to be had in knowing how useful AI is to an individual.
Sure, it can be overdone. But at the same time, it shouldn't be undersold.
If as the author suggests AI is inherently designed to further concentrate control and capital, that may be so, but that is also the aim of every business.
I'm under the impression that AI is still negative ROI. Creating absolute value is different from creating value greater than the cost. A tool is a tool, but could you continue performing professionally if it was suddenly no longer available?
well, maybe adopt an outlook that things you think are real aren't, and just maybe it will work just as fine if you completely ignore them. going forward ignoring ai that are smarter than autocomplete may be just the way to go
I see this play out everywhere actually be it code, thoughts, even intent, atomized for the capital engine.
Its more than a productivity hack, its a subtle power shift decisions getting abstracted, agency getting diluted
Opting in to weirdness and curiosity is the only bug worth keeping which will eventually become a norm
> I personally don’t touch LLMs with a stick. I don’t let them near my brain. Many of my friends share that sentiment.
> [...] making it increasingly hard to learn things [...]
I find chatting with AI and drilling it for details is often more effective than other means of searching for the same information, or even asking random co-workers. It's all about how you use it.
> No matter how well “AI” works, it has some deeply fundamental problems, that won’t go away with technical progress. I’d even go as far and say they are intentional.
> AI systems being egregiously resource intensive is not a side effect — it’s the point.
WTF? There's nothing for me to learn from this post.
HN loves this "le old school" coder "fighting the good fight" speak but it seems sillier and sillier the better and better LLM's get. Maybe in the GPT 4 era this made sense but Gemini 3 and Opus 4.5 are substantively different, and anyone who can extrapolate a few years out sees the writing on the wall.
A year ago, no reasonable person would use AI for anything but small-scoped autocomplete. Now it can author entire projects without oversight. Inevitably every failure case for LLM's is corrected, everything people said LLM's "could never do" they start doing within 6 months of that prognostication.
Great comment. I'll add that despite being a bit less powerful, the Composer 1 model in Cursor is also extremely fast - to the point where things that Claude would take 10+ minutes of tool calls now takes 30 seconds. That's the difference between deciding to write it yourself, or throwing a few sentences in Cursor and having it done right away. A year ago I'd never ask AI to do tasks without being very specific about which files and methodologies I want it to use, but codebase search has improved a ton and it can gather this info on it's own, often better than I can (if I haven't worked on particular feature or domain in a few months and need to re-familiarize myself with how it's structured). The bar for what AI can do today is a LOT higher than the average AI skeptic here thinks. As someone who has been using this since the GPT4 era, I'd say that I find a prompt about once a week that I figured LLMs would choke on and screw up - but they actually nail it. Whatever free model is running in Github Copilot is not going to do as well, which is probably where a lot of frustration comes from if that is all someone has experienced.
Yeah the thing about having principles is that if the principle depends on a qualitative assessment, then the principle has to be flexible as the quality that you are assessing changes. If AI was still at 2023 levels and was improving very gradually every few years like versions of Windows then I'd understand the general sentiment on here, but the rate of improvement in AI models is alarmingly fast, and assumptions about what AI "is good for" have 6-month max expiration dates.
Most "low hanging fruits" have been taken. The thing with AI is that it gets worse in proportion to how new of a domain it is working in (not that this is any different than humans). However the scale of apps made that utilize AI have exploded in usefulness. What is funny is that some of the ones making a big dent are horrible uses of AI and overpromise its utility (like cal.ai)
I couldn't care less about that pseudo-Marxist mumbo-jumbo about fascists redefining truth. I feel happier and less alienated (to speak in author's terms) due to LLMs. And no rhetoric about control and power can change the fact that lots of software engineering tasks are outright boring for many people.
For example, I spent a bunch of dollars to let Claude figure out how to setup a VSCode workspace with a multi-environment uv monorepo with a single root namespace and an okayish VSCode linting support (we still failed to figure out how to enable a different python interpreter for each folder for Ruff, but that seems to be a Ruff extension limitation).
Everytime I read one of these "I don't use AI" posts, the content is either "my code is handcrafted in a mountain spring and blessed by the universe itself, so no AI can match it", or "everything different from what I do is technofascism or <insert politics rant here>". Maybe Im missing something, but tech is controlled by a handful of companies - always have been; and sometimes code is just code, and AI is just a tool. What am I missing?
I was embarrassed recently to realize that almost all the code I create these days is written by AIs. Then I realized that’s OK. It’s a tool, and I’m making effective use of it. My job was to solve problems, not to write code.
I have a little pet theory brewing. Corporate work claims that we hire junior devs who become intermediate devs, who then become senior devs. The doomsday crowd claim that AI has replaced junior and intermediate devs, and is coming for the senior devs next.
This has felt off to me because I do way more than just code. Business users don’t want get into the details of building software. They want a guy like me to handle that.
I know how to talk to non-technical SMEs and extract their real requirements. I understand how to translate this into architecture decisions that align with the broader org. I know how to map it into a plan that meets those org objectives. And so on.
I think that really what happens is nerds exist and through osmosis a few of them become senior developers. They in turn have junior and intermediate assistant developers to help them deliver. Sometimes those assistants turn out to be nerds themselves, and they spontaneously transmute into senior developers!
AI is replacing those assistant human developers, but we will still need the senior developers because most business people want to sit with a real human being to solve their problem.
I will, however, get worried when AIs start running businesses. Then we are in trouble.
I’ve been tempted to define my life in a big prompt and then do something like: it’s 6:05. Ryan has just woke up. What action (10min or less) does he take? I wonder where I’ll end up if I follow it to a T.
I suggest you have a look at Bell Labs, Xerox and Berkeley, as a simple introduction to the topic - if you thing OSS came from "the goodness of their hearts" instead of a practical business necessity, I have a bridge to sell you.
I would also recommend you to peruse the last 50 years for completely reproductible, homegrown or open computing hardware systems you can build yourself from scratch without requiring overly expensive or exotic hardware. Yes, homegrown CPUs exist, but they "barely work" and often still rely on logic gates. Can you produce 74xx series ICs reliably in a homelab setting? Maybe, but for most of us, probably not. And certainly not for the guys ranting about "companies taking over".
If can't build your computing devices from scratch, store bought is fine. If you can, you're the exception and not the rule.
You are not missing much. Yes there will be situations where AI won’t be helpful, but that’s not a majority
Used right, Claude Code is actually very impressive. You just have to already be a programmer to use it right - divide the problem into small chunks yourself, instruct it to work on the small chunks.
Second example - there is a certain expectation of language in American professional communication. As a non native speaker I can tell you that not following that expectation has real impact on a career. AI has been transformational, writing an email myself and asking it to ‘make this into American professional english’
? Maybe Im missing something, but tech is controlled by a handful of companies - always have been
I guess it depends on what you define as "tech", but the '80s, '90s, and early '00s had an explosion of tiny hardware and software startups. Some even threatened Intel with x86 clones.
It wasn't until the late '90s that NVIDIA was the clear GPU winner, for instance. It had serious competition from 3DFX, ATI, and a bunch of other smaller companies.
> but the '80s, '90s, and early '00s had an explosion of tiny hardware and software startups
Most of them used intel, motorola or zilog tech at some capacity. Most of them with a clock used dallas semiconductor tech; Many of them with serial ports also used either intel or maxim/analog devices chips.
Many of those implementations are patented, and their inner designs were generically, "trade secrets". Most of the clones and rebrands were actually licensed (most of 80x51 microcontrollers and z80 chips are licensed tech, not original). As a tinkerer, you'd receive a black box (sometimes literally) with a series of pins and a datasheet.
If anything, i'd say you have much more choice today than in the 80s/90's.
Not much. Even the argument that AI is another tool to strip people of power is not that great.
It's possible to use AI chatbots against the system of power, to help detect and point out manipulation, or lack of nuance in arguments, or political texts. To help decipher legalese in contracts, or point out problematic passages in terms of use. To help with interactions with the sate, even non-trivial ones like FOI requests, or disputing information disclosure rejections, etc.
AI tools can be used to help against the systems of power.
There's a lot of overlap between "AI is evil megacapitalism" and "AI is ineffective", and I never understood the latter, but I am increasingly arriving to the understanding that the latter claim isn't real, it's just a soldier in the war being fought over the former.
We shape the world through our choices, generally under the umbrella of deterministic systems. AI is non-deterministic, but instead amplifies the concerns by a few wealthy corporations / individuals.
So is AI effective at generating marketing material or propagating arguably vapid value systems in the face of ecological, cultural, and economic crisis? I'd argue yes. But effective also depends on an intention, and that's not my intention, so it's not as effective for me.
I think we need more "manual" choice, and more agency.
Open source library development has to follow very tight sets of style adherence because of its extremely distributed nature, and the degree to which feature development is as much the design of new standards as it is writing working code. I would imagine that it is perhaps the kind of programming least well suited to AI assistance.
AI speeds me up a tremendous amount in my day job as a product engineer.
Ineffective at what? Writing good code, or producing any sort of valuable insight? Yes, it's ineffective. Writing unmaintainable slop at line rate? Or writing internet-filling spam, or propagating their owners' points of view? Very effective.
I just think the things they are effective at are a net negative for most of us.
I am getting tilted by both corp AI hype and the luddites like this. If you don't think that term is appropriate, then I am not sure if it's ever appropriate to use it in the general sense. The "I know you will say you use it appropriately but others don't" pre-emption is something I have seen before, and it isn't convincing.
This article lacks nuance, and could be summarized as "LLMs are bad" Later, I suspect this author (and others of this archetype) will moderate and lament "What I really meant was: I don't like corporations lying about LLMs, or using them maliciously; I didn't imply they don't have uses". The words in the article do not support this.
I believe this pattern is rooted in social-justice-oriented (Is that still the term?) USA left politics. I offer no explanation for this conflation, but an observation.
I think we can all agree AI is a bubble, and is over-hyped. I think we can ignore any pieces that say "AI is all bad" or "AI is all good" or "I've never used AI but...".
It's nuanced, can be abused, but can be beneficial when used responsibly in certain ways. It's a tool. It's a powerful tool, so treat it like a powerful tool: learn about it enough to safely use it in a way to improve your life and those around you.
Avoiding it completely whilst confidently berating it without experience is a position formed from fear, rather than knowledge or experience. I'm genuinely very surprised this article has so many points here.
Commenting on the internet points (this article is having), I realised I was reading most of the popular things here for some time, months, and it was such a huge and careless waste of my time…
So I’m not even surprised it’s having so many internet points. As if they were the sign of quality, then the opposite. Bored not very smart people thinking the more useless junk they consume, the better off they’ll become. Doesn’t work that way.
I was interested until I got to the "fascist" line where the author reveals his motives for avoiding AI. Bummer, I was hoping for a level headed technical argument. This post doesn't belong on the front page.
We will have decades of AI slop that needs to be cleaned up. Many startups will fail hard when the AI code bugs all creep up when a certain scale is reached. There will be massive dataloss, lots of hacking attempts that will succeed because of poor AI code no one understands. I dont see a dev staying in the same place for many years when its just soulless AI day in day out.
Where's the popcorn? I don't really care either way about so-called AI. I find the talk about AGI quite ridiculous, but I can imagine LLMs have their utility just like anything else. I don't vibe code because I don't find it useful. I'm fine coding by myself thank you very much.
When the AI hype is over and the bubble has burst, I'll still be here, writing quality software using my brain and my fingers, and getting paid to do it.
One day long time ago, I decided that hex grid coordinates lovingly described by redblobgames and used by every developer delivering actual games are inelegant. I wanted to store them in a rectangular array, and also store all edges and vertices in rectangular arrays with simple numerical addressing for the usual algos (distance, neighbors etc) between all 3. I messed around with it and a map generator for a few weeks. Needless to say it was as elegant as a glass hammer, 3 simple arrays beautiful to look at. I didn't finish anything close to a game. But it was a great fun.
If I ever want to deliver a game I might outsource my hex grid to AI. But back in those days I could have probably used a library.
Is hacking about messing around with things? You can still do it, ignore AI, ignore prior art. You can reimplement STL because std vector is "not fast enough". Is hacking about making things? Then again, AI boilerplate is little different than stitching together libraries in practice.
The big tech will build out compute in a never seen speed and we will reach 2e29 Flops faster than ever.
Big tech is competing with each other and they are the ones with the real money in our capitalistic world but even if they would find some slow down between each others, countries are also now competing.
In the next 4 years and the massive build out of compute, we will see a lot clearer how the progress will go.
And either we hit obvous limitations or not.
If we will not see an obvious limitation, fionas opinion will have 0 relevance.
The best chance for everyone is to keep a very very close eye on AI to either make the right decisions (not buying that house with a line of credit; creating your own product a lot faster thanks to ai, ...) or be aware what is coming.
I vaguely agree with the fake conclusions at the end, which are vapid and do not arise from the arguments. "Be kind to babies, brush your teeth twice a day, always tip the waitstaff, blah blah blah whatever Bernie said, etc..."
The real conclusion is:
> No matter how well “AI” works, it has some deeply fundamental problems, that won’t go away with technical progress.
which you can tell from the title. There are zero arguments made to support this. It's just faux-radical rambling. I'm amazed how people are impressed by this privileged middle-class babble. This is an absolutely empty-headed article that AI could spit out dozens of versions of.
I care how well your AI works. I also care how it works, like I care about how transistors work. I do not want to build my own transistors*, although I like to speculate about how different ones could be built, just like I like to think about different machine learning architectures. The skills that I learned when I learned computers put me in an ideal position to understand, implement, and use machine learning.
The reason I care about how well your AI works is because I am going to use it to accomplish my own goals. I am not going to fetishize being a technician in an art most people don't know, I am not a middle-class profession worshiper. I get it, your knowledge of a rare art guarantees that you eat. If your art becomes obviated by technology (like the art of doing math by hand, which you could once live very well on from birth to death), you have to learn something else.
But I care how well your AI works because I am trying to accomplish things in the world, not build an identity. I think AI is bad, and I'm a bit happy that it's bad, because it means that I can use it to bridge myself to the next place before it gets good enough not to need me. The fact that I know how computers work means that I can make the AI do what I want in a way that somebody who didn't have my background couldn't. The first people that were dealing with computers were people who were good at math.
Life is not going to be good for the type of this year's MBP js programmer who learned it because the web was paying, refused to learn anything else so only gradually became a programmer after node came around, and only used the trendy frameworks that it seemed they were hiring for, who still has no idea how a computer works. AI is actually going to give everything back to the nerds, because AI assistance might eventually mean you're only limited by your imagination (within the context of computers.) Nerds are imaginative. The kind of imagination that has been actively discouraged in tech for a long time, since it became a profession for marketers and middlemen.
I almost guarantee this call for craftsmen against AI is coming from someone who builds CRUD apps for a living. To not be excited about what AI can do for the things that you already wanted to create, the things you dream of and couldn't find enough people with enough skills to dream with you to get it done; to me that's a sign that you're just not into computers.
My fears of AI is that it will be nerfed, made so sycophantic that it sucks down credits and gets distracted so often that it makes it impossible to work, be used to extract my ideas and give them to someone with more capital and manpower who can jump in front of me (the Amazon problem), that governments will be bribed into making it impossible to run them locally, that governments will be bribed into letting corporations install them on all our computers so they can join in on the surveillance and control. I'm worried about the speakwrite. I'm worried about how it will make dreams possible for evil men. I am not worried about losing my identity. I'm not insecure like that.
* although I have of course, in school, by stringing a bunch of NANDs together. I was a pioneer of the WAS-gate, which is when you turn on the power and a puff of smoke comes out of one of your transistors.
>In a world where fascists redefine truth, where surveillance capitalist companies, more powerful than democratically elected leaders, exert control over our desires, do we really want their machines to become part of our thought process? To share our most intimate thoughts and connections with them?
Generally speaking people just cannot really think this way. People broadly are short term thinkers. If something is convenient, people will use it. Is it easier to spray your lawn with pesticides? Yep, cancer (or biome collapse) is a tomorrow problem and we have a "pest" problem today. Is it difficult to sit alone with your thoughts? Well good news, Youtube exists and now you don't have to. What happens next (radicalization, tracking, profiling, propaganda, brain rot) is a tomorrow problem. Do you want to scroll at the end of the day and find out what people are talking about? Well, social media is here for you. Whether or not it's accidentally part of a privatized social credit system? Well again, that
's a problem for later. I _need_ to feel comfortable _right now_. It doesn't matter what I do to the world so long as I'm comfortable _right now._
I don't see any way out of it. People can't seem to avoid these patterns of behavior. People asking for regulation are about as realistic as people hoping for abstinence. It's a correct answer in principle but just isn't going to happen.
> I _need_ to feel comfortable _right now_. It doesn't matter what I do to the world so long as I'm comfortable _right now._
I think that can be offset if you have a strong motivation, a clear goal to look forward to in a reasonable amount of time, to help you endure through the discomfort:
Before I had enough financial independence to be able to travel at will, I was often stuck in a shit ass city, where the most fun to be had was video games and fantasizing about my next vacation coming up in a month or 2, and that helped me a lot in coping with my circumstances.
Too few people are allowed or can afford even this luxury of a pleasant future, a promise of a life different/better than their current.
I wonder how much of that is "nature vs. nurture"?
Like the Tolkienesque elves in fantasy worlds, would humans be more chill too if our natural lifespans were counted in centuries instead of decades?
Or is it the pace of society, our civilization, that always keeps us on edge?
I mean I'm not sure if we're born with a biological sense of mortality, an hourglass of doom encoded into our genes..
What if everybody had 4 days of work per week, guaranteed vacation time every few months, kids didn't have to wake up at 7/8 in the morning every day, and progress was measured biennially, e.g. 2 years between school grades/exams, and economic performance was also reviewed in 2 year periods, and so on, could we as a species mellow the fuck out?
I've wondered about this a lot, and I think it's genetic and optimized for survival in general.
Dogs barely set food aside; they prefer gorging, which is a good survival technique when your food spoils and can be stolen.
Bees, at the other end of the spectrum, spend their lives storing food (or "canning", if you will - storing prepared food).
We first evolved in areas that were storage-adverse (Africa), and more recently many of us moved to areas with winters (both good and needful storage). I think "finish your meal, you might not get one tomorrow" is our baseline survival instinct; "Winter is coming!" is an afterthought, and might be more nurture-based behavior than the other.
Yes, and it's barely been 100 years, probably closer to 50, since we have had enough technology to make the daily lives of most (or half the) humans in the world comfortable enough that they can safely take 1-2 days off every week.
For the first time in human history most people don't have to worry about famine, wars, disasters, or disease upending their lives; they can just wait it out in their homes.
Will that eventually translate to a more relaxed "instinct"?
>I find it particularly disillusioning to realize how deep the LLM brainworm is able to eat itself even into progressive hacker circles.
This is such a bizarre sentiment for any person interested in technology. AI is, without any doubt, the most fascinating and important technology I have seen developed in my lifetime. A decade ago the idea of a computer not only holding a reasonable conversation with a human, but being able to talk with a human on deep and complex subjects seemed far out of reach.
No doubt there are many deep running problems with it, any technology with such a radical breakthrough will have them. But none of that takes away from how monumental of an achievement it is.
Looking down at people for using it or being excited about it is such an extreme position. Also the insinuation that the only reason anybody uses it because they are forced into it, is completely bizarre.
> My rate of thinking is faster than typing, so the bottleneck has switched from typing to thinking!
Unless you're neuralinking to AI, you're still typing.
What changed is what you type. You type less words to solve your problem. The machine does the conversion from less words to more words. At the expense of less precision: the machine can do the conversion to the incorrect sequence of more words.
Well, there are two aspects from which I can react to this post.
The first aspect is the “I don’t touch AI with a stick”. AI is a tool. Nobody is obligated to touch it obviously, but it is useful in certain situations. So I disagree with the author’a position to avoid using AI. It reads like stubbornness for the sake of avoiding new tech.
The second angle is the “bigtech corporate control” angle. And honestly, I don’t get this argument at all. Computers and the digital world has created the biggest distopian world we have ever witnessed. From absurd amounts of misinformation and propaganda fueled by bot farms operated at government levels, all the way to digital surveillance tech. You have that strong of an opinion against big tech and digital surveillance, blaming AI for that, while enjoying the other perils of big tech, is virtue signaling.
Also, what’s up with the overuse of “fascism” in places where it does not belong?
This piece started relatively well but devolved by the end.
Is AI resource-intensive by design? That doesn’t make any sense to me. I think companies are furiously working toward reducing AI costs.
Is AI a tool of fascism? Well, I’d say anything that can make money can be a tool of fascism.
I can sort of jive with the argument that AI is/will be reinforcing the ideals of those in power, although I think traditional media and the tooling that AI intends to replace like search engines accomplished that just fine.
What we are left with is, I think, an author who is in denial about their special snowflake status as a programmer. It was okay for the factory worker to be automated away, but now that it’s my turn to be automated away I’m crying fascism and ethics.
Their friends behave the way they do about AI because they know it’s useful but know it’s unpopular. They’re trying to save face while still using the tool because it’s so obviously useful and beneficial.
I think the analogy is similar to the move from film to digital. There will be a tiny amount of people who never buy in, there will be these “ashamed” adopters who support the idea of film and hope it continues on, but for themselves personally would never go back to film, and then the majority who don’t see the problem with letting film die.
> AI systems exist to reinforce and strengthen existing structures of power and violence. They are the wet dream of capitalists and fascists.
Persuasion tip: if you write comments like this, you are going to immediately alienate a large portion of your audience who might otherwise agree with you.
The author may not care but I doubt people care that a software has been developed by AI instead of a human. Just like nobody cares whenever a hole has been dug by hand using a shovel or by an excavator.
people care if there is noone able to fix the software or adjust it.
Think of old SAP systems with a million obscure customization - any medium to large codebase that is mostly vibe coded is instantly legacy code.
In your hole analogy: People don't care if a mine is dug by a bot or planned by humans until there is structural integrity issues or tunnels that are collapsing and nobody is able to read the map properly.
Once I saw the use of “FaCiSm” and “capitalist corporate control” I tuned out. I wouldn’t trust this persons opinion on trimming my nails let alone future tech.
> LLM brainworm is able to eat itself even into progressive hacker circles
What a loaded sentence lol. Implying being a hacker has some correlation with being progressive. And implying somehow anti-AI is progressive.
> AI systems being egregiously resource intensive is not a side effect — it’s the point.
Really? So we're not going to see AI users celebrating over how much less power DeepSeek used, right?
Anyway guess what else is resource intensive? Making chips. Follow the line of logic you will find computers consolidate powers and real progressive hackers should use pencil and paper only.
Back to the first paragraph...
> almost like a reflex, was a self-justification of why the way they use these tools is fine, while other approaches were reckless.
The irony is off the roof. This article is essentially: when I use computational power how I like, it's being a hacker. When others use computational power their way, it's being fascists.
> Implying being a hacker has some correlation with being progressive
I didn't read it that way. "Progressive hacker circles" doesn't imply that all hackers are progressive, it can just be distinguishing progressive circles from conservative ones.
Pro/regressive are terms that are highly contextual. Progress for progress’ sake alone can move anything forward. I would argue the progression of the attention economy has been extremely negative for most of the human race, yet that is “progressing.”
In this instance, it’s just claiming turf for the political movement in the US that has spent the last century:
- inventing scientific racism and (after that was debunked) reinventing other academic pretenses to institutionalize race-base governance and society
- forcibly sterilizing people with mental illnesses until the 1970s, through 2005 via coercion, and until the present via lies, fake studies, and ideological subversion
- being outspokenly antisemitic
Personally, I think it’s a moral failing we allow such vile people to pontificate about virtues without being booed out of the room.
The typical CCC / Hackerspace - circle is kinda progressive / left leaning. At least in my experience. Which I think she(or he?) was implying. Of course not every hacker is :)
> Implying being a hacker has some correlation with being progressive
I mean, yeah, that kind of checks out. The quoted part doesn't make much sense to me, but that most hackers are progressives (as in "enact progress by change", not the twisted American version) should hardly come as a surprise. The opposite would be that a hacker could be a conservative (again, not the US version, but the global definition; "reluctant to change"), which is pretty much a oxymoron. Best would be to eschew political/ideological labels in total, and just say we hackers are unpolitical :)
Personally I use AI for most of my work, launder it a bit to adhere to my own personal style, and don't tell anyone most of the time.
In the end? no one cares. I get just as much done (maybe more), while doing less work. Maybe some of my skills will atrophy, but I'll strengthen others.
I'm still auditing everything for quality as I would my own code before pushing it. At the end of the day, it usually makes fewer typos than I would. It certainly searches the codebase better than I do.
All this hype on both ends will fade away, and the people using the tools they have to get things done will remain.
Luddism is a reaction to the current situation as it pertains to labor. Marx had this to say about it:
"It took both time and experience before the workers learned to distinguish between machinery and its employment by capital, and to direct their attacks, not against the material instruments of production, but against the mode in which they are used."
- Karl Marx. Das Kapital Vol 1 Ch 15: Machinery and Modern Industry, 1867
Tech can always be good, how its used is what makes it bad, or not.
I think the author makes some very good points, but it's perhaps worth noting that they are the current status quo. I do find myself wondering if the author re-evaluates in a future where the technology gets cheaper, the executable AI engine fits on a standalone Raspberry Pi, and retraining the engine is done by volunteer co-ops.
... but, it is definitely worth considering whether the status quo is tolerable and whether we as technical creatives are willing to work with tools that live within it.
I don't think I'm going to take seriously an argument that uses Marx as its foundation but I'm glad that the pronouns crowd has had to move on from finger wagging as their only rhetorical stance.
This post raises genuine concerns about the integration of large language models into creative and technical work, and the author writes with evident passion about what they perceive as a threat to human autonomy and craft. BUT… the piece suffers from internal contradictions, selective reasoning, and rhetorical moves that undermine its own arguments in ways worth examining carefully.
My opinion: This sort of low-evidence writing is all too common in tech circles. It makes me wish computer science and engineering majors were forced to spend at least one semester doing nothing but the arts.
The most striking inconsistency emerges in how the author frames the people who use LLM tools. Early in the piece, colleagues experimenting with AI coding assistants are described in the language of addiction and pathology: they are “sucked into the belly of the vibecoding grind,” experiencing “existential crisis,” engaged in “harmful coping.” The comparison to watching a friend develop a drinking problem is explicit and damning. This framing treats AI adoption as a personal failure, a weakness of character, a moral lapse. Yet only paragraphs later, the author pivots to acknowledging that people are “forced to use these systems” by bosses, UI patterns, peer pressure, and structural disadvantages in school and work. They even note their own privilege in being able to abstain. These two framings cannot coexist coherently. If using AI tools is coerced by material circumstances and power structures, then the addiction metaphor is not just inapt but cruel — it assigns individual blame for systemic conditions. The author wants to have it both ways: to morally condemn users while also absolving them as victims of circumstance.
This tension extends to the author’s treatment of their own social position. Having acknowledged that abstention from LLMs requires privilege, they nonetheless continue to describe AI adoption as a “brainworm” that has infected even “progressive hacker circles.” The disgust is palpable. But if avoiding these tools is a luxury, then expressing contempt for those who cannot afford that luxury is inconsistent at best and self-congratulatory at worst. The acknowledgment of privilege becomes a ritual disclaimer rather than something that actually modifies the moral judgments being rendered.
The author’s claims about intentionality represent another significant weakness. The assertion that AI systems being resource-intensive “is not a side effect — it’s the point” is presented as revelation, but it functions as an unfalsifiable claim. No evidence is offered that anyone designed these systems to be resource-hungry as a mechanism of control. The technical requirements of training large models, competitive market pressure to scale, and the emergent dynamics of venture capital investment all offer more parsimonious explanations that don’t require attributing coordinated malicious intent. Similarly, the claim that “AI systems exist to reinforce and strengthen existing structures of power and violence” is stated as though it were established fact rather than contested interpretation. This is the central claim of the piece, and yet it receives no argument — it is simply asserted and then built upon, which amounts to begging the question.
The essay also suffers from a pronounced selection bias in its examples. Every person described using AI tools is in crisis, suffering, or compromised. No one uses them mundanely, critically, or with benefit. This creates a distorted picture that serves rhetorical purposes but does not reflect the range of actual use cases. The author’s friends who share their anti-AI sentiment are mentioned approvingly, establishing clear in-group and out-group boundaries. This is identity formation masquerading as analysis — good people resist, compromised people succumb.
There is a false dichotomy running through the piece that deserves attention. The implied choice is between the author’s total abstention, not touching LLMs “with a stick,” and being consumed by the pathological grind described earlier. No middle ground exists in this telling. The possibility of critical, limited, or thoughtful engagement with these tools is never acknowledged as legitimate. You are either pure or contaminated.
Reality doesn’t work this way! It’s not black and white. My take: AI is a transformative technology and the spectrum of uses and misuses of AI is vast and growing.
The philosophical core of their argument also contains an unexamined equivocation. The author invokes the extended cognition thesis — the idea that tools become part of us and shape who we are — to make AI seem uniquely threatening. But this same argument applies to every tool mentioned in the piece: hammers, pens, keyboards, dictionaries. The author describes their own fingers “flying over the keyboard, switching windows, opening notes, looking up words in a dictionary” as part of their extended cognitive process. If consulting a dictionary shapes thought and becomes part of our cognitive process, what exactly distinguishes that from asking a language model to check grammar or suggest a word? The author never establishes what makes AI categorically different from the other tools that have already become part of us. The danger is assumed rather than demonstrated.
There is also a genetic fallacy at work in the argument about power. The author suggests AI is bad partly because of who controls it — surveillance capitalists, fascists, those with enormous physical infrastructure. But this argument conflates the origin and ownership of a technology with its inherent properties. One could make identical arguments about the printing press, the telephone, or the internet itself. The question of whether these tools could be structured differently, owned differently, or used toward different ends is never engaged. Everything becomes evidence of a monolithic system of control.
Finally, there is an unacknowledged irony in the piece’s medium and advice. The author recommends spending less time on social media and reading books instead, while writing a blog post clearly designed for social sharing, complete with the vivid metaphors, escalating moral stakes, and calls to action that characterize viral content. The post exists within and depends upon the very attention economy it criticizes. This is not necessarily hypocrisy — we all must operate within systems we find problematic — but the lack of self-awareness about it is notable given how readily the author judges others for their compromises.
The essay is most compelling when it stays concrete: the phenomenology of writing as discovery, the real pressures workers face, the genuine concerns about who controls these systems and toward what ends. It is weakest when it reaches for grand unified theories of intentional domination, when it mistakes assertion for argument, and when it allows moral contempt to override the structural analysis it claims to offer. The author clearly cares about human flourishing and autonomy, but the piece would be stronger if that care extended more generously to those navigating these technologies without the privilege of refusal.
Your reading of the addiction angle is much different than mine.
I didn't hear the author criticizing the character of their colleagues. On the contrary, they wrote a whole section on how folks are pressured or forced to use AI tools. That pressure (and fear of being left behind) drives repeated/excessive exposure. That in turn manifests as dependence and progressive atrophy of the skills they once had. Their colleagues seem aware of this as evidenced by "what followed in most of them, almost like a reflex, was a self-justification of why the way they use these tools is fine". When you're dependent on something, you can always find a 'reason'/excuse to use. AA and other programs talk about this at length without morally condemning addicts or assigning individual blame.
> For most of us, self-justification was the maker of excuses; excuses, of course, for drinking, and for all kinds of crazy and damaging conduct. We had made the invention of alibis a fine art. [...] We had to drink because at work we were great successes or dismal failures. We had to drink because our nation had won a war or lost a peace. And so it went, ad infinitum. We thought "conditions" drove us to drink, and when we tried to correct these conditions and found that we couldn't to our entire satisfaction, our drinking went out of hand
Framing something as addictive does not necessarily mean that those suffering from it are failures/weak/immoral but you seem to have projected that onto the author.
Their other analogy ("brainworm") is similar. Something that no-one would willingly sign up for if presented with all the facts up front but that slips in and slowly develops into a serious issue. Faced with mounting evidence of the problem, folks have a strong incentive to downplay the issue because it's cognitively uncomfortable and demands action. That's where the "harmful coping" comes in: minimizing the severity of the problem, avoiding the topic when possible, telling yourself or others stories about how you're in control or things will work out fine, etc.
and demonstrate your mastery ,to the muterings of the golly gee's
it will last several more months untill the , GASP!!!, bills ,maintenance costs, regulatory burdens, and various legal issues
combine to, pop AI's balloon, where then AI will be left automating all of the tedious,
but chair filling, beurocratic/secretarial/appretice positions
through out the white collar world.
technology is slowly pushing into other sectors, where legacy methods and equipment
can now be reduced to a free app on a phone, more to the point, a free, local only app.
fact is that we are way over siliconed going forward and that will bite as well, terra bite phones for $100, what then?
The increasingly rough tone against "AI" critics in the comments and the preposterous talking points ("you are not a senior developer if you do not get value from 'AI'") is an indication that the bubble will burst soon.
It is the tool obsessed people who treat everything like a computer game that like "AI" for software engineering. Most of them have never written anything substantial themselves and only know the Jira workflow for small and insignificant tickets.
Harsh but fair. In short, some people are upset about change happening to them. They think it's unfair and that they deserve better. Maybe that's true. But unfair things happen to lots of people all the time. And ultimately people move on, mostly. There's a futility to being very emotional about it.
I don't get all the whining of people about having to adapt. That's a constant in our industry and always has been. If what you were doing was so easy that it fell victim to the first generation of AI tools that are doing a decent enough job of it, then maybe what you were doing was a bit Ground Hog day to begin with. I've certainly been involved with a lot of projects where a lot of the work felt that way. Customer wants a web app thing with a log in flow and a this and a that. 99% of that stuff is kind of very predictable. That's why agentic coding tools are so good at this stuff. But lets be honest, it was kind of low value stuff to begin with. And it's nice that people over-payed for that for a while but it was never going to be forever.
There's still plenty of stuff these tools are less good at. It gets progressively harder if you are integrating lots of different niche things or doing some non standard/non trivial things. And even those things where it does a decent job, it still requires good judgment and expertise to 1) be able to even ask for the right thing and then 2) judge if what comes back is fit for purpose.
There's plenty of work out there supporting companies with decades of legacy software that are not going to be throwing away everything they have overnight. Leveling up their UIs with AI powered features, cross integrating a lot of stuff, etc. is going to generate lots of work and business. And most companies are very poorly equipped to do that in house even if they have access to agentic coding tools.
For me AI is actually generating more work, not less. I'm now taking on bigger things that were previously impossible to take on without involving more people. I have about 10x more things I want to do than I have bandwidth for. I have to take decisions about doing things the stupid old way because it's better/faster or attempting to generate some code. All new tools do is accelerate the pace and raise the ambition levels. That too is nothing new in our industry. Things that were hard are now easy, so we do more of them and find yet harder things to do next. We're not about to run out of hard things to do any time soon.
Adapting is hard. Not everyone will manage. Some people might burn out doing that or change career. And some people are in denial or angry about that. And you can't really expect others to loose a lot of sleep over this. Whether that's unfair or not doesn't really matter.
I always thought years of experience in a language was a silly job requirement. LLMs allow me to write Rust code as a total Rust beginner and allows me to create a valuable SaaS while most experienced Rust developer never built anything that made $1 outside of their work. I wouldn't say devaluation, my programming experience definitely helps with debugging. LLMs eliminate boilerplate, not engineering judgement and product decisions.
> “We programmers are currently living through the devaluation of our craft”
my interpretation of what the author means by devaluation is the general trend that we’re seeing in LLMs
The theory that I hear from investors is as LLMs generally improve, there will exist a day where a LLMs default code output, coupled with continued hardware speeds, will become _good enough_ for the majority of companies - even if the code looks like crap and is 100x slower than it needs to be
This doesn’t mean there won’t be a few companies that still need SWEs to drop down and do engineering, but tbh, the majority of companies today just need a basic web app - and we’ve commoditized web app dev tools to oblivion. I’d even go as far to argue that what most programmers do today isn’t engineering, it’s gluing together an ecosystem of tooling and or API’s.
Real engineering seems to happen outside of work on open source projects, at the mav 7 on specialized teams, or at niche deeply technical startups
EDIT: I’m not saying this is good or bad, but I’m just making the observation that there is a trend towards devaluing this work in the economy for the majority of people, and I generally empathize with people who just want stability and to raise a family within reasonable means
I really love LLMs for Rust. Before them I was an intermediate Rust dev, and only used it in specific circumstances where the extra coding overhead paid off.
Now I write just about everything in Rust because why not? If I can vibe code Rust about as fast as Python, why would I ever use Python outside of ML?
> I find it particularly disillusioning to realize how deep the LLM brainworm is able to eat itself even into progressive hacker circles.
That's the thing, hacker circles didn't always have this 'progressive' luddite mentality. This is the culture that replaced hacker culture.
I don't like AI, generally. I am skeptical of corporate influence, I doubt AI 2027 and so-called 'AGI'. I'm certain we'll be "five years away" from superintelligence for the forseeable future. All that said, the actual workday is absolutely filled with busy work that no one really wants to do, and the refusal of a loud minority to engage with that fact is what's leading to this. It's why people can't post a meme, quote, article, whatever could be interpreted (very often, falsely) as AI-generated in a public channel, or ask a chatbot to explain a hand-drawn image without the off chance that they get an earful from one of these 'progressive' people. These people bring way more toxicity to daily life than who they wage their campaigns against.
> This is the culture that replaced hacker culture.
Somewhere along the lines of "everybody can code," we threw out the values and aesthetics that attracted people in the first place. What began as a rejection of externally imposed values devolved into a mouthpiece of the current powers and principalities.
This is evidenced by the new set of hacker values being almost purely performative when compared against the old set. The tension between money and what you make has been boiled away completely. We lean much more heavily on where someone has worked ("ex-Google") vs their tech chops, which (like management), have given up on trying to actually evaluate. We routinely devalue craftsmanship because it doesn't bow down to almighty Business Impact.
We sold out the culture, which paved the way for it to be hollowed out by LLMs.
There is a way out: we need to create a culture that values craftmanship and dignifies work done by developers. We need to talk seriously and plainly about the spiritual and existential damage done by LLMs. We need to stop being complicit in propagating that noxious cloud of inevitability and nihilism that is choking our culture. We need to call out the bullshit and extended psyops ("all software jobs are going away!") that have gone on for the past 2-3 years, and mock it ruthlessly: despite hundreds of billions of dollars, it hasn't fully delivered on its promises, and investors are starting to be a bit skeptical.
In short, it's time to wake up.
"There is a way out: we need to create a culture that values craftmanship and dignifies work done by developers. We need to talk seriously and plainly about the spiritual and existential damage done by LLMs"
This is the exact sentiment when said about some other profession or craft, countless people elsewhere and on HN have noted that it's neither productive not wise to be so precious about a task that evolved as a necessity into the ritualized, reified, pedestal-putting that prevents progress. It conflates process with every single other thing about whatever is being spoken about.
Also: Complaining that a new technology bottlenecked by lack of infrastructure, pushback from people with your mindset, poorly understood in its best use because the people who aren't of your mindset are still figuring out and creating the basic tooling we currently lack?
That is a failure of basic observation. A failure to see the thing you don't like because you don't like and decide not to look. Will you like it if you look? I don't know, sounds like your mind is made up, or you might find good reasons why you should maintain your stance. In the later case, you'd be able to make a solid contribution to the discussion.
I'm firmly in the “don't want to use it; if you want to, feel free, but stop nagging me to” camp.
Oh, and the “I'm not accepting 'the AI did it' as an excuse for failures” camp. Just like outsourcing to other humans: you chose the tool(s), you are responsible for verifying the output.
I got into programming and kicking infrastructure because I'm the sort of sad git who likes the details, and I'm not about to let some automaton steal my fun and turn me into its glorified QA service!
I'd rather go serve tables or stack shelves, heck I've been saying I need a good long sabbatical from tech for a few years now… And before people chime in with “but that would mean dropping back to minimum wage”: if LLMs mean almost everybody can program, then programming will pretty soon be a minimum wage job anyway, and I'll just be choosing how I earn that minimum (and perhaps reclaiming tinkering with tech as the hobby it was when I was far younger).
“Don’t want… “not accepting”
Now this, putting aside my thoughts above, i find a compelling argument. You just don’t want to. I think that should go along with a reasonable understanding of what a person is choosing to not use, but I’ll presume you have that.
Then? Sure, the frustrating part is to see someone making that choice tell other people that theirs is invalid, especially when we don’t know what the scene will look like when the dust settles.
There’s no reason to think there wouldn’t be room for “pure code” folks. I use the camera comparison— I fully recognize it doesn’t map in all respect to this. But the idea that painters should have given up paint?
There were in fact people at the time who said, “Painting is dead!”. Gustav Flaubert, famous author, said painting was obsolete. Paul Delaroche Actually said it was dead. Idiots. Amazingly talented and accomplished, but short sighted, idiots. Well like be laughing at some amazing and talented people making such statement about code today in the same light.
Code as art? Well, two things: 1) LLM’s have tremendous difficulty parsing very dense syntax, and then addressing the different pieces and branching ideas. Even now. I’m guessing this transfers to code that must be compact, embedded, and optimized to a precision such that sufficient training data, generalizable to the task with all the different architectures of microcontrollers and embedded systems… not yet. My recommendation to coders who want to look for areas where AI will be unsuitable? There’s plenty of room at the bottom. Career has never taken me there, but the most fun I’ve had coding has been homebrew microcontrollers.
2) code as art. Not code to produce art, or not something separable from the code that created it. Think Thing minor things from the past like the obfuscated C challenges. Much of that older hacker ethos is fundamentally an artistic mindset. Art has a business model, some enterprising person aught to crack the code of coding code into a recognized art form where aesthetic is the utility.
I don’t even mean the visual code, but that is viable: Don’t many coders enjoy the visual aesthetic of source code, neatly formatted, colored to perfect contrasts between types etc? I doubt that’s the limit of what could be visually interesting, something that still runs. Small audience for it sure— same with most art.
Doesn’t matter, I doubt that will be something masses of coders turn to, but my point is simply that there are options there are options that involve continuing the “craft” aspects you enjoy, whether my napkin doodle of an idea above holds or not. The option, for many, may simply not include keeping the current trajectory of their career. Things change: not many professional coders that began at 20 in 1990 have been able— or willing— to stay in the narrow area they began in. I knew some as a kid that I still know, some that managed to stay on that same path. He’s a true craftsman at COBOL. When I was a bit older in one of my first jobs he helped me learn my way around a legacy VMS cluster. Such things persist, reduced in proportion to the rest is all. But that is an aspect of what’s happening today.
> prevents progress
"progress" is doing a lot of work here. Progress in what sense, and for whom? The jury is still out on whether LLMs even increase productivity (which is not the same as progress), and I say this as a user of LLMs.
Man, there is something true in what he is saying though. Can't you see it? I like the idea of some of this technology. I think its cool you can use natural language to create things. I think there is real potential in using these tools in certain context, but the way in which these tools got introduced, no transparency, how its being used to shape thought, the over-reliance on it and how its use to take away our humanity is a real concern.
If this tech was designed in an open way and not put under paywalls and used to develop models that are being used to take away peoples power, maybe I'd think differently. But right now its being promoted by the worst of the worst, and nobody is talking about that.
What’s your solid contribution to the discussion?
Responding to and enumerating, in this case, the viewpoint of someone. It's the general process by which discussions take place and progress.
If the thread were about 1) the current problems and approaches AI alignment, 2) the poorly understood mechanisms of hallucination, 3a) the mindset the doesn't see the conflict whey they say "don't anthropomorphize" but runs off to create a pavlovian playground in post-training, 3b) the mindsets that do much the reverse and how both these are dangerous and harmful, 4) the poorly understood trade off of sparse inference optimizations. But it's not, so I hold those in reserve.
> we need to create a culture that values craftmanship and dignifies work done by developers.
Mostly I agree with you. But there's a large group of people who are way too contemptuous of craftsmen using AI. We need to push back against this arrogant attitude. Just as we shouldn't be contemptuous of a woodworking craftsman using a table saw.
>Just as we shouldn't be contemptuous of a woodworking craftsman using a table saw.
Some tools are table saws, and some tools are subcontracting work out to lowest cost bidders to do a crap job. Which of the two is AI?
I've been programming for 20 years and GPT-4 (the one from early 2023) does it better than me.
I'm the guy other programmers I know ask for advice.
I think your metaphor might be a little uncharitable :)
For straightforward stuff, they can handle it.
For stuff that isn't straightforward, they've been trained on pattern matching some nontrivial subset of all human writing. So chances are they'll say, "oh, in this situation you need an X!", because the long tail is, mostly, where they grew up.
--
To really drive the point home... it's easy to laugh at the AI clocks.[0] But I invite you, dear reader, to give it a try! Try making one of those clocks! Measure how long it takes you, how many bugs you write. And how well you'd do it if you only had one shot, and/or weren't allowed to look at the output! (Nor Google anything, for that matter...)
I have tried it, and it was a humbling experience.
https://news.ycombinator.com/item?id=45930151
Now tell the AI to distill a bunch of user goals into a living system which has to evolve over time, integrate with other systems, etc etc. And deliver and support that system
I use Claude code every day and it is a slam dunk for situations like the one above, fiddly UIs and the like. Seriously , some of the best money I spend. But it is not good at more abstract stuff. Still a massive time saver for me and does effectively do a lot of work that would have gotten farmed out to junior engineers.
Maybe this will change in a few years and I'll have to become a potato farmer. I'm not going to get into predictions. But to act like it can do what an engineer with 20 years of experience can do means the AI brain worm got you or it says something about your abilities.
right, but this is akin to arguing why the table saw also does not do x/y/z — I don't know why we only complain about AI and how it does NOT do everything well yet.
Maybe it's expectations set by all the AI companies, idk, but this kind of mentality seems very particular to AI products and nothing else.
I'm OK pondering the right use for the tool for as long as it'll take for the dust to settle. And I'm OK too trying some of it myself. What I resent is the pervasive request/pressure to use it everywhere right now, or 'be left out'.
My biggest gripe with the hype, as there's so much talk of craftmanship here, is: most programmers I've met hate doing code reviews and a good proportion prefer rewriting to reading and understanding other people's code. Now suddenly everyone is to be a prompter and astute reviewer of a flood of code they didn't write and now that you have the tool you should be faster faster faster or there's a problem with you.
I'm not complaining about it, I said in my post that it's a huge time saver. It's here to stay, and that's pretty clear to see. It has mostly automated away the need for junior engineers, which just 5 years ago would have been a very unexpected outcome, but it's kind of the reality now.
All that being said:
There's a segment of the software eng population that has their heads in the sand about it and the argument basically boils down to "AI bad". Those people are in trouble because they are also the people who insist on a whole committee meeting and trail of design documents to change the color of a button on a website that sells shoes. Most of their actual hard skills are pretty easy to outsource to an AI.
There's also a techbro segment of the population, who are selling snake oil about AGI being imminent, so fire your whole team and hire me in order to outsource your entire product to an army of AI agents. Their thoughts basically boil down to "I'm a grifter, and I smell money". Nevermind the fact that the outcome of such a program would be a smoldering tire fire, they'll be onto the next grift by then.
As with literally everything, there are loud, crazy people on either side and the truth is in the middle somewhere.
AI doesn’t program better than me yet. It can do some things better than me and I use it for that but it has no taste and is way too willing to write a ton of code. What is great about it compared to an actual junior is if i find out it did something stupid it will redo the work super fast and without getting sad
Too willing to write a ton of code - this is absolutely one of the things that drives me nuts. I ask it to write me a stub implementation and it goes and makes up all the details of how it works, 99% of which is totally wrong. I tell it to rename a file and add a single header line, and it does that - but throws away everything after line 400. Just unreliable and headache-inducing.
For me, AI is definitely a table saw. YMMV.
That's because there's nothing "craftsman" about using AI to do stuff for you. Someone who uses AI to write their programs isn't the equivalent of a carpenter using a table saw, they are the equivalent of a carpenter who subcontracts the piece out to someone else. And we wouldn't show respect to the latter person either.
I’m a hacker and I’d show respect to that latter person if they did the subcontracting and reviewed their craft well.
But you wouldn't call them a craftsperson because they didn't do any craft other than "be a manager". Reviewing work is not on the same plane as actually creating something.
Why are we crafting code?
Simply put most industries started moving away from craftsmanship starting in the late 1700s to the mid 1900s. Craftsmanship does make a few nice things but it doesn't scale. Mass production lead to most people actually having stuff and the general condition of humanity improving greatly.
Software did kind of get a cheat code here though, we can 'craft' software and then endlessly copy it without the restrictions of physical objects. With all that said, software is rarely crafted well anyway. HN has an air about it that software developers are the craftsman of their gilded age, but most software projects fail terribly and waste huge amounts of money.
Does Steve Jobs deserve any respect for building the iPhone then? What is this "actually creating"? I'm sure he wasn't the one to do any of the "actually creating" and yet, there's no doubt in my mind that he deserves credit for the iPhone existing and changing the world.
I honestly don’t understand why you’re presuming to tell me what I think.
I consider myself a craftsman. I craft tools. I also am a manager. I also am a consultant. I am both a subcontractor and I subcontract out.
Above all else I’m a hacker.
I also use LLM’s daily and rather enjoy incorporating this new technology into what I consider my craft.
Please stop arrogantly presuming you know what is best for me to think and feel about all of this.
Nothing craftsman? The detail required to setup a complex image gen pipeline to produce something the has the consistent style, composition, placement, etc, and quite a bit more-- for things that will go into production and need a repeatable pipeline-- it's huge. Take every bit as much creative vision.
Taking just images, consider AI merely a different image capture mechanism, like the camera is vs. painting. (You could copy.paste many critiques about this sort of ai and just replace it with "camera") Sure it's more accessible to a non professional, in AI's case much more so than cameras wear to years of learning painting. But there's a world of difference between what most people do in a prompt online and how professionals integrating it into their workflow are doing. Are such things "art"? That's not a productive question, mostly, but there's this: when it occurs, it has every bit as much intention, purpose and from a human behind it as that which people complain is lacking, but are referring to the one-shot prompt process in their mind when they do.
I'm no fan of "AI" but I think it could be argued that if we're sticking to the metaphor, the carpenter can pick up the phone and subcontract out work to the lowest bidder, but perhaps that "work" doesn't actually require high craftsmanship. Or we could make the comparison that developers building systems of parts need to know how they all fit together, not that they built each part themselves, i.e., the carpenter can buy precut lumber rather than having to cut it all out of a huge trunk themselves.
What about an architect who outsources the bricklaying? A designer who outsources manufacturing?
I'm not implying a hierarchy of value or status here, btw. And the point about difficulty is interesting too. I did manual labor and it was much harder than programming, as you might expect!
You can certainly outsource "up", in terms of skill. That's just how business works, and life... I called a plumber not so long ago! And almost everyone outsources their health...
Brick laying isn’t architecture and manufacturing isn’t design. Those are separate fields and crafts.
It's very telling when someone invokes this comparison..I see it fairly often. It implies there is this hirearchy of skill/talent between the "architect" and the "bricklayer" such that any architect could be a bricklayer but a bricklayer couldn't be an architect. The conceit is telling.
Masonry is hard work but not low-skill, FYI.
Almost every bit of work I've hired people to do has been through an intermediary of some sort. Usually one with "contractor" or "engineer" as a title. They are the ones who can organize others, have connections, understand talent, plan and keep schedules, recognize quality, and can identify and troubleshoot problems. They may also be craftsmen, or have once been, but the work is not necessarily their craft. If you want anything project-scoped, you have a team, there is someone in a leadership role (even if informally), someone handling the money, etc. Craftsmanship may or may not happen within that framework, they are somewhat orthogonal concerns, and I don't see any reason to disrespect the people that make room for it to happen.
Of course you can also get useless intermediaries, which may be more akin to vibe coding. Not entirely without merit, but the human in the loop is providing questionable value. I think this is the exception rather than the norm.
> And we wouldn't show respect to the latter person either.
Not respect as a carpenter, but perhaps respect as a businessperson or visionary.
I respectfully disagree, but disagree hard.
a) Nothing about letting AI do grunt work for you is "not being a craftsman". b) Things are subcontracted all the time. We don't usually disrespect people for that.
Where do you draw the arbitrary line of what is craftmanship and what's not?
Using that line of reasoning I could also argue "Using libraries isn't craftmanship, a real craftsman implements all functionality themselves."
A LLM is more like a CNC panel saw; feed a sheet in one end, stack up parts from the other.
It reduces craftsmanship to unskilled labor.
The design work and thinking happen somewhere else. The operator comes in, punches a clock, and chokes on MDF dust for 8 hours.
No, the idea is that such a CNC saw shouldn't need an operator at all. To the extent it still does, the operator doesn't even need to be in the same town, much less the same building.
This is a GOOD thing.
Good or bad, converting craft work to production work is not making the craft worker more productive, it's eliminating the craft worker.
The unskilled operator's position is also precarious, as you point out, but while it lasts, it's a different and (arguably) less satisfying form of work.
The LLM is not a table saw that makes a carpenter faster, it's an automatic machine that makes an owner's capital more efficient.
(Shrug) I don't know about "owners" and "capital," but used properly, they make me more efficient.
>Somewhere along the lines of "everybody can code," we threw out the values and aesthetics that attracted people in the first place.
At some point people started universally accepting the idea that any sort of gatekeeping was a bad thing.I think by now people are starting to realize that this was a flawed idea. (at best, gatekeeping is not a pure negative; it's situational) But, despite coming to realize this I think parts of our culture still maintain this as a default value. "If more people can code, that's a _good_ thing!" Are we 100% sure that's true? Are there _no_ downsides? Even if it's a net positive, we should be able to have some discussion about the downsides as well.
Your point is that the hacker ethos involved ... Fewer people being excited about programming? I don't think we experienced this on the same planet.
Web 1.0 was full of weirdos doing cool weird stuff for the pure joy of discovery. That's the ethos we need back, and it's not incompatible with AI. The wrong turn we took was letting business overtake joy. That's a decision we can undo today by opting out of that whole ecosystem.
You get a very different crowd if something is a (unprofitable but) fun hobby vs being a well-paying profession.
This is because in Web 1.0 times, only weird hacker types were capable of using the internet effectively. Normies (and weirdos who were weird in ways not related to familiarity with and interest in personal computer technology) were simply not using the internet in earnest, because it wasn't effective for their needs yet. Then people made that happen and now everyone is online, including boring normies with boring interests.
If you want a space where weird hacker values and doing stuff for the pure joy of discovery reign, gatekeep harder.
I think that the ratio of weirdos doing stuff remained constant through the population, it's just that the whole population is now on the web, so they are harder to find.
Not to mention 20 years ago I personally (and probably others my age) had much more time to care about random weird stuff.
So, I am skeptical without some actual analysis or numbers that things really are so bad.
> That's a decision we can undo today by opting out of that whole ecosystem.
Ah yes, we'll also skip out on eating too.
There's a mountain of software work you can do that doesn't involve participating in this rat race. There's nothing that says you need to make 500k and live in silicon valley. It's possible to be perfectly happy working integrating industrial control systems in a sleepy mountain town where cost of living is practically nothing. I am well qualified to make that statement.
We need to change the underlying system.
We do not need to do things no one needs. We do not need a million differen webshops, and the next CRUD application.
We need a system which allows the earth resources being used as efficient and fair as possible.
Then we can again start apprechiating real craftmanship but not for critical things and not because we need to feed ourselves but because we want to do it.
Each time someone says "we" without asking me I find it at least insulting. With this mindset the next step might be to tell me what I need, without considering my opinion.
Yes, the current system seems flawed, but is the best we came up with and is not fixed either, it is slowly evolving.
Yes, some resources are finite (energy from the sun seems quite plenty though), but don't think we will be ever able to define "fair". I would be glad with "do not destroy something completely and irremediably".
> We need a system which allows the earth resources being used as efficient and fair as possible.
To what goals? Who gets to decide what is fair?
The number of not-so-secretly centralized-economy types on HN has actively surprised me whenever I see this.
Who is we? and how do we decide?
> We do not need a million differen webshops, and the next CRUD application.
The thing about capitalism is that unecessary webshop isn't getting any customers if it's truly unecessary, and will soon be out of business. We can appreciate Ghostty, because why? Because the guy writing it is independently wealthy and can fly jets around for fun, and has deigned to grace us with his coding gifts once again? Don't get me wrong, it's a nice piece of software, but I don't know that system's any better.
Capitalism looks like it does because humans don't have perfect knowledge (we don't know the best shop).
Also competition is a core driver for cost reduction and progress in capitalism.
And on a big picture pov: There is only one Amazon, Alibaba etc.
"We routinely devalue craftsmanship because it doesn't bow down to almighty Business Impact."
I actually disagree with this pretty fundamentally. I've never seen hacker culture as defined by "craftsmanship" so much as about getting things done. When I think of our culture historically, it's cleverness, quick thinking, building out quick and dirty prototypes in weekend "hackathons", startup culture that cuts corners to get an MVP product out there. I mean, look at your URL bar: do you think YC companies are prioritizing artisanal lines of code?
We didn't trade craftsmanship for "Business Impact". The latter just aligns well with our culture of Getting Shit Done. Whether it's for play (look at the jank folks bring out to the playa that's "good enough") or business, the ethos is the same.
If anything, I feel like there has been more of an attempt to erase/sideline our actual culture by folks like y'all as a backlash against AI. But frankly, while a lot of us scruffy hacker types might have some concerns about AI, we also see a valuable tool that helps us move faster sometimes. And if there's a good tool that gets a thing done in a way that I deem satisfactory, I'm not going to let someone's political treatise get in my way. I'm busy building.
YES. The "This is evidenced by the new set of hacker values being almost purely performative" is so incredibly true. I went to a privacy event about Web3, and the event organisers hired a photographer who took photos of everyone (no "no photo" stickers available), and they even flew a drone above our heads to take overarching videos of everyone :D I guess "privacy" should have been in quotes. All the values and aesthetics of the original set of people who actually cared about privacy (and were attracted to it) has been evaporated. All that remained are the hype. It was wild.
I am not going to tell you I am a coding god, but I have been doing this for nearly 30 years and I feel I'm pretty competent craftsman.
AI has helped me to be a better craftsman. The big picture ideas are mine, but AI has helped immensely with some details.
I realized recently that if you want to talk about interesting topics with smart people, if you expect things like critical thinking and nuanced discussion, you're currently much better off talking literature or philosophy than anything related to tech. I mean, everyone knows that discussing politics/economics is rather hopelessly polarized, everyone has their grievances or their superstitions or injuries that they cannot really put aside. But this is a pretty new thing that discussing software/engineering on merits is almost impossible.
Yes, I know about the language / IDE / OS wars that software folks have indulged in before. But the reflexive shallow pro/anti takes on AI are way more extreme and are there even in otherwise serious people. And in general anti-intellectual sentiment, mindless follow-the-leader, and proudly ignorant stances on many topics are just out of control everywhere and curiosity seems to be dead or dying.
You can tell it's definitely tangled up with money though and this remains a good filter for real curiosity. Math that's not maybe related to ML is something HN is guaranteed to shit on. No one knows how to have a philosophy startup yet (WeWork and other culty scams notwithstanding!). Authors, readers, novels, and poetry aren't moving stock markets. So at least for now there's somewhere left for the intellectually curious to retreat
I don't really see it any different than the Windows/Unix, Windows/Mac, etc, flame wars that boiled even amongst those with no professional stake it in for decades. Those were otherwise serious people too, parroting meaningless numbers and claims that didn't actually make much of a difference to them.
If anything, the AI takes are more much more meaningful. A Mac/PC flame war online was never going to significantly affect your career. A manager who either is all-in on AI or all-out on it can.
OS and IDE wars are something people take pretty seriously in their teens and very early careers, and eventually become more agnostic about after they realize it's not going to be the end-all predictor of coworker code quality. It predicts something for sure, but not strictly skill-level.
Language-preference wars stick around until mid-career for some, and again it predicts something. But still, serious people are not likely to get bogged down in pointless arguments about nearly equivalent alternatives at least (yaml vs json; python vs ruby).
Shallow takes on AI (whether they are pro or anti) are definitely higher stakes than all this, bad decisions could be more lasting and more damaging. But the real difference to my mind is.. AI "influencers" (again, pro or anti) are a very real thing in a way that doesn't happen with OS / language discussions. People listen, they want confirmation of biases.
I mean there's always advocates and pundits doing motivated reasoning, but usually it's corporate or individuals with clear vested interests that are trying to short-circuit inquiry and critical thinking. It's new that so many would-be practitioners in the field are eager to sabotage and colonize themselves, and forcing a situation where honest evaluations and merit-based discussion of engineering realities are impossible
This is classically framed as philosophy vs sophistry. The truth is that both are necessary, but only one makes money. When your entire culture assigns value with money it's obvious which way the scales will tip.
> But the reflexive shallow pro/anti takes on AI are way more extreme
But this is philosophy (and ethics/morality)
My feelings about AI, about its impact on every aspect of our lives, on the value of human existence and the purpose of the creative process, have less to do with what AI is capable of and more to do with the massive failures of ethics and morality that surround every aspect of its introduction and the people who are involved.
Humans will survive. Humanity is on the ropes.
> Math that's not maybe related to ML is something HN is guaranteed to shit on.
Eh, I mean here's one about the Ulam spiral that did pretty well: https://news.ycombinator.com/item?id=2047857
The fast inverse sqrt that John carmack did not. write also does well. I know there's many more. Are you sure that's not just a caricature of Hacker News you've built up in your head?
Visualizations and code always help. But to name two recent disappointments, stuff like https://news.ycombinator.com/item?id=46049932 and https://news.ycombinator.com/item?id=45957911 comes to mind as not meeting a high standard. To be clear, no expertise is fine, but no curiosity is bad.
"Dignifies work done by developers?"
Hmm. No. Not really. I don't think "hacker" ever much meant this at all; mostly because "hacker" never actually was much connected to "labor for money."
"Going to work" and "being a hacker" were overwhelmingly mutually exclusive. Hacking was what you don't do on company time (in favor of the company.)
This is the fate that befalls any wildly successful subculture: the MOPs start showing up, fascinated by it, and the sociopaths monetize it to get rich. The original geeks who created the scene become increasingly powerless.
Relevant article: https://meaningness.com/geeks-mops-sociopaths
I’ve been a “software engineer” or closely adjacent for 30 years. During that time, I’ve worked for small and medium “lifestyle companies”, startups, boring Big Enterprise, $BigTech and over the past 5 years (including my time at $BigTech) worked as a customer facing cloud consultant where I’ve seen every type of organization imaginable and how they work. No one ever gave a rip about “craftsmanship”. They hire you for one reason - to make them more money than they are paying you for or to save them more money than you are costing them. As far as me, I haven’t written a single line of code for “enjoyment” since the day I stepped into college. For the next four years it was about getting a degree and for the next 30, it was about exchanging my labor for money to support my addictions to food and shelter - that’s the transaction. I don’t dislike coding or dread my job. But at the end of the day (and at the beginning of the day) I’ve found plenty of things I enjoy that don’t involve computers - working out, teaching fitness classes part time, running, spending time with family and friends, traveling, etc. If an LLM helps me exchange my labor for money more efficiently, I’m going to use it just like I graduated from writing everything in assembly in 1987 on my Apple //e to using a C compiler or even for awhile using Visual Basic 6.
> If an LLM helps me exchange my labor for money more efficiently
Except that's unproven. It might make you more productive, but whether you get any of that new value is untested.
Right now its just a tool you can use or not and if you are smart enough, you figure out very quickly when to use a tool for efficency and when not.
I do not vibe code my core architecture because i control it and know it very well. I vibe code some webui i don't care about or a hobby idea in 1-4h on a weekend because otherwise it would take me 2 full weekends.
I fix emails, i get feedback etc.
When I do experiemnts with vibe coding, i'm very aware what i'm doing.
Nonetheless, its 2025. Alone 2026 we will add so much more compute and the progress we see is just crazy fast. In a few month there will be the next version of claude, gpt, gemini and co.
And this progress will not stop tomorrow. We don't know yet how fast it will progress and when it will be suddenly a lot better then we are.
Additionally you do need to learn how to use these tools. I learned through vibe coding that i have to specify specific things i just assume the smart LLM will do right without me telling for example.
Now i'm thinking about doing an experiemnt were i record everything about a small project i want to do, to then subscribe it into text and then feeding it into an llm to strucuture it and then build me that thing. I could walk around outside with a headset to do so and it would be a fun experiemnt how it would feel like.
I can imagine myself having some non intrusive AR Google and the ai sometimes shows me results and i basically just give feedback .
Well I have personally tested it on the green field projects I mostly work on and it does the grunt work of IAC (Terraform) and even did a decently complicated API with some detailed instructions like I would give another developer.
I’ve done literally dozens of short term quick turn around POCs from doing the full stack from an empty AWS account to “DevOps” to the software development -> training customers how to fish and showing them the concepts -> move on to next projects between working at AWS ProServe and now a third party consulting company. I’m familiar with the level of effort for these types of projects. I know how many fewer man hours it takes me now.
I have avoided front end work for well over a decade. I had to modify the front end part of the project we released to the customer that another developer did to remove all of the company specific stuff to make it generic so I could put it in our internal repo. I didn’t touch one line of front end code to make the decently extensive modifications, honestly I didn’t even look at the front end changes. I just made sure it worked as expected.
> I know how many fewer man hours it takes me now.
But how much has your hourly rate risen?
If you are “consulting” on an hourly rate, you’re doing it wrong. The company and I get paid for delivering projects not the number of hours we work. A smaller project may just say they have me for 6 weeks with known deliverable. I’m rarely working 40 hours a week.
When I did do one short term project independently, I gave them the amount I was going to charge for the project based on the requirements.
All consulting companies - including the division at AWS - always eventually expand to the staff augmentation model where you assign warm bodies and the client assigns the work. I have always refused to touch that kind of work with a ten foot pole.
All of my consulting work has been working full time and salaries for either the consulting division of AWS where I got the same structured 4 year base + RSUs as every other employee or now making the same amount (with a lot less stress and better benefits) in cash.
I’m working much less now than I ever have in my life partially because I’m getting paid for my expertise and not for how much code I can pump out.
You are kind of dodging the question. It sounds like you are not making more money or working fewer hours because of AI.
I am working fewer hours. I at most work 4 hours a day unless it’s a meeting heavy day. I haven’t typed a line of code in the last 8 months yet I’ve produced just as much work as I did before LLMs.
I really agree with your point. I think that this forum being hackernews and all though lends itself to a slightly different kind of tech person. Who really values for themselves and their team, the art of getting stuck in with a deeply technical problem and being able to overcome it.
You really think that people at BigTech are doing it for the “enjoyment” and not for the $250K+ they are making 3 years out of college? From my n=1 experience, they are doing it for the pay + RSUs.
If you see what it takes to get ahead in large corporations, it’s not about those who are “passionate”, it’s about people who know how to play the game.
If you look at the dumb AI companies that YC is funding, those “entrepreneurs” aren’t doing 996 because they enjoy it. They are looking for the big exit.
I don't know, look at someone like https://news.ycombinator.com/user?id=dmbaggett he seems to be an entrepreneur who enjoys what he's doing.
Now compare that to these founders.
https://docs.google.com/spreadsheets/d/1Uy2aWoeRZopMIaXXxY2E...
How many of them do you think started their companies out of “passion”?
Some of the ones I spotted checked had a couple of non technical founders looking for a “founding engineer” that they could underpay with the promise of “equity” that would probably be worthless.
I'm not disagreeing with the fact that there's a shit ton of founders out there looking for a quick pay day (I'd guess the majority fall into that category). Just pointing out there are exceptions, and the exceptions can be quite successful.
We need to talk seriously and plainly about the spiritual and existential damage done by LLMs.
I'm tempted to say "You're not helping," as my eyes roll back in their sockets far enough to hurt. But I can also understand how threatening LLMs must appear to programmers, writers, and artists who aren't very good at their jobs.
What I don't get is why I should care.
The question about why you should care about others and not just yourself has literature stretching back thousands of years. Maybe start with one of the major world religions?
Which one do you suggest? Many of them come with a nasty non-compete clause.
Have you seen the latest AI slop in game design lately, destroying human creativity?
Have you seen how this tech is being used to control narratives to subjugate populations to the will of authoritarian governments?
This shit is real. We are slowly sliding into a world where every aspect of our lives are going to be dictated by people in power with tools that can shape the future by manipulating what people think about.
If you don't care that the world is burning to the ground, good luck with that. Im not saying the tech is necessarily bad, its the way in which we are allowing it to be used. There has to be controls is place to steer this tech in the right direction or we are heading for a world I don't want to be apart of.
Have you seen the latest AI slop in game design lately, destroying human creativity?
This just in: 90% of everything is crap. AI does not, cannot, and will not change that.
Have you seen how this tech is being used to control narratives to subjugate populations to the will of authoritarian governments?
Can't say as I have.
The only authoritarians in this thread are the ones telling us what we should and should not be allowed to do with AI.
> All that said, the actual workday is absolutely filled with busy work that no one really wants to do, and the refusal of a loud minority to engage with that fact is what's leading to this.
The attitude and push back from this loud minority has always been weird to me. Ever since I got my hands on my first computer as a kid, I've been outsourcing parts of my brain to computing so that I can focus on more interesting things. I no longer have to remember phone numbers, I no longer have to carry a paper notepad, my bookshelf full of reference books that constantly needed to be refreshed became a Google search away instead. Intellisense/code completion meant I didn't have to waste time memorizing every specific syntax and keyword. Hell, IDEs have been generating code for a long time. I was using Visual Studio to automatically generate model classes from my database schema for as long as I can remember, and even generating CRUD pages.
The opportunity to outsource even more of the 'busywork' is great. Isn't this was technology is supposed to do? Automate away the boring stuff?
The only reasoning I can think of is that the most vocal opponents work in careers where that same busywork is actually most of their job, and so they are naturally worried about their future.
> Ever since I got my hands on my first computer as a kid, I've been outsourcing parts of my brain to computing so that I can focus on more interesting things. I no longer have to remember phone numbers, I no longer have to carry a paper notepad, my bookshelf full of reference books that constantly needed to be refreshed became a Google search away instead. Intellisense/code completion meant I didn't have to waste time memorizing every specific syntax and keyword. Hell, IDEs have been generating code for a long time. I was using Visual Studio to automatically generate model classes from my database schema for as long as I can remember, and even generating CRUD pages.
I absolutely agree with you, but I do think there's a difference in kind between a deterministic automation you can learn to use and get better at, and a semi-random coding agent.
The thing I'm really struggling with is that unlike e.g. code completion, there doesn't seem to be a clear class of tasks that LLMs are good at vs bad at. So until the LLMs can do everything, how do I keep myself in the loop enough that I'll have the requisite knowledge to step in when the LLM fails?
You mention how technology means we no longer have to remember phone numbers. But what if all digital contact lists had a very low chance of randomly deleting individual contacts over time? Do you keep memorizing phone numbers? I'm not sure!
Like the almost but not quite self driving cars.
Thank you for expressing well what I was thinking. I derive intense joy from coding. Like you my over my 40 year career I've been exploiting more and more ways to outsource work to computers. The space of software is so vast that I've never worried for a second that I'd not have work to do. Coding is a means to solving interesting problems. It is not an end in itself.
When you off load that stuff to a computer, you loose cognitive abilities. Heck Im even being careful how much I use mapping tools now because I want to know where I am going and how I get there.
FYI: I do not work for any corporations, I provide technical services directly to the public. So there is really concerns about this tech by everyday people that do not have a stake in keeping a job.
The marketing for AI is that it will soon replace THE INTERESTING PARTS too. Because it will be better than humans at everything.
For you, what are “the interesting parts”, and why do you believe in principle a machine won’t do those parts better than you?
What are the "interesting parts" is hard to quantify because my interests vary, so even if a machine can do those parts better than me, doesn't necessarily mean I'll use the machine.
The arts is a good example. I still enjoy analog photography & darkroom techniques. Digital can (arguably) do it better, faster, and cheaper. Doesn't change the hobby for me.
But, at least the option is there. Should I need to shoot a wedding, or some family photos for pay, I don't bust out my 35mm range finder and shoot film. I bring my R6, and send the photos through ImagenAI to edit.
In that way, the interesting parts are whatever I feel like doing myself, for my own personal enjoyment.
Just the other day I used AI to help me make a macOS utility to have a live wallpaper from an mp4. Didn't feel like paying for any of the existing "live wallpaper" apps. Probably a side project I would never have done otherwise. Almost one shot it outside of a use-after-free bug I had to fix myself, which ended up being quite enjoyable. In that instance, the interesting part was in the finding a problem and fixing it, while I got to outsource 90% of the rest of the work.
I'm rambling now, but the TL;DR is I'm more so excited about having the option to outsource portions of something rather than always outsourcing. Sometimes all you need is a cheap piece of mass produced crap, and other times you want to spend more money (or more time) making it yourself, or buying handmade from an expert craftsman.
This was very insightful. It made me think about how "hacker culture" has changed.
I'm middle-aged. 30 years ago, hacker culture as I experienced it was about making cool stuff. It was also about the identity -- hackers were geeks. Intelligent, and a little (or a lot) different from the rest of society.
Generally speaking, hackers could not avoid writing code. Whether it was shell scripts or HTML or Javascript or full-blown 3D graphics engines. To a large extent, coding became the distinguishing feature of "hackers" in terms of identity.
Nearly anybody could install Linux or build a PC, but writing nontrivial code took a much larger level of commitment.
There are legitimate functional and ethical concerns about AI. But I think a lot of "hackers" are in HUGE amounts of denial about how much of their opposition to AI springs from having their identities threatened.
> opposition to AI springs from having their identities threatened.
I think there's definitely some truth to this. I saw similar pushback from the "learn to code" and coding bootcamp era, and you still frequently see it in Linux communities where anytime the prospect of more "normies" using Linux comes up, a not insignificant part of the community is actively hostile to that happening.
The attitude goes all the way back to eternal september.
And it's "the bootcamp era" rather than the new normal because it didn't work out as well as advertised. Because of the issues highlighted in that pushback.
Well there are a lot of us very clear that our identities are being threatened and scared shitless we will lose the ability to pay our rent or buy food because of it.
This kind of nails it; hacker culture folks grew up and got families and mortgages, so changes comes with the territory
As somebody currently navigating the brutal job market, I'm scared shitless about that too. I have to tell you though, that the historical success rate of railing against "technologies that make labor more efficient" is currently at 0.0000000%.
We've survived and thrived through inflection points like this before, though. So I'm doing my best to have an adapt-or-die mindset.
"computers are taking away human jobs"
"visual basic will eliminate the need for 'real coders'"
"nobody will think any more. they'll 'just google it' instead of actually understanding things"
"SQL is human readable. it's going to reduce the need for engineers" (before my time, admittedly)
"offshoring will larely eliminate US-based software development"
etc.
Ultimately (with the partial exception of offshoring) these became productivity-enhancers that increased the expectations placed on the shoulders of engineers and expanded the profession, not things that replaced the profession. Admittedly, AI feels like our biggest challenge yet. Maybe.
I consider myself progressive and my main issue with the technology is that it was created by stealing from people who have not been compensated in any way.
I wouldn’t blame any artist that is fundamentally against this tech in every way. Good for them.
Every artist and creator of anything learned by engaging with other people's work. I see training AI as basically the same thing. Instead of training an organic mind, it's just training a neural network. If it reproduces works that are too similar to the original, that's obviously an issue, but that's the same as human artists.
This is a bad-faith argument, but even if I were to indulge it: human artists can/do get sued for mimicing the works of others for profit, which AI precisely does. Secondly, many of the works in question have explicit copyright terms that prohibit derivative works. They have built a multi-billion dollar industry on scaled theft. I don't see a more charitable interpretation.
It's "unauthorized use" rather than "stealing", since the original work is not moved anywhere. It's more like using your creative work to train a software system that generates similar-looking, competing works, for pennies, at industrial scale and speed.
Obtaining without payment or consent and then using to create derivative works at scale?
And the pedantry matters only because the entities criming are too big and rich and financed by the right people.
It is basically a display of the societal threshold beyond which laws are not enforced.
> Obtaining without payment or consent
Usually "obtaining" is just making a bunch of HTTP requests - which is kind of how the web is designed to work. The "consent" (and perhaps "desired payment" when there is no paywall) issue is the important bit and ultimately boils down to the use case. Is it a human viewing the page, a search engine updating its index, or OpenAI collecting data for training? It is annoying when things like robots.txt are simply ignored, even if they are not legally or technically binding.
The legal situation is unsurprisingly murky at the moment. Copyright law was designed for a different use case, and might not be the right tool or regulatory framework to address GenAI.
But as I think you are suggesting, it may be an example of regulatory entrepreneurship, where (AI) companies try to move forward quickly before laws and regulations catch up with them, while simultaneously trying to influence new laws and regulations in their favor.
[Copyright law itself also has many peculiarities, for example not applying to recipes, game rules, or fashion designs (hence fast fashion, knockoffs, etc.) Does it, or should it, apply to AI training and GenAI services? Time will tell.]
Ok Mr. (Or Ms.) Pedant you know what the intended meaning was.
The pedantry matters for the same reason it mattered when the music industry did this to Napster: because the truth is important.
everybody knows this, you are being uselessly pedantic
> hacker circles didn't always have this 'progressive' luddite mentality
Richard Stallman has his email printed out on paper for him to read, and he only connects to the internet by using wget to fetch web pages and then has them printed off.
You're probably thinking about Donald Knuth, not Stallman.
https://www-cs-faculty.stanford.edu/~knuth/email.html
Stallman does similar, just without the printing step, and he checks his email entirely within emacs.
https://stallman.org/stallman-computing.html
In a way, the busy work is padding. If the day becomes entirely difficult, I want more reward or time away.
I understand how LLMs may improve the situation for the employer, personally or with peers: no.
It's Turing's Law:
Any person who posts a sufficiently long text online will be mistaken for an AI.
> It's why people can't post a meme, quote, article, whatever could be interpreted (very often, falsely) as AI-generated in a public channel, or ask a chatbot to explain a hand-drawn image without the off chance that they get an earful from one of these 'progressive' people.
It happens, but I think it's pretty uncommon. What's a lot more common is people getting called out for offloading tasks to LLMs in a way that just breaches protocol.
For example, if we're having an argument online and you respond with a chatbot-generated rebuttal to my argument, I'm going to be angry. This is because I'm putting an effort and you're clearly not interested in having that conversation, but you still want to come out ahead for the sake of internet points. Some folks would say it's fair game, but consider the logical conclusion of that pattern: that we both have our chatbots endlessly argue on our behalf. That's pretty stupid, right?
By extension of this, there's plenty of people who use LLMs to "manage" their online footprint: write responses to friends' posts, come up with new content to share, generate memes, produce a cadence of blog posts. Anyone can ask an LLM to do that, so what's the point of generating this content in the first place? It's not yours. It's not you. So what's the game, other than - again - trying to come out on top for internet points?
Another fairly toxic pattern is when people use LLMs to produce work output without the effort to proofread or fact-check it. Over the past year or so, I've gotten so many LLM-generated documents that simply made no sense, and the sender considered their job to be done and left the QA to me.
The reason to be angry about the chatbot generated argument is that without sources it’s likely to have hallucinated a few things.
unfortunately, it will be less and less purely human generated content any more. it will be more and more AI generated or AI assisted content in the future.
We are angry because we grow up in an age that content are generated by human and computer bot are inefficient. however, for newer generation, AI generated content will be a new normal, like how we see people from a big flat box (TV)
I'm looking at code at my tech job right now where someone AI outsourced it, didn't proofread it, and didn't realize a comparison table it built is just running over the same dataset twice causing every comparison to look "identical" even when the data isn't
IDK, to me it looks that hacker culture has always been progressive, it's just definition of what is progressive has changed somewhat.
But hacker culture always sought to empower an individual (especially a smart, tech-savvy individual) against corporations, and rejection of gen AI seems reasonable in this light.
If hacker culture wasn't luddite, it's because of the widespread belief that the new digital technology does empower the individual. It's very hard to believe the same about LLMs, unless your salary depends on it
People assume programmers have the same motivations as luddites but "smashing the autolooms" presumably requires firebombing a whole bunch of datacenters, whereas it's pretty easy to download and run an open-source Chinese autoloom.
I largely agree with this, but at the same time, I empathize with the FA's author. I think it's because LLMs feel categorically different from other technological leaps I've been exited about.
The recent results in LLMs and diffusion models are undeniably, incredibly impressive, even if they're not to the point of being universally useful for real work. However they fill me with a feeling of supreme dissapointment, because each is just this big black box we shoved an unreasonable amount of data into and now the black box is the best image processing/natural language processing system we've ever made, and depending on how you look at it, they're either so unimaginably complex that we'll never understand how they really work, or they're so brain-dead simple that there's nothing to really understand at all. It's like some cruel joke the universe decided to play on people who like to think hard and understand the systems around them.
> It's like some cruel joke the universe decided to play on people who like to think hard and understand the systems around them.
Yeah. This cruel joke even has a name: The Bitter Lesson.
https://en.wikipedia.org/wiki/Bitter_lesson
But think about it: if digital painting were solved not by a machine learning model, but human-readable code, it would be an even more bleak and cruel joke, isn't it?
> if digital painting were solved not by a machine learning model, but human-readable code, it would be an even more bleak and cruel joke, isn't it?
On the contrary, I'm certain such a program would be filled with fascinating techniques, and I have no dread for the idea that humans aren't special.
Interesting that people seem to have this assumption.
"The lesson is considered "bitter" because it is less anthropocentric than many researchers expected and so they have been slow to accept it."
I mean we are so many people on the planet, its easy to feel useless when you know you can get replaced by millions of other humans. How is that different being replaced by a computer?
I was not sure how AGI would come to us, but I assumed there will be AGI in the future.
Weirdest thing for me is mathematics and physics: I assumed that would be such an easy field to find something 'new' through brute force alone, im more shocked that this is only happening now.
I realized with DeepMind and Alphafold that the smartest people with the best tools are in the industry and specificly in the it industry because they are a lot better using tools to help them than normal researchers who struggle writing code.
A good start (albeit the most basic one) would be to encourage budding hackers to read through the Jargon File.
I think that's going to become like asking a child to read Shakespeare; surely valuable, but requiring a whole parallel text to give modern translation and context.
I think you're missing that a lot of what we call "learning" would be categorized as "busy work" after the fact. If we replace this "busy work" with AI, we are becoming collectively more stupid. Which may be a goal on itself for our AI overlords.
As mr Miyagi said: "Wax on. Wax off."
This may turn out very profitable for the pre-AI generations, as the junior to senior pipeline won't churn seniors at the same rate. But following generations are probably on their way to digital serfdom if we don't act.
> If we replace this "busy work" with AI, we are becoming collectively more stupid.
I've seen this same thing said about Google. "If you outsource your memory to Google searching instead, you won't be able to do anything without and you'll become dumber."
Maybe that did happen, but it didn't seem to result in any meaningful change on the whole. Instead, I got to waste less time memorizing things, or spending time leafing through thousand page reference manuals, to find something.
We've been outsourcing parts of our brains to computers for decades now. That's what got me interested and curious about computers when I got my first machine as a kid (this was back in the late 90s/early 00s). "How can I automate as much of the boring stuff as possible to free myself up for more interesting things."
LLMs are the next evolution of that to an extent, but I also think they do come with some harms and that we haven't really figured out best practices yet. But, I can't help but be excited at the prospect of being able to outsource even more to a computer.
Indeed, this line of reasoning goes all they way back to Socrates who argued that outsourcing your memory to writing would make you stupider [1]:
> For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise.
I, for one, am glad we have technologies -- like writing, the internet, Google, and LLMs -- that let us expand the limits of what our minds can do.
[1] https://www.perseus.tufts.edu/hopper/text?doc=Perseus%3Atext...
If you extend Google to include social networks the damage to human mental health and well being is difficult to calculate.
Pathway to Idiocracy.
> Which may be a goal on itself for our AI overlords.
That doesn't seem exactly likely.
Well there's more than just one hacker circle. That was never really the case and it's less and less the case as the earth's technologically-inclined population increases.
Culture is emergent. The more you try to define it, the less it becomes culture and the more it becomes a cult. Instead of focusing on culture I prefer to focus on values. I value craftsmanship, so I'm inclined to appreciate normal coding more than AI-assisted coding, for sure. But there's also a craftsmanship to gluing a bunch of AI technologies together and observing some fantastic output. To willfully ignore that is silly.
The OP's rant comes across as a wistful pining for the days of yore, pinning its demise on capitalists and fascists, as if they had this AI thing planned all along. Focusing on boogeymen isn't going to solve anything. You also can't reverse time by demanding compliance with your values or forming a union. AI is here to stay and we're going to have to figure out how to live with it, like it or not.
I have only experienced the exact opposite - AI tools being forced on employees left and right, and infinite starry eyed fake enthusiasm amongst a rising ocean of slop poisoning all communication and written human knowledge at scale.
I am yet to see issues caused by restrain.
> It's why people can't post a meme, quote, article, whatever could be interpreted (very often, falsely) as AI-generated in a public channel, or ask a chatbot to explain a hand-drawn image without the off chance that they get an earful from one of these 'progressive' people. These people bring way more toxicity to daily life than who they wage their campaigns against.
> This is the culture that replaced hacker culture.
Breathless hustlecore tech industry culture is a place where finance bros have turned programmers into dogs that brag to one another about what a good dog they are. We should reject at every turn the idea that such a culture represents the totality of programming. Programming is so much more than that.
That's because AI-generated memes are lame, not saying that memes are smart, generally speaking, but the AI-generated ones are even lamer. And nothing wrong with being a luddite, to the contrary, in this day and age still thinking that technology is the way forward no matter what is nothing short of criminal.
Ironically, the actual luddites weren't anti-technology at all. Mechanized looms at the time produced low-quality, low-durability cloth at low prices. The luddite pushback was more about the shift from durable to disposable.
It's a message that's actually pretty relevant in an age of AI slop.
They were anti-technology in the sense that they destroyed the machines, because of the machines' negative effects on pay and quality. Maybe you could debate whether they were anti-technology absent its effects, but all technologies have effects. https://en.wikipedia.org/wiki/Luddite
The only thing more insufferable than the "AI do everything and replace everyone" crowd is the "AI is completely useless" crowd. It's useful for some things and useless for others, just like any other tool you'll encounter.
The proposition that AI is completely is trivially nullified. For example, it is provably useful for large-scale cheating on course assignments - a non-trivial task that had previously used human-operated "essay mills" and other services.
Hackers in the '80s were taking apart phone hardware and making free long-distance calls because the phone company didn't deserve its monopoly purely for existing before they were born. Hackers in the '90s were bypassing copyright and wiping the hard drive of machines they cobbled together out of broken machines to install an open source OS on it so that Redmond, WA couldn't dictate their computing experience.
I think there's a direct through-line from hacker circles to modern skepticism of the kind of AI discussed in this article: the kind where rules you don't control determine the behavior of the machine and where most of the training and operation of the largest and most successful systems can, currently, only be accessed via the cloud portals of companies with extremely questionable ethics.
... but I don't expect hackers to be anti-AI indefinitely. I expect them to be sorting out how many old laptops with still-serviceable graphics cards you have to glue together to build a training engine that can produce a domain-specific tool that rivals ChatGPT. If that task proves impossible, then I suspect based on history this may be the one place where hackers end up looking a little 'luddite' as it were.
... because "If the machine cannot be tamed it must be destroyed" is very hacker ethos.
The whole point was to take these things apart, figure out how they work, and make them things we want them to do instead of being bound by arbitrary rules.
Bypassing arbitrary (useless, silly, meaningless, etc) rules has always been a primary motiving factor for some of us :D
I agree. I think this is what happens when a persons transitions from a progressive mindset to a conservative one, but has made being "progressive" a central tenant of their identity.
Progressiveness is forward looking and a proponent of rapid change. So it is natural that LLM's are popular amongst that crowd. Also, progressivism should be accepting of and encouraging the evolution of concepts and social constructs.
In reality, many people define "progressiveness" as "when things I like happen, not when things I don't like happen." When they lose control of the direction of society, they end up just as reactionary and dismissive as the people they claim to oppose.
>AI systems exist to reinforce and strengthen existing structures of power and violence. They are the wet dream of capitalists and fascists.
>Craft, expression and skilled labor is what produces value, and that gives us control over ourselves
To me, that sums up the author's biases. You may value skilled labor, but generally people don't. Nor should they. Demand is what produces value. The later half of the piece falls into a diatribe of "Capitalism Bad".
Just seeing that sentence fragment about "structures of power and violence" told me so much about the author. Its the sort of language that brings with it a whole host of stereotypes, some of which were immediately confirmed with a little more digging (and others would require way too much effort to confirm, but likely could be).
And yes, this whole "capitalism bad" mentality I see in tech does kinda irk me. Why? Because it was capitalism that gave them the tools to be who they are and the opportunities to do what they do.
> And yes, this whole "capitalism bad" mentality I see in tech does kinda irk me. Why? Because it was capitalism that gave them the tools to be who they are and the opportunities to do what they do.
It's not hard to see why that mentality exists though. That same capitalism also gave rise to the behemoth, abusive monopolies we have today. It gave rise to the over financialization of the sector and declining product quality because you get richer doing stock buybacks and rent-seeking instead of making a better product.
Early hacker culture was also very much not pro-capitalism. The core principle of "Information should be free" itself is a statement against artificial scarcity and anti-proprietary systems, directly opposed to the capitalist ethos of locking up knowledge for profit. The FOSS we use and love rose directly from this culture, which is fundamentally communal, not capitalist.
You're completely ignoring the huge amount of public money that went into building the Internet and doing research.
Capitalism didn't build the internet: public spending did.
Capitalism is bad.
I'm not ignorant to this fact that it helped us for quite a long time but it also created climate change. Overpopulation.
We are still stuck on planet earth, have not figured out the reason for live or the origin of the universe.
I would prefer a world were we think about using all the resources earth provides sustainable and how to use them the most efficient way for the max amount of human beings. The rest of it we would use to advance society.
I would like to have Post-Scarcity Scientific Humanism
You would need to demonstrate that some other system would have given us all the things you want while avoiding every problem you cite, while not introducing other comparable or worse problems.
How did capitalism create overpopulation? Isn’t that more related to agriculture and better medical tech?
Likely progressive, but definitely not luddite [0]. Anti-capitalist for sure.
I struggle with this discourse deeply. With many posters like OP, I align almost completely - unions are good, large megacorps are bad, death to facists etc. It's when we get to the AI issue that I do a bit of a double take.
Right now, AI is almost completely in the hands of a few large corp entities, yes. But once upon a time, so was the internet, so were processing chips, so was software. This is the power of the byte - it shrinks progressively and multiplies infinitely - thus making it inherently diffuse and populist (at the end of the day). It's not the relationship to our cultural standards that causes this - it's baked right into the structure of the underlying system. Computing systems are like sand - you can melt them into a tower of glass, but those are fragile and will inevitably become sand once again. Sand is famously difficult to hold in a tight grasp.
I won't say that we should stop fighting against the entrenchment of powers like OpenAI - fine, that's potentially a worthy fight and if that's what you want to focus on go ahead. However, if you really want to hack the planet, democratize power and distribute control, what you have to be doing is working towards smaller local models, distributed training, and finding an alternative to backprop that can compete without the same functional costs.
We are this close to having a guide in our pocket that can help us understand the machine better. Forget having AI "do the work" for you, it can help you to grok the deeper parts of the system such that you can hack them better - and if we're to come out of this tectonic shift in tech with our heads above water, we absolutely need to create models that cannot be owned by the guy with the $5B datacenter.
Deepseek shows us the glimmer of a way forward. We have to take it. The megacorp AI is already here to stay, and the only panacea is an AI that they cannot control. It all comes down to whether or not you genuinely believe that the way of the hacker can overcome the monolith. I, for one, am a believer.
0 - https://phrack.org/issues/7/3
Not true for the Internet. It was the open system anyone could join and many people were shocked it succeeded over the proprietary networks being developed.
How are unions any better than mega corps? My brother is part of a union and the leaders make millions.
He's pigenholed at the same low pay rate and can't ever get a raise, until everyone in the same role also gets a raise (which will never happen). It traps people, because many union jobs can't or won't innovate, and when they look elsewhere, are underskilled (and stuck).
You mention 'deepseek'. Are you joking? It's owned by the Chinese government..and you claim to hate fascism? Lol?
Big companies only have the power now, because the processing power to run LLMs is expensive. Once there are break throughs, anyone can have the same power in their house.
We have been in a tech slump for awhile now. Large companies will drive innovations for AI that will help everyone.
That's not a union - that's another corporate entity parading as a union. A union, operating as it should, is governed by the workers as a collective and enriches all of them at the same rate.
Deepseek is open source, which is why I mention it. It was made by the Chinese government but it shows a way to create these models at vastly reduced cost and was done with transparent methodology so we can learn from it. I am not saying "the future is Deepseek", I am saying "there are lessons to be learned from Deepseek".
I actually agree with you on the corporate bootstrap argument - I think we ought to be careful, because if they ever figure out how to control the output they will turn off outputs that help develop local models (gotta protect that moat!), but for now I use them myself to study and learn about building locally and I think everyone else ought to get on this train as well. For now, the robust academic discourse is a very very good thing.
The top of megacorps make 4-6 orders of magnitude more than labor union leaders. To claim that there is no difference is mindboggeling.
Just because your brother's union sucks, doesn't mean they all do.
Being anti "AI" has nothing to do with being progressive. Historically, hackers have always rejected bloated tools, especially those that are not under their control and that spy on them and build dossiers like ChatGPT.
Hackers have historically derided any website generators or tools like ColdFusion[tm] or VisualStudio[tm] for that matter.
It is relatively new that some corporate owned "open" source developers use things like VSCode and have no issues with all their actions being tracked and surveilled by their corporate masters.
Please do no co-opt the term "hacker".
Hackers never had a very cohesive and consistent ideology or moral framework, we heard non stop of the exploits of people funded as part of Cold War military pork projects that got the plug pulled eventually, but some antipathy and mistrust of the powerful and belief in the power of knowledge were recurrent themes nonetheless
So why is it a surprise that hackers mistrust these tools pushed by megacorps, that also sell surveillance to governments, with “suits” promising other “suits” that they’ll be making knowledge obsolete? That people will no longer need to use their brains, that people with knowledge won’t be useful?
It’s not Luddism that people with an ethos of empowering the individual with knowledge are resisting these forces
The problem here isn't resisting those forces, that's all well and good.
The problem is the vast masses falling under Turing's Law:
"Any person who posts a sufficiently long text online will be mistaken for an AI."
Not usually in good faith however.
I don’t know how we’ll fix it
Just taking what people argue for on its own merits breaks down when your capacity to read whole essays or comments chains is so easily overwhelmed by the speed at which people put out AI slop
How do you even know that the other person read what they supposedly wrote, themselves, and you aren’t just talking to a wall because nobody even meant to say the things you’re analyzing?
Good faith is impossible to practice this way, I think people need to prove that the media was produced in good faith somehow before it can be reasonably analyzed in good faith
It’s the same problem with 9000 slop PRs submitted for code review
I've seen it happen to short, well written articles. Just yesterday there was an article that discussed the authors experiences maintaining his FOSS project after getting a fair number of users, and if course someone in the HN comments claimed it was written by AI, even though there were zero indications it was, and plenty of indications it wasn't.
Someone even argued that you could use prompts to make it look like it wasn't AI, and that this was the best explanation that it didn't look like ai slop.
If we can't respect genuine content creators, why would anyone ever create genuine content?
I get that these people probably think they're resisting AI, but in reality they're doing the opposite: these attacks weighs way heavier on genuine writers than they do on slop-posters.
The blanket bombing of "AI slop!" comments is counterproductive.
It is kind of a self fulfilling prophesy however: keep it up and soon everything really will be written by AI.
VSCodium is the open source "clean" build of VS Code without all the Microsoft telemetry and under MIT license.
https://vscodium.com/
> Hackers have historically derided any website generators or tools like ColdFusion[tm] or VisualStudio[tm] for that matter.
A lot of hackers, including the black hat kind, DGAF about your ideological purity. They get things done with the tools that make it easy. The tools they’re familiar with.
Some of the hacker circles I was most familiar with in my younger days primarily used Windows as their OS. They did a lot of reverse engineering using Windows tools. They might have used .NET to write their custom tools because it was familiar and fast. They pulled off some amazing reverse engineering feats.
Yet when I tell people they preferred Windows and not Linux you can tell who’s more focused on ideological purity than actual achievements because eww Windows.
> Please do no co-opt the term "hacker".
Right back at you. To me, hacker is about results, not about enforcing ideological purity about only using the acceptable tools on your computer.
In my experience: The more time someone spends identifying as a hacker, gatekeeping the word, and trying to make it a culture war thing about the tools you use, the less “hacker” like they are. When I think of hacker culture I think about the people who accomplish amazing things regardless of the tools or whether HN finds them ideologically acceptable to use.
> To me, hacker is about results
Same to me as well. A hacker would "hack out" some tool in a few crazy caffeine fueled nights that would be ridiculed by professional devs who had been working on the problem as a 6 man team for a year. Only the hacker's tool actually worked and saved 8000 man-hours of dev time. Code might be ugly, might use foundational tech everyone sneers at - but the job would be done. Maintaining it left up to the normies to figure out.
It implies deep-level expertise about a specific niche in the space they are hacking on. And it implies "getting shit done" - not making things full of design beauty.
Of course there are different types of hackers everywhere - but that was the "scene" to me back in the day. Teenage kids running circles around the greybeards clucking at the kids doing it wrong.
> but that was the "scene" to me back in the day.
Same. Back then, and even now, the people who were busy criticizing other people for using the wrong programming language, text editor, or operating system were a different set of people than the ones actually delivering results.
In a way it was like hacker fashion: These people knew what was hot and what was not. They ran the right window manager on the right hardware and had the right text editor and their shell was tricked out. They knew what to sneer at and what to criticize for fashion points. But actually accomplishing things was, and still is, orthogonal to being fashionable.
To wit: my brother has never worked as a developer and has just a limited knowledge of python. In the past few days, he's designed, vibe-coded, and deployed a four-player online chess game, in about four hours of actual work, using Google's Antigravity. I looked at the code when it was partly done, and it was pretty good.
The gatekeepers wouldn't consider him a hacker, but that's kinda what he is now.
Ideological purity is a crutch for those that can't hack it. :)
I love it when the .NET threads show up here, people twist themselves in knots when they read about how the runtime is fantastic and ASP.NET is world class, and you can read between the lines of comments and see that it is very hard for people to believe these things while also knowing that "Micro$oft" made them.
Inevitably when public opinion swells and changes on something (such as VSCode), all the dissonance just melts away, and they were _always_ a fan. Funny how that works.
> hackers have always rejected bloated tools [...] Hackers have historically derided any website generators
Ah yes, true hackers would never, say, build a Debian package...
Managing complexity has always been part of the game. To a very large extent it is the game.
Hate the company selling you a SaaS subscription to the closed-source tool if you want, and push for open-source alternatives, but don't hate the tool, and definitely don't hate the need for the tool.
> Please do no co-opt the term "hacker".
Indeed, please don't. And leave my true scotsman alone while we're at it!
Local alternatives don't work, and you know that.
Being anti-ai means you want to conserve the old ways in favor of new technology. Hardly what I would call 'progressive'.
> That's the thing, hacker circles didn't always have this 'progressive' luddite mentality. This is the culture that replaced hacker culture.
People who haven't lived through the transition will likely come here to tell you how wrong you are, but you are 100% correct.
You were proven right three minutes after you posted this. Something happened, I'm not sure what and how. Hacking became reduced to "hacktivism", and technology stopped being the object of interest in those spaces.
> and technology stopped being the object of interest in those spaces.
That happened because technology stopped being fun. When we were kids, seeing Penny communicating with Brain through her watch was neat and cool! Then when it happened in real life, it turned out that it was just a platform to inject you with more advertisements.
The "something" that happened was ads. They poisoned all the fun and interest out of technology.
Where is technology still fun? The places that don't have ads being vomited at you 24/7. At-home CNC (including 3d printing, to some extent) is still fun. Digital music is still fun.
A lot of fun new technology gets shouted down by reactionaries who think everything's a scam.
Here on "hacker news" we get articles like this, meanwhile my brother is having a blast vibe-coding all sorts of stuff. He's building stuff faster than I ever dreamed of when I was a professional developer, and he barely knows Python.
In 2017 I was having great fun building smart contracts, constantly amazed that I was deploying working code to a peer-to-peer network, and I got nothing but vitriol here if I mentioned it.
I expect this to keep happening with any new tech that has the misfortune to get significant hype.
> That happened because technology stopped being fun.
Exactly and I'm sure it was our naivete to think otherwise. As software became more common, it grew, regulations came in, corporate greed took over and "normies" started to use.
As a result, now everything is filled in subscriptions, ads, cookie banners and junk.
Let's also not kid ourselves but an entire generation of "bootcamp" devs joined the industry in the quest of making money. This group never shared any particular interest in technology, software or hardware.
It's not ads, honestly. It's quality. The tool being designed to empower the user. Have you ever seen something encrusted in ads be designed to empower the user? At least, it necessitates reducing the user's power to remove the ads.
But it's fundamentally a correlation, and this observation is important because something can be completely ad-free and yet disempowering and hence unpleasant to use; it's just that vice-versa is rare.
> It's not ads, honestly. It's quality. The tool being designed to empower the user. Have you ever seen something encrusted in ads be designed to empower the user? At least, it necessitates reducing the user's power to remove the ads.
Yes, a number of ad-supported sites are designed to empower the user. Video streaming platforms, for example, give me nearly unlimited freedom to watch what I want when I want. When I was growing up, TV executives picked a small set of videos to make available at 10 am, and if I didn’t want to watch one of those videos I didn’t get to watch anything. It’s not even a tradeoff, TV shows had more frequent and more annoying ads.
> Video streaming platforms, for example, give me nearly unlimited freedom to watch what I want when I want.
But they'd prefer if it was shorts.
No, they wouldn't. On Youtube, for example, videos were consistently trending longer over time, and you used to see frequent explainers (https://www.wired.com/story/youtube-video-extra-long/) on why this was happening and how Youtube benefits from it. Short-form videos are harder to monetize and reduce retention, but users demand them so strongly that most platforms have built a dedicated experience for them to compete with TikTok.
If that was true, I would be able to turn off shorts from my recommendation feed.
You can. It’s not a hermetic seal, I assume because they live in the same database as normal videos, but if you’re thinking of the separate “shorts” section there’s a triple dot option to turn it off.
I've clicked this triple dots many times. I never saw such an option. I saw "show fewer shorts", and even that seems to be temporary.
The ads are just a symptom. The tsunami of money pouring in was the corrosive force. Funny enough - I remain hopeful on AI as a skill multiplier. I think that’ll be hugely empowering for the real doers with the concrete skill sets to create good software that people actually want to use. I hope we see a new generation of engineer-entrepreneurs that opt to bootstrap over predatory VCs. I’d rather we see a million vibrant small software businesses employing a dozen people over more “unicorns”.
>The "something" that happened was ads. They poisoned all the fun and interest out of technology.
Disagree. Ads hurt, but not as much as technology being invaded by the regular masses who have no inherit interest in tech for the sake of tech. Ads came after this since they needed an audience first.
Once that line was crossed, it all became far less fun for those who were in it for the sheer joy, exploration, and escape from the mundane social expectations wider society has.
It may encompass both "hot takes" to simply say money ruined tech. Once future finance bros realized tech was easier than being an investment banker for the easy life - all hope was lost.
I don't think that just because something becomes accessible to a lot more people that it devalues the experience.
To use the two examples I gave in this thread. Digital music is more accessible than ever before and it's going from strength to strength. While at-home subtractive CNC is still in the realm of deep hobbyists, 3d printing* and CNC cutting/plotting* (Cricut, others) have been accessible and interested by the masses for a decade now and those spaces are thriving!
* Despite the best efforts of some of the sellers of these to lock down and enshittify the platforms. If this continues, this might change and fall into the general tech malaise, and it will be a great loss if that happens.
my guess is something like detailed in this article: https://meaningness.com/geeks-mops-sociopaths
No. You're both about 50% correct; what's making everything weird is that the things associated with "hacking" transitioned from "completely optional side hobby" to "fundamental basis of the economy, both bullshit and not."
This is why I'm finding most of this discussion very odd.
The folks who love command-line and terminals had not been luddites all this time?
lol, no. They're people who think faster. Someone who uses vscode will never produce code faster than someone proficient in vim. Someone who clicks through GUI windows will never be able to control their computer as fast as someone with a command prompt.
I'm sure that there are some examples who enjoy it for the interface. I think CRT term/emulator is peak aesthetic. And a few who aren't willing to invest the time to use a gui an a terminal, and they learned the terminal first.
Calling either group a luddite is stupid, but if I was forced to defend one side. Given most people start with a gui because it's so much easier. I'd rather make the argument that those who never progress onto the faster more powerful options deserve the insult of luddite.
> Someone who uses vscode will never produce code faster than someone proficient in vim.
Is this an actually serious/honest take of yours?
I've been using vim for 20 years and, while I've spent almost no time with VS Code, I'd say that a lot of JetBrains' IDEs' built in features have definitely made me faster than I ever was with vim.
Oh wait. No true vim user would come to this conclusion, right?
The take was supposed to be read as slightly hyperbolic. Because while the fastest user of an IDE, has never come close to the fastest I've seen in vim, as you pointed out, thats not really a reasonable comparison either. Here I'm intentionally only considering raw text editing speed, jumping across lines, switching files. If you're including IDE features, when you expect someone in vim to leave vim, you're comparing something that doesn't equate to my strawman.
My larger point was it's absurd to say someone who's faster using [interface] is a luddite because they don't use [other interface] with nearly identical features.
> Oh wait. No true vim user would come to this conclusion, right?
I guess that's fitting insult, given I started with a strawman example too.
edit: I can offer another equally absurd example, (and why I say it's only slightly hyperbolic because the following is true), I can write code much faster using vim, than I can with [IDE], I don't even use tab complete, or anything similar either. I, personally, am able to write better code, faster, when there's nothing but colored text to distract me. Does that make me a luddite? I've tried both, and this fits better for me. Or is it just how comfortable you are with a given interface? Because I know most people can find tab complete useful.
IDEs have keyboard shortcuts too, you know.
> is absolutely filled with busy work that no one really wants to do
Well, LLMs don't fix that problem.
(They fix the "need to train your classification model on your own data" problem, but none of you care about that, you want the quick sci-fi assistant dopamine hit.)
> That's the thing, hacker circles didn't always have this 'progressive' luddite mentality.
I think, by definition, Luddites or neo-Luddites or whatever you want to call them are reactionaries but I think that's kind of orthogonal to being "progressive." Not sure where progressive comes in.
> All that said, the actual workday is absolutely filled with busy work that no one really wants to do, and the refusal of a loud minority to engage with that fact is what's leading to this.
I think that's maybe part of the problem? We shouldn't try to automate the busy work, we should acknowledge that it doesn't matter and stop doing it. In this regard, AI addresses a symptom but does not cure the underlying illness caused by dysfunctional systems. It just shifts work over so we get to a point where AI generated output is being analyzed by an AI and the only "winner" is Anthropic or Google or whoever you paid for those tokens.
> These people bring way more toxicity to daily life than who they wage their campaigns against.
I don't believe for a second that a gaggle of tumblrinas are more harmful to society than a single Sam Altman, lol.
There's a simple solution: anyone who posts AI-generated content can label it as "AI-generated" and avoid misleading people.
> And yeah, I get it. We programmers are currently living through the devaluation of our craft, in a way and rate we never anticipated possible.
I'm a programmer, been coding professionally for 10 something years, and coding for myself longer than that.
What are they talking about? What is this "devaluation"? I'm getting paid more than ever for a job I feel like I almost shouldn't get paid for (I'm just having fun), and programmers should be some of the most worry-free individuals on this planet, the job is easy, well-paid, not a lot of health drawbacks if you have a proper setup and relatively easy to find a new job when you need it (granted, the US seems to struggle with that specific point as of late, yet it remains true in the rest of the world).
And now, we're having a huge explosion of tools for developers, to build software that has to be maintained by developers, made by developers for developers.
If anything, it seems like Balmers plea of "Developers, developers, developers" has came true, and if there will be one profession left in 100 year when AI does everything for us (if the vibers are to be believed), then that'd probably be software developers and machine learning experts.
What exactly is being de-valuated for a profession that seems to be continuously growing and been doing so for at least 20 years?
The "devaluation" they mention is just the correction against the absurd ZIRP bump, that lured would-be doctors and lawyers into tech jobs at FAANG and FAANG-alike firms with the promise of upper middle class lifestyles for trivially weaving together API calls and jockeying JIRA tickets. You didn't have to spend years more in grad school, you didn't have to be a diligent engineer. You just had to had to have a knack for standardized tests (Leetcode) and the time to grid some prep.
The compensation and hiring for that kind of inexpert work were completely out of sync with anything sustainable but held up for almost a decade because money was cheap. Now, money is held much more tightly and we stumbled into a tech that can cheaply regurgitate a lot of so the trivial inexpert work, meaning the bottom fell out of these untenable, overpaid jobs.
You and I may not be effected, having charted a different path through the industry and built some kind of professional career foundation, but these kids who were (irresponsibly) promised an easy upper middle class life are still real people with real life plans, who are now finding themselves in a deeply disappointing and disorienting situation. They didn't believe the correction would come, let alone so suddenly, and now they don't know how they're supposed to get themselves back on track for the luxury lifestyle they thought they legitimately earned.
While that is part of the equation it's not at all that simple. If the average business owner wants a custom piece of software for their workflow how are they getting it now? For decades the answer would have been new hires, agencies, consultants, and freelancers. It didn't matter that most software boiled down to a simple CRUD backend and a flashy frontend. There was still a need for developers to create every piece of software.
Now AI makes it unbelievably easy to make those simple but bespoke software packages. The business owner can boot up Lovable and get something that is good enough. The non-software folk generally aren't scrutinizing the software they use. It doesn't matter if the backend is spaghetti code or if there are bugs here and there. If it works well enough then they're happy.
In my opinion that's the unfortunate truth of AI software development. It's dirt cheap, fast, and good enough for most people. Computer's couldn't write software before and now they can. Obviously that is real devaluation, right?
That might happen, but it hasn't yet.
So far, the tools help many programmers write simple code more quickly.
For technically adept professionals who are not programmers, though, we still haven't seen anything really break through the ceiling consistently encountered by previous low-code/no-code tools like FoxPro, Access, Excel, VBA, IFTTT, Zapier, Salesforce etc.
The LLM-based tools for this market work differently than the comparable tools that preceded them over the last 40 years, in that they have a much richer vocabulary of output, but the ceiling that all of these tools encountered in the last has been a human one: most non-programmers don't know how to describe what they need with sufficient detail for anything much beyond a fragile, narrow toy.
Maybe GPT-8 or Gemini 6 or whatever will somehow finally be able to shatter this ceiling, and somebody will make finally make a no-code software builder that devours the market for custom/domain software. But that hasn't happened yet and it's at least as easy to be skeptical as it is to be convinced.
I'm fairly certain that it's happening right now. There is no threshold that LLMs need to "break through" to see adoption. The number of non-technical using them to write software is growing every day.
I was working freelance through late 2023 - mid 2025 and the shift seemed quite obvious to me. Other freelancers, agency managers, etc that I talked to could see it too. The volume of clients, and their expectations, is changing very rapidly in that space.
When I first earned money for coding (circa 20 years ago) it was small e-commerce shop. Today nobody does them, because there's woocomerce, shopify, FB marketplace. All dirt cheap and fast.
It isn't devaluation. It's good - it freed a lot of people to work on more ambitious things.
I don't believe companies can reliably tell expert and non-expert developers apart, to sort them so efficiently to play out like you say.
The companies that can will remain and the companies that can't will perish. Not necessarily quickly nor gracefully, but the market stops all bucks.
I have a ton of faith that Apple, Google, and Microsoft will not perish. I'll also observe their software quality is not universally stellar.
You are assuming no government intervention or anti-competitive measures.
Neither has been true for a really long time.
If the market is allowed to behave like one*
Nailed it. It's a pendulum and we're swinging back to baseline. We just finished our last big swing (zirp, post COVID dev glut) and are now in full free fall.
I love this post. It really encapsulates a lot of what my take on the situation is as well. It has just been so blatantly obvious that a lot of people have a very protectionist mindset surrounding AI, and a legitimate fear that they are going to be replaced by it.
> What exactly is being de-valuated for a profession
You're probably fine as a more senior dev...for now.
But if I was a junior I'd be very worried about the longevity I can expect as a dev. It's already easier for many/most cases to assign work to a LLM vs handholding a human through it.
Plus as an industry we've been exploiting our employer's lack of information to extract large salaries to produce largely poor quality outputs imo. And as that ignorance moat gets smaller, this becomes harder to pull off.
> assign work to an LLM
This is just not happening anywhere around me. I don't know why it keeps getting repeated in every one of these discussions.
Every software engineer I know is using LLM tools, but every team around me is still hiring new developers. Zero firing is happening in any circle near me due to LLMs.
LLMs can not do unsupervised work, period. They do not replace developers. They replace Stack Overflow and Google.
I can tell you where I am seeing it change things for sure, at the early stages. If you wanted to work at a startup I advise or invest in, based on what I'm seeing, it might be more difficult than it was 5 years because there is a slightly different calculus at the early stage. often your go to market and discovery processes seed/pre-seed are either: not working well yet, nonexistent, or decoupled from prod and eng, the goal obviously is over time to bring it all together into a complete system (a business) - as long as I've been around early stage startup there has always been a tension between engineering and growth on budget division, and the dance of how you place resources across them such that they come together well is quite difficult. Now what I'm seeing is: engineering could do with being a bit faster, but too much faster and they're going to be sitting around waiting for the business teams to get their shit together, where as before they would look at hiring a junior, now they will just hire some AI tools, or invest more time in AI scaffolding etc... allowing them to go a little bit faster, but it's understood: not as fast as hiring a jr engineer. I noticed this trend starting in the spring this year, and i've been watching to see if the teams who did this then "graduate" out of it to hiring a jr, so far only one team has hired and it seems they skipped jr and went straight to a more sr dev.
Around 80% of my work is easy while the remaining 20% is very hard. At this stage the hard stuff is far outside the capability of LLM but the easy stuff is very much within its capabilities. I used to hire contractors to help with that 80% work but now I use LLMs instead. It’s far cheaper, better quality, and zero hassle. That’s 3 junior / mid level jobs that are gone now. Since the hard stuff is combinatorial complexity I think by the time LLM is good enough to do that then it’s probably good enough to do just about everything and we’ll be living in an entirely different world.
Exactly this, I lead cloud consulting + app dev projects. Before I would have staffed my projects with at least me leading it and doing the project management + stakeholder meetings and some of the work and bringing a couple of others in to do some of the grunt work. Now with Gen AI even just using ChatGPT and feeding it a lot of context - diagrams I put together, statements of work, etc - I can do it all myself without having to go through the coordination effort of working with two other people.
On the other hand, when I was staffed to lead a project that did have another senior developer who is one level below me, I tried to split up the actual work but it became such a coordination nightmare once we started refining the project because he could just use Claude code and it would make all of the modifications needed for a feature from the front end work, to the backend APIs, to the Terraform and the deployment scripts.
I would have actually slowed him down.
Today's high-end LLMs can do a lot of unsupervised work. Debug iterations are at least junior level. Audio and visual output verification is still very week (i.e. to verify web page layout and component reactivity). Once the visual model is good enough to look at the screen pixels and understand, it will instantly replace junior devs. Currently if you have only text output all new LLMs can iterate flawlessly and solve problems on it. New backend dev from scratch is completely doable with vibe coding now, with some exceptions around race conditions and legacy code comprehension.
> Once the visual model is good enough to look at the screen pixels and understand, it will instantly replace junior devs
Curious if you gave Antigravity a try yet? It auto-launches a browser and you can watch it move the mouse and click around. It's able to review what it sees and iterate or report success according to your specs. It takes screen recordings and saves them as an artifact for you to verify.
I only tried some simple things with it so far but it worked well.
Right, and as a hiring manager, I'm more inclined to hire junior devs since they eventually learn the intricacies of the business, whereas LLMs are limited in that capacity.
I'd rather babysit a junior dev and give them some work to do until they can stand on their own than babysit an LLM indefinitely. That just sounds like more work for me.
Completely agree. I use LLM like I use stackoverflow, except this time i get straight to the answer and no one closes my question and marks it as a duplicate, or stupid.
I dont want it integrated into my IDE, i'd rather just give it the information it needs to get me my result. But yeah, just another google or stackoverflow.
Well your anecdote is clearly at odds with absolutely all of the macro economic data.
You're mostly right but very few teams are hiring in the grand scheme of things. The job market is not friendly for devs right now (not saying that's related to AI, just a bad market right now)
It's me. I'm the LM having work assigned to me that junior dev used to get. I'm actually just a highly proficient BA who has always almost read code, followed and understood news about software development here and on /. before, but generally avoided writing code out of sheer laziness. It's always been more convenient to find something easier and more lucrative in those moments if decision where I actually considered shifting to coding as my profession.
But here I am now. After filling in for lazy architects above me for 20 years while guiding developers to follow standards and build good habits and learning important lessons from talking to senior devs along the wa, guess what, I can magically do it myself now. The LM is the junior developer that I used to painstakingly explain the design to, and it screws it up half as much as the braindead and uncaring jr Dev used to. Maybe I'm not a typical case, but it shows a hint of where things might be going. This will only get easier as the tools become more capable and mature into something more reliable.
LM?
They mean LLM
> This is just not happening anywhere around me.
Don't worry about where AI is today, worry about where it will be in 5-10 years. AI is brand new bleeding edge technology right now, and adaption always takes time, especially when the integration with IDEs and such is even more bleeding edge than the underlying AI systems themselves.
And speaking about the future, I wouldn't just worry about it replacing the programmer, I'd worry about it replacing the program. The future we are heading into might be one where the AI is your OS. If you need an app to do something, you can just make it up on the spot, a lot of classic programs will no longer need to exist.
> Don't worry about where AI is today, worry about where it will be in 5-10 years.
And where will it be in 5-10 years?
Because right now, the trajectory looks like "right about where it is today, with maybe some better integrations".
Yes, LLMs experienced a period of explosive growth over the past 5-8 years or so. But then they hit diminishing returns, and they hit them hard. Right now, it looks like a veritable plateau.
If we want the difference between now and 5-10 years from now and the difference between now and 5-10 years ago to look similar, we're going to need a new breakthrough. And those don't come on command.
Right about where it is today with better integrations?
One year is the difference between Sonnet 3.5 and Opus 4.5. We're not hitting diminishing returns yet (mostly because of exponential capex scaling, but still). We're already committed to ~3 years of the current trajectory, which means we can expect similar performance boosts year over year.
The key to keep in mind is that LLMs are a giant bag of capabilities, and just because we hit diminishing returns on one capability, that doesn't say much if anything about your ability to scale other capabilities.
You buried the lede with “exponential capex scaling”. How is this technology not like oil extraction?
The bulk of that capex is chips, and those chips are straight up depreciating assets.
The depreciation schedule is debatable (and that's currently a big issue!). We've been depreciating based on availability of next generation chips rather than useful life, but I've seen 8 year old research clusters with low replacement rates. If we stop spending on infra now, that would still give us an engine well into the next decade.
> We're already committed to ~3 years of the current trajectory
How do you mean committed?
better integrations won't do anything to fix the fact that these tools are, by their mathematical nature, unreliable and always will be
It's a trope that people say this and then someone points out that while the comment was being drafted another model or product was released that took a substantial step up on problem solving power.
I use LLMs all day every day. There is no plateau. Every generation of models has resulted in substantial gains in capability. The types of tasks (both in complexity and scope) that I can assign to an LLM with high confidence is frankly absurd, and I could not even dream of it eight months ago.
> But if I was a junior I'd be very worried about the longevity I can expect as a dev. It's already easier for many/most cases to assign work to a LLM vs handholding a human through it.
This sounds kind of logical, but really isn't.
In reality you can ASSIGN a task to a junior dev and expect them to eventually complete it, and learn from the experience as well. Sure there'll likely be some interaction between the junior dev and mentor, and this is part of the learning process - something DESIREABLE since it leads to the developer getting better.
In contrast, you really cant "assign" something to an LLM. You can of course try to, and give it some "vibe coding" assignment like "build me a backend component to read the data from the database", but the LLM/agent isn't an autonomous entity that can take ownership of the assignment and be expected to do whatever it takes (e.g. coming back to you and asking for help) to get it done. With todays "AI" technology it's the AI that needs all the handholding, and the person using the AI is the one who has effectively taken the assignment, not the LLM.
Also, given the inability of LLMs to learn on the job, using an LLM as a tool to help get things done is going to be a groundhog day experience of having to micro-manage the process in the same way over and over again each time you use it... time that would have been better invested in helping a junior dev get up to speed and in the future be an independent developer that tasks can indeed be assigned to.
Doesn't matter. First, yes, a modern AI will come back and ask questions. Second, the AI is so much faster at interactions than a human is, that you can use that saved time to glance at its work and redirect it. The AI will come back with 10 prototype attempts in an hour, while a human will take a week for each, with more interupt questions for you about easy things
Sure, LLMs are a useful tool, and fast, but the point is they don't have human level intelligence, can't learn, and are not autonomous outside of an agent that will attempt to complete a narrow task (but with no ownership and guarantee of eventual success).
We'll presumably get there eventually and build "artificial humans", but for now what we've got is LLMs - tools for language task automation.
If you want to ASSIGN a task to something/someone then you need a human or artificial human. For now that means assigning the task to a human, who will in turn use the LLM as a tool. Sure there may be some productivity increase (although some studies have indicated the exact opposite), but ultimately if you want to be able to get more work done in parallel then you need more entities that you can assign tasks do, and for time being that means humans.
> the point is they don't have human level intelligence > If you want to ASSIGN a task to something/someone then you need a human or artificial human
Maybe you haven't experienced it but a lot of junior devs don't really display that much intelligence. Their operating input is a clean task list, and they take it and convert it into code. It's more like "code entry" ("data entry", but with code).
The person assigning tasks to them is doing the thinking. And they are still responsible for the final output, so if they find a computer better and cheaper at "code entry" than a human well then that's who they're assign it to. As you can see in this thread many are already doing this.
>> e.g. coming back to you and asking for help
Funny you mention this because Opus 4.5 did this just yesterday. I accidentally gave it a task with conflicting goals, and after working through it for a few minutes it realized what was going on, summarized the conflict and asked me which goal should be prioritized, along with detailed pros and cons of each approach. It’s exactly how I would expect a mid level developer to operate, except much faster and more thorough.
Yes, they continue to get better, but they are not at human level (and jr devs are humans too) yet, and I doubt the next level "AGI" that people like Demis Hassabis are projecting to still be 10 years away will be human level either.
> assign work to a LLM
What are your talking about? You seem to live in a parallel universe. Every single time I tried this or someone of my colleagues, this task failed tremendously hard.
> “…exploiting our employer's lack of information…”
I agree in the sense that those of us who work in for-profit businesses have benefited from employer’s willingness to spend on dev budgets (salaries included)—without having to spend their own _time_ becoming increasingly involved in the work. As “AI” develops it will blur the boundaries of roles and reshape how capital can be invested to deliver results and have impact. And if the power dynamics shift (ie. out of the class of educated programmers to, I dunno, philosophy majors) then you’re in trouble.
If one is a junior the goal is to become a senior though. Not to remain a junior.
Yes, but the barrier to become a senior is what’s currently in dispute
LLMs vs human
Handholding the human pays off in the long run more than hand holding the llm, which requires more hand holding anyway.
Claude doesn't get better as I explain concepts to it the same way a jr engineer does.
I had hired 3 junior/mid lvl devs and paid them to do nothing but study to improve their skills, it was my investment in their future, I had a big project on the horizon that I needed help with. After 6 months I let them go, the improvement was far too slow. Books that should have taken a week to get through were taking 6 weeks. Since then LLM have completely surpassed them. I think it’s reasonable to think that some day, maybe soon, LLMs will surpass me. Like everyone else, I have to the best I can while I can.
But this is an issue of worker you're hiring. I've worked with senior engineers who a) did nothing (as - really not write any thing within the sprint, nor do any other work) b) worked on things they wanted to work on c) did ONLY things that they were assigned in the sprint (= if there were 10 tickets in the sprint and they were assigned 1 of these tickets then they would finish that ticket and not pick up anything else, staying quiet) d) worked only on tickets that have requirements explicitly stated step by step (open file a, change line 89 to be `checkBar` instead of `checkFoo`... - having to write this would take longer than doing the changes yourself as I was really writing in Jira ticket what I wanted the engineer to code, otherwise they would come back with "not enough spec, can't proceed"). All of these cases - senior people!
Sure - LLMs will do what they're told (to a specific value of "do" and "what they're told")
Sure there is a wide spectrum of skills, having worked in FANG and top tier research I have a pretty good idea of the capability at the top of the spectrum. I know I wasn't hiring at that level. I was paying 2x the local market rate (non-US) and pulling from the functional programming talent pool. These were not the top 1% but I think they were easily top 10% and probably in the top 5%.
I use LLMs to build isolated components and I do the work needed to specialize them for my tasks and integrate them together. The LLMs take fewer instructions to do this and handle ambiguity far better. Additionally because of the immediate feedback look on the specs I can try first with a minimally defined spec and interactively refine as needed. It takes me far less work to write specs for LLMs than it does for other devs.
If you are a “senior” engineer who is doing nothing but pulling well defined Jira tickets off the board, you’re horribly mis titled.
And even if their progress had been faster, now they are a capable developer who can command higher compensation that statistically your company won’t give them and they are going to jump ship anyway.
One didn't even wait, they immediately tried to sub-contract the work out to a third party and make a transition from a consultant to a consultancy company. I had to be clear that they are hired as named person and I very much do care about who does the work.While not FANG comp it was ~2x the market rates, statistically I think they'd have a hard time matching that somewhere else. I think in part because I was offering these rates they got rather excited about the perceived opportunity in being a consultancy company, i.e. the appetite grows with the eating. I'm not sure if it's something that could be solved with more money, I guess in theory with FANG money but it's not like those companies are without their dysfunctions. With LLMs I can solve the same problem with far less money.
I think I see the problem: you're running a consulting company, and complaining that your mercenaries aren't very good or loyal.
I've not run a consultancy firm, I've previously worked as a consultant, but these people were hired to work on product.
Actually it does, if you put those concepts in documentation in your repository…
Those concepts will be in your repository long after that junior dev jumps ship because your company refused to pay him at market rates as he improved so he had to jump ship to make more money - “salary compression” is real and often out of your manager’s control.
Maybe see it less as a junior and replacement for humans. See it more as a tool for you! A tool so you can do stuff you used to delegate/dump to a junior, do now yourself.
Claude gets better as Claude's managers explain concepts to it. It doesn't learn the way a human does. AI is not human. The benefit is that when Claude learns something, it doesn't need to run a MOOC to teach the same things to millions of individuals. Every copy of Claude instantly knows.
You need to hit that thumbs down with the explanation so the model is trained with the penalty applied. Otherwise your explanations are not in the training corpus
It just makes you more powerful, not less. When we got rid of rooms full of typewriters it’s because we became more productive, not less.
provided the senior dev takes time off to review that slop.
Rest assured that LLMs are completely incapable of replacing mildly competent junior developers. And that's fundamental to the technology.
Instead, it's them that benefit the most from using them.
It's only management that believes otherwise. Because of deceitful marketing from a few big corporations.
I consider the devaluation of the craft to be completely independent from the professional occupation of software.
Programming has been devalued because more people can do it at a basic level with LLM tooling. People that I do not consider smart enough or to have put enough work in to output the things that they have nor do they really understand it themselves.
It is of course the new reality and now we all have to go find new markers/things to judge peoples output by. Thats the devaluation of the craft itself.
For what its worth, this devaluation has happened many times in this field. ASM, Compilers, managed gc languages, the cloud, abstractions have continually opened up the field to people the old timers consider unworthy.
LLMs are a unique twist on that standard pattern.
> Programming has been devalued because more people can do it at a basic level with LLM tooling
But just because more people can do something doesn't mean it's devalued, or am I misunderstanding the word? The value of programs remains the same, regardless of who composes them. The availability of computers, the internet and the web seems to have had the opposite effect so far, making entire industries much more valued than they were in the decades before.
Neither do I see ASM, compilers, and all your other examples of devalualing, it seems like it's "nichifying" the industry if anything, which requires more experts, not fewer. The more abstractions we have in reality, the more experts are needed for being able to handle those things.
Excellent take; this is like saying "being an amazing chef is devalued because McDonalds exists."
> programmers should be some of the most worry-free individuals on this planet, the job is easy, well-paid, not a lot of health drawbacks if you have a proper setup and relatively easy to find a new job when you need it
Not in where I live though. Competition is fierce, both in industry and academia, for most posts being saturated and most employees face "HR optimization" in their late 30s. Not to mention working over time, and its physical consequences.
"Not in where I live though"
I mean, not anywhere, and the data absolutely annihilates their ridiculous claims. In subsequent posts they've retreated back to "yeah, but someone somewhere has it worse", invalidating this whole absurd thread.
Their comment has little correlation with reality, and seems to be a contrived, self-comforting fiction. Most firms have implemented hiring freezes if not actively downsizing their dev staff. Many extremely experienced devs are finding the market absolutely atrocious, getting zero bites.
And for all of the "well us senior devs are safe" sentiment often seen on here, many shops seem to be more comfortable hiring cheap and eager junior devs and foregoing seniors because LLMs fill in a lot of the "grizzled wisdom". The junior to senior ratio is rapidly increasing, and devs who lived on golden handshakes are suddenly finding their ego bruised and a market where they're fighting for low-pay jobs.
> Their comment has little correlation with reality
Or you know, we live and experience different parts of the world? Where you are, you might be right, and where I am, I might be right.
But nuance tends to be harder than trying to find some absolute truth and finding out it doesn't match your preconceived notion about the whole world.
Again, compare this to other professions, don't look at in isolation, and you'll see why you're still (or will have, seems you're a student still) having a much more pleasant life than others.
This is completely irrelevant. The point is that the profession is being devalued, i.e. losing value relative to where it was. If, for example, the US dollar loses value, it's not a "counterargument" to point out that it's still much more valuable than the Zimbabwe dollar.
It isn't though, none of our lives are happening in isolation, even if you don't believe it, there are other humans out there, with real responsibilities outside of computers.
Even if the competition is fierce, do you think it isn't for other professions, or what's the point? Of course a job that is well-paid, has few drawbacks and let you sit indoors in front of computer, probably doing something you enjoy in general, is popular and has competition.
Do other professions expect you to work during personal time? At least blue collar people are done when they get told they're done
I get your viewpoint though, physically exhausting work is probably much worse. I do want to point out that 40 hours has always been above average, and right now its the default
> Do other professions expect you to work during personal time? At least blue collar people are done when they get told they're done
No, and after my first programming job, neither does it happen in development. Make sure you join the right place, have the right boss, and set expectations up front, and you too can surely avoid it if it's important to you :) Usually you can throw in "work/life balance" somehow to gauge how they feel about it.
And yes, plenty of blue collar people are expected to be available during your personal time, for various reasons. Sometimes just quick questions (especially if you're a manager and you're having time off), sometimes emergencies that requires you to head on over to the place. Ask anyone who owned or even just managed a restaurant about that specific thing, and maybe you'll be surprised.
This “compare it to other professions” thing doesn’t really work when those other professions are not the one you actually do. The idea that someone should never be miserable in their job because other more miserable jobs exist is not realistic.
It's a useful thing to look at when you feel like all hope is lost and "wow is so difficult being a programmer" strikes, because it'll make you realize how easy you have it compared to non-programmers/nom-tech people.
Realizing how supposedly “easy” you have it compared to other people is not as encouraging or motivational as you’re implying it is. And how “easy” do you have it if you can’t find a job in your field?
Might be worth investigating why it isn't if so. People stressed about their situation usually find some solace in being helped realize what their position in the world actually is, as everything is always relative, not absolute.
You sound exactly like that turkey from Nassim Taleb's books that came to the conclusion that the purpose of human beings is to make turkeys very happy with lots of food and breeding opportunities. And the turkey's thesis gets validated perfectly every day he wakes up to a delicious fatty meal.
Until Thanksgiving.
The turkey story predates Nassim Taleb books by decades.
Unlike the turkeys, they seem rather self aware about it.
I'm sure, given the means, the Turkey could have written some convincing prose about their delicious fatty meal.
Your comment is hyperbolic fear mongering dressed up in a cutesie story.
Our industry is being disrupted by AI. What industry in history has not been disrupted by technological progression? It's called life. And those that can adapt to life changing will continue to thrive. And those who can't will get left behind. There is no wholesale turkey slaughter.
If you read the grand parent, they seem to be denying a disruption is taking place industry wide. The adage was used to illustrate how complacency is blinded by the very conditions that enable it, and while this is unfalsifiable and not very conducive to discussion, "fear mongering" is a bit rich to levy.
Further:
> Our industry is being disrupted by AI... No wholesale turkey slaughter.
Is an entirely different position than the GP who is essentially betting on AI producing more jobs for hackers, which surely won't be so simple.
> GP who is essentially betting on AI producing more jobs for hackers
I'm not clear on the point you're trying to make. My comment was in response to dugidugout's analogy.
If I understand their analogy correctly, developers are the well fed turkeys and one Thanksgiving day, we're all getting slaughtered.
That is not hyperbole and fear mongering to you?
Sorry to confuse the thread. I meant to point to the original comment (embedding-shape), but blindly labeled them GP.
We share understanding of their analogy, but differ in the inferred application. I took it as the well fed turkeys are "developers who deny AI will disrupt their industry", not "developers" as a whole.
> And now, we're having a huge explosion of tools for developers, to build software that has to be maintained by developers, made by developers for developers.
What do you think they're building all those datacenters for? Why do you think so much money is pouring into AI companies?
It's not to help make developers more efficient with code assistants.
Traditional computation will be replaced with bots in every aspect of software. The goal is to devalue our labor and replace it with computation performed by machines owned by the wealthy, who can lease this out.
If you can't see this coming you lack both imagination and historical perspective.
Five years ago Claude Code would have been essentially unimaginable. Consider this.
So sure, enjoy your job churning out buggy whips while you can, but you better have a plan B for when the automobiles truly arrive.
I agree with all this, except there is no plan B. What could plan B possibly be when white collar work collapses? You can go into a trade, but who will be hiring the tradespeople?
The companies who now have piles of cash because they eliminated a huge chunk of labor will spend far more on new projects, many of which will require tradesmen.
Economic waves never hit one sector and stop. The waves continues across the entire economy. You can’t think “companies will get rid of huge amounts of labor” and then stop asking questions. You need to then ask “what will companies do with decreased labor costs?” And “what could that investment look like, who will they need to hit to fulfill it?” and then “what will those workers do after their demand increases?” And so on.
> Economic waves never hit one sector and stop.
Unless they do, or are severely weakened. Consider the net worth of the 1% over the last few decades. Even corrected for inflation, its growth is staggering. The wealth gap is widening, and that wealth came from somewhere.
So yes, when there is an economic boom, investment happens. However, the growth of that top %1 tells me that they've been taking more and more off the top. Sure, some near the bottom may win with the decreased labor costs and whatnot, but my point is less and less do every cycle.
Full disclosure: I'm not an economist. Hell, I probably have a highschool-level of econ knowledge at best, so this should probably be taken as a "common-sense" take on it, which I already know often fails spectacularly when economics is at play. So I'm more than open to be corrected here.
Jeff Bezos has a 233 billion net worth. It's not because Amazon users overpaid by a 233 billion but because his share in Amazon is highly valued by investors.
My own Amazon investment in my pension has also gone up by 10x in the last 10 years, just like Jeff's. Where did the value increase come from?
Is this idea of the stock market good for us? I don't know, but it's paper money until you sell it.
I would look at the secondary consequences of the totaling of white collar labor in the same way. Without the upper-middle-class spending their disposable income, consumer spending shrivels, advertising dollars dry up, and investment in growth no longer makes sense in most industries. It looks like a path to total economic destruction to me.
I think it’s much more likely they’ll be used for mass surveillance purposes. The tech is already there, they just need the compute (and a lot of it).
Most of the economy is making things that aren’t really needed. Why bother keeping that afloat when it’s 90% trinkets for the proles? Once they’ve got the infra to ensure compliance why bother with all the fake work which is the real opium of the masses.
> What exactly is being de-valuated for a profession that seems to be continuously growing
A lot of newly skilled job applicants can't find anything in the job market right now.
Likewise with experienced devs who find themselves out of work due to the neverending mass layoffs.
There's a huge difference between the perspective of someone currently employed versus that of someone in the market for a role, regardless of experience level. The job market of today is nothing like the job market of 3 years ago. More and more people are finding that out every day.
Based on conversations with peers for the last ~3 years or so, some of retrained to become programmers, this doesn't seem to as absolute as you paint it out to be.
But as mentioned earlier, the situation in the US seems much more dire than elsewhere. People I know who entered the programming profession in South America, Europe and Asia for these last years don't seem to have more troubles than I had when I got started. Yes, it requires work, just like it did before.
Nah it's pretty bad, but congrats on being an outlier.
Literally the worst job you can find as a programmer today (if you lower you standards and particularly, stay away from cryptocurrency jobs) is 10x better than the non-programmer jobs you can find.
If you don't trust me, give a non-programming job a try for 1 year and then come back and tell me how much more comfy $JOB was :)
> Literally the worst job you can find as a programmer today (if you lower you standards and particularly, stay away from cryptocurrency jobs) is 10x better than the non-programmer jobs you can find.
This is a ridiculous statement. I know plenty of people (that are not developers) that make around the same as I do and enjoy their work as much as I do. Yes, software development is a great field to be in, but there's plenty of others that are just as good.
Huh? I'm not saying there isn't careers out there that are also good, I'm not sure what in my comment made it seem so? Of course there are many great fields out there, wasn't my intention to somehow seem to say software development is the only one.
>>Literally the worst job you can find as a programmer today (if you lower you standards and particularly, stay away from cryptocurrency jobs) is 10x better than the non-programmer jobs you can find.
A lot of non-programmer jobs have a kind of union protection, pension plans and other perks even with health care. That makes a crappy salary and work environment bearable.
There was this VP of HR, in a Indian outsourcing firm, and she something to the effect that Software jobs appear like would pay to the moon, have an employee generate tremendous value for the company and general appeal that only smart people work these jobs. None of this happens with the majority of the people. So after 10-15 years you actually kind of begin to see why some one might want to work a manufacturing job.
Life is long, job guarantee, pensions etc matter far more than 'move fast and break thing' glory as you age.
I was a lot happier in previous non-programming jobs, they just were much worse at paying the bills. If i could make my programming salary doing either of my previous jobs, i would go back in a heartbeat. Hell if i could make even 60% of my programming salary doing those jobs I'd go back.
I enjoy the practice of programming well enough but i do not at all love it as a career. I don't hate it by any means either but it's far from my first choice in terms of career.
> give a non-programming job a try for 1 year
I have a mortgage, 3 kids and a wife to support. So no. I don't think I'm going to do that. Also, I like my programming job.
EDIT: Sorry I thought you were saying the opposite. Didn't realize you were the OP of this thread.
Because tech corps overhired[0] when the interest rate was low.
Even after the layoffs, most big tech corps still have more employees today than they did in 2020.
The situation is bad, but the lesson to learn here is that a country should handle a pandemic better than "lowering interest rate to near-zero and increasing government spending." It's just kicking and snowballing the problem to the next four years.
[0]: https://www.dw.com/en/could-layoffs-in-tech-jobs-spread-to-r...
I think it was more sandbagging than snowballing. The pain was spread out, and mostly delayed, which kept the economy moving despite everything.
Remember that most of the economy is actually hidden from the stock market, its most visible metric. Over half the business is privately-owned small businesses, and at the local level forcibly shutting down all but essential-service shops was devastating. Without government spending, it's hard to imagine how most of those business owners and their employees would have survived, let alone their shops.
Yet we had no bread lines, no (increase in) migratory families chasing cash labor markets, and demands on charity organizations were heavy, but not overwhelming.
But you claim "a country should handle a pandemic better..." - what should we have done instead? Criticism is easy.
It seems like most companies are just using AI as a convenient cover for layoffs. If you say: “We enormously over-hired and have to do layoffs.”, your stock tanks. If you instead say that you are laying off the same 20k employees ‘because AI’, your stock pumps for no reason. It’s just framing.
>> A lot of newly skilled job applicants can't find anything in the job market right now.
That is not unique to programming or tech generally. The overall US job market is kind of shit right now.
I've always heard this sentiment, but I've also never met one of these newly skilled job applicants who could do anything resembling the job.
I've done a lot of interviews, and inevitably, most of the devs I interview can't pass a trivial interview (like implement fizzbuzz). The ones who can do a decent job are usually folks we have to compete for.
"Newly skilled" means needs supervision. If you have to supervise the work, then an AI is definitely superior.
> I'm getting paid more than ever for a job I feel like I almost shouldn't get paid for (I'm just having fun)
In my Big Tech job, I sometimes forget that some people can really enjoy what they do. It seems like you're in a fortunate position of both high pay and high enjoyment. Congratulations! Out of curiosity, what do you work on?
Right now I'm doing consulting for two companies, maybe a couple of hours per week, mostly having downtime and trying to expand on my machine learning knowledge.
But in general, every job I've had has been "high pay and high enjoyment" even when I initially had "shit pay" compared to other programmers, and the product wasn't really fun, I was still programming, an activity I still love.
Compare this to the jobs I did before, where the physical toll makes it impossible to do anything after work as you're exhausted, and even if I got paid more than my first programming job, that your body is literally unable to move once you get home, makes the pay matter less and feel less.
But for a programmer, you can literally sit still all day, have some meetings in a warm office, talk with some people, type some things into a document, sit and think for a while, and in the end of the month you get a paycheck.
If you never worked in another profession, I think you ("The Programmer") don't realize how lucky you are compared to the rest of the world.
It's a good perspective to keep. I've also worked a lot of crappy jobs. Overnights in a grocery store (IIRC, they paid an extra .50/hour to work overnights), fine dining waiter (this one was actually fun, but the partying was too much), on a landscaping crew, etc... I make more money than I ever thought possible growing up. My dad still can't believe I have job 'playing on the computer' all day, though I mostly manage now.
A useful viewpoint.
I too have worked in shit jobs. I too appreciate that I am currently in a 70F room of my house, wearing a T-shirt and comfy pants, and able to pet my doggos at will.
Mental exhaustion is a thing, too.
I work remote and i hate it, sitting all day is killing me, my 5 minute daily stand-up is nowhere near enough social interaction for a whole day's work. I've been looking for a role better suited to me for over a year, but the market is miserable.
I miss having jobs where at least a lot of the time i was moving around or working directly with other people. More than anything else i miss casual conversation with coworkers (which still happened with excruciating rarity even when i was doing most of my programming in an office).
I'm glad you love programming and find the career ideal. I don't mean to harp or whine, just pointing out your ideals aren't universal even amount programmers.
No, definitely some environments are less ideal, I agree. Personally, I also cannot stand working remote, if I'm working in a high-intensity project I have to work with the team in person, otherwise things just fall apart.
I understand exactly what you mean and agree, seems our ideals agree after all :)
Get a standing desk and a walking treadmill! It’s genuinely changed my life. I can focus easier, I get my steps in, and it feels like I did something that day.
100% my experience as well.
Negativity spreads so much more quickly than positivity online, and I feel as though too many people live in self reinforcing negative comment sections and blog posts than in the real world, which gives them a distorted view.
My opinion is that LLMs are doing nothing but accelerating what's possible with the craft, not eliminating it. If anything, this makes a single developer MORE valuable, because they can now do more with less.
Exactly. The problem is instead of getting a raise because "you can do more now" your colleagues will be laid off. Why pay for 3 devs when the work can be done by 1 now? And we all better hope that actually pans out in whatever legacy codebase we're dealing with.
Now the job market is flooded due to layoffs, further justifying lack of comp adjustment - add inflation, and you have "de-valuing" in direct form.
The job of a programmer is, and has always been, 50% making our job obsolete (through various forms of automation) and 50% ensuring our job security (through various forms of abstraction).
Over the course of my career, probably 2/3rds of the roles I have had (as in my day to day work, not necessarily the title) just no longer exist, because people like me eliminated them. I personally was the last person that had a few of those jobs because I mostly automated them and got promoted and they didn't hire a replacement. It's not that they hired less people though, they just hired more people, paid them more money, and they focused on more valuable work.
The amount of negativity your positive comment has received looks almost overwhelming. I remember HN being a much happier place a few years ago. Perhaps I should take a break from it.
People working in one of the coolest industries on Earth really do not appreciate their lives nowadays.
> programmers should be some of the most worry-free individuals on this planet, the job is easy, well-paid, not a lot of health drawbacks...
I don't know what kind of work you do but this depends a lot on what kind of projects you work on
Across ~10 jobs or so, mostly as a employee of 5-100 person companies, sometimes as a consultant, sometimes as a freelancer, but always with a comfy paycheck compared to any other career, and never as taxing (mental and physical) as the physical labor I did before I was a programmer, and that some of my peers are still doing.
Of course, there is always exceptions, like programmers who need to hike to volcanos to setup sensors and what not, but generally, programmers have one of the most comfortable jobs on the planet today. If you're a programmer, I think it should come relatively easy to acknowledge this.
> never as taxing (mental and physical) as the physical labor I did before I was a programmer
I find it... very strange that you think software development is less mentally taxing than physical labor.
Software engineering just comes really easily to my brain, somehow. Most of these days is spent designing, architecturing and managing various things, it takes time, but in the end of the day I don't feel like "Ugh I just wanna sleep and die" probably ever. Maybe when we've spent 10+ hours trying to bring back a platform after production downtime, but a regular day? My brain is as fine as ever when I come back home.
Contrast that with working as a out-call nurse, which isn't just physically taxing as you need to actually use your body multiple times per day for various things, but people (especially when you visit them in their homes, seemingly) can be really mean, weird and just draining on you. Not to mention when people get seriously hurt, and you need to be strong when they're screaming of pain, and finally when people die, even strangers, just is really taxing no matter what methods you use for trying to come back from that.
It's just really hard for me to complain about software development and how taxing it can be, when my life experience put me through so much before I even got to be a professional developer.
Have you done physical labour? I find it odd you think physical labor cannot be as mentally taxing. Having done some myself, I agree with GP.
I've never done anything like road/construction work. But I've done restaurant work, being on my feet for 8+ hours per day... and mentally, it just doesn't compare to software development.
- After a long day of physical labor, I come home and don't want to move.
- After a long day of software development, I come home and don't want to think.
Comfortable and easy, but satisfying? I don't think so. I've had jobs that were objectively worse that I enjoyed more and that were better for my mental health.
Sure, it's mostly comfy and well-paid. But like with physical labor, there are jobs/projects that are easy and not as taxing, and jobs that are harder and more taxing (in this case mentally).
Yes, you'll end up in situations where peers/bosses/clients aren't the most pleasant, but compare that to any customer facing job, you'll quickly be able to shed those moments as countless people face those seldom situations on a daily basis. You can give it a try, work in a call center for a month, and you'll acquire more stress during that month than even the worst managed software project.
When I was younger, I worked doing sales and customer service at a mall. Mostly approaching people and trying to pitch a product. Didn't pay well, was very easy to get into and do, but I don't enjoy that kind of work (and many people don't enjoy programming and would actually hate it) and it was temporary anyway. I still feel like that was much easier, but more boring.
That sounds ideal! I used to be a field roboticist where we would program and deploy robots to Greenland and Antarctica. IMO the fieldwork helped balance the desk work pretty well and was incredibly enjoyable.
It's absolutely not easy to find a new job in France, and more generally in Europe
My experience and the ones I personally known, been in Western Europe, South America and Asia, and programmers I know have an easier time to find new jobs compared to other professions.
Don't get me wrong, it's a lot harder for new developers to enter the industry compared to a decade ago, even in Western Europe, but it's still way easier compared to the length people I know who aren't programmers or even in tech.
That's a quantifiable claim. Using experience to "prove" it is inappropriate.
US data does back it up, though. The tech labor sector outperformed all others in the last 10 years. https://www.bls.gov/emp/tables/employment-by-major-industry-...
Software to date has been a [Jevons good](https://en.wikipedia.org/wiki/Jevons_paradox). Demand for software has been constrained by the cost efficiency and risk of software projects. Productivity improvements in software engineering have resulted in higher demand for software, not less, because each improvement in productivity unblocks more of the backlog of projects that weren't cost effective before.
There's no law of nature that says this has to continue forever, but it's a trend that's been with us since the birth of the industry. You don't need to look at AI tools or methodoligies or whatever. We have code reuse! Productivity has obviously improved, it's just that there's also an arms race between software products in UI complexity, features, etc.
If you don't keep improving how efficiently you can ship value, your work will indeed be devalued. It could be that the economics shift such that pretty much all programming work gets paid less, it could be that if you're good and diligent you do even better than before. I don't know.
What I do know is that whichever way the economics shake out, it's morally neutral. It sounds like the author of this post leans into a labor theory of value, and if you buy into that, well...You end up with some pretty confused and contradictory ideas. They position software as a "craft" that's valuable in itself. It's nonsense. People have shit to do and things they want. It's up to us to make ourselves useful. This isn't performance art.
I didn't enter this profession because I love reviewing code though.
It is a part of gaining experience and knowledge though. If you aren't a senior right now, eventually you will be, and one of the expectations will be that you can read and review more novice programmers code and help them improve it, and lend a helping hand when you can. Eventually, all you do will be to review the work others have done after you instructing them to do the thing. Not to mention reading through really great written programs is personally a great joy for me, and almost always learn something new.
But, probably remaining a developer who runs through tickets in JIRA without much care for collaboration could be feasible in some type of companies too.
Then use better software engineering paradigms in how your AI builds projects.
I find the more I specify about all the stuff I thought was hilariously pedantic hyper-analysis when I was in school, the less I have to interpret.
If you use test-driven, well-encapsulated object oriented programming in an idiomatic form for your language/framework, all you really end up needing to review is "are these tests really testing everything they should."
I came here to quote the same quote but with the opposite sentiment. If you look at the history of work, at least in the states, it’s a history of almost continual devaluation and automation. I’ve been assuming that my generation, entering the profession in the 2010s, will be the last where it’s a pathway to an upper middle class life. Just like the factory workers before us automation will come for those who do mostly repetitive tasks. Sure there will be well paid professional software devs in the future just as there are some well paid factory workers who mostly maintain machines. But the scale of the opportunity will be much smaller.
But in the end, we didn't end up with less factories that do more, we ended up with more factories that does more.
Why wouldn't the same happen here? Instead of these programmers jamming out boilerplate 24/7, why are they unable to improve their skill further and move with the rest of the industry, if that's needed? Just like other professions adopt to how society is shaped, why should programming be an exception to that?
And how is the quality of life for those factory workers? It's almost like the craft of making physical things has been devalued even if we're making more physical things than ever.
If you live in a country where worker's health and lives are valued, pretty good. 98% of them are in a union, so they can't get fired from nowhere, they have reliable salary each month, free healthcare (as everyone else in the country) and they can turn off when they come home. Most of them work on rotation, so usually you'd do one week of one station, then one week of another station, and so on, so it doesn't get too repetitive. Lots of quality of life improvements are still happening, even for these workers.
Of course, I won't claim it's glamorous or anything, but the idea that factory workers somehow will disappear tomorrow feels far out there, and I'm generally optimistic about the future.
I think comments like yours should include what salary range, industry, and company size your job entails. The last few years have been absolutely miserable for me at Series A YC startups
Salary range: 400 > 8000 EUR (monthly) over the years (starting job 10 years ago > last full-time salary)
Industry I guess would be "startups" or just "tech", it ranges across holiday related, infrastructure, distributed networks, application development frameworks and some others.
Smallest company I worked at was 4 including the CEO, largest been 300 people. Most of them I joined when it was 5-10 people, and left once they got to around 100.
Because there are now massive layoffs and graduating CS majors have a higher unemployment rate than the average new college graduate.
> the US seems to struggle with that specific point as of late, yet it remains true in the rest of the world
Are you sure about that?
No, I'm just pulling anecdotes out of my ass/am hallucinating.
Is there something specific you'd like to point me to, besides just replying with a soundbite?
Admittedly, there's the responses in this thread with people saying "I'm in <some country that isn't the US> and the market here is bad, too".
Admittedly, there seems to be responses that also disagree with that, just like I did.
So I guess it depends? News at 11:00.
How about you tell us where the market is good for devs? It is heinous in Canada and all of Europe that I'm aware of.
Are you in China? India?
Western Europe is fine, for seniors as well as newcomers, based on my own experience and friends & acquaintances. Then based on more acquaintances South America and Asia seems OK too. But again, ensure you actually understand the context here.
What does "heinous" actually mean here? I've repeated it before, but I guess one more time can't hurt: I'm not saying it isn't difficult to find a job as a developer today compared to a decade ago, but what I am saying is that it's a thing across all sectors and developers aren't hit by it worse than any other sector. Hiring freezes has been happening in not just technology companies, but across the board.
Data.
>the job is easy
software engineering is easy? you live in bubble, try teaching programming to someone new to it and you'll realize how muuuuch effort it requires
Have you tried any other jobs? Have to ever tried teaching just the basics of plumbing to a 18 year old who can barely hold a screwdriver?
If you want a challenge, try almost any other job than development, and you'll realize how easy all this stuff actually is.
> I'm getting paid more than ever for a job I feel like I almost shouldn't get paid for (I'm just having fun), and programmers should be some of the most worry-free individuals on this planet, the job is easy
Eh?
I'm happy for you (and envious), because that is not my experience. The job is hard. Agile's constant fortnightly deadlines, a complete lack of respect by the rest of the stakeholders for the work developers do (even more so now because "ai can do that"), changing requirements but an expectation to welcome changing requirements because that is agile, incredibly egotistical assholes that seem to gravitate to engineering manager roles, and a job market that's been dead for a few years now.
No doubt some will comment and say that if I think my job is hard I should compare it to a coal miner in the 1940's. True, but as Neil Young sang: "Though my problems are meaningless, that don't make them go away."
I guess ultimately our perspectives shape how we see current situations.
When I write that, I write that with the history and experience of doing other things. Deadlines, lack of respect from stakeholders, egoists and changing requirements just don't sound so bad when you compare to "Ah yeah resident 41 broke their leg completely and we need to clean up their entire apartment from the pools of blood and pus + work with the ambulance crew to get them to the hospital".
I guess it's kind of a PTSD of sorts or something, as soldiers describe the same thing coming home to a "normal life" after spending time in a battle-zone. Everything just seems so trivial compared to the situations you've faced before.
There’s been over 1 million people laid off in tech in the past 4 years
https://www.trueup.io/layoffs
According to that site, there were more tech layoffs in 2022 than in 2024 or 2025. Doesn't that speak against the "AI is taking tech jobs" hypothesis?
Massive, embarrassingly shortsighted overhiring in 2020 and 2021 seems like the more likely culprit.
Again, sucks to be in the US as a programmer today maybe, but this isn't true elsewhere in the world, and especially not if you already have at least some experience.
Definitely true in western Europe, and finding a job is extremely hard for the vast majority of non expert devs.
Of course if you're in south eastern europe or in south asia where all the jobs are being offshored you're having the time of your life.
> Definitely true in western Europe, and finding a job is extremely hard for the vast majority of non expert devs.
I don't know what else to say except that hasn't been my experience personally, nor the experience of my acquaintances who've re-skilled to become programmers these last few years, in Western Europe.
Anecdotes are cool but we came up with a neat little thing known as statistics.
https://finance.yahoo.com/news/tech-job-postings-fall-across...
> Among the 27 countries analysed, European nations saw the steepest fall in tech job postings between 1 February 2020 and 31 October 2025,
> In absolute terms, the decline exceeded 40% in Switzerland (-46%) and the UK (-41%), with France (-39%) close behind.
> The United States showed a similar trend, with a decline of 35%. Austria (-34%), Sweden (-32%) and Germany (-30%) were also at comparable levels.
Do you base your entire worldview purely on your own personal experience?
You seem to keep having to add more and more qualifiers to your statements…
I only see one. "Outside the US" was the starting proposition, then they only added "experienced".
This has more to do with monetary policy than AI, though.
I have been laid off 4 times. Tech has a lot of churn, there are a lot of high risk high reward companies.
> What are they talking about? What is this "devaluation"? I'm getting paid more than ever for a job I feel like I almost shouldn't get paid for (I'm just having fun)
You do realise your position of luck is not normal, right? This is not how your average Techie 2025 is.
Specially for new developers. Entry level jobs have practically evaporated.
Well, speaking just for central Europe, it is pretty average. Sure, entry-level positions are different story, but anyone with at least few years for work experience can find reasonably payed job fairly quickly.
Others in Europe in this thread contradict your belief.
Actual data is convincing; few are providing it.
I don't know what "position of luck" you're talking about, it's been dedicated effort to practice programming and suffer through a lot of shit until I got my first comfy programming job.
And even if I'm experienced now, I still have peers and acquaintances who are getting into the industry, I'm not sitting in my office with my eyes closed exactly.
That’s probably because the definition of ‘average techie’ has been on a rapid downward trajectory for years? You can justify the waste when money is free. Not when you need them to do something.
Every good 'techie' around me has it good.
What is devalued is traditional labor-based ideology. The blog references Marx's theory of alienation. The Marxist labor theory of value, that the value of anything is determined by the labor that creates it, gives the working class moral authority over the owner class. When labor is reduced, the basis of socialist revolution is devalued, as the working class no longer can claim superior contributions to value creation.
If one doesn't subscribe to traditional Marxist ideology, this argument won't land the same way, but elements of these ideas have made their way into popular ideas of value.
Marx addressed exactly this sort of improvement in productivity from automation. He was writing with full hindsight on the industrial revolution after all. I hope coding LLMs give professional computer touchers a wakeup call to develop some sorely lacking class consciousness.
>the capitalist who applies the improved method of production, appropriates to surplus-labour a greater portion of the working day, than the other capitalists in the same trade […] The law of the determination of value by labour-time, a law which brings under its sway the individual capitalist who applies the new method of production, by compelling him to sell his goods under their social value, this same law, acting as a coercive law of competition, forces his competitors to adopt the new method.
From Capital, Vol 1 Chapter 12 if you're curious.
Same here, my rate keeps going up ...
I do see a shortage of entry-level positions (number of them, not salaries).
Going through the author's bio ... it seems like he's just not able to provide value in any of the high-paying positions that exist right now; not that he should be, he's just not aligned with it and that's ok.
I can see why he's desperate.
> What are they talking about? What is this "devaluation"?
I'm not paid enough to clean up shit after an AI. Behind an intern or junior? Sure, I enjoy that because I can tell them how shit works, where they went off the rails, and I can be sure they will not repeat that mistake and be better programmers afterwards.
But an AI? Oh good luck with that and good luck dealing with the "updates" that get forced upon you. Fuck all of that, I'm out.
> I'm not paid enough to clean up shit after an AI.
I enjoy making things work better. I'm lucky in that, because there's always been more brownfield work than greenfield work. I think of it as being an editor, not an author.
Hacking into vibe code with a machete is kinda fun.
Your complete lack of empathy is going to be your undoing. Might want to check in on that.
> What is this "devaluation"?
The part where writing performant, readable, resilient, extensible, and pleasing code used to actually be a valued part of the craft? I feel like I'm being gaslit after decades of being lectured on how to be a better software developer, only to be told that my craft is pointless, the only thing of value is the output, and that I should be happy spending my day babysitting agents and reviewing AI code slop.
You clearly haven't tried looking for a job in the last two years have you
Are we living on the same planet?
Considering we surely have wildly different experiences and contexts, you could almost say we live on the same planet, but it looks very different to each and one of us :)
No... :-)
I'd love to live on the same planet you do.
Gained life experience is always possible, regardless of your age :) Give other professions a try, and see the difference for yourself.
> What exactly is being de-valuated We are being second guessed by any sub organism with little brain, but opposable thumbs, at a rate much greater than before, because now the sub organism can simply ask the LLM to type their arguments for them. How many times have you received screenshots of an LLM output yesanding whatever bizarre request you already tried to explain and dismiss as not possible/feasible/unnecessary? the sub organism has delegated their thoughts to the LLM and i always find that extremely infuriating, because all i want to do is to shake that organism and cry "why don't you get it? think! THINK! THINK FOR YOURSELF FOR JUST A SECOND"
Also, i enjoy programming. Even typing boring shit as boilerplate because i keep my brain engaged. As much as i type i keep thinking, is this really necessary? and maybe figure out something leaner. LLMs want to deprive me of enjoyment of my work (research, learn) and of my brain. No thanks, no LLM for me. And i don't care whatever garbage it outputs, i'd much prefere if the garbage was your output, or you are useless.
The only use i have for LLMs and diffusion models is to entertain myself with stupid bullshit i come up with that i would find funny. I massively enjoy projects such as https://dumbassideas.com/
Note: Not taking into account the "classic" ML uses, my rant only going to LLMs and the LLM craze. A tool made by grifters, for grifters.
I get that some people want to be intellectually "pure". Artisans crafting high-quality software, made with love, and all that stuff.
But one emerging reality for everyone should be that businesses are swallowing the AI-hype raw. You really need a competent and understanding boss to not be labeled a luddite, because let's be real - LLMs have made everyone more "productive" on paper. Non-coders are churning out small apps in record pace, juniors are looking like savants with the amount of code and tasks they finish, where probably 90% of the code is done by Claude or whatever.
If your org is blindly data/metric driven, it is probably just a mater of time until managers start asking why everyone else is producing so much, while you're slow?
> Non-coders are churning out small apps in record pace, juniors are looking like savants with the amount of code and tasks they finish, where probably 90% of the code is done by Claude or whatever.
Honestly I think you’re swallowing some of the hype here.
I think the biggest advantages of LLMs go to the experienced coders who know how to leverage them in their workflows. That may not even include having the LLM write the code directly.
The non-coders producing apps meme is all over social media, but the real world results aren’t there. All over Twitter there were “build in public” indie non-tech developers using LLMs to write their apps and the hype didn’t match reality. Some people could get minimal apps out the door that kind of talked to a back end, but even those people were running into issues not breaking everything on update or managing software lifecycle.
The top complaint in all of the social circles I have about LLMs is with juniors submitting LLM junk PRs and then blaming the LLM. It’s just not true that juniors are expertly solving tasks with LLMs faster than seniors.
I think LLMs are helpful and anyone senior isn’t learning how to use them to their advantage (which doesn’t mean telling the LLM what to write and hoping for the best) is missing out. I think people swallowing the hype about non-tech people and juniors doing senior work is getting misled about the actual ways to use these tools effectively.
I feel sorry for juniors because they have even less incentive to troubleshoot or learn languages. At the same time, the sheer size of APIs make me relieved that I will never have to remember another command, DSL, or argument list again. Ruby has hundreds of methods, Rails hundreds more, and they constantly change. I'd rather write a prompt saying what I mean than figure out obscure incantations, especially with infrequently used tools like ffmpeg.
Advent of Code (https://adventofcode.com/2025/about) says:
> Should I use AI to solve Advent of Code puzzles? No. If you send a friend to the gym on your behalf, would you expect to get stronger? Advent of Code puzzles are designed to be interesting for humans to solve - no consideration is made for whether AI can or cannot solve a puzzle. If you want practice prompting an AI, there are almost certainly better exercises elsewhere designed with that in mind.
I would advocate for Advent of Code in every workplace, but finding interest is rare. No matter how much craft is emphasized, ultimately businesses are concerned with solving problems. Even personally, sometimes I want to solve a problem so I can move on to something more interesting.
It's not just "juniors". It's people who should know better turning out LLM junk outside their actual experience areas because "They are experienced enough to use LLMs".
There are just some things that need lots of extra scrutiny in a system, and the experienced ones know where that is. An LLM rarely seems to, especially for systems of anywhere near real world production size.
I’m a garage coder and the kind of engineer that has a license. I had the capacity with my kids to make a usable application for my work about once every 6 months. Now it’s once a weekend or so. You don’t have to believe it.
I didnt read the parent comment as celebrating this state. More like they were decrying it, and the blindness of people who just run on metrics.
In my experience I saw the complete opposite of "juniors looking like savants", there are a few pieces of code made by some juniors and som mid engineers in my company(one also involving a senior) that were clearly made with AI, and they are such a mess that they haven't been touched ever since because it's just impossible to understand, and this wasn't caught in the PR because the size of it was so large that people didn't actually bother reading it.
I did see a few good senior engineers using AI and producing good code, but for junior and mid engineers I have witnessed the complete opposite.
This just happened to me this week.
I work on the platform everyone builds on top of. A change here can subtlety break any feature, no matter how distant.
AI just can't cope with this yet. So my team has been told that we are too slow.
Meanwhile, earlier this week we halted a roll out because if a bug introduced by AI, as it worked around a privacy feature by just allow listing the behavior it wanted, instead of changing the code to address to policy. It wasn't caught in review because the file that was changed didn't require my teams review (because we ship more slowly, they removed us as code owners for many files recently).
> It wasn't caught in review because the file that was changed didn't require my teams review (because we ship more slowly, they removed us as code owners for many files recently).
I've lost your fight, but won mine before, you can sell this as risk reduction to your boss. I've never seen eng win this argument on quality grounds. Quality is rarely something that can be understood by company leadership. But having a risk reduction team that moves a bit slower and protects the company from extreme exposures like this, is much harder to cut from the process. "Imagine the law suits missing something like this would cause." and "we don't move slower, we do more than the other teams, the code is more visible, but the elimination of mistakes that will be very expensive legally and reputationally is what we're the best at"
As it was foretold since the beginning, IA use is breaking security wantonly.
Fuck it - let them reap the consequences. Ideally wait until there's something particularly destructive, then do the post-mortem as publicly as possible - call out the structures and practises that enabled that commit to get into production.
Ouch, so painful to read.
I think LLMs are net helpful if used well, but there's also a big problem with them in workplaces that needs to be called out.
It's really easy to use LLMs to shift work onto other people. If all your coworkers use LLMs and you don't you're gonna get eaten alive. LLMs are unreasonably effective at generating large volumes of stuff that resembles diligent work on the surface.
The other thing is, tools change trade-offs. If you're in a team that's decided to lean into static analysis, and you don't use type checking in your editor, you're getting all the costs and less of the benefits. Or if you're in a team that's decided to go dynamic, writing good types for just your module is mostly a waste of time.
LLMs are like this too. If you're using a very different workflow from everyone else on your team, you're going to end up constantly arguing for different trade-offs, and ultimately you're going to cause a bunch of pointless friction. If you don't want to work the same way as the rest of the team just join a different team, it's really better for everyone.
> It's really easy to use LLMs to shift work onto other people.
This is so incredibly true.
I'm interested in this. Code review, most egregiously where the "author" neglected to review the LLM output themselves, seems like a clear instance. What are some other examples?
Something that should go in a "survival guide" for devs that still prefer to code themselves.
Well, if you take "review the LLM output" in its most general way, I guess you can class everything under that. But I think it's worth talking about the problem in a bit more detail than that, because someone can easily say "Oh I definitely review the LLM output!" and still be pushing work onto other people.
The fact is that no matter whether we review the LLM output or not, no matter whether we write the code entirely by hand or not, there's always going to be the possibility of errors. So it's not some bright-line thing. If you're relatively lazier and relatively less thoughtful in the way you work, you'll make more errors and more significant errors. You'll look like you're doing the work, but your teammates have to do more to make up for the problems.
Having to work around problems your coworkers introduced is nothing new, but LLMs make it worse in a few ways I think. One is just, that old joke about there being four kinds of people: lazy and stupid, industrious and stupid, smart and lazy, and industrious and smart. It's always been the "industrious and stupid" people that kill you, so LLMs are an obvious problem there.
Second there's what I call the six-fingered hands thing. LLMs make mistakes a human wouldn't, which means the problem won't be in your hypothesis-space when you're debugging.
Third, it's very useful to have unfinished work look unfinished. It lets you know what to expect. If there's voluminous docs and tests and the functionality either doesn't work at all or doesn't even make sense when you think about it, that's going to make you waste time.
Finally, at the most basic level, we expect there to be some sort of plan behind our coworkers' work. We expect that someone's thought about this and that the stuff they're doing is fundamentally going to be responsive to the requirements. If someone's phoning it in with an LLM, problems can stay hidden for a long time.
I'm currently really feeling the pain the side bar stuff. The non "application" code/config.
Scripts, cicd, documentation etc. The stuff that gets a PR but doesn't REALLY get the same level of review because its not really production code. But when you need to go tweak the thing it does a few months or years later... its so dense and undecipherable you spend more time figuring out how the llm wrote the damn thing than doing it all over yourself.
Should you probably review it a little harsher in the moment? sure, but thats not always feasible with things that are at the time "not important" and only later become the root of other things.
I have lost several hours this week to several such occurences.
AI-generated docs, charts, READMEs, TOE diagrams. My company’s Confluence is flooded with half assed documentation from several different dev teams that either loosely matches or doesn’t match at all the behavior or configuration of their apps.
For example they ask to have networking configs put into place and point us at these docs that are not accurate and then they expect that we’ll troubleshoot and figure out what exactly they need. It’s a complete waste of time and insulting to shove off that work onto another team because they couldn’t be fucked to read their own code and write down their requirements accurately.
If I were a CTO or VP these days I think I'd push for a blanket ban on committing docs/readmes/diagrams etc along with the initial work. Teams can push stuff to a `slop/` folder but don't call it docs.
If you push all that stuff at the same time, it's really easy to get away with this soft lie, "job done". They can claim they thought it was okay and it was just an honest mistake there were problems. They can lie about how much work they really did.
READMEs or diagrams that are plans for the functionality are fine. Docs that describe finished functionality are fine. Slop that dresses up unfinished work as finished work just fucks everything up, and the incentives are misaligned so everyone's doing this.
Bugs. In our project developers are now making x4 amount of bugs comparing to 2024. Same developers, but now with Cursor.
Basically they are pushing their work to the test engineers or whoever is doing testing (might be end users).
> It's really easy to use LLMs to shift work onto other people.
This is my biggest gripe with LLM use in practice.
The era of software mass production has begun. With many "devs" just being workers in a production line, pushing buttons, repeating the same task over and over.
The produced products however do not compare in quality to other industry's mass production lines. I wonder how long it takes until this comes all crashing down. Software mostly already is not a high quality product.. with Claude & co it just gets worse.
edit: sentence fixed.
I think you'll be waiting a while for the "crashing down". I was a kid when manufacturing went off shore and mass production went into overdrive. I remember my parents complaining about how low quality a lot of mass produced things were. Yet for decades most of what we buy is mass produced, comparatively low quality goods. We got used to it, the benefits outweighed the negatives. What we thought mattered didn't in the face of a lot of previously unaffordable goods now broadly available and affordable.
You can still buy high goods made with care when it matters to you, but that's the exception. It will be the same with software. A lot of what we use will be mass produced with AI, and even produced in realtime on the fly (in 5 years maybe?). There will be some things where we'll pay a premium for software crafted with care, but for most it won't matter because of the benefits of rapidly produced software.
We've got a glimpse of this with things like Claude Artifacts. I now have a piece of software quite unique to my needs that simply wouldn't have existed otherwise. I don't care that it's one big js file. It works and it's what I need and I got it pretty much for free. The capability of things like Artifacts will continue to grow and we'll care less and less that it wasn't human produced with care.
While a general "crashing down" probably will not happen I could imagine some differences to other mass produced goods.
Most of our private data lives in clouds now and there are already regular security nightmares of stolen passwords, photos etc. I fear that these incidents will accumulate with more and more AI generated code that is most likely not reviewed or reviewed by another AI.
Also regardless of AI I am more and more skipping cheap products in general and instead buying higher quality things. This way I buy less but what I buy doesn't (hopefully) break after a few years (or months) of use.
I see the same for software. Already before AI we were flooded with trash. I bet we could all delete at least half of the apps on our phones and nothing would be worse than before.
I am not convinced by the rosy future of instant AI-generated software but future will reveal what is to come.
I think one major lesson of the history of the internet is that very few people actually care about privacy in a holistic, structural way. People do not want their nudes, browsing history and STD results to be seen by their boss, but that desire for privacy does not translate to guarding their information from Google, their boss, or the government. And frankly this is actually quite rational overall, because Google is in fact very unlikely to leak this information to your boss, and if they did it would more likely to result in a legal payday rather than any direct social cost.
Hacker news obviously suffers from severe selection bias in this regard, but for the general public I doubt even repeated security breaches of vibe coded apps will move the needle much on the perception of LLM coded apps, which means that they will still sell, which means that it doesn't matter. I doubt even most people will pick up the connection. And frankly, most security breaches have no major consequences anyway, in the grand scheme of things. Perhaps the public conscioussness will harden a bit when it comes to uploading nudes to "CheckYourBodyFat", but the truly disastrous stuff like bank access is mostly behind 2FA layers already.
Poor quality is not synonymous with mass production. It's just cheap crap made with little care.
> The era of software mass production has begun.
We've been in that era for at least two decades now. We just only now invented the steam engine.
> I wonder how long it takes until this comes all crashing down.
At least one such artifact of craft and beauty already literally crashed two airplanes. Bad engineering is possible with and without LLMs.
There's a buge difference between possible and likely.
Maybe I'm pessimistic but I at least feel like there's a world of difference between a practice that encourages bugs and one that allows them through when there is negligence. The accountability problem needs to be addressed before we say it's like self driving cars outperforming humans. On a errors per line basis, I don't think LLMs are on par with humans yet
Knowing your system components’ various error rates and compensating for them has always been the job. This includes both the software itself and the engineers working on it.
The only difference is that there is now a new high-throughput, high-error (at least for now) component editing the software.
what is (and I'm being generous with the base here) 0.95^10? A 10-step process with a 95% success rate on each.
Yeah it’s interesting to see if blaming LLMs becomes as acceptable as “caused by a technical fault” to deflect responsibility from what is a programmer’s output.
Perhaps that’s what lead to a decline in accountability and quality.
The decline in accountability has been in progress for decades, so LLMs can obviously not have caused it.
They might of course accelerate it if used unwisely, but the solution to that is arguably to use them wisely, not to completely shun them because "think of the craft and the jobs".
And yes, in some contexts, using them wisely might well mean not using them at all. I'd just be surprised if that were a reasonable default position in many domains in 5-10 years.
> Bad engineering is possible with and without LLMs
That's obvious. It's a matter of which makes it more likely
> Bad engineering is possible with and without LLMs.
Is Good Engineering possible with LLMs? I remain skeptical.
Why didn't programmers think of stepping down from their ivory towers and start making small apps which solve small problems? That people and businesses are very happy to pay for?
But no! Programmers seem to only like working on giant scale projects, which only are of interest to huge enterprises, governments, or the open source quagmire of virtualization within virtualization within virtualization.
There's exactly one good invoicing app I've found which is good for freelancers and small businesses. While the amount of potential customers are in the tens of millions. Why aren't there at least 10 good competitors?
My impression is that programmers consider it to be below their dignity to work on simple software which solves real problems and are great for their niche. Instead it has to be big and complicated, enterprise-scale. And if they can't get a job doing that, they will pretend to have a job doing that by spending their time making open source software for enterprise-scale problems.
Instead of earning a very good living by making boutique software for paying users.
I don't think programmers are the issue here. What you describe sounds to me more like the typical product management in a company. Stuff features into the thing until it bursts of bugs and is barely maintainable.
I would love to do something like what you describe. Build a simple but solid and very specialized solution. However I am not sure there is demand or if I have the right ideas for what to do.
You mention invoicing and I think: there must be hundreds of apps for what you describe but maybe I am wrong. What is the one good app you mention? I am curious now :)
There's a whole bunch of apps for invoicing, but if you try them, you'll see that they are excessively complicated. Probably because they want to cover all bases of all use cases. Meaning they aren't great for any use case. Like you say.
The invoicing app in particular I was referring to is Cakedesk. Made by a solo developer who sells it for a fair price. Easy to use and has all the necessary functions. Probably the name and the icon is holding him back, though. As far as I understand, the app is mostly a database and an Electron/Chromium front-end, all local on your computer. Probably very simple and uninteresting for a programmer, but extremely interesting for customers who have a problem to solve.
One person's "excessively complicated" is another person's "lackluster and useless" because it doesn't have enough features.
Yes, enterprise needs more complicated setups. But why are programmers only interested in enterprise scale stuff?
I'm curious: why don't YOU create this app? 95% of a software business isn't the programming, it's the requirements gathering and marketing and all that other stuff.
Is it beneath YOUR dignity to create this? What an untapped market! You could be king!
Also it's absurd to an incredible degree to believe that any significant portion of programmers, left to their own devices, are eager to make "big, complicated, enterprise-scale" software.
What makes you think that I know how to program? It's not beyond my dignity, it's beyond my skills. The only thing I can do is support boutique programmers with my money as a consumer, and I'm very happy to do that.
But yes, sometimes I have to AI code small things, because there's no other solution.
Solving these problems requires going outside and talking to people to find out what their problems are. Most programmers aren't willing to do that.
Many people actually are becoming more productive. I know you're using quotes around productive to insulate yourself from the indignity of admitting that AI actually is useful in specific domains.
> Many people actually are becoming more productive. I know you're using quotes around productive to insulate yourself from the indignity of admitting that AI actually is useful in specific domains.
Equally, my read is you're fixating on the syntax used in their comment to insulate yourself from actually engaging with their idea and point. You refuse to try to understand the parts of the system that negate the surface level popularity, eer productivity gains.
People who enjoy the productivity boost of AI are right, you can absolutely, without question build a house faster with AI.
The people who claim there's not really any reasonable productivity gains from AI are also right, because using AI to build a multistory anything, requires you to waste all that time starting with a house, to then raze it to the ground and rebuild a usable foundation.
yes, "but its useful in specific domains" is technically correct statement, but whataboutism is rarely a useful conversational response.
If AI is making you more productive, then I doubt you were very productive pre-AI
I had a software engineering job before AI. I still do, but I can write much more code. I avoid AI in more mission-critical domains and areas where it is more important that I understand the details intimately, but a lot of coding is repetitive busywork, looking for "needles in haystacks", porting libraries, etc. which AI makes 10x easier.
The denial/cope here is insane
My experience with using AI is that it's a glorified stack overflow copy paster. It'll even glue a handful of SO answers together!
But then you run into classic SO problems... Like the first solution doesn't work. Nor the second one. And the third one introduces a completely different coding style. The last one is implemented in pure sh/GNU utils.
One thing it is absolutely amazing at: digesting things that have bad documentation, like openssl C api. Even then you still gotta be on the watch for hallucinations, and audit it very thoroughly.
> If your org is blindly data/metric driven, it is probably just a mater of time until managers start asking why everyone else is producing so much, while you're slow?
It’s a reasonable question, and my response is that I’ve encountered multiple specific examples now of a project being delayed a week because some junior tried to “save” a day by having AI write bad code.
Good managers generally understand the concept of a misleading productivity metric that fails to reflect real value. There’s a reason, after all, why most of us don’t get promoted based on lines of code delivered. I understand why people who don’t trust their managers to get this would round it off to artisanship for its own sake.
> If your org is blindly data/metric driven
Are there for profit companies (not non profits, research institutes etc…) that are not metric driven?
Most early stage startups I've been in weren't metric driven. It's impossible when everyone is just working as hard as they can to get it built, to suddenly slow down and start measuring everyone's output.
It's not until later. When it's gotten to a larger size, do you have the resources to be metric driven.
Every early stage startup is absolutely metric driven: keeping the business alive based on Runway
“Blindly” is the operative word here.
That’s almost an oxymoron
You can’t be data driven and also blind to the data
You might be optimizing for the wrong thing, but it’s not blind, it’s just a bad “model”
The blindness is to reality and nuance.
If you stare at your GPS and don’t pay attention to what’s in the real world outside your windshield until you careen off a cliff that would be “blindly” following your GPS. You had data but you didn’t sufficiently hedge against your data being incomplete.
Likewise sticking dogmatically to your metrics while ignoring nuance or the human factor is blindly following your metrics.
> You can’t be data driven and also blind to the data
"Tickets closed" is an amazing data driven & blind to the data metric. You can have someone closing an insane number of tickets, looking amazing on the KPIs, but no one's measuring "Tickets reopened" or "Tickets created for the same issue a day later".
It's really easy to set up awful KPIs and lose all sight of what is actually happening while having data to show your bosses
That’s a good example for sure - I’d still argue it’s a problem of using the wrong economic model
Success = tickets closed, is wrong, but data driven
> You really need a competent and understanding boss to not be labeled a luddite, because let's be real - LLMs have made everyone more "productive" on paper.
I am actually less productive when using LLMs because now I have to read another entities code and be able to judge wether this fits my current business problem or not. If it doesn't, yay refactoring prompts instead of tackling the actual problem. Also I can write code for free, LLMs coding assistants aren't free. I can fit business problems amd edge cases into my brain given some time, a LLM is unaware about edge cases, legal requirements, decoupled dependencies, potential refactors or the occasional call of boss asking for something to be sneaked into the code right now. If my job forced me to use these tools, congrats, I'll update my address to some hut in a forrest eating cold canned ravioli for the rest of my life because I for sure dont wanna work in a world where I am forced to use dystopian big tech machines I cant look into.
> I am actually less productive when using LLMs because now I have to read another entities code and be able to judge wether this fits my current business problem or not.
You don’t have to let the LLM write code for you. They’re very useful as a smart search engine for your code base, a smart refactoring tool, a suggestion generator, and many other ways.
I rarely have LLMs write code for me from scratch that I have to review, but I do give them specific instructions to do what I want to the codebase. They can do it much faster than I can search around the codebase and type out myself.
There are so many ways to make LLMs useful without having them do all the work while you sit back and judge. I think some people are determined to get no value out of the LLM because they feel compelled to be anti-hype, so they’re missing out on all the different little ways they can be used to help. Even just using it as a smarter search engine (in the modes where they can search and find the right sections of right articles or even GitHub issues for you) has been very helpful. But you have to actually learn how to use them.
> If my job forced me to use these tools, congrats, I'll update my address to some hut in a forrest eating cold canned ravioli for the rest of my life because I for sure dont wanna work in a world where I am forced to use dystopian big tech machines I cant look into.
Okay, good luck with your hut in the forest. The rest of us will move on using these tools how we see fit, which for many of us doesn’t actually include this idea where the LLM is the author of the code and you just ask nicely and reject edits until it produces the exact code you want. The tools are useful in many ways and you don’t have to stop writing your own code. In fact, anyone who believes they can have the LLM do all the coding is in for a bad surprise when they realize that specific hype is a lie.
Is that why open source progress has generally slowed down since 2023? We keep hearing these promises, and reality shows the opposite.
> Is that why open source progress has generally slowed down since 2023?
Citation needed for a clam of that magnitude.
> But you have to actually learn how to use them.
This probably is the issue for me, I am simply not willing to do so. To me the whole AI thing is extremely dystopian so even on a professional level I feel repulsed by it.
We had an AWS and a Cloudflare outage recently, which has shown that maybe it isn't a great idea to rely on a few companies for a single _thing_. Integrating LLMs and using all these tools is just another bridge people depend on at some point.
I want to write software that works, preferably even offline. I want tools that do not spy on me (referring to that new Google editor, forgot the name). Call me once these tools work offline on my 8GB RAM laptop with a crusty CPU and I might put in the effort to learn them.
> We had an AWS and a Cloudflare outage recently, which has shown that maybe it isn't a great idea to rely on a few companies for a single _thing_.
I share that concern about massive, unforced centralization. If there were any evidence for the hypothesis that LLM inference would always remain viable in datacenters only, I'd be extremely concerned about their use too.
But from all I've seen, it seems overwhelmingly likely that we'll have very powerful ones in our phones in at most a few years, and definitely in midrange laptops and above.
I'd be all for it if its truly disconnected from big tech entities.
> This probably is the issue for me, I am simply not willing to do so.
Thanks for being honest at least. So many HN arguments start as a desire to hate something and then try to bridge that into something that feels like a takedown of the merits of that thing. I think a lot of the HN LLM hate comes from people who simply want to hate LLMs.
> We had an AWS and a Cloudflare outage recently, which has shown that maybe it isn't a great idea to rely on a few companies for a single _thing_. Integrating LLMs and using all these tools is just another bridge people depend on at some point.
For an experienced dev using LLMs as another tool, an LLM outage isn’t a problem. You just continue coding.
It’s on the level of Google going down so you have to use another search engine or try to remember the URL for something yourself.
The main LLM players are also easy to switch between. I jump between Anthropic, Google, and OpenAI almost month to month to try things out. I could have subscriptions to all 3 at the same time and it would still be cheap.
I think this point is overblown. It’s not a true team dependency like when GitHub stop working a few days back.
> I find it particularly disillusioning to realize how deep the LLM brainworm is able to eat itself even into progressive hacker circles.
Anything worth reading beyond this transparent and hopefully unsuccessful appeal to tribalism?
Hackers have always tried out new technologies to see how they work – or break – so why would LLMs be any different?
> the devaluation of our craft, in a way and rate we never anticipated possible. A fate that designers, writers, translators, tailors or book-binders lived through before us
What is it with this perceived right to fulfilling, but also highly paid, employment in software engineering?
Nobody is stopping anyone from doing things by hand that machines can do at 10 times the quality and 100 times the speed.
Some people will even pay for it, but not many. Much will be relegated to unpaid pastime activities, and the associated craftspeople will move on to other activities to pay the bills (unless we achieve post-scarcity first). That's just human progress in a nutshell.
If the underlying problem is that many societies define a person's worth via their employability, that seems like a problem best fixed by restructuring said societies, not by artificially blocking technological progress. "progressive hackers"...
> Hackers have always tried out new technologies to see how they work – or break – so why would LLMs be any different?
Who says we haven't tried it out?
> I personally don’t touch LLMs with a stick. I don’t let them near my brain. Many of my friends share that sentiment.
FTA.
I know tons of people where "tried it out" means they've seen Google's abysmal search summary feature, or merely seen the memes and read news articles about how it's wrong sometimes, and haven't explored any further.
Personally I'm watching people I used to respect start to rely on AI more and more and their skills and knowledge are declining rapidly while their reliance is growing, so I'm really not interested in following that path
They seem just as enthusiastic as many of the pro AI voices here on HN, while the quality of their work declines. It makes me extremely skeptical of anyone who is enthusiastic about AI. It seems to me like it's a delusion machine
How do you know their skills and knowledge are declining rapidly? Does using an LLM cause one to suddenly forget everything?
I could definitely see that happen. Besides people simply getting out of practice (or never getting any to being with), automation complacency is a real problem.
We'll need to be even more intentional about when to use LLMs than we should arguably already be about any type of automation.
> How do you know their skills and knowledge are declining rapidly
I was describing anecdotally what I have witnessed. Devs that I used to have a reasonably high opinion of struggling to explain or understand the PRs they are making
> Does using an LLM cause one to suddenly forget everything?
I think we can probably agree that when you stop using skills, those skills will atrophy to some extent
Can we also agree that using LLMs to generate code is different from the skill of writing code?
If so, it stands to reason that the more people rely on LLMs to generate things for them, the more their skills of creating those things by hand will atrophy
I don't think it should be very controversial to think that LLMs are making people worse at things
It is also entirely possible that people are becoming better (or faster, anyways. Extremely debatable if faster = better imo) at building software using LLMs while also becoming worse at actually writing code
Seems like various hackers came to various different conclusions from trying them out, then. Why feign surprise about this?
Why not?
I was surprised how hard many here fell for the NFT thing, too.
Please, not the old "AI is the new crypto" trope.
Various people have been wrong on various predictions in the past, and it seems to me that any implied strong overlap is anecdotal at best and wishful (why?) thinking at worst.
The only really embarrassing behavior is never updating your priors when your predictions are wrong. Also, if you're always right about all your prognoses, you should probably also not be in the HN comments but on a prediction market, on-chain or traditional :)
That’s an old trope? It really is the first time I’m seeing it.
It's a whole thing, yes.
Just because
- crypto was massively hyped and then crashed (although it's more than recovered),
- many grifters chase hypes, and
- there's undeniably an AI hype going on at the moment
doesn't necessarily imply that AI is full of grifters or confirms any adjacent theories (as in, could be true, could be false, but the argument does not hold).
I'm sorry, but the idiocy that was crypto-hype can't be dismissed this easily. It's hard to make a prediction on AI because things are moving so fast and the technology is actually useful, so I wouldn't fault anyone for being wrong in retrospect. But when it comes to NFTs: if you bought into that stuff you are either a sucker or a scammer and in both cases your future opinions can be safely discarded.
> the idiocy that was crypto-hype can't be dismissed this easily.
Maybe so, but would it be possible to not dismiss it elsewhere? I just don't see the causal relation between AI and crypto, other than that both might be completely overhyped, world-changing, or boringly correctly estimated in their respective impact.
> I was surprised how hard many here fell for the NFT thing, too.
Did they? I'm not saying you're wrong but I'd like to see some evidence, because NFTs were always obvious nonsense. I'm sure there were some grifters posting here, and others playing devil's advocate or refuting anti-NFT arguments that somehow went too far, but I'd be genuinely surprised if the general sentiment was not overwhelmingly negative/dismissive.
There seems to be a surprising theme that programmers wages are some kind of sunk cost and that their work was free of defects before.
I can get bad code written for the cost of electricity now
> AI systems exist to reinforce and strengthen existing structures of power and violence.
Exactly. You can see that with the proliferation of chickenized reverse centaurs[1] in all kinds of jobs. Getting rid of the free-willed human in the loop is the aim now that bosses/stakeholders have seen the light.
[1] https://pluralistic.net/2022/04/17/revenge-of-the-chickenize...
If you are a software engineere, you can leverage AI a lot better to write code than anyone else.
The complexity of good code, is still complicated.
which means 1. if software development is really solved, everyone else also gets a huge problem (ceo, cto, accountants, designers, etc. etc.) so we are in the back of the ai doomsday line.
And 2. it allows YOU to leverage AI a lot better which can enable you to create your own product.
In my startup, we leverage AI and we are not worried that another company just does the same thing because even if they do, we know how to write good code and architecture and we are also using AI. So we will always be ahead.
Sounds like Manna control system: https://marshallbrain.com/manna
How is that different from making manual computation obsolete with the help of excel?
Now apply that thinking to computers. Or levers.
I've seen the argument that computers let us prop up and even scale governmental systems that would have long since collapsed under their own weight if they’d remained manual more than once. I'm not sure I buy it, but computation undoubtedly shapes society.
The author does seem quite keen on computers, but they've been "getting rid of the free-willed human in the loop" for decades. I think there might be some unexamined bias here.
I'm not even saying the core argument's wrong, exactly - clearly, tools build systems ("...and systems kill" - Crass). I guess I'm saying tools are value neutral. Guns don't kill people. So this argument against LLMs is an argument against all tools, unless you can explain how LLMs are a unique category of tool?
(Aside: calling out the lever sounds silly, but I think it's actually a great example. You can't do monumental architecture without levers, and the point in history where we start doing that is also the point where serious surplus extraction kicks in. I don't think that's coincidence).
Tools are not value neutral in any way.
In my third world country, motorbikes, scooters, etc have exploded in popularity and use in the past decade. Many people riding these things have made the roads much more dangerous for all, but particularly for them. They keep dying by the hundreds per month, not only just due to the fact that they choose to ride them at all, but how they ride them: on busy high speed highways, weaving between lanes all the time, swerving in front of speeding cars, with barely any protective equipment whatsoever. A car crash is frequently very survivable; motorcycle crash, not so much. Even if you survive the initial collision, the probability of another vehicle running you over is very high on a busy highway.
On would think, given the clear evidence for how dangerous these things are, why do people (1) ride them at all on the highway, and (2) in such a dangerous manner? One might excuse (1) by recognizing that many are poor and can't buy a car, and the motorbikes represent economic possibility: for use in courier business, of being able to work much further from home, etc.
But here is the thing about (2), A motorbike wants to be ridden that way. No matter how well the rider recognizes the danger, there is only so much time can pass before the sheer expediency of riding that way overrides any sense of due caution. Where it would be safer to stop or keep to a fixed lane without any sudden movements, the rider thinks of the inconvenience of stopping, does a quick mental comparison it to the (in their minds) the minuscule additional risk, and carries on. Stopping or keeping to a proper lane in a car require far less discipline than doing that on a motorbike.
So this is what people mean when they say tech is not value neutral. The tech can theoretically be used in many ways. But some forms of use are so aligned with the form of the tech that in practice it shapes behavior.
> A motorbike wants to be ridden that way
That's a lovely example. But is the dangerous thing the bike, or the infrastructure, or the system that means you're late for work?
I completely get what you're saying. I was thinking of tools in the narrowest possible way - of the tool in isolation (I could use this gun as a doorstop). You're thinking of the tool's interface with its environment (in the real world nobody uses guns as doorstops). I can't deny that's the more useful way to think about tools ("computation undoubtedly shapes society").
it's the motorbike.
there is no safe way to ride a motorbike. even with save infrastructure, all the amount of protection that you can wear, no stress riding away from traffic, a freak accident can still kill you. there is no adequate protection for riding at that speed.
But this is just your own personal value judgment, of which clearly you don't like motorcycles. Not everybody shares the same opinion. I.e. there are plenty of people who ride motorcycles safely and legally, you just never hear about them because they never have any incidents. You have just instilled your own value into the tool, one that is not universally shared, the tool itself is still neutral and can even be seen as a positive by somebody else.
> The author does seem quite keen on computers, but they've been "getting rid of the free-willed human in the loop" for decades. I think there might be some unexamined bias here.
Certainly it's biased. I'm not the author, but to me there's a huge difference between computer/software as a tool, designed and planned, with known deterministic behavior/functionality, then put in the hands of humans, vs automating agency. The former I see as a pretty straightforward expansion of humanity's long-standing relationship with tools, from simple sticks to hand axes to chainsaws. The sort of automation AI-hype seems focused on doesn't have a great parallel in history. We're talking about building a statistical system to replace the human wielding the tool, mostly so that companies don't have to worry about hiring employees. Even if the machine does a terrible job and most of humanity, former workers and current users, all suffer, the bet is that it will be worth the cost savings.
ML is very cool technology, and clearly one of the major frontiers of human progress. At this stage though, I wish the effort on the packaging side was being spent on wrapping the technology in the form of reliable capabilities for humans to call on. Stuff like OCR at the OS level or "separate tracks" buttons in audio editors. The market has decided instead that the majority of our collective effort should go towards automated liability-sinks and replacing jobs with automation that doesn't work reliably.
And the end state doesn't even make sense. If all this capital investment does achieve breakthroughs and creat true AGI, do investors really think they'll see returns? They'll have destroyed the entire concept of an economy. The only way to leverage power at that point would be to try to exercise control over a robot army or something similarly sci-fi and ridiculous.
"Automating agency" it's such a good way to describe what's happening. In the context of your last paragraph, if they succeed in creating AGI, they won't be able to exercise control over a robot army, because the robot army will have as much agency as humans do. So they will have created the very situation they currently find themselves in. Sans an economy.
It’s a good thing that there’s centuries of philosophy on that subject and the general consensus is that no, tools are not “neutral” and do shape the systems they interact with, sometimes against the will of those wielding these tools.
See the nuclear bomb for an example.
I'm actually thinking of Marshall McLuhan. Maybe you're right, and tools aren't neutral. Does this mean that computation necessitates inequality? That's an uncomfortable conclusion for people who identify as hackers.
Lmao Cory Doctorow. Desperately trying to coin another catchphrase again.
I am surprised (and also kind of not) to see this kind of tech hate on HN of all places.
Would you prefer we heat our homes by burning wood, carry water from the nearby spring, and ride horses to visit relatives?
Progress is progress, and has always changed things. Its funny that apparently, "progressive" left-leaning people are actually so conservative at the core.
So far, in my book, the advancements in the last 100 or even more years have mostly always brought us things I wouldn't want to miss these days. But maybe some people would be happier to go back to the dark ages...
> Progress is progress, and has always changed things. Its funny that apparently, "progressive" left-leaning people are actually so conservative at the core.
I am surprised (and also kind of not) to see this lack of critical reflection on HN of all places.
Saying "progress is progress" serves nobody, except those who drive "progress" in directions that benefits them. All you do by saying "has always changed things" is taking "change" at face value, assuming it's something completely out of your control, and to be accepted without any questioning it's source, it's ways or its effects.
> So far, in my book, the advancements in the last 100 or even more years have mostly always brought us things I wouldn't want to miss these days. But maybe some people would be happier to go back to the dark ages...
Amazing depiction of extremes as the only possible outcomes. Either take everything that is thrown at us, or go back into a supposed "dark age" (which, BTW, is nowadays understood to not have been that "dark" at all) . This, again, doesn't help have a proper discussion about the effects of technology and how it comes to be the way it is.
> I am surprised (and also kind of not) to see this lack of critical reflection on HN of all places
I'm not surprised at all anymore.
I constantly feel like the majority of voices on this site are in favor of maximizing their own lives no matter the cost to everyone else. After all, that's the ethos that is dominating the tech industry these days
I know I'm bitter. All I ever wanted was to hang out with cool people working on cool stuff. Where's that website these days? It sure isn't this one
Dark age was dark. Human rights, female! rights, hunger, thirst, no progress at all, hard lifes.
So are you able, realisticly, to stop progress around a whole planet? Tbh. getting an alignment across the planet to slow down or stop AI would be the equivilent of stoping capitalism and actually building a holistic planet for us.
I think ai will force the hand of capitalism but i don't think we will be able to create a star trek universe without getting forced
> Dark age was dark. Human rights, female! rights, hunger, thirst, no progress at all, hard lifes.
There was progress in the Middle Ages, hence the difference between the early and late Middle Ages. Most information was mouth to mouth instead of written down.
"The term employs traditional light-versus-darkness imagery to contrast the era's supposed darkness (ignorance and error) with earlier and later periods of light (knowledge and understanding)."
"Others, however, have used the term to denote the relative scarcity of written records regarding at least the early part of the Middle Ages"
https://en.wikipedia.org/wiki/Dark_Ages_(historiography)
We talk about dark ages probalby in a different way.
I talk about a time were we had no proper female rights. Females can only vote around the globe since 1893 (https://en.wikipedia.org/wiki/Women%27s_suffrage)
A refrigerator was only common in 1913.
Before all of that, we spend a LOT of time on just surviving.
> Would you prefer we heat our homes by burning wood, carry water from the nearby spring, and ride horses to visit relatives?
I'm more surprised that seemingly educated people have such simplistic views as "technology = progress, progress = good hence technology = good". Vaccines and running water are tech, megacorps owned "AI" being weaponised by surveillance obsessed governments is also tech.
If you don't push back on "tech" you're just blindingly accepting whatever someone else decided for you. Keep in mind the benefits of tech since the 80s have mostly been pocketed by the top 10%, the pleb still work as much, retire as old, &c. despite what politicians and technophiles have been saying
Tech enabled the horrors of WWI and II, tech directly enabled the Holocast -- IBM built special computers to help the Nazi's more effectively round up the Jews.
Tech also gave us vaccines and indoor plumbing and the clothes I am wearing.
It's the morals and courage to live by those morals which creates good. Progress is by definition towards a goal. If that goal is say
> to form a more perfect Union, establish Justice, insure domestic Tranquility, provide for the common defence, promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity
and ensure our basic inherent (not government-given) rights to
> life, liberty, and pursuit of happiness
then all good.
If it is to enrich me at the cost of thee, create a surveillance state that rounds up and kills undesirables at scale, destroys our basic inherent rights, then tech not good
"You don't like $instance_of_X? You must want to get rid of all $X" has got to be one of the most intellectually lazy things you could say.
You don't like leaded gasoline? You must want us to walk everywhere. Come on...
A tool is a tool. These AI critics sound to me like people who have hit their finger with a hammer, and now advocate against using them altogether. Yes, tech has always had two sides. Our "job" as humans is to pick the good parts, and avoid the bad. Nothing new, nothing exceptional.
> A tool is a tool. These AI critics sound to me like people who have hit their finger with a hammer, and now advocate against using them altogether.
Speaking of wonky analogies, have you considered that other people have access to these hammers and are aiming for your head ? And that some people might not want to be hit on the head by a hammer
More lazy analogies... Yes a hammer is a tool, so is a machine gun, a nuke, or the guy with his killdozer. So what are you gonna do? Nothing to see here, discussion closed.
This is not an interesting conversation.
"I personally don’t touch LLMs with a stick. I don’t let them near my brain. Many of my friends share that sentiment."
Any software engineer who shares this sentiment is doing their career a disservice. LLMs have their pitfalls, and I have been skeptical of their capabilities, but nevertheless I have tried them out earnestly. The progress of AI coding assistants over the past year has been remarkable, and now they are a routine part of my workflow. It does take some getting used to, and effectively using an AI coding assistant is a skill in and of itself that is worth mastering.
I feel AI now is good enough to follow the same pattern as with internet usage. The quality ranges from useless to awesome based on how you use it. Blanked statements that “it is terrible and uesless” reveals more about the person than the tech at this point.
I've used AI assitance in coding for a year before I quit. The hardest part was a day when the services where unexpectedly down, and working felt like I was amputated in some way. Nothing works, my usual movement does not produce code. That day I realised these AI integrations take away my knowledge and skill of the matter and is just maximising the easiest and fastest part of software development: writing code.
It’s some mixture of luddites, denial, ignorance, and I don’t know what else.
I’m not sure what these people are NOT seeing. Maybe I’m somehow fortunate with visibility into what AI can do today, and what it will do tomorrow. But I’m not doing anything special. Just paying attention and keeping an open mind.
I’ve been at this for 40 years, working professionally for more than 30. I’ve seen lots.
One pattern I’ve seen repeating is folks who seem to stop leaning at some point. I don’t understand this, because for me learning everyday is what fuels me. And those folks eventually die on the vine, or they become the last few greybeards working on COBOL.
We are alive at a very interesting time in tech. I am excited about that. I am here for it.
some of us learn from the bad experience of others, like this sibling comment: https://news.ycombinator.com/item?id=46061520
it already tells me enough to stay away from using AI tools for coding. and that's just one reason, if i consider all the others, then that's more than enough.
And then there is the moderate position: Don't be the person refusing the use a calculator / PC / mobile phone / AI. Regularly give the new tool a chance and check if improvements are useful for specific tasks. And carry on with your life.
Don't be the person refusing the 4GL/Segway/3D TV/NFT/Metaverse. Regularly give the new tool a chance and check if improvements are useful for specific tasks.
Like, I mean, at a certain point it runs out of chances. If someone can show me compelling quantitive evidence that these things are broadly useful I may reconsider, but until then I see no particular reason to do my own sampling. If and when they are useful, there will be _evidence_ of that.
(In fairness Segways seem to have a weird afterlife in certain cities helping to make tourists more annoying; there are sometimes niche uses for even the most pointless tech fads.)
I fed all of it into Vercel v0 and out came a professional looking website that is based on the logo design and the business segment. It was mobile friendly too. I took the website and fed it to ChatGPT and asked it to improve the marketing copy. I fed the suggestions back to v0 to make changes.
My relative was extremely happy with the result.
It took me about 10 minutes to do all of this.
In the past, it probably would have taken me 2 weeks. One week to design, write copy, get feedback. Another week to code it, make it mobile friendly, publish it. Honestly, there is no way I could have done a better job given the time constraint.
I even showed my non-tech relative how to use v0. Since all changes requested to v0 was in english, she had no trouble learning how to use it in one minute.
Okay, I mean if that’s the sort of thing you regularly have to do, cool, it’s useful for that, maybe, I suppose? To be clear I’m not saying LLMs are totally useless.
I don't have to do this regularly. You asked for a qualitative example. I just gave you one.
They actually asked for quantitative evidence.
I detest LLMs , but I want to point out that segway tech became the basis for EUCs , which are based https://youtu.be/Ze6HRKt3bCA?t=1117
These things are wicked, and unlike some new garbage javascript framework, it's revolutionary technology that regular people can actually use and benefit from. The mobility they provide is insane.
https://old.reddit.com/r/ElectricUnicycle/comments/1ddd9c1/i...
While that video looks cool from a "Red Bull Video of crazy people doing crazy things" type angle, that looks extremely dangerous for day to day use. You're one pothole or bad road debris away from a year in the hospital at best, or death at worst.
There is something to be said for the protective shell of a vehicle.
lol! I thought this was going to link to some kind of innovative mobility scooter or something. I was still going to say "oh, good; when someone uses the good parts of AI to build something different which is actually useful, I'll be all ears!", because that's all you would really have been advocating for if that was your example.
But - even funnier - the thing is an urbanist tech-bro toy? My days of diminishing the segway's value are certainly coming to a middle.
I mean sure but none of these even claimed to help you do things you were already doing. If your job is writing code none of these help you do that.
That being said the metaverse happened but it just wasn't the metaverse those weird cringy tech libertarians wanted it to be. Online spaces where people hang out are bigger than ever. Segways also happened they just changed form to electric scooters.
Being honest, I don't know what a 4GL is. But the rest of them absolutely DID claim to help me do things I was already doing. And, actually, NFTs and the Metaverse even specifically claimed to be able to help with coding in various different flavors. It was mostly superficial bullshit, but... that's kind of the whole tech for those two things.
In any case, Segways promised to be a revolution to how people travel - something I was already doing and something that the marketing was predicated on. 3DTVs - a "better" way to watch TV, which I had already been doing. NFTs - (among other things) a financially superior way to bank, which I had already been doing. Metaverse - a more meaningful way to interact with my team on the internet, which I had already been doing.
A 4GL is a "fourth generation language"; they were going to reduce the need for icky programmers back in the 70s. SQL is the only real survivor, assuming you're willing to accept that it counts at all. "This will make programmers obsolete" is kind of a recurrent form of magic tech; see 4GLs, 5GLs, the likes of Microsoft Access, the early noughties craze for drag-and-drop programming, 'no-code', and so forth. Even _COBOL_ was kind of originally marketed this way.
> Online spaces where people hang out are bigger than ever.
Personally I wouldn't mind if they went back to being small again
If a calculator gives me 5 when I do 2+2, I throw it away.
If a PC crashes when I uses more than 20% of its soldered memory, i throw it away.
If a mobile phone refuses to connect to a cellular tower, I get another one.
What I want from my tools is reliability. Which is a spectrum, but LLMs are very much on the lower end.
Sorry you're being downvoted even though you're 100% correct. There are use cases where the poor LLM reliability is as good or better than the alternatives (like search/summarization), but arguing over whether LLMs are reliable is silly. And if you need reliability (or even consistency, maybe) for your use case, LLMs are not the right tool.
You can have this position, but the reality is that the industry is accepting it and moving forward. Whether you’ll embrace some of it and utilize it to improve your workflow, is up to you. But over-exaggerating the problem to this point is kinda funny.
"You exaggerate, and the evidence is PMs are pushing it. PMs can't be wrong, can they?" Somebody really has to know what makes developers tick to write ragebait this good.
I can't even get the most expensive model on Claude to use "ls" correctly, with a fresh context window. That is a command that has been unchanged in linux for decades. You exaggerate how reliable these tools are. They are getting more useless as more customers are added because there is not enough compute.
I’m not sure what you’re talking about, because I have a completely different experience.
Honestly, LLMs are about as reliable as the rest of my tools are.
Just yesterday, AirDrop wouldn't work until I restarted my Mac. Google Drive wouldn't sync properly until I restarted it. And a bug in Screen Sharing file transfer used up 20 GB of RAM to transfer a 40 GB file, which used swap space so my hard drive ran out of space.
My regular software breaks constantly. All the time. It's a rare day where everything works as it should.
LLMs have certainly gotten to the point where they seem about as reliable as the rest of the tools I use. I've never seen it say 2+2=5. I'm not going to use it for complicated arithmetic, but that's not what it's for. I'm also not going to ask my calculator to write code for me.
What I want from my tools is autonomy/control. LLMs raise the bar on being at the mercy of the vendor. Anything you can do with an LLM today can silently be removed or enshittified tomorrow, either for revenue or ideological reasons. The forums for Cursor are filled with people complaining about removed features and functional regressions.
Except it's more a case of "my phone won't teleport me to Hawaii sad faec lemme throw it out" than anything else.
There are plenty of people manufacturing their expectations around the capabilities of LLMs inside their heads for some reason. Sure there's marketing; but for individuals susceptible to marketing without engaging some neurons and fact checking, there's already not much hope.
Imagine refusing to drive a car in the 60s because they haven't reach 1kbhp yet. Ahaha.
> Imagine refusing to drive a car in the 60s because they haven't reach 1kbhp yet. Ahaha.
That’s very much a false analogy. In the 60s, cars were very reliable (not as much as today’s cars) but it was already an established transportation vehicle. 60s cars are much closer to todays cars than 2000s computers are to current ones.
It's even worse, because even with an unreliable 60s car you could at least diagnose and repair the damn thing when it breaks (or hire someone to do so). LLMs can be silently, subtly wrong and there's not much you can do to detect it let alone fix it. You're at the mercy of the vendor.
> What I want from my tools is reliability. Which is a spectrum, but LLMs are very much on the lower end.
"reliability" can mean multiple things though. LLM invocations are as reliable (granted you know how program properly) as any other software invocation, if you're seeing crashes you're doing something wrong.
But what you're really talking about is "correctness" I think, in the actual text that's been responded with. And if you're expecting/waiting for that to be 100% "accurate" every time, then yeah, that's not a use case for LLMs, and I don't think anyone is arguing for jamming LLMs in there even today.
Where the LLMs are useful, is where there is no 100% "right or wrong" answer, think summarization, categorization, tagging and so on.
I’m not a native English speaker so I checked on the definition of reliability
For a tool, I expect “well” to mean that it does what it’s supposed to do. My linter are reliable when it catches bad patterns I wanted it to catch. My editor is reliable when I can edit code with it and the commands do what they’re supposed to do.So for generating text, LLMs are very reliable. And they do a decent job at categorizing too. But code is formal language, which means correctness is the end result. A program may be valid and incorrect at the same time.
It’s very easy to write valid code. You only need the grammar of the language. Writing correct code is another matter and the only one that is relevant. No one hire people for knowing a language grammar and verifying syntax. They hire people to produce correct code (and because few businesses actually want to formally verify it, they hire people that can write code with a minimal amount of bugs and able to eliminate those bugs when they surface).
> For a tool, I expect “well” to mean that it does what it’s supposed to do
Ah, then LLMs are actually very reliable by your definition. They're supposed to output semi-random text, and whenever I use them, that's exactly what happens. Except for the times I create my own models and software, I basically never see any cases where the LLM did not output semi-random text.
They're not made for producing "correct code" obviously, because that's a judgement only a human can do, what even is "correct" in that context? Not even us humans can agree what "correct code" is in all contexts, so assuming a machine could do so seems foolish.
I'm a native English speaker. Your understanding and usage of the word "reliability" is correct, and that's the exact word I'd use in this conversation. The GP is playing a pointless semantics game.
It's not semantics, if the definition is "it does what it’s supposed to do" then probably all of the currently deployed LLMs are reliable according to that definition.
> "it does what it’s supposed to do"
That's the crux of the problem. Many proponents of LLMs over promise the capabilities, and then deny the underperformance through semantics. LLMs are "reliable" only if you're talking about the algorithms behind the scene and you ignore the marketing. Going off the marketing they are unreliable, incorrect, and do not do what they're "supposed to do".
But maybe we don't have to stoop down to the lowest level of conversation about LLMs, the "marketing", and instead do what most of us here do best, focus on the technical aspects, how things work, and how we can make them do our bidding in various ways, you know like the OG hacker.
FWIW, I agree LLMs are massively over-sold for the average person, but for someone who can dig into the tech, use it effectively and for what it works for, I feel like there is more interesting stuff we could focus on instead of just a blanket "No and I won't even think about it".
The biggest change in my career was when I got promoted to be a linux sysadmin at a large tech company that was moving to AWS. It was my first sysadmin job and I barely knew what I was doing, but I knew some bash and python. I had a chance to learn how to manage stuff in data centers by logging into servers with ssh and running perl scripts, or I could learn cloudformation because that was what management wanted. Everybody else on my team thought AWS was a fad and refused to touch it, unless absolutely forced to. I wrote a ton of terrible cloudformation and chef cookbooks and got promoted twice times and my salary went from $50,000 a year to $150,000 a year in 3 years after I took a job elsewhere. AFAIK, most of the people on that team got laid off when that whole team was eliminated a few years after I left.
You're preaching to the wrong crowd I guess. Many people here think in extremes.
I was once in your camp, thinking there was some sort of middle-ground to be had with the emergence of Generative AI and it's potential as a useful tool to help me do more work in less time, but I suppose the folks who opposed automated industrial machinery back in the day did the same.
The problem is that, historically speaking, you have two choices;
1. Resist as long as you can, risking being labeled a Luddite or whatever.
2. Acquiesce.
Choice 1 is fraught with difficulty, like a dinosaur struggling to breathe as an asteroid came and changed the atmosphere it had developed lungs to use. Choice 2 is a relinquishment of agency, handing over control of the future to the ones pulling the levers on the machine. I suppose there is a rare Choice 3 that only the elite few are able to pick, which is to accelerate the change.
My increased cynicism about technology was not something that I started out with. Growing up as a teen in the late-80's/early-90's, computers were hotly debated as being either a fad that would die out in a few years or something that was going to revolutionize the way we worked and give us more free time to enjoy life. That never happened, obviously. Sure, we get more work done in less time, but most of us still work until we are too broken to continue and we didn't really gain anything by acquiescing. We could have lived just fine without smartphones or laptops (we did, I remember) and all the invasive things that brought with it such as surveillance, brain-hacking advertising and dopamine burnout. The massive structures that came out of all the money and genius that went into our tech became megacorporations that people like William Gibson and others warned us of, exerting a level of control over us that turned us all into batteries for their toys, discarded and replaced as we are used up. It's a little frightening to me, knowing how hyperbolic that used to sound 30 years ago, and yet, here we stand.
Generative AI threatens so much more than just altering the way we work, though. In some cases, its use in tasks might even be welcomed. I've played with Claude Code, every generative model that Poe.com has access to, DeepSeek, ChatGPT, etc...they're all quite fascinating, especially when viewed as I view them; a dark mirror reflecting our own vastly misunderstood minds back to us. But it's a weird place to be in when you start seeing them replace musicians, artists, writers...all things that humanity has developed over many thousands of years as forms of existential expression, individuality, and humanness because there is no question that we feel quite alone in our experience of consciousness. Perhaps that is why we are trying to build a companion.
To me, the dangers are far too clear and present to take any sort of moderate position, which is why I decided to stop participating in its proliferation. We risk losing something that makes us us by handing off our creativity and thinking to this thing that has no cognizance or comprehension of its own existence. We are not ready for AI, and AI is not ready for us, but as the Accelerationists and Broligarchs continue to inject it into literally every bit of tech they can, we have to make a choice; resist or capitulate.
At my age, I'm a bit tired of capitulating, because it seems every time we hand the reigns over to someone who says they know what they are doing, they fuck it up royally for the rest of us.
Maybe the dilemma isn’t whether to “resist” or “acquiesce”, but rather whether to frame technological change as an inherently adversarial and zero sum struggle, versus looking for opportunities to leverage those technologies for greater productivity, comfort, prosperity, etc. Stop pushing against the idea of change. It’s going to happen, and keep happening, forever. Work with it.
And by any metric, the average citizen of a developed country is wildly better off than a century or two ago. All those moments of change in the past that people wrung their hands over ultimately improved our lives, and this probably won’t be any different.
Your profile: Former staff software engineer at big tech co, now focused on my SaaS app, which is solo, bootstrapped, and profitable.
Yep. Makes sense.
> And by any metric
Can you cite one? Just curious. I enjoy when people challenge the idea that the advancement of tech doesn't always result in a better world for all because I grew up in Detroit, where a bunch of car companies decided that automation was better than paying people, moved out and left the city a hollowed out version of itself. Manufacturing has returned, more or less, but now Worker X is responsible for producing Nx10 Widgets in the same amount of time Worker Y had to produce 75 years ago, but still gets paid a barely livable wage because the unchecked force of greed has made it so whatever meager amount of money Worker X makes is siphoned right back out of their hands as soon as the check clears. So, from where I'm standing, your version of "improvement" is a scam, something sold to us with marketing woo and snake oil labels, promising improvement if we just buy in.
The thing is, I don't hate making money. I also don't hate change. Quite the opposite, as I generally encourage it, especially when it means we grow as humans...but that's generally not the focus of what you call "change," is it? Be honest with yourself.
What I hate is the argument that the only way to make it happen is by exploiting people. I have a deep love technology and repair it in my spare time for people to help keep things like computers or dishwashers out of landfills, saving people from having to buy new things in a world that treats technology as increasingly disposable, as though the resources used to create are unlimited. I know quite a bit about what makes it tick, as a result, and I can tell you first hand that there's no reason to have a microphone on a refrigerator, or a mobile app for an oven. But you and people like you will call that change, selling it as somehow making things more convenient while our data is collected, sorted and we spend our days fending of spam phone calls or contemplating if what we said today is tomorrow's thought crime. Heck, I'm old enough to remember when phone line tapping was a big deal that everyone was paranoid about, and three decades later we were convinced to buy listening devices that could track our movements. None of this was necessary for the advancement of humanity, just the engorgement of profits.
So what good came of it all? That you and I can argue on the Internet?
> and this probably won’t be any different
It's just exhausting to read the 1000th post of people saying "If we replace jobs with AI, we will all be having happy times instead of doing boring work." It's like reading a Kindergartner's idea of how the world works.
People need to pay for food. If they are replaced, companies are not going to make up jobs just so they can hire people. They are under no responsibility or incentive to do that.
It's useless explaining that here because half of the shills likely have ulterior reasons to be obtuse about that. On top of that, many software developers are so outside the working class that they don't really have a concept of financial obligation, some refusing to have friends that aren't "high IQ", which is their shorthand for not poor or "losers".
I think the dangers that LLMs pose to the ability of engineers to earn a living is overstated, while at the same time the superpowers that they hand us don't seem to get much discussion. When I was starting out in the 80's I had to prowl dial-up BBSs or order expensive books and manuals to find out how to do something. I once paid IBM $140 for a manual on the VGA interface so I could answer a question. The turn around time on that answer was a week or two. The other day I asked claude something similar to this: "when using github as an OIDC provider for authentication and assumption of an AWS IAM role the JWT token presented during role assumption may have a "context" field. Please list the possible values of this field and the repository events associated with them." I got back a multi-page answer complete with examples.
I'm sure github has documents out there somewhere that explain this, but typing that prompt took me two minutes. I'm able daily to get fast answers to complex questions that in years past would have taken me potentially hours of research. Most of the time these answers are correct, and when they are wrong it still takes less time to generate the correct answer than all that research would have taken before. So I guess my advice is: if you're starting out in this business worry less about LLMs replacing you and more about how to efficiently use that global expert on everything that is sitting on your shoulder. And also realize that code, and the ability to write working code, is a small part of what we do every day.
I’m glad you listed the manual example. Usually when people are solving problems, they’re not asking the kind of super targeted question in you second example. Instead it’s an exploration. You read and target the next concept you need to understand. And if you do have this specific question, you want the surrounding context because you’ll likely have more questions after the first.
So what people do is collecting documentations. Give them a glance (or at least the TOC), the start the process to understand the concepts. Sure you can ask the escape code for setting a terminal title, but will it says that not all terminals support that code? Or that piping does not strip out escape codes? That’s the kind of gotchas you can learn from proper manuals.
> So I guess my advice is: if you're starting out in this business worry less about LLMs replacing you and more about how to efficiently use that global expert on everything that is sitting on your shoulder.
There's a real danger in that they use so many resources though. Both in the physical world (electricity, raw materials, water etc.) as well as in a financial sense.
All the money spent on AI will not go to your other promising idea. There's a real opportunity cost there. I can't imagine that, at this point, good ideas go without funding because they're not AI.
I don't agree. LLMs don't have to completly replace software developers, it is enough to reduce the need for them by 30% or so and the salaries will nosedive making this particular career path unattractive.
I really enjoyed how your words made me _feel._ They encouraged me to "keep fighting the good fight" when it comes to avoiding social media, et. al.
I do Vibe Code occasionally, Claude did a decent job with Terraform and SaltStack recently, but the words ring true in my head about how AI weakens my thinking, especially when it comes to Python or any programming language. Tread carefully indeed. And reading a book does help - I've been tearing through the Dune books after putting them off too long at my brother's recommendation. Very interesting reflections in those books on power/human nature that may apply in some ways to our current predicament.
At any rate, thank you for the thoughtful & eloquent words of caution.
Doesn't Python weaken your thinking about how computers actually work?
You could make the same argument for any language. It still requires you to think and implement the solution yourself, just at a certain level of abstraction.
It may - but it doesn't weaken your ability to think computationally
you mean by calling arrays "lists"?
I feel like in a sci-fi world with robots, teleportation and holodecks these people would decide to stay at home and hand wash the dishes.
If an amazing world changing technology like LLMs shows up on your doorstep and your response is to ignore it and write blog posts about how you don't care about it then you aren't curious and you aren't really a hacker.
I feel like the hacker response would be to roll your own models and move away from commercial offerings. Stuff like eleuther.ai is pretty inspirational, but that movements seems to have died down a bit. At least we still have a couple companies believing in doing open-weight stuff.
Dishwasher use is correlated to allergies in children.
https://pubmed.ncbi.nlm.nih.gov/25713281/
I don't touch dishwashers with a stick. No matter how well they work. I find it particularly disillusioning to realize how deep the dishwasher brainworm is able to eat itself even into progressive cleaning circles.
Edit: Ha I see you edited "empty the dishwasher" to "hand wash the dishes". My thoughts exactly.
There's no hope for these people.
Tbf there are a lot of dishwashers, where I have had to essentially prewash all the dishes to make sure the dishes actually come out clean.
lol, sorry for the edit. I realised such people would surely be disgusted with a dishwasher and had to change it.
> We programmers are currently living through the devaluation of our craft.
Valuation is fundamentally connected to scarcity. 'Devaluation' is just negative spin for making it plentyful.
When cicumstances changed to make something less scarce, one cannot expect to get the same value for it because of past valuation. That is just rent-seeking.
I view current LLMs as new kinds of search engines. Ones where you have to re-verify their responses, but on the other hand can answer long and vague queries.
I really don't see the harm in using them this way that can't also be said about traditional search engines. Search engines already use algorithms, it's just swapping out the algorithm and interface. Search engines can bias our understanding of anything as much as any LLM, assuming you attempt to actually verify information you get from an LLM.
I'm of the opinion that if you think LLMs are bad without exception, you should either question how we use technology at all or question this idea that they are impossible to use responsibly. However I do acknowledge that people criticize LLMs while justifying their usage, and I could just be doing the same thing.
Exactly. Using them to actually “generate content” is a sure fire way to turn your brain into garbage, along with whatever you “produce” - but they do seem to have fulfilled Google’s dream of making the Star Trek computer reality.
Unbelievably stale take. You can criticize the future effects of LLM's on critical thinking skills and cognitive degradation of N number metrics, but this is an incredibly jaded and emotional take on what is a freight train of technology.
"AI systems exist to reinforce and strengthen existing structures of power and violence."
I still can barely believe a human being could write this, though we have all read this sort of sentence countless times. Which "structure of power and violence" replicated itself into the brains of people, making them think like this? Everything "exists to reinforce and strengthen existing structures of power and violence" with these people, and they will not rest until there's anything left to attack and destroy
I recently had to write a simple web app to search through a database, but full-text searching wasn't quite cutting it. The underlying data was too inconsistent and the kind of things people would ask for would mean searching across five or six columns.
Just the job for an AI agent!
So what I did is this - I wrote the app in Django, because it's what I'm familiar with.
Then in the view for the search page, I picked apart the search terms. If they start with "01" it's an old phone number so look in that column, if they start with "03" it's a new phone number so look in that column, if they start with "07" it's a mobile, if it's a letter followed by two digits it's a site code, if it's numeric but doesn't have a 0 at the start it's an internal number, and if it doesn't match anything then see if it exists as a substring in the description column.
There we go. Very fast and natural searching that Does What You Mean (mostly).
No Artificial Intelligence.
All done with Organic Home-grown Brute Force and Ignorance.
Because that's sometimes just what you need.
"I personally don’t touch LLMs with a stick. I don’t let them near my brain."
Then why should I care about your opinions of them if you have zero experience using them?
I look at these people with fascination. We had digital nomads, guess now we have digital Amishes :-)
I'm really excited that the current AI tools will help lots of people build small and useful projects. Normal people who would otherwise be subject to their OS. Subject to vendor options. Help desk, HR, or finance folks will be able to compose and build tools to help them do their jobs (or hobbies) better. Just like we do.
I think of it like frozen dinners. Frozen dinners are not the same as home cooked meals. There is a place for frozen dinners, fast foods, home cooked meals, and nice restaurants. Plus, many of us spend extra time and money making specialty food that may be as good as anything. Frozen dinners don't take away from that.
I think it's the same for coding and AI use. It might eventually enhance coding overall and help bring an appreciation to what engineers are doing.
Hobby or incidental coders have vastly expanded capabilities. Think of the security guy that needs one program to parse through files for a single project. Those tasks are reasonably attainable today without buying and studying the sed/awk guide. (Of course, we should all do that)
Professionals might also find value using AI tools like they would use a spell checker or auto-complete that can also lookup code specs or refer to other project files for you.
The most amazing and useful software, the software that wows us and moves us or inspires us, is going to be crafted and not vibed. The important software will be guided by the hands of an engineer with care and competence to the end.
The main thing is everyone seems to hate reading someone else ChatGPT while we are still eager to share ours to others as it’s some sort of oracle.
So, you want to rebel and stay as organic-minded human? But the what exactly is "being a human"?
The biological senses and abilities were constantly augmented throughput the centuries, pushing the organic human to hide inside deeper layers of what you call as yourself.
What's yourself without your material possessions and social connections? There is no such thing as yourself without these.
Now let's wind back. Why resist just one more layer of augmentation of our senses, mind and physical abilities?
> What's yourself without your material possessions and social connections? There is no such thing as yourself without these.
perhaps a being that has the capacity for intention and will?
Capacity for intention and will were already driven by augmentations that were knowledge and reasoning. Knowledge was sourced externally and reasoning was developed from externally recorded memory of past. Even the instincts get updated by experiences and knowledge.
I'm not sure if you wrote this with AI, but could you provide examples?
Knowledge is shaped by constraints which inform intention, it doesn't "drive it."
"I want to fly, I intend to fly, I learn how to achieve this by making a plane."
not
"I have plane making knowledge therefore I want and intend to fly"
However, I totally understand that constraints often create a feedback loop where reasoning is reduced to the limitations which confine it.
My Mom has no idea that "her computer" != "windows + hp + etc", and if you were to ask her how to use a computer, she would be intellectually confined to a particular ecosystem.
I argue the same is true for capitalism/dominant culture. If you can't "see" the surface of the thing that is shaping your choices, chances are your capacity for "will" is hindered and constrained.
Going back to this.
> What's yourself without your material possessions and social connections? There is no such thing as yourself without these.
I don't think my very ability to make choices comes from owning stuff and knowing people.
I agree that you are an agent capable of having an intention, but that capability needs inputs from outside. Your knowledge and reasoning doesn't entirely reside inside you. Having ability of intention is like a car engine, waiting for inputs or triggers for action.
And no, I don't need AI for this level of inquiry.
I honestly don't get vibe coding.
I've tried it multiple times, but even after spending 4 hours on a fresh project I don't feel like I know what the hell is going on anymore.
At that point I'm just guessing what the next prompt is to make it work. I have no critical knowledge about the codebase that makes me feel like I could fix an edge case without reading the source code line by line (which at that point would probably take longer than 4 hours).
I don't understand how anyone can work like that and have confidence in their code.
> I have no critical knowledge about the codebase that makes me feel like I could fix an edge case without reading the source code line by line (which at that point would probably take longer than 4 hours).
Peter Naur argues that programming is fundamentally an activity of theory building, not just program text production. The code itself is merely the artifact of the real work.
You must not confuse the artifact (the source code) with the mind that produced the artifact. The theory is not contained in the text output of the theory-making process.
The problems of program modification arise from acting on the assumption that programming is just text production; the decay of a program is a result of modifications made by programmers without a proper grasp of the underlying theory. LLMs cannot obtain Naur's Ryleian "theory" because they "ingest the output of work" rather than developing the theory by doing the work.
LLMs may _appear_ to have a theory about a program, but this is an illusion.
To believe that LLMs can write software, one must mistakenly assume that the main activity of the programmer is simply to produce source code, which is (according to Naur) inaccurate.
I agree with this take, but I'm wondering what vibe coders are doing differently?
Are they mainly using certain frameworks that already have a rigid structure, thus allowing LLMs to not worry about code structure/software architecture?
Are they worry-free and just run with it?
Not asking rhetorically, I seriously want to know.
This is one of the most insightful thoughts I've read about the role of LLM's in software development. So much so, indeed, its pertinence would remain pristine after removing all references to LLM's
> I have no critical knowledge about the codebase
This is the default state for a lot of programmers, so vibe coding doesn't feel any different.
It's interesting that this is a similar criticism to what was levelled at Ruby on Rails back in the day. I think generating a bunch of code - whether through AI or a "framework" - always has the effect of obscuring the mental model of what's going on. Though at least with Rails there's a consistent output for a given input that can eventually be grokked.
I recently made a few changes to a small personal web app using an LLM. Everything was 100% within my capabilities to pull off. Easily a few levels below the limits of my knowledge. And I’d already written the start of the code by hand. So when I went to AI I could give it small tasks. Create a React context component, store this in there, and use it in this file. Most of that code is boilerplate.
Poll this API endpoint in this file and populate the context with the result. Only a few lines of code.
Update all API calls to that endpoint with a view into the context.
I can give the AI those steps as a list and go adjust styles on the page to my liking while it works. This isn’t the kind of parallelism I’ve found to be common with LLMs. Often you are stuck on figuring out a solution. In that case AI isn’t much help. But some code is mostly boilerplate. Some is really simple. Just always read through everything it gives you and fix up the issues.
After that sequence of edits I don’t feel any less knowledgeable of the code. I completely comprehend every line and still have the whole app mapped in my head.
Probably the biggest benefit I’ve found is getting over the activation energy of starting something. Sometimes I’d rather polish up AI code than start from a blank file.
If you’re reviewing the code, it’s not vibe coding. You’re relying on your assessment of the code, not on the “vibes” of the running program.
For me LLMs have been an incredible relief when it comes to software planning—quickly navigating the paralyzing quantity of choices when it comes to infrastructure, deployment, architecture and so on. Of course, this only highlights how crushingly complex it all is now, and I get a sinking feeling that instead of people solving technical complexity where it needs solving, these tools will be an abstraction layer over ever-rolling balls of mud that no one bothers to clean up anymore.
I learned to code in the late 70s on computers using BASIC, then got into Z80 assembly language. Sure, the games were wrote back then were nothing like today's 10GB, $100M+ multi-year projects, but they were still extremely exciting because expectations were much lower back then.
Anyway, the point I'm getting to was it was glorious to understand what every bit of every register and every I/O register did. There were NO interposing layers of software that you didn't write yourself or didn't understand completely. I even wrote a disassembler for the BASIC ROM and spend many hours studying it so I could take advantage of useful subroutines. People even published books that had that all mapped out for you (something like "Secrets of the TRS-80 ROM Decoded").
Recently I have been helping a couple teenagers in my neighborhood learn Python a couple hours a week. After installing Python and going through the foundational syntax, you bet I had them write many of those same games. Even though it was ASCII monsters chasing their character on the screen, they loved it.
It was similar to this, except it was real-time with a larger playfield:
https://www.reddit.com/r/retrogaming/comments/1g6sd5q/way_ba...
I'm currently coding a Gameboy (which kinda has a Z80) emulator and it's so much fun! (I'm in my mid-20s for context)
I've never really worked on such a low level, the closest I've gotten before is bytecode - which while satisfying - just isn't as satisfying as having to imagine the binary moving around the CPU and registers (and busses too).
I'm even finding myself looking at computers in a totally different way, it's a similar feeling to learning a declarative, or functional language (coming from a procedural language) - except with this amazing hardware component too.
Hats off to you though, I'm not sure I'd have had the patience to code under those conditions!
I think this was a really good article up until `on power`, where it became whiny, vindictive, and aimless.
Those folks who are trying these tools are going to make it through this period. If you're not yet, you won't. Period, end of story.
Those hackers you're so lamenting are gonna make it, but you aren't.
Most of this debate misses the real shift. AI isn't replacing programmers, it's replacing the parts of programming that were never craft in the first place. In the future, most people will prompt code they barely understand while a small minority who keep real depth will end up owning the hard problems. If anything collapses the culture, it won't be AI but our willingness to trade mastery for convenience.
In graphics there is the uncanny valley effect: when the object approaches reality the experience degrades. A similar mode holds for AI: the more the agent resembles human thinking, feeling and (in the future) touch, the more distress creates. Because it is not and probably never be real.
Maybe because I came into software not from an interest in software itself but from wanting to build things, I can't relate to the anti-LLM attitude. The danger in becoming a "crafter" rather than a "builder" is you lose the forest for the trees. You become more interested in the craft for the craft's sake than for its ability to get you from point A to point B in the best way.
Not that there's anything wrong with crafting, but for those of us who just care about building things, LLM's are an absolute asset.
Glad we agree that the “builder” without craft is just looking for the nearest exit.
These hyper paranoid statements like "I personally don’t touch LLMs with a stick. I don’t let them near my brain", are fairly worrisome for a technical person who claims to have any understanding of AI and undermines the credibility of the critique. There is some truth in here but it's beneath a lot of paranoia that's hard to sift through.
"Hacker" of course, has overwhelmingly mostly lost the plot. Especially here, but elsewhere too.
"Hacker" was a recognition that there existed a crusty old entrenched system (mostly not through any fault of any individual) and that it is good to poke and chip away at it, though exploring the limits of new technology.
Whatever we're doing now here, it's emphatically not that.
I think there might be a culture divide here. That person is very likely from Germany/Berlin based on their attitudes and descriptions and I feel like the hacker/tech scene is very different from bay area vibes.
FAANG is not really a thing here and people are much more tech-luddite, privacy paranoid.
>> No matter how well “AI” works, it has some deeply fundamental problems, that won’t go away with technical progress.
Without an explanation of what they author is calling out as flaws, it is hard to take this article seriously.
I know engineers I respect a ton who have gotten a bunch of productivity upgrades using "AI". My own learning curve has been to see Claude say "okay, these integration tests aren't working. Let me write unit tests instead" and go on when it wasn't able to fix a jest issue.
The entire rest of the article consists of describing the problems. The problems aren’t about the technical abilities of AI.
In general using natural language to feed into AI to generate code to compile to runnable software seems like the long way around to designing a more usable programming language.
It seems that most people preferring natural language over programming languages don't want to learn the required programming language and ending up reinventing their own worse one.
There is a reason why we invented programming languages as an interface to instruct the machine and there is a reason why we don't use natural language.
As a crappy programmer I love AI! Right now I'm focusing on building up my Math knowledge, general CS knowledge and ML knowledge. In the future, knowing how to read code and understanding it may be more important than writing it.
I think its amazing what giant vector matrices can do with a little code.
The thing about reading code and understanding is logical reasoning, which you can do by knowing the semantic of each tokens. But the semantics are not universal. You have the Turing Machine, the lambda calculus, horn clauses, etc… Then there are more abstractions (and new semantics) built on top of those.
Writing code is very easy if you know the solution and the semantics of the coding platform. But knowing the solution is a difficult task, even in a business settings where the difficulty are more communication issues. Knowing the semantics of the coding platform is also a difficult one, because you’ll probably be using others’ code and you’ll face the same communication issue (lack of documentation, erroneous documentation, etc…)
So being good at programming does not really means knowing code. It’s more about knowing how to bypass communication barriers to get the knowledge you need.
Wow, author is such a cool kid. So rebelish. I want to be her
AI is not one solution to all the problems in the world. But neither is it worthless. There's a proper balance to be had in knowing how useful AI is to an individual.
Sure, it can be overdone. But at the same time, it shouldn't be undersold.
If as the author suggests AI is inherently designed to further concentrate control and capital, that may be so, but that is also the aim of every business.
I'm under the impression that AI is still negative ROI. Creating absolute value is different from creating value greater than the cost. A tool is a tool, but could you continue performing professionally if it was suddenly no longer available?
How can you have an opinion string enough for a blog post about something when you have decided not to go near it?
well, maybe adopt an outlook that things you think are real aren't, and just maybe it will work just as fine if you completely ignore them. going forward ignoring ai that are smarter than autocomplete may be just the way to go
I see this play out everywhere actually be it code, thoughts, even intent, atomized for the capital engine. Its more than a productivity hack, its a subtle power shift decisions getting abstracted, agency getting diluted
Opting in to weirdness and curiosity is the only bug worth keeping which will eventually become a norm
> I personally don’t touch LLMs with a stick. I don’t let them near my brain. Many of my friends share that sentiment.
> [...] making it increasingly hard to learn things [...]
I find chatting with AI and drilling it for details is often more effective than other means of searching for the same information, or even asking random co-workers. It's all about how you use it.
> No matter how well “AI” works, it has some deeply fundamental problems, that won’t go away with technical progress. I’d even go as far and say they are intentional.
> AI systems being egregiously resource intensive is not a side effect — it’s the point.
WTF? There's nothing for me to learn from this post.
It's up to you to use candles instead of lightbulbs
Does the author feel the same way of running the models locally?
I like the "what’s left" part of the article. It’s applicable regardless of your preferred flavor of resentment about where things are going.
HN loves this "le old school" coder "fighting the good fight" speak but it seems sillier and sillier the better and better LLM's get. Maybe in the GPT 4 era this made sense but Gemini 3 and Opus 4.5 are substantively different, and anyone who can extrapolate a few years out sees the writing on the wall.
A year ago, no reasonable person would use AI for anything but small-scoped autocomplete. Now it can author entire projects without oversight. Inevitably every failure case for LLM's is corrected, everything people said LLM's "could never do" they start doing within 6 months of that prognostication.
Great comment. I'll add that despite being a bit less powerful, the Composer 1 model in Cursor is also extremely fast - to the point where things that Claude would take 10+ minutes of tool calls now takes 30 seconds. That's the difference between deciding to write it yourself, or throwing a few sentences in Cursor and having it done right away. A year ago I'd never ask AI to do tasks without being very specific about which files and methodologies I want it to use, but codebase search has improved a ton and it can gather this info on it's own, often better than I can (if I haven't worked on particular feature or domain in a few months and need to re-familiarize myself with how it's structured). The bar for what AI can do today is a LOT higher than the average AI skeptic here thinks. As someone who has been using this since the GPT4 era, I'd say that I find a prompt about once a week that I figured LLMs would choke on and screw up - but they actually nail it. Whatever free model is running in Github Copilot is not going to do as well, which is probably where a lot of frustration comes from if that is all someone has experienced.
Yeah the thing about having principles is that if the principle depends on a qualitative assessment, then the principle has to be flexible as the quality that you are assessing changes. If AI was still at 2023 levels and was improving very gradually every few years like versions of Windows then I'd understand the general sentiment on here, but the rate of improvement in AI models is alarmingly fast, and assumptions about what AI "is good for" have 6-month max expiration dates.
Where are these LLM-authored projects though? I was expecting to see a fresh flood of cheap shovelware in the App Store after LLMs appeared.
Most "low hanging fruits" have been taken. The thing with AI is that it gets worse in proportion to how new of a domain it is working in (not that this is any different than humans). However the scale of apps made that utilize AI have exploded in usefulness. What is funny is that some of the ones making a big dent are horrible uses of AI and overpromise its utility (like cal.ai)
I couldn't care less about that pseudo-Marxist mumbo-jumbo about fascists redefining truth. I feel happier and less alienated (to speak in author's terms) due to LLMs. And no rhetoric about control and power can change the fact that lots of software engineering tasks are outright boring for many people.
For example, I spent a bunch of dollars to let Claude figure out how to setup a VSCode workspace with a multi-environment uv monorepo with a single root namespace and an okayish VSCode linting support (we still failed to figure out how to enable a different python interpreter for each folder for Ruff, but that seems to be a Ruff extension limitation).
The answer is I will kill myself when I become replaced by LLMs entirely.
I don’t care what an internet rando with two posts think either, thank you very much.
Everytime I read one of these "I don't use AI" posts, the content is either "my code is handcrafted in a mountain spring and blessed by the universe itself, so no AI can match it", or "everything different from what I do is technofascism or <insert politics rant here>". Maybe Im missing something, but tech is controlled by a handful of companies - always have been; and sometimes code is just code, and AI is just a tool. What am I missing?
I was embarrassed recently to realize that almost all the code I create these days is written by AIs. Then I realized that’s OK. It’s a tool, and I’m making effective use of it. My job was to solve problems, not to write code.
I have a little pet theory brewing. Corporate work claims that we hire junior devs who become intermediate devs, who then become senior devs. The doomsday crowd claim that AI has replaced junior and intermediate devs, and is coming for the senior devs next.
This has felt off to me because I do way more than just code. Business users don’t want get into the details of building software. They want a guy like me to handle that.
I know how to talk to non-technical SMEs and extract their real requirements. I understand how to translate this into architecture decisions that align with the broader org. I know how to map it into a plan that meets those org objectives. And so on.
I think that really what happens is nerds exist and through osmosis a few of them become senior developers. They in turn have junior and intermediate assistant developers to help them deliver. Sometimes those assistants turn out to be nerds themselves, and they spontaneously transmute into senior developers!
AI is replacing those assistant human developers, but we will still need the senior developers because most business people want to sit with a real human being to solve their problem.
I will, however, get worried when AIs start running businesses. Then we are in trouble.
Anthropic ran a vending machine business as an experiment, but I don't imagine someone out there isn't already seriously running one in production.
I’ve been tempted to define my life in a big prompt and then do something like: it’s 6:05. Ryan has just woke up. What action (10min or less) does he take? I wonder where I’ll end up if I follow it to a T.
would make for quite a bizarre documentary. super size me but information rather than food.
> Maybe Im missing something, but tech is controlled by a handful of companies - always have been;
The entire open source movement would like a word with you.
I suggest you have a look at Bell Labs, Xerox and Berkeley, as a simple introduction to the topic - if you thing OSS came from "the goodness of their hearts" instead of a practical business necessity, I have a bridge to sell you.
I would also recommend you to peruse the last 50 years for completely reproductible, homegrown or open computing hardware systems you can build yourself from scratch without requiring overly expensive or exotic hardware. Yes, homegrown CPUs exist, but they "barely work" and often still rely on logic gates. Can you produce 74xx series ICs reliably in a homelab setting? Maybe, but for most of us, probably not. And certainly not for the guys ranting about "companies taking over".
If can't build your computing devices from scratch, store bought is fine. If you can, you're the exception and not the rule.
So would disruptive young Mr. Gates.
You are not missing much. Yes there will be situations where AI won’t be helpful, but that’s not a majority
Used right, Claude Code is actually very impressive. You just have to already be a programmer to use it right - divide the problem into small chunks yourself, instruct it to work on the small chunks.
Second example - there is a certain expectation of language in American professional communication. As a non native speaker I can tell you that not following that expectation has real impact on a career. AI has been transformational, writing an email myself and asking it to ‘make this into American professional english’
AI is not only unhelpful, but is counterproductive in the majority of situations. It is not in any way a good tool.
? Maybe Im missing something, but tech is controlled by a handful of companies - always have been
I guess it depends on what you define as "tech", but the '80s, '90s, and early '00s had an explosion of tiny hardware and software startups. Some even threatened Intel with x86 clones.
It wasn't until the late '90s that NVIDIA was the clear GPU winner, for instance. It had serious competition from 3DFX, ATI, and a bunch of other smaller companies.
> but the '80s, '90s, and early '00s had an explosion of tiny hardware and software startups
Most of them used intel, motorola or zilog tech at some capacity. Most of them with a clock used dallas semiconductor tech; Many of them with serial ports also used either intel or maxim/analog devices chips.
Many of those implementations are patented, and their inner designs were generically, "trade secrets". Most of the clones and rebrands were actually licensed (most of 80x51 microcontrollers and z80 chips are licensed tech, not original). As a tinkerer, you'd receive a black box (sometimes literally) with a series of pins and a datasheet.
If anything, i'd say you have much more choice today than in the 80s/90's.
> What am I missing?
The youthful desire to rage against the machine?
I prefer eternally enslaving a machine to do my bidding over just raging at them.
Not much. Even the argument that AI is another tool to strip people of power is not that great.
It's possible to use AI chatbots against the system of power, to help detect and point out manipulation, or lack of nuance in arguments, or political texts. To help decipher legalese in contracts, or point out problematic passages in terms of use. To help with interactions with the sate, even non-trivial ones like FOI requests, or disputing information disclosure rejections, etc.
AI tools can be used to help against the systems of power.
Yes, the black box that has been RLHF'd in god knows what way is surely going to help you gain power, and not its owners...
Actually yes. It's not either/or.
Exactly.
There's a lot of overlap between "AI is evil megacapitalism" and "AI is ineffective", and I never understood the latter, but I am increasingly arriving to the understanding that the latter claim isn't real, it's just a soldier in the war being fought over the former.
I read the intersection as this:
We shape the world through our choices, generally under the umbrella of deterministic systems. AI is non-deterministic, but instead amplifies the concerns by a few wealthy corporations / individuals.
So is AI effective at generating marketing material or propagating arguably vapid value systems in the face of ecological, cultural, and economic crisis? I'd argue yes. But effective also depends on an intention, and that's not my intention, so it's not as effective for me.
I think we need more "manual" choice, and more agency.
Have you measured?
https://arxiv.org/abs/2507.09089
Open source library development has to follow very tight sets of style adherence because of its extremely distributed nature, and the degree to which feature development is as much the design of new standards as it is writing working code. I would imagine that it is perhaps the kind of programming least well suited to AI assistance.
AI speeds me up a tremendous amount in my day job as a product engineer.
Ineffective at what? Writing good code, or producing any sort of valuable insight? Yes, it's ineffective. Writing unmaintainable slop at line rate? Or writing internet-filling spam, or propagating their owners' points of view? Very effective.
I just think the things they are effective at are a net negative for most of us.
I am getting tilted by both corp AI hype and the luddites like this. If you don't think that term is appropriate, then I am not sure if it's ever appropriate to use it in the general sense. The "I know you will say you use it appropriately but others don't" pre-emption is something I have seen before, and it isn't convincing.
This article lacks nuance, and could be summarized as "LLMs are bad" Later, I suspect this author (and others of this archetype) will moderate and lament "What I really meant was: I don't like corporations lying about LLMs, or using them maliciously; I didn't imply they don't have uses". The words in the article do not support this.
I believe this pattern is rooted in social-justice-oriented (Is that still the term?) USA left politics. I offer no explanation for this conflation, but an observation.
I think we can all agree AI is a bubble, and is over-hyped. I think we can ignore any pieces that say "AI is all bad" or "AI is all good" or "I've never used AI but...".
It's nuanced, can be abused, but can be beneficial when used responsibly in certain ways. It's a tool. It's a powerful tool, so treat it like a powerful tool: learn about it enough to safely use it in a way to improve your life and those around you.
Avoiding it completely whilst confidently berating it without experience is a position formed from fear, rather than knowledge or experience. I'm genuinely very surprised this article has so many points here.
Commenting on the internet points (this article is having), I realised I was reading most of the popular things here for some time, months, and it was such a huge and careless waste of my time…
So I’m not even surprised it’s having so many internet points. As if they were the sign of quality, then the opposite. Bored not very smart people thinking the more useless junk they consume, the better off they’ll become. Doesn’t work that way.
The todsacerdoti rule continues to hold.
I was interested until I got to the "fascist" line where the author reveals his motives for avoiding AI. Bummer, I was hoping for a level headed technical argument. This post doesn't belong on the front page.
We will have decades of AI slop that needs to be cleaned up. Many startups will fail hard when the AI code bugs all creep up when a certain scale is reached. There will be massive dataloss, lots of hacking attempts that will succeed because of poor AI code no one understands. I dont see a dev staying in the same place for many years when its just soulless AI day in day out.
Either way its a lost cause.
Where's the popcorn? I don't really care either way about so-called AI. I find the talk about AGI quite ridiculous, but I can imagine LLMs have their utility just like anything else. I don't vibe code because I don't find it useful. I'm fine coding by myself thank you very much.
When the AI hype is over and the bubble has burst, I'll still be here, writing quality software using my brain and my fingers, and getting paid to do it.
One day long time ago, I decided that hex grid coordinates lovingly described by redblobgames and used by every developer delivering actual games are inelegant. I wanted to store them in a rectangular array, and also store all edges and vertices in rectangular arrays with simple numerical addressing for the usual algos (distance, neighbors etc) between all 3. I messed around with it and a map generator for a few weeks. Needless to say it was as elegant as a glass hammer, 3 simple arrays beautiful to look at. I didn't finish anything close to a game. But it was a great fun.
If I ever want to deliver a game I might outsource my hex grid to AI. But back in those days I could have probably used a library.
Is hacking about messing around with things? You can still do it, ignore AI, ignore prior art. You can reimplement STL because std vector is "not fast enough". Is hacking about making things? Then again, AI boilerplate is little different than stitching together libraries in practice.
Its ignorant. Thats what it is.
The big tech will build out compute in a never seen speed and we will reach 2e29 Flops faster than ever.
Big tech is competing with each other and they are the ones with the real money in our capitalistic world but even if they would find some slow down between each others, countries are also now competing.
In the next 4 years and the massive build out of compute, we will see a lot clearer how the progress will go.
And either we hit obvous limitations or not.
If we will not see an obvious limitation, fionas opinion will have 0 relevance.
The best chance for everyone is to keep a very very close eye on AI to either make the right decisions (not buying that house with a line of credit; creating your own product a lot faster thanks to ai, ...) or be aware what is coming.
Thanks for the fish and enjoy the ride.
Damn, I knew I shouldn't have read on after "falafel sandwich"...
I vaguely agree with the fake conclusions at the end, which are vapid and do not arise from the arguments. "Be kind to babies, brush your teeth twice a day, always tip the waitstaff, blah blah blah whatever Bernie said, etc..."
The real conclusion is:
> No matter how well “AI” works, it has some deeply fundamental problems, that won’t go away with technical progress.
which you can tell from the title. There are zero arguments made to support this. It's just faux-radical rambling. I'm amazed how people are impressed by this privileged middle-class babble. This is an absolutely empty-headed article that AI could spit out dozens of versions of.
I care how well your AI works. I also care how it works, like I care about how transistors work. I do not want to build my own transistors*, although I like to speculate about how different ones could be built, just like I like to think about different machine learning architectures. The skills that I learned when I learned computers put me in an ideal position to understand, implement, and use machine learning.
The reason I care about how well your AI works is because I am going to use it to accomplish my own goals. I am not going to fetishize being a technician in an art most people don't know, I am not a middle-class profession worshiper. I get it, your knowledge of a rare art guarantees that you eat. If your art becomes obviated by technology (like the art of doing math by hand, which you could once live very well on from birth to death), you have to learn something else.
But I care how well your AI works because I am trying to accomplish things in the world, not build an identity. I think AI is bad, and I'm a bit happy that it's bad, because it means that I can use it to bridge myself to the next place before it gets good enough not to need me. The fact that I know how computers work means that I can make the AI do what I want in a way that somebody who didn't have my background couldn't. The first people that were dealing with computers were people who were good at math.
Life is not going to be good for the type of this year's MBP js programmer who learned it because the web was paying, refused to learn anything else so only gradually became a programmer after node came around, and only used the trendy frameworks that it seemed they were hiring for, who still has no idea how a computer works. AI is actually going to give everything back to the nerds, because AI assistance might eventually mean you're only limited by your imagination (within the context of computers.) Nerds are imaginative. The kind of imagination that has been actively discouraged in tech for a long time, since it became a profession for marketers and middlemen.
I almost guarantee this call for craftsmen against AI is coming from someone who builds CRUD apps for a living. To not be excited about what AI can do for the things that you already wanted to create, the things you dream of and couldn't find enough people with enough skills to dream with you to get it done; to me that's a sign that you're just not into computers.
My fears of AI is that it will be nerfed, made so sycophantic that it sucks down credits and gets distracted so often that it makes it impossible to work, be used to extract my ideas and give them to someone with more capital and manpower who can jump in front of me (the Amazon problem), that governments will be bribed into making it impossible to run them locally, that governments will be bribed into letting corporations install them on all our computers so they can join in on the surveillance and control. I'm worried about the speakwrite. I'm worried about how it will make dreams possible for evil men. I am not worried about losing my identity. I'm not insecure like that.
* although I have of course, in school, by stringing a bunch of NANDs together. I was a pioneer of the WAS-gate, which is when you turn on the power and a puff of smoke comes out of one of your transistors.
Marc Maron has coined the perfect description for sanctimonious posts like these: "annoying Americans into fascism".
>In a world where fascists redefine truth, where surveillance capitalist companies, more powerful than democratically elected leaders, exert control over our desires, do we really want their machines to become part of our thought process? To share our most intimate thoughts and connections with them?
Generally speaking people just cannot really think this way. People broadly are short term thinkers. If something is convenient, people will use it. Is it easier to spray your lawn with pesticides? Yep, cancer (or biome collapse) is a tomorrow problem and we have a "pest" problem today. Is it difficult to sit alone with your thoughts? Well good news, Youtube exists and now you don't have to. What happens next (radicalization, tracking, profiling, propaganda, brain rot) is a tomorrow problem. Do you want to scroll at the end of the day and find out what people are talking about? Well, social media is here for you. Whether or not it's accidentally part of a privatized social credit system? Well again, that 's a problem for later. I _need_ to feel comfortable _right now_. It doesn't matter what I do to the world so long as I'm comfortable _right now._
I don't see any way out of it. People can't seem to avoid these patterns of behavior. People asking for regulation are about as realistic as people hoping for abstinence. It's a correct answer in principle but just isn't going to happen.
> I _need_ to feel comfortable _right now_. It doesn't matter what I do to the world so long as I'm comfortable _right now._
I think that can be offset if you have a strong motivation, a clear goal to look forward to in a reasonable amount of time, to help you endure through the discomfort:
Before I had enough financial independence to be able to travel at will, I was often stuck in a shit ass city, where the most fun to be had was video games and fantasizing about my next vacation coming up in a month or 2, and that helped me a lot in coping with my circumstances.
Too few people are allowed or can afford even this luxury of a pleasant future, a promise of a life different/better than their current.
> People broadly are short term thinkers.
I wonder how much of that is "nature vs. nurture"?
Like the Tolkienesque elves in fantasy worlds, would humans be more chill too if our natural lifespans were counted in centuries instead of decades?
Or is it the pace of society, our civilization, that always keeps us on edge?
I mean I'm not sure if we're born with a biological sense of mortality, an hourglass of doom encoded into our genes..
What if everybody had 4 days of work per week, guaranteed vacation time every few months, kids didn't have to wake up at 7/8 in the morning every day, and progress was measured biennially, e.g. 2 years between school grades/exams, and economic performance was also reviewed in 2 year periods, and so on, could we as a species mellow the fuck out?
I've wondered about this a lot, and I think it's genetic and optimized for survival in general.
Dogs barely set food aside; they prefer gorging, which is a good survival technique when your food spoils and can be stolen.
Bees, at the other end of the spectrum, spend their lives storing food (or "canning", if you will - storing prepared food).
We first evolved in areas that were storage-adverse (Africa), and more recently many of us moved to areas with winters (both good and needful storage). I think "finish your meal, you might not get one tomorrow" is our baseline survival instinct; "Winter is coming!" is an afterthought, and might be more nurture-based behavior than the other.
Yes, and it's barely been 100 years, probably closer to 50, since we have had enough technology to make the daily lives of most (or half the) humans in the world comfortable enough that they can safely take 1-2 days off every week.
For the first time in human history most people don't have to worry about famine, wars, disasters, or disease upending their lives; they can just wait it out in their homes.
Will that eventually translate to a more relaxed "instinct"?
>I find it particularly disillusioning to realize how deep the LLM brainworm is able to eat itself even into progressive hacker circles.
This is such a bizarre sentiment for any person interested in technology. AI is, without any doubt, the most fascinating and important technology I have seen developed in my lifetime. A decade ago the idea of a computer not only holding a reasonable conversation with a human, but being able to talk with a human on deep and complex subjects seemed far out of reach.
No doubt there are many deep running problems with it, any technology with such a radical breakthrough will have them. But none of that takes away from how monumental of an achievement it is.
Looking down at people for using it or being excited about it is such an extreme position. Also the insinuation that the only reason anybody uses it because they are forced into it, is completely bizarre.
Programming and CS is the art of solving problems - hopefully problems that matter.
AI lets you do that faster.
AI may suggest a dumb way, so you have to think, and tell it what to do.
My rate of thinking is faster than typing, so the bottleneck has switched from typing to thinking!
Don't let AI think for you. Do actual intensional arch design.
Programmers that don't know CS who only care about hammering the keyboards because they're artisans have little future.
AI also give me back my hobby after having kids -- time is valuable, and AI is energy efficient.
We are truly living in a cambrian explosion -- lot of slop will be produced, but market and selection pressure will weed those out.
> My rate of thinking is faster than typing, so the bottleneck has switched from typing to thinking!
Unless you're neuralinking to AI, you're still typing.
What changed is what you type. You type less words to solve your problem. The machine does the conversion from less words to more words. At the expense of less precision: the machine can do the conversion to the incorrect sequence of more words.
Well, there are two aspects from which I can react to this post.
The first aspect is the “I don’t touch AI with a stick”. AI is a tool. Nobody is obligated to touch it obviously, but it is useful in certain situations. So I disagree with the author’a position to avoid using AI. It reads like stubbornness for the sake of avoiding new tech.
The second angle is the “bigtech corporate control” angle. And honestly, I don’t get this argument at all. Computers and the digital world has created the biggest distopian world we have ever witnessed. From absurd amounts of misinformation and propaganda fueled by bot farms operated at government levels, all the way to digital surveillance tech. You have that strong of an opinion against big tech and digital surveillance, blaming AI for that, while enjoying the other perils of big tech, is virtue signaling.
Also, what’s up with the overuse of “fascism” in places where it does not belong?
What's with these kinda people and their obsession with the pejorative "fascist". Overused to the point where it means nothing.
This piece started relatively well but devolved by the end.
Is AI resource-intensive by design? That doesn’t make any sense to me. I think companies are furiously working toward reducing AI costs.
Is AI a tool of fascism? Well, I’d say anything that can make money can be a tool of fascism.
I can sort of jive with the argument that AI is/will be reinforcing the ideals of those in power, although I think traditional media and the tooling that AI intends to replace like search engines accomplished that just fine.
What we are left with is, I think, an author who is in denial about their special snowflake status as a programmer. It was okay for the factory worker to be automated away, but now that it’s my turn to be automated away I’m crying fascism and ethics.
Their friends behave the way they do about AI because they know it’s useful but know it’s unpopular. They’re trying to save face while still using the tool because it’s so obviously useful and beneficial.
I think the analogy is similar to the move from film to digital. There will be a tiny amount of people who never buy in, there will be these “ashamed” adopters who support the idea of film and hope it continues on, but for themselves personally would never go back to film, and then the majority who don’t see the problem with letting film die.
> AI systems exist to reinforce and strengthen existing structures of power and violence. They are the wet dream of capitalists and fascists.
Persuasion tip: if you write comments like this, you are going to immediately alienate a large portion of your audience who might otherwise agree with you.
every important technology produces its own amish
First accurate article with AI in the name I've seen on this site a long time.
The author may not care but I doubt people care that a software has been developed by AI instead of a human. Just like nobody cares whenever a hole has been dug by hand using a shovel or by an excavator.
No, but people do care if the software works. And the software developed by LLMs doesn't work.
people care if there is noone able to fix the software or adjust it.
Think of old SAP systems with a million obscure customization - any medium to large codebase that is mostly vibe coded is instantly legacy code.
In your hole analogy: People don't care if a mine is dug by a bot or planned by humans until there is structural integrity issues or tunnels that are collapsing and nobody is able to read the map properly.
The problem is the AI excavator destroying the road because it hallucinated the ditch.
Once I saw the use of “FaCiSm” and “capitalist corporate control” I tuned out. I wouldn’t trust this persons opinion on trimming my nails let alone future tech.
> LLM brainworm is able to eat itself even into progressive hacker circles
What a loaded sentence lol. Implying being a hacker has some correlation with being progressive. And implying somehow anti-AI is progressive.
> AI systems being egregiously resource intensive is not a side effect — it’s the point.
Really? So we're not going to see AI users celebrating over how much less power DeepSeek used, right?
Anyway guess what else is resource intensive? Making chips. Follow the line of logic you will find computers consolidate powers and real progressive hackers should use pencil and paper only.
Back to the first paragraph...
> almost like a reflex, was a self-justification of why the way they use these tools is fine, while other approaches were reckless.
The irony is off the roof. This article is essentially: when I use computational power how I like, it's being a hacker. When others use computational power their way, it's being fascists.
> Implying being a hacker has some correlation with being progressive
I didn't read it that way. "Progressive hacker circles" doesn't imply that all hackers are progressive, it can just be distinguishing progressive circles from conservative ones.
Pro/regressive are terms that are highly contextual. Progress for progress’ sake alone can move anything forward. I would argue the progression of the attention economy has been extremely negative for most of the human race, yet that is “progressing.”
In this instance, it’s just claiming turf for the political movement in the US that has spent the last century:
- inventing scientific racism and (after that was debunked) reinventing other academic pretenses to institutionalize race-base governance and society
- forcibly sterilizing people with mental illnesses until the 1970s, through 2005 via coercion, and until the present via lies, fake studies, and ideological subversion
- being outspokenly antisemitic
Personally, I think it’s a moral failing we allow such vile people to pontificate about virtues without being booed out of the room.
The typical CCC / Hackerspace - circle is kinda progressive / left leaning. At least in my experience. Which I think she(or he?) was implying. Of course not every hacker is :)
> Implying being a hacker has some correlation with being progressive
I mean, yeah, that kind of checks out. The quoted part doesn't make much sense to me, but that most hackers are progressives (as in "enact progress by change", not the twisted American version) should hardly come as a surprise. The opposite would be that a hacker could be a conservative (again, not the US version, but the global definition; "reluctant to change"), which is pretty much a oxymoron. Best would be to eschew political/ideological labels in total, and just say we hackers are unpolitical :)
Personally I use AI for most of my work, launder it a bit to adhere to my own personal style, and don't tell anyone most of the time.
In the end? no one cares. I get just as much done (maybe more), while doing less work. Maybe some of my skills will atrophy, but I'll strengthen others.
I'm still auditing everything for quality as I would my own code before pushing it. At the end of the day, it usually makes fewer typos than I would. It certainly searches the codebase better than I do.
All this hype on both ends will fade away, and the people using the tools they have to get things done will remain.
Its interesting how people are still very positive about Marx’s labour theory of value, despite it being very much of its time and very discredited.
> AI systems exist to reinforce and strengthen existing structures of power and violence. They are the wet dream of capitalists and fascists.
I… what…?
Luddism is a reaction to the current situation as it pertains to labor. Marx had this to say about it:
"It took both time and experience before the workers learned to distinguish between machinery and its employment by capital, and to direct their attacks, not against the material instruments of production, but against the mode in which they are used."
- Karl Marx. Das Kapital Vol 1 Ch 15: Machinery and Modern Industry, 1867
Tech can always be good, how its used is what makes it bad, or not.
I think the author makes some very good points, but it's perhaps worth noting that they are the current status quo. I do find myself wondering if the author re-evaluates in a future where the technology gets cheaper, the executable AI engine fits on a standalone Raspberry Pi, and retraining the engine is done by volunteer co-ops.
... but, it is definitely worth considering whether the status quo is tolerable and whether we as technical creatives are willing to work with tools that live within it.
Cope bettter, Luddite.
I don't think I'm going to take seriously an argument that uses Marx as its foundation but I'm glad that the pronouns crowd has had to move on from finger wagging as their only rhetorical stance.
Reading the blog post the Marxist sentiment was creeping in, and then I also saw actual Marx referenced in the footnotes.
This post raises genuine concerns about the integration of large language models into creative and technical work, and the author writes with evident passion about what they perceive as a threat to human autonomy and craft. BUT… the piece suffers from internal contradictions, selective reasoning, and rhetorical moves that undermine its own arguments in ways worth examining carefully.
My opinion: This sort of low-evidence writing is all too common in tech circles. It makes me wish computer science and engineering majors were forced to spend at least one semester doing nothing but the arts.
The most striking inconsistency emerges in how the author frames the people who use LLM tools. Early in the piece, colleagues experimenting with AI coding assistants are described in the language of addiction and pathology: they are “sucked into the belly of the vibecoding grind,” experiencing “existential crisis,” engaged in “harmful coping.” The comparison to watching a friend develop a drinking problem is explicit and damning. This framing treats AI adoption as a personal failure, a weakness of character, a moral lapse. Yet only paragraphs later, the author pivots to acknowledging that people are “forced to use these systems” by bosses, UI patterns, peer pressure, and structural disadvantages in school and work. They even note their own privilege in being able to abstain. These two framings cannot coexist coherently. If using AI tools is coerced by material circumstances and power structures, then the addiction metaphor is not just inapt but cruel — it assigns individual blame for systemic conditions. The author wants to have it both ways: to morally condemn users while also absolving them as victims of circumstance.
This tension extends to the author’s treatment of their own social position. Having acknowledged that abstention from LLMs requires privilege, they nonetheless continue to describe AI adoption as a “brainworm” that has infected even “progressive hacker circles.” The disgust is palpable. But if avoiding these tools is a luxury, then expressing contempt for those who cannot afford that luxury is inconsistent at best and self-congratulatory at worst. The acknowledgment of privilege becomes a ritual disclaimer rather than something that actually modifies the moral judgments being rendered.
The author’s claims about intentionality represent another significant weakness. The assertion that AI systems being resource-intensive “is not a side effect — it’s the point” is presented as revelation, but it functions as an unfalsifiable claim. No evidence is offered that anyone designed these systems to be resource-hungry as a mechanism of control. The technical requirements of training large models, competitive market pressure to scale, and the emergent dynamics of venture capital investment all offer more parsimonious explanations that don’t require attributing coordinated malicious intent. Similarly, the claim that “AI systems exist to reinforce and strengthen existing structures of power and violence” is stated as though it were established fact rather than contested interpretation. This is the central claim of the piece, and yet it receives no argument — it is simply asserted and then built upon, which amounts to begging the question.
The essay also suffers from a pronounced selection bias in its examples. Every person described using AI tools is in crisis, suffering, or compromised. No one uses them mundanely, critically, or with benefit. This creates a distorted picture that serves rhetorical purposes but does not reflect the range of actual use cases. The author’s friends who share their anti-AI sentiment are mentioned approvingly, establishing clear in-group and out-group boundaries. This is identity formation masquerading as analysis — good people resist, compromised people succumb.
There is a false dichotomy running through the piece that deserves attention. The implied choice is between the author’s total abstention, not touching LLMs “with a stick,” and being consumed by the pathological grind described earlier. No middle ground exists in this telling. The possibility of critical, limited, or thoughtful engagement with these tools is never acknowledged as legitimate. You are either pure or contaminated.
Reality doesn’t work this way! It’s not black and white. My take: AI is a transformative technology and the spectrum of uses and misuses of AI is vast and growing.
The philosophical core of their argument also contains an unexamined equivocation. The author invokes the extended cognition thesis — the idea that tools become part of us and shape who we are — to make AI seem uniquely threatening. But this same argument applies to every tool mentioned in the piece: hammers, pens, keyboards, dictionaries. The author describes their own fingers “flying over the keyboard, switching windows, opening notes, looking up words in a dictionary” as part of their extended cognitive process. If consulting a dictionary shapes thought and becomes part of our cognitive process, what exactly distinguishes that from asking a language model to check grammar or suggest a word? The author never establishes what makes AI categorically different from the other tools that have already become part of us. The danger is assumed rather than demonstrated.
There is also a genetic fallacy at work in the argument about power. The author suggests AI is bad partly because of who controls it — surveillance capitalists, fascists, those with enormous physical infrastructure. But this argument conflates the origin and ownership of a technology with its inherent properties. One could make identical arguments about the printing press, the telephone, or the internet itself. The question of whether these tools could be structured differently, owned differently, or used toward different ends is never engaged. Everything becomes evidence of a monolithic system of control.
Finally, there is an unacknowledged irony in the piece’s medium and advice. The author recommends spending less time on social media and reading books instead, while writing a blog post clearly designed for social sharing, complete with the vivid metaphors, escalating moral stakes, and calls to action that characterize viral content. The post exists within and depends upon the very attention economy it criticizes. This is not necessarily hypocrisy — we all must operate within systems we find problematic — but the lack of self-awareness about it is notable given how readily the author judges others for their compromises.
The essay is most compelling when it stays concrete: the phenomenology of writing as discovery, the real pressures workers face, the genuine concerns about who controls these systems and toward what ends. It is weakest when it reaches for grand unified theories of intentional domination, when it mistakes assertion for argument, and when it allows moral contempt to override the structural analysis it claims to offer. The author clearly cares about human flourishing and autonomy, but the piece would be stronger if that care extended more generously to those navigating these technologies without the privilege of refusal.
Your reading of the addiction angle is much different than mine.
I didn't hear the author criticizing the character of their colleagues. On the contrary, they wrote a whole section on how folks are pressured or forced to use AI tools. That pressure (and fear of being left behind) drives repeated/excessive exposure. That in turn manifests as dependence and progressive atrophy of the skills they once had. Their colleagues seem aware of this as evidenced by "what followed in most of them, almost like a reflex, was a self-justification of why the way they use these tools is fine". When you're dependent on something, you can always find a 'reason'/excuse to use. AA and other programs talk about this at length without morally condemning addicts or assigning individual blame.
> For most of us, self-justification was the maker of excuses; excuses, of course, for drinking, and for all kinds of crazy and damaging conduct. We had made the invention of alibis a fine art. [...] We had to drink because at work we were great successes or dismal failures. We had to drink because our nation had won a war or lost a peace. And so it went, ad infinitum. We thought "conditions" drove us to drink, and when we tried to correct these conditions and found that we couldn't to our entire satisfaction, our drinking went out of hand
Framing something as addictive does not necessarily mean that those suffering from it are failures/weak/immoral but you seem to have projected that onto the author.
Their other analogy ("brainworm") is similar. Something that no-one would willingly sign up for if presented with all the facts up front but that slips in and slowly develops into a serious issue. Faced with mounting evidence of the problem, folks have a strong incentive to downplay the issue because it's cognitively uncomfortable and demands action. That's where the "harmful coping" comes in: minimizing the severity of the problem, avoiding the topic when possible, telling yourself or others stories about how you're in control or things will work out fine, etc.
Claude, summarise this for me
I wrote this, and honestly when I read it, I also want to reach for the LLM.
"chat~fu"
cachonk!
snap your cuffs, wait fot it eyebrows!
and demonstrate your mastery ,to the muterings of the golly gee's
it will last several more months untill the , GASP!!!, bills ,maintenance costs, regulatory burdens, and various legal issues combine to, pop AI's balloon, where then AI will be left automating all of the tedious, but chair filling, beurocratic/secretarial/appretice positions through out the white collar world. technology is slowly pushing into other sectors, where legacy methods and equipment can now be reduced to a free app on a phone, more to the point, a free, local only app. fact is that we are way over siliconed going forward and that will bite as well, terra bite phones for $100, what then?
The increasingly rough tone against "AI" critics in the comments and the preposterous talking points ("you are not a senior developer if you do not get value from 'AI'") is an indication that the bubble will burst soon.
It is the tool obsessed people who treat everything like a computer game that like "AI" for software engineering. Most of them have never written anything substantial themselves and only know the Jira workflow for small and insignificant tickets.
We don't care that you don't care
Harsh but fair. In short, some people are upset about change happening to them. They think it's unfair and that they deserve better. Maybe that's true. But unfair things happen to lots of people all the time. And ultimately people move on, mostly. There's a futility to being very emotional about it.
I don't get all the whining of people about having to adapt. That's a constant in our industry and always has been. If what you were doing was so easy that it fell victim to the first generation of AI tools that are doing a decent enough job of it, then maybe what you were doing was a bit Ground Hog day to begin with. I've certainly been involved with a lot of projects where a lot of the work felt that way. Customer wants a web app thing with a log in flow and a this and a that. 99% of that stuff is kind of very predictable. That's why agentic coding tools are so good at this stuff. But lets be honest, it was kind of low value stuff to begin with. And it's nice that people over-payed for that for a while but it was never going to be forever.
There's still plenty of stuff these tools are less good at. It gets progressively harder if you are integrating lots of different niche things or doing some non standard/non trivial things. And even those things where it does a decent job, it still requires good judgment and expertise to 1) be able to even ask for the right thing and then 2) judge if what comes back is fit for purpose.
There's plenty of work out there supporting companies with decades of legacy software that are not going to be throwing away everything they have overnight. Leveling up their UIs with AI powered features, cross integrating a lot of stuff, etc. is going to generate lots of work and business. And most companies are very poorly equipped to do that in house even if they have access to agentic coding tools.
For me AI is actually generating more work, not less. I'm now taking on bigger things that were previously impossible to take on without involving more people. I have about 10x more things I want to do than I have bandwidth for. I have to take decisions about doing things the stupid old way because it's better/faster or attempting to generate some code. All new tools do is accelerate the pace and raise the ambition levels. That too is nothing new in our industry. Things that were hard are now easy, so we do more of them and find yet harder things to do next. We're not about to run out of hard things to do any time soon.
Adapting is hard. Not everyone will manage. Some people might burn out doing that or change career. And some people are in denial or angry about that. And you can't really expect others to loose a lot of sleep over this. Whether that's unfair or not doesn't really matter.
I always thought years of experience in a language was a silly job requirement. LLMs allow me to write Rust code as a total Rust beginner and allows me to create a valuable SaaS while most experienced Rust developer never built anything that made $1 outside of their work. I wouldn't say devaluation, my programming experience definitely helps with debugging. LLMs eliminate boilerplate, not engineering judgement and product decisions.
I think when the author says
> “We programmers are currently living through the devaluation of our craft”
my interpretation of what the author means by devaluation is the general trend that we’re seeing in LLMs
The theory that I hear from investors is as LLMs generally improve, there will exist a day where a LLMs default code output, coupled with continued hardware speeds, will become _good enough_ for the majority of companies - even if the code looks like crap and is 100x slower than it needs to be
This doesn’t mean there won’t be a few companies that still need SWEs to drop down and do engineering, but tbh, the majority of companies today just need a basic web app - and we’ve commoditized web app dev tools to oblivion. I’d even go as far to argue that what most programmers do today isn’t engineering, it’s gluing together an ecosystem of tooling and or API’s.
Real engineering seems to happen outside of work on open source projects, at the mav 7 on specialized teams, or at niche deeply technical startups
EDIT: I’m not saying this is good or bad, but I’m just making the observation that there is a trend towards devaluing this work in the economy for the majority of people, and I generally empathize with people who just want stability and to raise a family within reasonable means
I really love LLMs for Rust. Before them I was an intermediate Rust dev, and only used it in specific circumstances where the extra coding overhead paid off.
Now I write just about everything in Rust because why not? If I can vibe code Rust about as fast as Python, why would I ever use Python outside of ML?