Unfortunately, one of the struggles in old high tech (that's the only thing I know; are you also experiencing this?) is that the C-level people don't look at AI and say "LLMs can make an individual 10x more productive, therefore (and this is the part they miss) we can make our tool 10x better." They think: therefore we can lay off 9 people.
Whenever I get worried about this I comb through our ticket tracker and see that ~0% of them can be implemented by AI as it exists today. Once somebody cracks the memory problem and ships an agent that progressively understands the business and the codebase, then I'll start worrying. But context limitation is fundamental to the technology in its current form and the value of SWEs is to turn the bigger picture into a functioning product.
It's not binary. Jobs will be lost because management will expect the fewer developers to accomplish more by leveraging AI.
While true, my personal fear is that the higher-ups will overlook this fact and just assume that AI can do everything because of some cherry-picked simple examples, leading to one of those situations where a bunch of people get fired for no reason and then re-hired again after some time.
A lot of this can be provided or built up through better documentation in the codebase, or functional requirements that can also be created, reviewed, and then used for additional context. In our current codebase it's definitely an issue to get an AI "onboarded", but I've seen a lot less hand-holding needed in projects where you have the AI building from the beginning and leaving notes for itself to read later.
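For what it's worth, here's a minimal sketch of the kind of notes file I mean; the filename and every detail in it are made up, just to show the shape:

    # NOTES-FOR-AGENTS.md (hypothetical): re-read at the start of every session
    - Billing logic lives in services/billing; never write to the ledger tables
      directly, always go through LedgerClient.
    - Integration tests assume a local Postgres is running; unit tests do not.
    - The public API is backwards-compatible by policy: add new fields, never
      rename existing ones.

Nothing fancy, just the decisions and constraints a new human hire would otherwise have to ask about.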
Curious to hear if you've seen this work with 100k+ LoC codebases (i.e. what you could expect at a job). I've had some good experiences with high-autonomy agents in smaller codebases and simpler systems, but the coherence starts to fizzle out once the system gets complicated enough that thinking it through is the hard part, as opposed to hammering out the code.
We have this in some of our projects too, but I always wonder how long it's going to take until it just fails. Nobody reads all those memory files for accuracy. And knowing what kind of BS the AI spews regularly in day-to-day use, I bet this simply doesn't scale.
Apparently you haven't seen ChatGPT Enterprise and Codex. I have bad news for you ...
Codex with their flagship model (currently GPT-5.3-Codex) is my daily driver. I still end up doing a lot of steering!
Can you give an example to help us understand?
I look at my ticket tracker and I see basically 100% of it that can be done by AI. Some of it with assistance, because the business logic is more complex and less well factored than it should be, but most of the work is something AI is perfectly capable of doing with a well-defined prompt.
Then why isn't it? Just offload it to the clankers and go enjoy a margarita at the beach or something.
Here's an example ticket that I'll probably work on next week:
Live stream validation results as they come in
The body doesn't give much other than the high-level motivation from the person who filed the ticket. In order to implement this, you need to have a lot of context, some of which can be discovered by grepping through the code base and some of which can't:
- What is the validation system and how does it work today?
- What sort of UX do we want? What are the specific deficiencies in the current UX that we're trying to fix?
- What prior art exists on the backend and frontend, and how much of that can/should be reused?
- Are there any scaling or load considerations that need to be accounted for?
I'll probably implement this as 2-3 PRs in a chain touching different parts of the codebase. GPT via Codex will write 80% of the code, and I'll cover the last 20% of polish. Throughout the process I'll prompt it in the right direction when it runs up against questions it can't answer, and check its assumptions about the right way to push this out. I'll make sure that the tests cover what we need them to and that the resultant UX feels good. I'll own the responsibility for covering load considerations and be on the line if anything falls over.
Does it look like software engineering from 3 years ago? Absolutely not. But it's software engineering all the same even if I'm not writing most of the code anymore.
Why do you have a backlog then? If a current AI can do 100% of it then just run it over the weekend and close everything
As always, the limit is human bandwidth. But that's basically what AI-forward companies are doing now. I would be curious which tasks the OP commenter has that couldn't be done by an agent (assuming they're a SWE).
This sounds bogus to me: if AI really could close 100% of your backlog with just a couple more humans in the loop, you’d hire a bunch of temps/contractors to do that, then declare the product done and lay off everybody. How come that isn’t happening?
I think the "well defined prompt" is precisely what the person you responded to is alluring to. They are saying they don't get worried because AI doesn't get the job done without someone behind it that knows exactly what to prompt.
> I look at my ticket tracker and I see basically 100% of it that can be done by AI.
That's a sign that you have spurious problems under those tickets or you have a PM problem.
Also, a job is not a task. If your company has jobs that consist of a single task, then those jobs would definitely be gone.
You don't need AI to replace whole jobs 1:1 to have massive displacement.
If AI can do 80% of your tasks but fails miserably on the remaining 20%, that doesn't mean your job is safe. It means that 80% of the people in your department can be fired and the remaining 20% handle the parts the AI can't do yet.
Also, you don’t need AI to replace your job, you need someone higher up in leadership who thinks AI could replace your job.
It might all wash out eventually, but eventually could be a long time with respect to anybody’s personal finances.
Right, it doesn't help pay the bills to be right in the long run if you are discarded in the present.
There exists some fact about the true value of AI, and then there is the capitalist reaction to new things. I'm more wary of a lemming effect by leaders than I am of AI itself.
Which is pretty much true of everything I guess. It's the short sighted and greedy humans that screw us over, not the tech itself.
That's exactly the point of the essay though. The way that you're implicitly modeling labor and collaboration is linear and parallelizable, but reality is messier than that:
> The most important thing to know about labor substitution...is this: labor substitution is about comparative advantage, not absolute advantage. The question isn’t whether AI can do specific tasks that humans do. It’s whether the aggregate output of humans working with AI is inferior to what AI can produce alone: in other words, whether there is any way that the addition of a human to the production process can increase or improve the output of that process... AI can have an absolute advantage in every single task, but it would still make economic sense to combine AI with humans if the aggregate output is greater: that is to say, if humans have a comparative advantage in any step of the production process.
In reality that would probably mean that something like 60% of the developer positions would be eliminated (and, frankly, those 60% are rarely very good developers in a large company).
The remaining "surplus" 20% roles retained will then be devoted to developing features and implementing fixes using AI where those features and fixes would previously not have been high enough priority to implement or fix.
When the price of implementing a feature drops, it becomes economically viable (and perhaps competitively essential) to do so -- but in this scenario, AI couldn't do _all_ the work to implement such features so that's why 40% rather than 20% of the developer roles would be retained.
The 40% of developer roles that remain will, in theory, also be more efficient because they won't be spending as much time babysitting the "lesser" developers in the 60% of roles that were eliminated. As well, "N" in The Mythical Man-Month is reduced, leading to increased efficiency.
(No, I have no idea what the actual percentages would be overall, let alone in a particular environment - for example, requirements for Spotify are quite different than for Airbus/Boeing avionics software.)
The problem is, you won’t necessarily know which 20% it did wrong until it’s too late. They will happily solve advanced math problems and tell you to put glue on your pizza with the same level of confidence.
We are already in a low-hire, low-fire job market: while there aren't massive layoffs spiking unemployment, there also aren't as many vacancies.
What happens if you lay off 80% of your department while your competitors don't? If AI multiplies each developer's capabilities, there's a good chance you'll be outcompeted sooner or later.
Labor substitution is extremely difficult and almost everybody hand waves it away.
Take even the most unskilled labor that people can think of, such as flipping burgers at a restaurant like McDonald's. In reality that job is multiple different roles mixed into one, and those roles are constantly changing. Multiple companies have experimented with machines and robots to perform this task, all with very limited success and none with anything like workable economics.
Let's be charitable and assume that this type of fast food worker gets paid $50,000 a year. For that job to be displaced it needs to be performed by a robot that can be acquired for a reasonable capital expenditure such as $200,000 and requires no maintenance, upkeep, or subscription fees.
This is a complete non-reality in the restaurant industry. Every piece of equipment they have costs them a significant amount up front plus ongoing maintenance, even if it's the most basic equipment such as a grill or a fryer. The reality is that they pay service technicians and professionals a lot of money to keep that equipment barely working.
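To make the arithmetic concrete, here's a back-of-the-envelope sketch; the wage and capex figures are the hypotheticals above, and the upkeep figure is an extra assumption on my part:

    # Rough payback math for replacing one fast-food worker with a robot.
    worker_cost_per_year = 50_000        # hypothetical wage from above
    robot_capex = 200_000                # hypothetical purchase price from above
    robot_upkeep_per_year = 30_000       # assumed: service contracts, parts, downtime

    savings_per_year = worker_cost_per_year - robot_upkeep_per_year
    print(robot_capex / savings_per_year)        # 10.0 years to break even
    print(robot_capex / worker_cost_per_year)    # 4.0 years if upkeep were free

The upkeep line is the whole argument: with zero upkeep the robot pays for itself in four years, but once you add realistic service costs the payback more than doubles, and that's before the robot itself breaks down.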
> the most unskilled labor
People are worried about white-collar not blue-collar jobs being replaced. Robotics is obviously a whole different field from AI.
Yeah, although in the "Something big is happening" Shumer did say at the end "Eventually, robots will handle physical work too. They're not quite there yet. But "not quite there yet" in AI terms has a way of becoming "here" faster than anyone expects."
Being the hype-man that he is I assume he meant humanoid robots - I think he's being silly here, and the sentence made me roll my eyes.
I lost my job as a software developer some time ago.
Flipping burgers is WAY more demanding than I ever imagined. That's the danger of AI:
It takes jobs faster than it creates new ones, PLUS for some fields (like software development) downshifting to just about anything else is brutal and sometimes simply not doable.
Forget becoming a manager at McDonald's, or even being good at flipping burgers, at the age of 40: you are competing with 20-year-olds who play sports and have amazing coordination, etc.
Jobs that require physical effort will be fine for the reasons you state
Any job that is predominantly done on a computer though is at risk IMO. AI might not completely take over everything, but I think we'll see way fewer humans managing/orchestrating larger and larger fleets of agents.
Instead of say 20 people doing some function, you'll have 3 or 4 prompting away to manage the agents to get the same amount of work done as 20 people did before.
So the people flipping the burgers and serving the customers will be safe, but the accountants and marketing folks won't be.
Can you walk me through this argument for a customer service agent? The jobs where the nuance and variety aren't there and that don't involve physical interaction are completely different from flipping burgers.
The burger cook job has already been displaced and continues to be. Pre-1940s those burger restaurants relied on skilled cooks that got their meat from a butcher and cut fresh lettuce every day. Post-1940s the cooking process has increasingly become assembly-lined and cooks have been replaced by unskilled labor. Much of the cooking process _is_ now done by robots in factories at a massive scale and the on-premise employees do little else than heat it up. In the past 10 years, automation has further increased and the cashiers have largely been replaced by self-order terminals so that employees no longer even need to speak rudimentary English. In conclusion, both the required skill-level and amount of labor needed for restaurants has been reduced drastically by automation and in fact many higher skilled trade jobs have been hit even harder: cabinetmakers, coachbuilders and such have been almost eradicated by mass production.
It will happen to you.
Funny, I go to South Korea and the fast food burger joints literally operate exactly as you say they couldn't. I've had the best burger in my life from a McDonalds in South Korea operated practically by robots.
It's a non-reality in America's extremely piss-poor restaurant industry. We have a competency crisis (the big key here) and a worker shortage that SK doesn't, and they have far higher trust in their society.
(In the semiconductor industry) We experienced brutal layoffs arguably due to over-investment in AI products that produce no revenue. So we've had brutal job loss due to AI, just not in the way people expected.
Having said that, it's hard to imagine jobs like mine (working on NP-complete problems) existing if the LLMs continue advancing at the current rate, and it's hard to imagine they won't continue to accelerate since they're writing themselves now, so the limitations of human ability are no longer a bottleneck.
Maybe I'm being naive here, but for AI (heck, for any good algorithm) to work well, you need objectives that are at least loosely, if not clearly, defined. I assume it's much more straightforward in semi, but there are many industries where, once you get into the details, all kinds of incentives start to misalign, and I doubt AI could understand all those nuances.
E.g., once I was tasked with building a new matching algorithm for a trading platform, and upon fully understanding the specs I realized it could be framed as a mixed-integer programming problem; the idea got shot down right away because the PM didn't understand it. There are all kinds of limiting factors once you get into the details.
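For the curious, here's a toy sketch of the kind of formulation I mean (using PuLP; the order data, scores, and constraints are all made up for illustration, not the platform's actual spec):

    import pulp

    buys = {"b1": 100, "b2": 50}    # hypothetical buy orders: id -> quantity
    sells = {"s1": 80, "s2": 70}    # hypothetical sell orders: id -> quantity
    score = {("b1", "s1"): 1.0, ("b1", "s2"): 0.8,
             ("b2", "s1"): 0.9, ("b2", "s2"): 1.0}   # assumed match desirability

    prob = pulp.LpProblem("matching", pulp.LpMaximize)
    x = pulp.LpVariable.dicts("match", (buys, sells), lowBound=0, cat="Integer")

    # Maximize matched quantity weighted by how desirable each pairing is.
    prob += pulp.lpSum(score[b, s] * x[b][s] for b in buys for s in sells)

    # Neither side of a match can exceed its order quantity.
    for b in buys:
        prob += pulp.lpSum(x[b][s] for s in sells) <= buys[b]
    for s in sells:
        prob += pulp.lpSum(x[b][s] for b in buys) <= sells[s]

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    for (b, s) in score:
        print(b, s, x[b][s].value())

Once the business rules (which order types can cross, price-time priority, and so on) become extra constraints, it stops looking like a hand-rolled loop, which is roughly where the PM conversation fell apart.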
I think those conversations occur due to changes in the timeline of deliverables or the certainty of the result; would that not be an implementation detail?
You are not worried for one of two reasons:
1. You are not affected somehow (you have savings, connections, are not living paycheck to paycheck, and have food on the table).
2. You prefer not to trouble yourself with complicated matters.
Time will tell; it's showing it already.
Agree. I feel like most of the people sounding the alarm have been in the software-focused job hunting market for 6+ months.
Those who downplay it are either business owners themselves or have been employed for 2+ years.
I think a lot of software engineers who _haven't_ looked for jobs in the past few years don't quite realize what the current market feels like.
Alternatively: this is an America problem. I'm outside of America and I've been fielding more interviews than ever in the past 3 months. YMMV but the leading indicator of slowed down hiring can come from so many things. Including companies just waiting to see how much LLMs affect SWE positions.
Even people in category #1 should be concerned. Even if their income is not directly affected, the potential for disruption is clearly brewing: mass unemployment, social and civil unrest.
I know smart and capable people that have been unemployed for 6+ months now, and a few much longer. Some have been through multiple layoffs.
I am presently employed, but have looked for a job. The market is the worst I've seen in my almost 30 year career. I feel deeply for anyone who needs a new job right now. It is really bad out there.
For 1, unless you already have a self-sustaining underground bunker or island, you will be affected, no matter how much savings and total compensation you have. If you went out to get groceries in the last week, it will affect you.
The take that I am increasingly believing is that Software Engineers should broadly be worried, because while there will always be demand for people who can create software products, whatever the tools may be, the skills necessary to do it well are changing rapidly. Most Software Engineers are going to wake up one day and realize their skills aren't just irrelevant, but actively detrimental, to delivering value out of software.
There will also be far fewer positions demanding these skills. Easy access to generating code has moved the bottleneck in companies to positions & skills that are substantially harder to hire for (basically: Good Judgement); so while adding Agentic Sorcerers would increase a team's code output, it might be the wrong code. Corporate profit will keep scaling with slower-growing team sizes as companies navigate the correct next thing to build.
> Bottlenecks rule everything around me
The self-setup here is too obvious.
This is exactly why man + machine can be much worse than just machine. A strong argument needs to address what we, as an extremely slow-operating, slow-learning, and slow-adapting species, can do that machines, which improve in ability and efficiency monthly and annually, will find they cannot do well or cannot do without.
It is clear that we are going through a disruptive change, but COVID is not comparable. Job loss is likely to have statistics more comparable to the Black Plague. And sensible people are concerned it could get much worse.
I don’t have the answers, but acknowledging and facing the uncertainty head on won’t make things worse.
The Black Plague's capital-concentration aftermath supposedly fueled the Renaissance and the ascension of the city-states, and ultimately the great land discoveries of the 15th and 16th centuries.
Not sure if there's an analogy to make somewhere though
I believe the Black Plague actually caused a massive labor shortage and wages increased. When a huge number of people die and you still need people to build bridges, be soldiers, and finish building the damn cathedral that's been under construction for the last 400 years, then that is what will happen.
Here's an article:
https://history.wustl.edu/news/how-black-death-made-life-bet...
I meant the jobs die. So I am not sure what would stand in for "labor shortage" in a situation of sustained net job losses. Perhaps a growth opportunity for mannequins to visually fill the offices/shops of the fired, and maintain appearances?
But yes, if lots of people deathed by AI, the remaining humans might have more job security! Could that be called a "soft landing"?
Ahh I see what you mean.
> Job loss is likely to have statistics more comparable to the Black Plague.
Maybe this is overly optimistic, but if AI starts to have negative impacts on average people comparable to the plague, it seems like there's a lot more that people can do. In medieval Europe, nobody knew what was causing the plague and nobody knew how to stop it.
On the other hand, if AI quickly replaces half of all jobs, it will be very obvious what and who caused the job loss and associated decrease in living standards. Everybody will have someone they care about affected. AI job loss would quickly eclipse all other political concerns. And at the end of the day, AI can be unplugged (barring robot armies or Elon's space-based data centers I suppose).
Maybe you should be a little worried. A healthy fear never killed anyone.
I mean - anxiety definitely kills people, right?
Is it "healthy fear" if it turns out to be a fatal dose?
"For quality of life, it is better to err on the side of being an optimist and wrong, rather than a pessimist and right." -Elon Musk
Profound quotes are only profound when said by someone who's widely respected.
Is that true? I’m not so sure. In the 1950s I could have been optimistic that asbestos won’t give people cancer.
“Some of you may die, but that’s a risk I’m willing to make” -also Elon Mush probably
Optimism is a luxury for those who won't be the ones paying for the mistake.
I'm optimistic that my favorite team will play well this season.
I ain't paying for shit.
No it's not a February 2020 moment for sure. In February 2020, most people had heard of COVID and a few scattered outbreaks happened, but people generally viewed the topic as more of a curiosity (like major world news but not necessarily something that will deeply impact them). This is more like start of March 2020 for general awareness.
I read that essay on Twitter the other day and thought that it was a mildly interesting expression of one end of the "AI is coming for our jobs" thing but a little slop-adjacent and not worth sharing further.
And it's now at 80 million views! https://x.com/mattshumer_/status/2021256989876109403
It appears to have really caught the zeitgeist.
I just skimmed this, and the so-called zeitgeist here is fear. People are scared, it's a material concern, and he effectively stoked it.
I work on this technology for my job, and while I'm very bullish, pieces like that are, as you said, slop-ish, and, as I'll say, breathless, because there are so many practical challenges standing between what is being said there and where we are now.
Capability is not evenly distributed, and it's getting people into loopy ideas of just how close we are to certain milestones. Not that it's wrong to think about those potential milestones, but I'm wary of timelines.
Are you ever concerned about the consequences of what you are making? No one really knows how this will play out and the odds of this leading to disaster are significant.
I just don't understand people working on improving AI. It just isn't worth the risk.
Of course. I think about this at least once a week, maybe more often. I think the technology overall will be a great net benefit to humanity, or I wouldn't touch it.
Let me get something straight: That essay was completely fake, right? He/It was lying about everything, and it was some sort of... what?
Did the 80 million people believe what they were reading?
Have we now transitioned to a point where we gaslight everyone for the hell of it just because we can, and call it, what, thought-provoking?
The guy is a fraud https://venturebeat.com/ai/new-open-source-ai-leader-reflect...
What was fake? I don't see anything controversial or factually wrong. I question the prediction but that's his opinion.
Yes. It’s an ad for his product, which nobody had heard of before. I’m not on twitter but I’m seeing it pretty much everywhere now.
> Did the 80 million people believe what they were reading?
Those numbers are likely greatly exaggerated. Twitter is nowhere near where it was at its peak. You could almost call it a ghost town. Linkedin but for unhinged crypto- and AI bros.
I'm sure the metrics report 80 million views, but that's not 80 million actual individuals that cared about it. The narrative just needs these numbers to get people to buy into the hype.
Well, the zeitgeist is that our brains are so fried that such a piece of mediocre writing, penned by a GPT-container startupper, can surge to the top.
This is what they get for not reading our antislop paper (ICLR 2026) and using our anti-slopped sampler/models, or Kimi (which is remarkably non-sloppy, relatively speaking).
https://arxiv.org/abs/2510.15061
I thought normies would have caught on to the em dash, overuse of semicolons, overuse of fancy quotes, lack of exclamation marks, "It's not X, it's Y", etc. Clearly I was wrong.
I’m not worried about job loss as a result of being replaced by AI, because if we get AI that is actually better than humans - which I imagine must be AGI - then I don’t see why that AI would be interested in working for humans.
I’m definitely worried about job loss as a result of the AI bubble bursting, though.
Related discussions on the essay mentioned:
https://news.ycombinator.com/item?id=46973011
https://news.ycombinator.com/item?id=46974928
The advent of AI may shape up to be just like the automobile.
At first, it's a pretty big energy hog and if you don't know how to work it, it might crash and burn.
After some time, the novelty wears off. More and more people begin using it because it is a massive convenience that does real work. Luddites who still walk or ride their bikes out of principle will be mocked and scoffed at.
Then the mandatory compliance will come. A government-issued license will be required to use it and track its use. This license will be tied to your identity and it will become a hard requirement for employment, citizenship, housing, loans, medical treatment, and more. Not having it will be a liability. You will be excluded from society at large if you do not comply.
Last will come the AI-integrated brain-computer interface. You won't have any choice when machine-gun-wielding Optimus robots corral you into a self-driving Tesla bus to the nearest FEMA camp to receive your Starlink-connected Neuralink N1 command-and-control chip. You will be decapitated if you refuse the mark of the beast. Rev 20:4
> This license will be tied to your identity and it will become a hard requirement for employment, citizenship, housing, loans, medical treatment, and more. Not having it will be a liability. You will be excluded from society at large if you do not comply.
That's just an American thing; I've never owned a car, and most people my age that I know haven't either.
That's fair. The public infrastructure in other places around the world is a lot more hospitable to other methods of transportation.
> it’s been viewed about 100 million times and counting
That's a weird way of saying 80 million times.
I'm one of those developers who is now writing probably ~80% of my code via Claude. For context, I have >15 years of experience and am former AWS, so I'm not a bright-eyed junior or a former product manager who now believes themselves a master developer.
I'm not worried about AI job loss in the programming space. I can use Claude to generate ~80% of my code precisely because I have so much experience as a developer. I intuitively know what is a simple mechanical change (that is to say, uninteresting editing of lines of code) as opposed to a major architectural decision. Claude is great at doing uninteresting things. I love it because that leaves me free to do interesting things.
You might think I'm being cocky. But I've been strongly encouraging juniors to use Claude as well, and they're not nearly as successful. When Claude suggests they do something dumb--and it DOES still suggest dumb things--they can't recognize that it's dumb. So they accept the change, then bang their head on the wall as things don't work, and Claude can't figure it out to help them.
Then there are bad developers who are really fucked by Claude. The ones who really don't understand anything. They will absolutely get destroyed as Claude leads them down rabbit holes. I have specific anecdotes about this from people I've spoken to. One had Claude delete a critical line in an nginx config for some reason and the dev spent a week trying to resolve it. Another was tasked with doing a simple database maintenance script, and came back two weeks later (after constant prodding by teammates for a status update) with a Claude-written reimplementation of an ORM. That developer just thought they would need another day of churning through Claude tokens to dig themselves out of an existential hole. If you can't think like a developer, these tools won't help you.
I have enough experience to review Claude's output and say "no, this doesn't make sense." Having that experience is critical, especially in what I call the "anti-Goldilocks" zone. If you're doing something precise and small-scoped, Claude will do it without issues. If you try to do something too large ("write a Facebook for dogs app") Claude will ask for more details about what you're trying to do. It's the middle ground where things are a problem: Claude tries to fill in the details when there's something just fundamentally wrong with what it's being asked.
As a concrete example, I was working on a new project and I asked Claude to implement an RPC to update a database table. It did so swimmingly, but also added a "session.commit()" line... just kind of in the middle of somewhere. It was right to do so, of course, since the transaction needed to be committed. And if this app were meant to be a prototype, sure. But anyone with experience knows that randomly doing commits in the middle of business logic code is a recipe for disaster. The issue, of course, was not having any consistent session management patterns. But a non-developer isn't going to recognize that that's an issue in the first place.
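If anyone's wondering what a "consistent session management pattern" looks like here, a minimal sketch follows; it's SQLAlchemy-flavored, and the Widget model and handler are hypothetical stand-ins, not the actual code:

    from contextlib import contextmanager
    from sqlalchemy import create_engine
    from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column, sessionmaker

    class Base(DeclarativeBase):
        pass

    class Widget(Base):                      # hypothetical table the RPC updates
        __tablename__ = "widgets"
        id: Mapped[int] = mapped_column(primary_key=True)
        name: Mapped[str] = mapped_column(default="")

    engine = create_engine("sqlite://")      # in-memory DB just for the sketch
    Base.metadata.create_all(engine)
    SessionLocal = sessionmaker(bind=engine)

    @contextmanager
    def session_scope():
        """Commit or roll back exactly once, at one well-defined boundary."""
        session = SessionLocal()
        try:
            yield session
            session.commit()
        except Exception:
            session.rollback()
            raise
        finally:
            session.close()

    def update_widget(widget_id: int, new_name: str) -> None:
        # Business logic never calls commit(); the boundary above does.
        with session_scope() as session:
            widget = session.get(Widget, widget_id)
            if widget is not None:
                widget.name = new_name

The point isn't this exact pattern; it's that commits happen in one agreed-upon place instead of wherever the model happened to drop one.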
Or a sillier example from the same RPC: the gRPC API didn't include a database key to update. A mistake on my part. So Claude's initial implementation of the update RPC was to look at every row in the table and find ones where the non-edited fields matched. Makes... sense, in a weird roundabout way? But God help whoever ends up vibe coding something like that.
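A stripped-down illustration of why that fallback is dangerous (everything here is made up; just dicts standing in for rows):

    rows = [
        {"id": 1, "name": "daily-run", "status": "ok"},
        {"id": 2, "name": "daily-run", "status": "ok"},  # identical on non-key fields
    ]

    def update_by_matching_fields(name, new_status):
        # What the model reached for when the request carried no key:
        # touch every row whose other fields happen to match.
        matched = [r for r in rows if r["name"] == name]
        for r in matched:
            r["status"] = new_status
        return len(matched)

    def update_by_id(row_id, new_status):
        # What the request should have made possible: address exactly one row.
        for r in rows:
            if r["id"] == row_id:
                r["status"] = new_status
                return 1
        return 0

    print(update_by_matching_fields("daily-run", "failed"))   # 2 rows changed (oops)
    print(update_by_id(1, "ok"))                              # 1 row changed

The match-by-fields version "works" on a toy table and quietly corrupts data the first time two rows look alike, which is exactly the kind of thing a vibe coder won't notice until much later.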
This type of AI fear comes from things like this in the original article:
> I'll tell the AI: "I want to build this app. Here's what it should do, here's roughly what it should look like. Figure out the user flow, the design, all of it." And it does. It writes tens of thousands of lines of code. [...] when I test it, it's usually perfect.
Which is great. How many developers are getting paid full-time to make new apps on a regular basis? Most companies, I assume, only build one app. And then they spend years and many millions of dollars working on that app. "Making a new app from scratch" is the easy part! What's hard is adding new features to that app while not breaking others, when your lines of code go from those initial tens of thousands to tens of millions.
There's something to be said about the cheapness of making new software, though. I do think one-off internal tools will become more frequent thanks to AI support. But developers are still going to be the ones driving the AI, as the article says.