I can't find the article anymore, but I remember reading, almost 10 years ago, an article in The Economist saying that the result of automation was not the removal of jobs but more work + fewer junior positions.
The example they gave was that search engines + digital documents cut junior lawyer headcount by a lot. Prior to digital documents, a fairly common junior lawyer task was: "We have an upcoming court case. Go to the (physical) archive and find past cases relevant to the current case. Here are the things to check for:" and this task would be assigned to a team of juniors (3-10 people). Now one junior with a laptop suffices. As a result, the firm can also manage more cases.
Dwarkesh had a good interview with Zuck the other week. And in it, Zuck had an interesting example (that I'm going to butcher):
FB has long wanted to have a call center for its ~3.5B users. But that call center would automatically be the largest in history and cost ~$15B/yr to run, which is cost-ineffective in the extreme. But with FB's internal AIs, they're starting to think that a call center may be feasible. Most of the calls are going to be 'I forgot my password' and 'it's broken' anyways. So having a robot guide people through the FAQs in 50+ languages is perfectly fine for ~90% (Zuck's number here) of the calls. Then the harder calls can actually be routed to a human.
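In code, the triage he's describing is basically an intent classifier with a confidence gate in front of a human queue. A toy sketch of that shape (nothing Meta has described; the keyword matcher and the FAQS table are stand-ins for a real model and a real knowledge base):

    # Toy sketch of FAQ-first call triage: answer the easy ~90%
    # automatically, hand the rest to a human. A real system would use
    # an LLM classifier instead of keyword matching.
    FAQS = {
        "password": "To reset your password, use the account recovery page.",
        "broken": "Try updating the app, then send us a crash report.",
    }
    CONFIDENCE_FLOOR = 0.8  # below this, don't trust the bot's guess

    def classify(transcript: str) -> tuple[str | None, float]:
        for keyword in FAQS:
            if keyword in transcript.lower():
                return keyword, 0.9
        return None, 0.0

    def handle_call(transcript: str) -> str:
        intent, confidence = classify(transcript)
        if intent and confidence >= CONFIDENCE_FLOOR:
            return FAQS[intent]                    # the easy ~90%
        return "Routing you to a human agent..."   # the hard ~10%

    print(handle_call("I forgot my password"))     # bot handles it
    print(handle_call("My account was hijacked"))  # goes to a human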
So, to me, this is a great example of how the interaction of new tech and labor is a fractal, not a hierarchy: with each new tech your specific labor sector finds, you get this fractalization of the labor. Zuck would never have built a call center before, denying that labor to many people. But this new tech allows for a call center that looks a lot like the old one, just with only the hard problems. It's smaller, yes, but it looks the same and yet is slightly different (hence a fractal).
Look, I'm not going to argue that tech is disruptive. But what I am arguing is that tech makes new jobs (most of the time); it's just that these new jobs tend to deal with much harder problems. Like, we're pushing the boundaries here, and that boundary gets more fractal-y, and it's a more niche and harder working environment for your brain. The issue, of course, is that, like with a grad student, you have to trust that the person working at the boundary is actually doing work and not just blowing smoke. That issue, the one of trust, I think is the key issue to 'solve'. Cal Newport talks a lot about this now: how these knowledge-worker tasks really don't produce much for a long time, and then have these spurts of genius. It's a tough one, and not an intellectual enterprise, but an emotional one.
I worked in automated customer support, and I agree with you. By default, we automated 40% of all requests. It gets harder after that, not because the problems in the next 40% are any different, but because they come wrapped in unnecessary complexity.
A customer who wants to track the status of their order will tell you a story about how their niece is visiting from Vermont and they wanted to surprise her for her 16th birthday. It's hard because her parents don't get along as they used to after the divorce, but they are hoping that this will at the very least put a smile on her face.
The AI will classify the message as order tracking correctly, and provide all the tracking info and timeline. But because of the quick response, the customer will write back to say they'd rather talk to a human and ask for a phone number they can call.
The remaining 20% can be resolved by neither human nor robot.
Between the lines, you highlight a tangential issue: execs like Zuckerberg think the easy/automatable stuff is 90%. People with skin in the game know it is much less (40% per your estimate). This isn't unique to LLMs. Overestimating the benefit of automation is a time-honored pastime.
Yeah I think I do already see this happening in my work. It's clearly very beneficial, but its benefit is also overestimated. This can lead to some disenchantment and even backlash where people conclude it's all useless.
But it isn't! It's very useful. Even if it isn't eliminating 90% of work, eliminating 40% is a huge benefit!
I’ve noticed this when trying to book a flight with American Airlines earlier this year. Their website booking was essentially broken, insisting that one of my flight segments was fully booked but giving no indication of which one and attempting alternate bookings which replaced each of the segments in turn still failed. They’d replaced most of their phone booking people with an AI system that also was nonfunctional and wanted to direct me to the website to book. After a great deal of effort, I managed to finally reach a human being who was able to place the booking in a couple minutes (and, it turned out, at a lower price than the website had been quoting).
I never call a customer service line unless the website doesn't work, but customer service robots try very hard to get me to hang up and go to the website.
It's super frustrating. These robots need to have an option like "I am technically savvy and I tried the website and it's broken."
This reminds me of how Klarna fired a large part of their customer support department to replace it with AI, only to eventually realize they couldn't do the job primarily using AI and had to rehire a ton of people.
OT: just googled that name; the info panel on the right in my language settings categorizes it as "金融の連鎖", or "cascading of finances". I'm not sure how to take that.
Their business model is an online payment provider (like PayPal or Apple Pay) that splits the payment into 3, 6, or 12 monthly payments, usually at 0% interest.
The idea being that, for the business, the loss in revenue from an interest-free loan is worth it if it causes an increase in sales.
> A customer who wants to track the status of their order will tell you a story about how
I build NPCs for an online game. A non-trivial percentage of people are more than happy to tell these stories to anything that will listen, including an LLM. Some people will insist on a human, but an LLM that can handle small talk is going to satisfy more people than you might think.
> But because of the quick response, the customer will write back to say they'd rather talk to a human
Is this implying it's because they want to wag their chins?
My experience recently with moving house was that most services I had to call had some problem the robots didn't address. Fibre was listed as available on the website, but then it crashed when I tried "I'm moving home": it turns out it's available in the general area but not for the specific row of houses (had to talk to a human to figure that out). Water company: I had an account at house N-2, but at N-1 water was included, so the system could not move me from my N-1 address (no water bills) to house N (water bill). Pretty sure there was something about power and council tax too. With the last one I just stopped bothering, figuring it's the one thing where they would always find me when they're ready (they got in touch eventually).
The world is imperfect and we are pretty good at spotting the actual needle in the haystack of imperfection. We are also good at utilizing a whole range of disparate signals + past experience to make reasonably accurate decisions. It'll take some working for AI to be able to successfully handle such things at a large scale - this is all still frontier days of AI.
They don’t care about you. You are a number on a screen that happens to pay their company money sometimes. But by using recorded voices, the company hopes to tap into the empathetic part of your human brain to subconsciously make excuses for their crappy service.
When I get stellar customer service these days, I'm happy and try to call it out, but I don't expect it anymore. My first expectation is always AI slop or a shitty phone tree. When I reframed it for myself, it was a lot easier not to get frustrated about something I can't control and not to blame a person who doesn't exist.
Zuck also said that AI is going to start replacing senior software engineers at Meta in 2025. His job isn’t to state objective facts but hype up his company’s products and share price.
Honestly I hope this is true. I recognize this is a risky thing to say, for my own employment prospects as a software engineer. But if companies like Facebook could run their operations with fewer engineers, and those people could instead start or join a larger diversity of smaller businesses, that would be a positive development.
I do think we're going to see less employment for "coding" but I remain optimistic that we're going to see more employment for "creating useful software".
So would everyone that ever created a business. Nobody grows headcount if they don't have to. Why be responsible for other people's livelihoods if you can make it work with fewer people? Just more worries and responsibilities.
From my experience in corporations, this is a false statement. The goal of each manager is to grow their headcount: the more people under you, the more weight you have and the higher the position you get.
There is a difference between business owners (who don't want to spend money unless they have to) and managers (who want career growth and are not necessarily worried about the company's bottom line w.r.t. headcount).
Most major corporations have increased head count in recent years when they didn’t have to via the creation of DEI roles. These positions might look good in the current cultural moment but add nothing to a company’s bottom line and so are an unnecessary drain on resources.
This doesn't seem true to me at all. Humans are not rational drones that analyze the business and coldly determine the required number of people. I would be surprised if CEOs didn't keep people around because it felt good to be a boss.
Facebook might be able to operate with half the headcount, but then Zuckerberg wouldn't be the boss of as many people, and I think he likes being the boss.
He can definitely fire most people at Facebook. He just doesn't because it would be like not providing a simple defense against a pawn move on a Chess board. No point in not matching the opposition's move if you can afford it. They hire, we hire, they fire, we fire.
Because things would happen on the platform that would be bad PR. Availability might even go down. Who knows what kind of automated things need to be kept in check daily.
You are showing your own biases here. Twitter as it was did cease to exist. In its place is a platform mostly free of censorship and with new features added.
I’d rather see humanity in all of its good, bad, and ugly than have a feed sanitized for me by random Twitter employees who in many cases had their own agenda.
I would rather not see hate speech and incitement to violence online. And if you think that Twitter in its current form doesn't have a hidden agenda... that is a very naive belief to hold. Censorship is not the only negative thing that can happen to information. We should all have learned that lesson by now.
> Most of the calls are going to be 'I forgot my password' and 'it's broken' anyways. So having a robot guide people along the FAQs in the 50+ languages is perfectly fine for ~90% (Zuck's number here) of the calls.
No it isn't. Attempts to do this are why I mash 0 repeatedly and chant "talk to an agent" after being in a phone tree for longer than a minute.
> And you don't think that this won't improve with better bots?
Actually, now that I think about it, yeah.
The whole purpose of the bots is to deflect you from talking to a human. For instance: Amazon's chatbot. It's gotten "better": now when I need assistance, it tries three times to deflect me from a person after it's already agreed to connect me to one.
Anything they'll allow the bot to do can probably be done better by a customer-facing webpage.
Maybe for you, but not for most people. Most people have problems that are answered online, but knowledge sites are hard to navigate, and they can't solve their own problems.
A high quality bot to guide people through their poorly worded questions will be hugely helpful for a lot of people. AI is quickly getting to the point that a very high quality experience is possible.
The premise is also that the bots are what enable the people to exist. The status quo is no interactive customer service at all.
This sounds to me like something that's better solved by RAG than by an AI manned call center.
Let's use Zuck's example, the lost password. Surely that's better solved with a form where you type things, such as your email address. If the problem is navigation, all we need to do is hook up a generative chat bot to the search function of the already existing knowledge site. Then you can ask it how to reset your password, and it'll send you to the form and write up instructions. The equivalent over a phone call sounds worse than this to me.
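A minimal sketch of that wiring, assuming the site already has searchable articles (the two-entry KB and the word-overlap retrieval here are toy stand-ins for the real search index and an actual LLM):

    # Sketch: ground a support bot in the existing knowledge site.
    # Retrieve the best-matching article, then answer around it.
    KB = [
        {"title": "Reset your password", "url": "/help/reset-password",
         "body": "Use the reset form and follow the emailed link."},
        {"title": "Report a broken feature", "url": "/help/report-bug",
         "body": "Describe what you did and what happened instead."},
    ]

    def search_kb(query: str) -> dict:
        # Naive retrieval: rank articles by word overlap with the query.
        words = set(query.lower().split())
        return max(KB, key=lambda a: len(words & set(
            (a["title"] + " " + a["body"]).lower().split())))

    def answer(query: str) -> str:
        article = search_kb(query)
        # A real bot would prompt an LLM with the article as context;
        # the point is the answer stays grounded in the site's own docs.
        return f"{article['body']} See {article['url']} for the form."

    print(answer("how do I reset my password"))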
I think Zuck is wrong that 90% of the problems people would call in for can easily be solved by an AI. I was stuck in a limbo with Instagram for about 18 months, where I was banned for no clear reason, there was no obvious way to contact them about it, and once I did find a way, we proceeded with a weird dance where I provided ID verification, they unbanned me, and then they rebanned me, and this happened a total of 4 times before the unban process actually worked. I don't see any AI agent solving this; the cause was obviously process and/or technical problems at Meta. This is the only thing I ever wanted to call Meta for.
And there is another big class of issue that people want to call any consumer-facing business for, which AI can't solve: loneliness. The person is retired and lives alone and just wants to talk to someone for 20 minutes, and uses a minor customer service request as a justification. This happens all the time. Actually an AI can address this problem, but it's probably not the same agent we would build for solving customer requests, and I say address rather than solve as AI will not solve society's loneliness epidemic.
I try to enunciate very clearly: "What would you like to do?" - "Speak to a fcuking human. Speak to a fcuking human. Speak to a fcuking human. Speak to a fcuking human."
But there's also consolidation happening: Not every branch that is initially explored is still meaningful a few years later.
(At least that's what I got from reading old mathematical texts: People really delved deeply into some topics that are nowadays just subsumed by more convenient - or maybe trendy - machinery)
Yeah, I was a little incredulous about what Zuck said there too.
Like, if AI is so good, then it'll just eat away at those jobs and get asymptotically close to 100% of the calls. If it's not that good, then you've got to loop in the product people and figure out why everyone is having a hard time with whatever it is.
Generally, I'd say that calls are just another feedback channel for the product. One that FB has thus far been fine without consulting, so I can't imagine its contribution can be all that high. (Zuck also goes on to talk about the experiments they run on people with FB/Insta/WA, and woah, it is crazy unethical stuff he casually throws out there to Dwarkesh)
Still, to the point here: I'm still seeing AI mostly as a tool/tech, not something that takes on an agency of its own. We, the humans, are still the thing that says 'go/do/start', the prime movers (to borrow a long-held and false bit of ancient physics). The AIs aren't initiating things, and it seems to a large extent we're not going to want them to. Not out of a sense of doom or lack of greed, but simply because we're more interested in working at the edge of the fractal.
"I'm still seeing Ai mostly as a tool/tech, not something that takes on an agency of it's own."
I find that to be a highly ironic thing. It basically says AI is not AI. Which we all know it is not yet, but then we can simply say it: The current crop of "AI" is not actually AI. It is not intelligence. It is a kind of huge encoded, non-transparent dictionary.
As someone who has been involved with customer support (on the in-house tech side): the vast majority of contacts to a CS team will be very inane or extremely inane. If you can automate away the lowest tier of support with LLMs, you'll improve response times not just for the simple questions but also for the hard ones.
I have had the problem with customer support that about 90% of the calls/chats I have placed should have been automated (on their side), and the remaining 10% needed escalation beyond the "customer service" escalation ladder. In America, sadly, that means one of two things: (1) you call a friend who works there or (2) you have your lawyer send a demand letter requesting something rather inane.
I agree with that common pattern, but even without [current] AI there were ways to automate/improve the lowest tier: very often I don't find answers to my basic questions in the typical corporation's FAQ.
I usually assume that's because they do not want to answer those basic questions, or want to hide the answers. For example, some shop with no answer in the FAQ for how refunds work. Instant sus.
Isn't this literally just "productivity growth"? You (and I think the article) are describing the ability to do more work with the same number of people, which seems like the economic definition of productivity.
This is like a mini parallel of the industrial revolution.
A lot of places started with a large and unskilled workforce, getting into e.g. the textile industry (which brings better RoI than farming). Then the automation arrives and leaves a lot of people jobless (still being unskilled), while there are new jobs in maintaining the machinery etc.
I don't know about lawyering, but with engineering research, I can now ask ChatGPT's Deep Research to do a literature review on any topic. This used to take time and effort.
Definitely. When computers came out, jobs increased. When the Internet became widely used, jobs increased. AI is simply another tool.
The sad part is: do you think we'll see this productivity gain as an opportunity to stop the culture of overworking? I don't think so. I think people will expect more from others because of AI.
If AI makes employees twice as efficient, do you think companies will decrease working hours or cut their employment in half? I don't think so. It's human nature to want more. If 2 is good, 4 is surely better.
So instead of reducing employment, companies will keep the same number of employees because that's already factored into their budget. Now they get more output to better compete with their competitors. To reduce staff would be to be at a disadvantage.
So why do we hear stories about people being let go? AI is currently a scapegoat for companies that were operating inefficiently and over-hired. It was already going to happen. AI just gave some of these larger tech companies a really good excuse. They weren't exactly going to admit they made a mistake and over-hired, now were they? Nope. AI was the perfect excuse.
As with all things, it's cyclical. Hiring will go up again. The AI boom will bust. On to the next thing. One thing is certain though: we all now have a fancy new calculator.
Well, I think productivity gains should correlate with stock price growth.
If we want stock prices to increase exponentially, sales must also grow exponentially, which means we need to become exponentially more productive.
We can stop that — or we can stop tying company profitability to stock prices, which is already happening to some extent.
And when we talk about 'greedy shareholders,' remember that often means pension funds - essentially our savings and our hope for a decent retirement, assuming productivity continues to grow exponentially.
I do not agree; I think the book is much more interesting than the article. For example, the types of jobs such as Box Tickers and Flunkies, as well as some really interesting anecdotes.
Loved the article. Bought the book. I put it down before halfway, as it was poorly written longform page-filling. If it came out today, I would totally understand someone calling it AI slop. "Here's my popular article, re-write it book-length for me."
I feel like people in the comments are misunderstanding the findings in the article. It’s not that people save time with AI and then turn that time to novel tasks; it’s that perceived savings from using AI are nullified by new work which is created by the usage of AI: verification of outputs, prompt crafting, cheat detection, debugging, whatever.
This seems observationally true in the tech industry, where the world’s best programmers and technologists are tied up fiddling with transformers and datasets and evals so that the world’s worst programmers can slap together temperature converters and insecure twitter clones, and meanwhile the quality of the consumer software that people actually use is in a nosedive.
I think the software quality nosedive significantly predates generative AI.
I think it's too early to say whether AI is exacerbating the problem (though I'm sympathetic to the view that it is) or improving it, or just maintaining the status quo.
The other night I was too tired to code so I decided to try vibe coding a test framework for the C/C++ API I help maintain. I've tried this a couple times so far with poor results but I wanted to try again. I used Claude 3.5 IIRC.
The AI was surprisingly good at filling in some holes in my specification. It generated a ton of valid C++ code that actually compiled (except it omitted the necessary #includes). I built and ran it and... the output was completely wrong.
OK, great. Now I have a few hundred lines of C++ I need to read through and completely understand to see why it's incorrect.
I don't think it will be a complete waste of time because the exercise spurred my thinking and showed me some interesting ways to solve the problem, but as far as saving me a bunch of time, no. In fact it may actually cost me more time trying to figure out what it's doing.
With all due respect to folks working on web and phone apps, I keep getting the feeling that AI is great for high level, routine sorts of problems and still mostly useless for systems programming.
> With all due respect to folks working on web and phone apps, I keep getting the feeling that AI is great for high level, routine sorts of problems and still mostly useless for systems programming.
As one of those folks, no it's pretty bad in that world as well. For menial crap it's a great time saver, but I'd never in a million years do the "vibe coding" thing, especially not with user-facing things or especially not for tests. I don't mind it as a rubber duck though.
I think the problem is that there are two groups of users: the technical ones like us, and then the managers and C-levels etc. They see it spit out a hundred lines of code in a second, and as far as they know (and care) it looks good, not realizing that someone now has to spend their time reviewing those 100 lines, plus carrying the burden of maintaining them into the future. But all they see is a way to get the pesky, expensive devs replaced, or at least a chance to squeeze more out of them. The system is so flashy and impressive-looking, and you can't even blame them for falling for the marketing and hype; after all, that's what all the AIs are being sold as: omnipotent and omniscient worker replacers.
Watching my non-technical CEO "build" things with AI was enlightening. He prompts it for something fairly simple, like a TODO List application. What it spits out works for the most part, but the only real "testing" he does is clicking on things once or twice and he's done and satisfied, now convinced that AI can solve literally everything you throw at it.
However if he were testing the solution as a proper dev would, he'd see that the state updates break after a certain amount of clicks, and that the list was glitching out sometimes, and that adding things breaks on scroll and overflows the viewport, and so on. These are all real examples of an "app" he made by vibe coding, and after playing around with it myself for all of 3 minutes I noticed all these issues and more in his app.
> With all due respect to folks working on web and phone apps, I keep getting the feeling that AI is great for high level, routine sorts of problems and still mostly useless for systems programming.
As someone working on routine problems in mainstream languages where training data is abundant, LLMs are not even great for that. Sure, they can output a bunch of code really quickly that on the surface appears correct, but on closer inspection it often uses nonexistent APIs, the logic is subtly wrong or convoluted for no reason, it does things you didn't tell it to do or ignores things you did, it has security issues and other difficult to spot bugs, and so on.
The experience is pretty much what you summed up. I've also used Claude 3.5 the most, though all other SOTA models have the same issues.
From there, you can go into the loop of copy/pasting errors to the LLM or describing the issues you did see in the hopes that subsequent iterations will fix them, but this often results in more and different issues, and it's usually a complete waste of time.
You can also go in and fix the issues yourself, but if you're working with an unfamiliar API in an unfamiliar domain, then you still have to do the traditional task of reading the documentation and web searching, which defeats the purpose of using an LLM to begin with.
To be clear: I don't think LLMs are a useless technology. I've found them helpful at debugging specific issues, and implementing small and specific functionality (i.e. as a glorified autocomplete). But any attempts of implementing large chunks of functionality, having them follow specifications, etc., have resulted in much more time and effort spent on my part than if I had done the work the traditional way.
The idea of "vibe coding" seems completely unrealistic to me. I suspect that all developers doing this are not even checking whether the code does what they want to, let alone reviewing the code for any issues. As long as it compiles they consider it a success. Which is an insane way of working that will lead to a flood of buggy and incomplete applications, increasing the dissatisfaction of end users in our industry, and possibly causing larger effects not unlike the video game crash of 1983 or the dot-com bubble.
> The idea of "vibe coding" seems completely unrealistic to me.
That's what happens to "AI art" too. Anyone as a non-artist can create images in seconds, and they will look kind of valid or even good to them, much like those "vibe coded" things look to CEOs.
AI is great at generating crap really fast and efficiently. Not so good at generating stuff that anyone actually needs and which must actually work. But we're also discovering that a lot of what we consume can be crap and be acceptable. An endless stream of generated synthwave in the background while I work is pretty decent. People wanting to decorate their podcasts or tiktoks with something that nobody is going to pay attention to, AI art can do that.
For vibe coding, right now it seems that prototyping and functional mockups are quite a viable use.
> With all due respect to folks working on web and phone apps, I keep getting the feeling that AI is great for high level, routine sorts of problems and still mostly useless for systems programming.
I agree. AI is great for stuff that's hard to figure out but easy to verify.
For example, I wanted to know how to lay out something a certain way in SwiftUI and asked Gemini. I copied what it suggested, ran it and the layout was correct. I would have spent a lot more time searching and reading stuff compared to this.
> OK, great. Now I have a few hundred lines of C++ I need to read through and completely understand to see why it's incorrect.
I think it's often better to just skip this and delete the code. The cool thing about those agents is that the cost of trying this out is extremely cheap, so you don't have to overthink it and if it looks incorrect, just revert it and try something else.
I've been experimenting with Junie for the past few days and have had a very positive experience. It wrote a bunch of tests for me that I'd been postponing for quite some time, and it was mostly correct from a single-sentence prompt. Sometimes it does something incorrect, but I usually just revert it and move on, and try something else later. There's definitely a sweet spot of tasks it does well, and you have to experiment a bit to find it.
> where the world’s best programmers and technologists are tied up fiddling with transformers and datasets and evals so that the world’s worst programmers can slap together temperature converters and insecure twitter clones
> slap together temperature converters and insecure twitter clones
because those "best programmers" don't want to be making temperature converters nor twitter clones (unless they're paid mega bucks). This enables the low paid "worst" programmers to do those jobs for peanuts.
Let's assume that I'm closer to the best programmers than the worst programmers, for a second; I definitely will build a temperature converter, at my usual hourly rate. I don't think we should consider any task "beneath us"; doing so detaches us from reality, makes us entitled, and ultimately stunts our growth.
But do we actually need more temperature converters? Maybe it would be better if they were hard to make such that people didn't waste their time, and the bad programmers went out and did some yard work.
Personally, having worked in professional enterprise software for ~7 years now I've come to a pretty hard conclusion.
Most software should not exist.
That's not even meant in the tasteful "it's a mess" way. From a purely money-making efficiency standpoint, upwards of 90% of the code I've written in this time has not meaningfully contributed back to the enterprise, and I've tried really hard to get that number lower. Mind you, this is professional software. If you consider the vibe-coder guys, I'd estimate that number MUCH higher.
It just feels like the whole way we've fit computing into the world is misaligned. We spend days building UIs that don't help the people we serve and that break at the first change to the process, and because of the support burden of that UI we never get to actually automate anything.
I still think computers are very useful to humanity, but we have forgotten how to use them.
And not only that, but most >>changes<< to software shouldn't happen, especially if it's user-facing. Half my dread in visiting support web sites is that they've completely rearranged yet again, and the same thing I've wanted five times requires a fifth 30 minutes of figuring out where they put it.
>it’s that perceived savings from using AI are nullified by new work which is created by the usage of AI:
I mean, isn't that obvious looking at economic output and growth? The Shopify CEO recently published a memo in which he claimed that high achievers saw "100x growth". Odd that this isn't visible in the Shopify market cap. Did they fire 99% of their engineers instead? Maybe the memo was AI-written too.
Are there any 5 man software companies that do the work of 50? I haven't seen them. I wonder how long this can go on with the real world macro data so divorced from what people have talked themselves into.
The state of consumer software is already so bad, and LLMs are trained on a good chunk of that, so their output can't possibly produce worse software, right? /s
Modern AI tools are amazing, but they're amazing like spell check was amazing when it came out. Does it help with menial tasks? Yes, but it creates a new baseline that everyone has and just moves the bar. There's scant evidence that we're all going to just sit on a beach while AI runs your company anytime soon.
There’s little sign of any AI company managing to build something that doesn’t just turn into a new baseline commodity. Most of these AI products are also horribly unprofitable, which is another reality that will need to be faced sooner rather than later.
It's got me wondering: does any of my hard work actually matter? Or is it all just pointless busy-work invented since the industrial revolution to create jobs for everyone, when in reality we would be fine if like 5% of society worked while the rest slacked off? I don't think we'd have as many videogames, but then again, we would have time to play, which I would argue is more valuable than games.
To paraphrase Lee Iacocca:
We must stop and ask ourselves, how much videogames do we really need?
> It's got me wondering: do any of my hard work actually matter?
I recently retired from 40 years in software-based R&D and have been wondering the same thing. Wasn't it true that 95% of my life's work was thrown away after a single demo or a disappointingly short period of use?
And I think the answer is yes, but this is just the cost of working in an information economy. Ideas are explored and adopted only until the next idea replaces them or the surrounding business landscape shifts yet again. Unless your job is in building products like houses or hammers (which evolve very slowly or are too expensive to replace), the cost of doing business today is a short lifetime for any product; they're replaced in increasingly fast cycles, useful only until they're no longer competitive. And this evanescent lifetime is especially the case for virtual products like software.
The essence of software is to prototype an idea for info processing that has utility only until the needs of business change. Prototypes famously don't last, and increasingly today, they no longer live long enough even to work out the bugs before they're replaced with yet another idea and its prototype that serves a new or evolved mission.
Will AI help with this? Only if it speeds up the cycle time or reduces development cost, and both of those have a theoretical minimum, given the time needed to design and review any software product has an irreducible minimum cost. If a human must use the software to implement a business idea then humans must be used to validate the app's utility, and that takes time that can't be diminished beyond some point (just as there's an inescapable need to test new drugs on animals since biology is a black box too complex to be simulated even by AI). Until AI can simulate the user, feedback from the user of new/revised software will remain the choke point on the rate at which new business ideas can be prototyped by software.
I think about this a lot with various devices I owned over the years that were made obsolete by smartphones. Portable DVD players and digital cameras are the two that stand out to me; each of them cost hundreds of dollars but only had a marketable life of about 5 years. To us these are just products on a shelf, but every one of them had a developer, an assembly line, and a logistics network behind them; all of these have to be redeployed whenever a product is made obsolete.
This is what makes software interesting. It theoretically works forever and has zero marginal production cost, but its durability is driven by business requirements and hardware and OS changes. Some software might have a 20-year life. Some might only get 6 months.
A house is way more durable. My house is older than all software, and I expect it to outlive most software written (either today or ever). Except Voyager, perhaps!
Yes... basically in life, you have to find the definition of "to matter" that you can strongly believe in. Otherwise everything feels aimless, life itself included.
The rest of what you ponder in your comment is the same. And I'd like to add that baselines have shifted a lot over the years of civilization. I like to think about one specific example: painkillers. Painkillers were not used during medical procedures in a widespread manner until some 150 years ago, maybe even later. Now it's much less horrible to participate in those procedures, for everyone involved really, and the outcomes are better for this factor alone, because the patient moves around less while anesthetized.
But even this is up for debate. All in all, it really boils down to what the individual feels is a worthy life. Philosophy is not done yet.
Well, from a societal point of view, meaningful work would be work that is necessary to either maintain or push that baseline.
Perhaps my initial estimate of 5% of the workforce was a bit optimistic, say 20% of current workforce necessary to have food, healthcare, and maybe a few research facilities focused on improving all of the above?
I'm pretty sure it's not impossible, but rather just improbable, because of how human nature works. In other words, we are not incentivized to do that, and that is why we don't do that, and even when we did, it always fell apart.
You are very right that AI will not change this. As neither did any other productivity improvement in the past (directly).
Power itself seems to be the goal, and the reason for that is human DNA, I think. I have doubts that we can build anything different from this (on a sufficiently long run).
Unless you propose slavery, how are you going to choose the 5%?
Who in their right mind would work when 95 out of 100 people around them are slacking off all day? Unless you pay them really well. So well that they prefer to work than to slack off. But then the slackers will want nicer things to do in their free time that only the workers can afford. And then you'd end up at the start.
Nope. The current system may be misdirecting 95% of labor, but until we have sufficiently modeled all of nature to provide perfect health and brought world peace, there is work to do.
> Don't think we'd have as many videogames, but then again, we would have time to play, which I would argue is more valuable than games.
Would we have fewer video games? If all our basic needs were met and we had a lot of free time, more people might come together to create games together for free.
I mean, look at how much free content (games, stories, videos, etc) is created now, when people have to spend more than half their waking hours working for a living. If people had more free time, some of them would want to make video games, and if they weren’t constrained by having to make money, they would be open source, which would make it even easier for someone else to make their own game based on the work.
Mine doesn't, and I am fine with that; I never needed such validation. I derive fulfillment from my personal life and the achievements and passions there, more than enough. Through that lens, office politics, the promotion rat race, and what people do in them just make me smile. Seeing how otherwise smart folks ruin (or miss out on) their actual lives and families in pursuit of excellence in a very narrow direction, often underappreciated by employers and not rewarded adequately. I mean, at a certain point you either grok the game and optimize, or you don't.
The work brings modest wealth over time, allows me and my family to live in a long-term safe place (Switzerland), and builds a small reserve for bad times (or inheritance, early retirement, etc.; this is Europe, no need to save up for kids' education or potentially massive healthcare bills). I don't need more from life.
Agree. Now I watch the rat racers with bemusement while I put in just enough to get a paycheck. I have enough time and energy to participate deeply in my children’s upbringing.
I’m in America so the paychecks are very large, which helps with private school, nanny, stay at home wife, and the larger net worth needed (health care, layoff risk, house in a nicer neighborhood). I’ve been fortunate, so early retirement is possible now in my early 40s. It really helps with being able to detach from work, when I don’t even care if I lose my job. I worry for my kids though. It won’t be as easy for them. AI and relentless human resources optimization will make tech a harder place to thrive.
>It's got me wondering: do any of my hard work actually matter?
It mattered enough for someone to pay you money to do it, and that money put food on the table and clothes on your body and a roof over your head and allowed you to contribute to larger society through paying taxes.
Is it the same as discovering that E = mc² or Jonas Salk's contributions? No, but it's not nothing either.
Most work is redundant and unnecessary. Take for example the classic gas-station-on-every-corner situation. This turf war between gas providers (or the franchisees they granted a license to for this location) is not because three or four gas stations are operating at maximum capacity. No, this is 3 or 4 fishermen with a line in the river, made possible solely because inputs (real estate, gas, labor, merchandise) are cheap enough that a gas station need not ever run even close to capacity and still returns a profit for the fisherman.
Who benefits from the situation? You or I, who don't have to make a U-turn to get gas at this intersection? Perhaps, but that is not much benefit compared to the opportunity cost of three prime corner lots squandered on the same single use. The clerk at the gas station, for having a job available? Perhaps, although their labor in aggregate might have been employed in other, less redundant uses that could benefit our society more than selling smokes and putting $20 on pump 4 at 3am. The real beneficiary of this entire arrangement is the fisherman: the owner or shareholder who ultimately skims from all the pots, thanks to having what is effectively a modern version of the plantation sharecropper, spending all their money in the company store and on company housing, with a fig leaf of being able to choose from any number of minimum-wage jobs, spend their wages in any number of national chain stores, and rent any number of increasingly investor-owned properties. Quite literally all owned by the same shareholders, when you consider how people diversify their investments across these sectors.
It's weird to read the same HN crowd that decries monopolies and extols the virtues of competition turn around and complain about job duplication and "bullshit jobs" like marketing and advertising that arise from competition.
I've been thinking similarly. Bertrand Russell once said there are two types of work: one, moving objects on or close to the surface of the Earth; two, telling other people to do so. Most of us work in buildings that don't actually manufacture or process anything. Instead, we process information that describes manufacturing and transport, or we create information for people to consume when they are not working (entertainment). Only a small fraction of human beings are actually producing things that are necessary for physiological survival. The rest of us are, at best, helping them optimize that process, or at worst, leeching off of them in the name of "management" of their work.
It's why executive types are all hyped about AI. Being able to code 2x more will mean they get 2x more things (roughly speaking), but the workers aren't going to get 2x the compensation.
Indeed. And AI does its work without those productivity-hindering things like need for recreation and sleep, ethical treatment, and a myriad of others. It's a new resource to exploit, and that makes everyone excited who is building on some resource.
AI can’t do our jobs today, but we’re only 2.5 years from the release of ChatGPT. The performance of these models might plateau today, but we simply don’t know. If they continue to improve at the current rate for 3-5 more years, it’s hard for me to see how human input would be useful at all in engineering.
Most software engineering jobs aren't about creativity, but about taking some requirements stated in a slightly vague fashion and actualizing them for the stakeholder to view and review (and adjust as needed).
The areas for which creativity is required are likely related to digital media software (like SFX in movies, games, and perhaps very innovative software). In these areas, surely the software developer working there will have the creativity required.
To the extent it’s measurable, LLMs are becoming more creative as the models improve. I think it’s a bold statement to say they’ll NEVER be creative. Once again, we’ll have to see. Creativity very well could be emergent from training on large datasets. But also it might not be. I recommend not speaking in such absolutes about a technology that is improving every day.
I agree, and I think most people would say the current models would rank low on creativity metrics however we define them. But to the main point, I don’t see how the quality we call creativity is unique to biological computing machines vs electronic computing machines. Maybe one day we’ll conclusively declare creativity to be a human trait only, but in 2025 that is not a closed question - however it is measured.
We were talking about LLMs here, not computing machines in general. LLMs are trained to mimic, not to produce novel things, so a person can easily think LLMs won't get creative even though some computer program in the future could.
The cost, in money or time, for getting certain types of work done decreases. People ramp up demand to fill the gap, "full utilization" of the workers.
It's a very old claim that the next technology will lead to a utopia where we don't have to work, or where we work drastically less. Time and again we prove that we don't actually want that.
My hypothesis (I'm sure it's not novel or unique) is that very few people know what to do with idle hands. We tend to keep stress levels high as a distraction, and tend to freak out in various ways if we find ourselves with low stress and nothing that "needs" to be done.
> It's a very old claim that the next technology will lead to a utopia where we don't have to work, or where we work drastically less. Time and again we prove that we don't actually want that.
It actually does, but due to the wrong distribution of the rewards gained from that tech (automation), it does not work out for the common folks.
Let's take a simple example: you, me, and 8 other HN users work in Bezos' warehouse. We each work 8h/day. Suddenly a new tech comes in, where each unit of the machine can do the work of 2-4 of us. If Bezos buys 4 units and sets each to work at 2x capacity, then 8 of us now have 8h/day x 5 days x 4 weeks = 160h of leisure a month.
Problem is, the 8 of us still need money to survive (food, rent, utilities, healthcare, etc.). So, according to the tech utopians, the 8 of us can now use those 160h of free time to focus on more important and rewarding work. (See, in the context of all the AI peddlers, how using AI will free us to do more important and rewarding work!) But to survive, my rewarding work turns out to be gig work, or something of the same effort or more hours.
So in practice, the owner controlling the automation gets more free time to attend interviews and political/social events, while the people automated away fall downward and have to work harder to maintain their survival. Of course, I hope our over-enthusiastic brethren who are paying LLM providers for the privilege of training their own replacements figure out the equation soon, and don't get sold on the "free time to do more meaningful work" line, the same way the Bezos warehouse gave some of us some leisure while the automation was coming online and needed a failsafe for a while. :)
I think a lot of people would be fine being idle if they had a guaranteed standard of living. When I was unemployed for a while, I was pretty happy in general but stressed about money running out. Without the money issue the last thing I would want to do is to sell my time to a soulless corporation. I have enough interests to keep me busy. Work just sucks up time I would love to spend on better things.
I would have said lazy rather than idle if that what I meant.
For most people, lazy implies that there are things you really ought to get done, but you're choosing to avoid doing them to the point where it's a problem that they still aren't taken care of.
Idle just means you don't feel like you have anything that needs to be done, you aren't avoiding things to the point that it causes a problem.
Of course our economic system prefers people to be "fully utilized" rather than idle, but who cares? I don't owe an economic system anything, we could change the system whenever we want, and ultimately an economy is only useful to analyze the comparative output that already happened - it has nothing to do with the present or future.
Food production is a classic case where, once productivity is high enough, you simply get fewer farmers.
We are currently a long way from that kind of change as current AI tools suck by comparison to literally 1,000x increases in productivity. So, in well under 100 years programming could become extremely niche.
We are seeing an interesting limit in the food case though.
We increased production and needed fewer farmers, but we now have so few farmers that most people have very little idea of what food really is, where it comes from, or what it takes to run our food system.
Higher productivity is good to a point, but eventually it risks becoming too fragile.
100%. In fact, this exact scenario is playing out in the cattle industry.
Screwworm, a parasite that kills cattle in days, is making a comeback. And we are less prepared for it this time, because previously (the 1950s-1970s) we had a lot more labor in the industry to manually check each head of cattle. Bloomberg even called it out specifically:
> Ranchers also said the screwworm would be much deadlier if it were to return, because of a lack of labor. “We can’t fight it like we did in the ’60s, we can’t go out and rope every head of cattle and put a smear on every open wound,” Schumann said.
This sounds like the kind of labor problem that could quickly be solved by hiring more people. So really the worst case here is that beef will cost a little more for a little while. Hardly an existential threat.
It's not just a problem needing a signature. We have the policies we have today because a lot of people want them, or at least agree with the general direction.
If the right person signs a change that magically fixes a labor shortage in a rural area we're right back to where we were, and much of the public would be up in arms about it.
(This doesn't actually reflect my opinion on immigration laws to be clear, just my view on where we are today in the US)
So, I mean, it could depend on your definition of productivity: if anything that increases shareholder returns at the expense of a good product or robust supply chain counts as more "productivity," sure. Just as monopolies are the most "productive" businesses ever for their shareholders, but generally awful for everyone else, and are not what most people would think of as productive.
The human definition of productivity is: fewer inputs producing more and better outputs.
The cartel-doublespeak definition is: the product got worse and the margins improved, which seems to describe US Big Ag at present.
At least in US agriculture, when they speak of productivity they generally refer to pounds per acre for crops. For livestock it's a bit less clear; they sometimes refer to pounds of feed relative to final live weight. You generally have to schedule a slaughter day months out and estimate the final weight; you don't get paid as well if you are too far off the weight in either direction. It's less common generally, but in the cattle industry I've heard the accuracy of hitting those targets talked about as productivity.
I agree with you on the doublespeak though. Really I think it's just a lack of the public understanding the meaning given to "productive" in the industry. The industry doesn't hide what it means by the word; most just don't care about any version of productive that measures things like nutrient value, sustainability, soil health, animal welfare, etc.
> Food production is a class case where once productivity is high enough you simply get fewer farmers.
Yes, but.
There are more jobs in other fields that are adjacent to food production, particularly in distribution. The middle class did not exist then, and retail workers are now a large percentage of workers in most parts of the world.
Sure, but when farmers were 90% of the labor force, many of the remaining 10% were also related to food distribution and production: a village blacksmith mostly worked in support of farming, salt production/transport for food storage, etc.
Food is just a smaller percentage of the economy overall.
Was there ever a time when 90% of labor was in farming and we had anything resembling an economy?
I would have assumed that if 90% of people are farming, it's largely subsistence, and any trade happened on a much more local scale, potentially without any proper currency involved.
Globally perhaps not, as fishing and hunting were major food sources in antiquity, especially when you include North America etc. Similarly, slavery meant a significant portion of the population was in effect outside the economy.
That said, there’s been areas where 90% of the working population was at minimum helping with the harvest up until the Middle Ages.
Nope. Where a family might have struggled to efficiently manage 50 acres under continuous cultivation even just a few hundred years ago, it's now not uncommon to see single-family farms with 20,000 acres, each of which is several times more productive.
It’s somewhat arbitrary where you draw the line historically, but it’s not just maximum productivity; it’s worth remembering that crops used to fail from drought etc. far more frequently.
Small hobby farms are also a thing these days, but that’s a separate issue.
For those 20,000 acre farms, by what measure are they more productive?
In my experience they're very productive by poundage yield, but horribly unproductive when it comes to inputs required, chemicals used, biodiversity, soil health, etc.
The difference vs. historic methods is so extreme that you can skip pesticides, avoid harming soil health or biodiversity, etc., without any issues and still be talking 1,000x.
Though really growing crops for human consumption is something of a rounding error here. It’s livestock, biofuels, cotton, organic plastics, wood, flowers, etc that’s consuming the vast majority of output from farms.
If that's the metric, sure we have gotten very good at producing more pounds of food per human hour of labor.
Two things worth noting, though: pounds of food say little about the nutritional value to consumers. I don't have good links handy, so I won't make any specific claims; it's just worth considering whether weight is the right metric.
As far as human labor hours go, we've gotten very good at outsourcing those costs. Farm labor hours ignore all the hours put into the off-farm inputs (machinery, pesticides and fertilizers, seed production, etc.). We also leverage an astronomical amount of (mostly) diesel fuel to power all of it. The human labor hours are small, but I've seen estimates of a single barrel of oil being comparable to 25,000 hours of human labor, or 12.5 years of full employment. I'd be interested to do the math now, but I expect we have seen only a fraction of that 25,000x multiplier materialize in the reduction of farm hours worked over the last century (or back to the industrial revolution).
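Doing a rough version of that math: a figure like 25,000 hours only drops out if you compare raw energy content with zero conversion losses (approximate textbook values below, my numbers rather than anything from the estimates I saw):

    # Where a "25,000 hours per barrel" figure can come from:
    # raw chemical energy vs. sustained human power, no losses.
    barrel_joules = 6.1e9   # ~6.1 GJ per barrel of crude (~5.8M BTU)
    human_watts = 75        # rough sustained human mechanical output

    hours = barrel_joules / (human_watts * 3600)
    print(f"one barrel ~= {hours:,.0f} hours of human output")  # ~22,600

Counting useful work instead of raw energy content would shrink that by an order of magnitude or more.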
You really can’t. Human labor is productive, a barrel of oil on its own isn’t going to accomplish crap.
You likely get less useful work out of a gallon of gas in your car than it took to extract, refine, transport, and distribute that gallon of gas. Just as an example gas pumps use electricity that isn’t coming from oil.
> The human labor hours are small, but I've seen estimates of a single barrel of oil being comparable to 25,000 hours of human labor
That’s just wildly wrong by several orders of magnitude, to the point I question your judgment to even consider it a valid possibility.
Not only would the price be inherently much higher, but if everyone, including infants, worked 50 hours per week, we'd still produce less than 1/30th of the current world's output of oil. And going back, we've been extracting oil at industrial scale for over 100 years.
To get even close to those numbers you’d need to assume 100% of human labor going back into prehistory was devoted purely to oil extraction.
What are you claiming is wildly wrong, exactly? The comparison between the amount of energy in a barrel of oil and the average amount of energy a human can produce in an hour?
Burning food can produce more useful work in a heat engine than you get from humans doing labor, so I’m baffled by what about this comparison seems to make sense to you.
Ignoring that, you’re still off by more than an order of magnitude. 100% of the energy content of oil can’t even be turned directly into work without losses. You get about 10% of its nominal energy content as useful work, less if you’re including energy costs of production, refining, and transport.
Even if you look at an oil well fire, it’s incomplete combustion, not useful work.
You were comparing amount of energy between human labor and a barrel of oil? That's such a baffling metric that neither they nor I realized that's what you meant. It's not like you can replace a human with a solar panel, but if you could that would be astoundingly impressive and not diminished toward "horribly unproductive" by the fact that the solar panel is delivering more watts to do the same thing.
The earlier commenter was talking about the massive reduction in the amount of human labor required to cultivate land and the relative productivity of the land.
That comparison comes down to amount of work done. Whether that work is done by a human swinging a scythe or a human driving a diesel powered tractor is irrelevant, the work is measured in joules at the end of the day. We have drastically fewer human hours put into farm labor because we found a massive multiplier effect in fossil fuel energy.
I'm not sure where solar panels came in, but sure they can also be used to store watts and produce joules of work if that's your preferred source of energy.
The confusion lies in why we would measure the efficiency of human labor in joules expended per unit of output instead of hours of human effort per unit of output.
In particular, if we can make a machine that spends more joules than a human, but reduces the human effort by orders of magnitude, why would that be "horribly unproductive"? Most people would call that amazingly productive. And when they want to broaden the view to consider the inputs too, they're worried about the labor that goes into the inputs, not the joules.
(And if the worry is the limited amount of fossil fuels in particular, we can do the same with renewable energy.)
Joules are just a measure of work, and this all started by an attempt to say how productive we are because we need fewer farmers today. My argument is that we only need fewer farmers because we found a cheap source of energy and have been using that to replace farmers.
Looking at joules is an attempt to compare something like a human cutting a field with a scythe and a tractor cutting it with an implement. The tractor is way more efficient when considering only the human hours of labor spent cutting the field. But of course it is: a single barrel of oil has way more energy potential, and even a small tractor's fuel mileage is tracked in gallons per hour.
>technology will lead to a utopia where we don't have to work
I'm kind of ok with doing more work in the same time, though if I'm becoming way more effective I'll probably start pushing harder on my existing discussions with management about 4-day work weeks (I'm looking to do 4x10s, but I might negotiate it to "instead of a pay increase, let's keep pay the same but make it a 4x8 week").
If AI lets me get more done in the same time, I'm ok with that. Though, on the other hand, my work is budgeting $30/mo for the AI tools, so I'm kind of figuring that any time that personally-purchased AI tools are saving me, I deduct from my work week. ;-)
>very few people know what to do with idle hands
"Millions long for immortality that don't know what to do with themselves on a rainy Sunday afternoon." -- Susan Ertz
I don’t think it’s the consequence of most individuals’ preferences. I think it’s just the result of disproportionate political influence held by the wealthy, who are heavily incentivized to maximize working hours. Since employers mostly have that incentive, and since the political system doesn’t explicitly forbid it, there aren’t a ton of good options for workers seeking shorter hours.
> there aren’t a ton of good options for workers seeking shorter hours.
But you do have that option, right? Work 20 hours a week instead of 40. You just aren't paid for the hours that you don't work. In a world where workers are exchanging their labor for wages, that's how it's supposed to work.
For there to be a "better option" (as in, you're paid money for not working more hours) what are you actually being paid to do?
For all the thoughts that come to mind when I say "work 20 hours a week instead of 40" -- that's where the individual's preference comes in. I work more hours because I want the money. Nobody pays me to not work.
Not really. Lots of kinds of work don’t hire part-timers in any volume, period. There are very few jobs where the only tradeoff for working fewer hours is a reduction in compensation proportional to the reduction in hours worked, or even just a reduction in compensation disproportionate to the reduction in hours worked.
>nobody pays me not to work.
If you’re in the US, then in theory you’re getting overtime for going over 40hrs a week. That’s time and a half for doing nothing, correct? I’d expect your principles put you firmly against overtime pay.
>But you do have that option, right? Work 20 hours a week instead of 40. You just aren't paid for the hours that you don't work. In a world where workers are exchanging their labor for wages, that's how it's supposed to work.
Look the core of your opinion is the belief that market dynamics naturally lead to desirable outcomes always. I simply don’t believe that, and I think interference to push for desirable outcomes which violate principles of a free market is often good. We probably won’t be able to agree on this.
> I’d expect your principles put you firmly against overtime pay.
No.. if society wants to disincentivize overworking by introducing overtime, that's fine by me. I'm not making any moral judgement. You just seem to live in a fantasy world where people aren't exchanging their labor for money.
> Look the core of your opinion is the belief that market dynamics naturally lead to desirable outcomes always.
I didn't say that, and I don't believe that. If you're just going to hallucinate what I think, what's the point in replying?
>You just seem to live in a fantasy world where people aren't exchanging their labor for money.
Where did you get that? My entire contention centers around a lack of good options for workers seeking to work fewer hours. A logical assumption, then, would be that I want policies which would give said workers more options. Examples include stronger protections for unions, higher minimum wages, etc. Since I saw these as the logical extrapolations from what I'd said originally, I figured your issue was gov interference in the labor market itself, since you said things like
>In a world where workers are exchanging their labor for wages, that's how it's supposed to work.
>(as in, you're paid money for not working more hours)
You took issue with more money for the same hours, did you not? Why wouldn't overtime be an obvious example? The reason I assumed you were just a libertarian or something was because it doesn't seem like there's an obvious logical juncture to draw a line at. If you're fine with society altering the behavior of the labor market to achieve certain desirable results, then why would this be any different fundamentally?
At least in the US, part-time work is often not really a thing. A while ago I talked to HR about reducing to 32 hours and they didn't seem to get the idea at all. It's either all in or nothing. In the US there is also the health insurance question.
For my relatives in Germany going part time seems easier and more accepted by companies.
Thank you! I didn't know this had a name. I remember thinking something along these lines in seventh grade social studies when we learned that Eli Whitney's cotton gin didn't actually end up improving conditions for enslaved people.
I suspected this would be the case with AI too. A lot of people said things like "there won't be enough work anymore" and I thought, "are you kidding? Do you use the same software I use? Do you play the same games I've played? There's never enough time to add all of the features and all of the richness and complexity and all of the unit tests and all of the documentation that we want to add! Most of us are happy if we can ship a half-baked anything!"
The only real question I had was whether the tech sector would go through a prolonged, destructive famine before realizing that.
Econ 101: supply is finite, demand infinite. Increased efficiency of production means that demand will meet the new price point, not that demand will cease to exist.
There are probably plenty of goods that are counter examples, but time utilization isn't one of them, I don't think.
I don't think we can so easily pin it on capitalism. Capitalism brings incentives that drive work hours and expectations up for sure, but that's not the only thing in play.
Workers are often looking to make more money, take more responsibility, or build some kind of name or reputation for themselves. There's absolutely nothing wrong with that, but that goal also incentivizes to work harder and longer.
There's no one size fits all description for workers, everyone's different. The same is true for the whole system though, it doesn't roll up to any one cause.
What you say is true, but the dominant effect in the system driving it towards more exertion than anyone would find desirable is the profit incentive of owners to drive their workers harder.
How do you narrow it down to capitalism as the root cause though? It seems like a reasonable guess, but our entire system is capitalist - we have no way to isolate or compare against to see how a roughly similar system would play out without capitalism.
We have seen other systems, socialist systems, that are much kinder to workers and give them more security and free time. The capitalists managed to destroy most competing examples and forced the remaining ones to somewhat liberalize via the IMF and other trade regimes, to make it appear as if there is only one choice. Not so!
Unions had early wins that mostly either didn't go anywhere, or the companies worked around. The real win that normalized it was for capitalistic reasons, when Henry Ford shortened the workday/week because he wanted his workers to buy (and have reason to buy) his cars. Combined with other changes, he figured he'd retain workers better and reduce mistakes from fatigue, and when he remained competitive others followed suit.
I wish people could handle an idle mind, I expect we'd all be better off. But yeah, realistically most people when idle would do a lot of damage.
It's always possible that risk would be transitional. Anyone alive today, at least in western-style societies, likely doesn't know a life without high levels of stress and distraction. It makes sense that change would cause people to lash out; maybe people growing up in that new system would handle it better (if they had the chance).
I think distraction is doing a lot of work here. Many people could play video games 24/7 with no issues. The urge to be productive and/or make personal progress is something a lot of people feel, (which is fantastic), but video games do a really good job of replacing those feelings along with other emotional experiences.
Many shows and movies can play a similar role.
I think we would/will see a lot more of that. Even in transitional periods where people can multitask more now as ai starts taking over moment to moment thinking.
I don’t think people would be idle. They’d just be concerned with different things, like social dynamics, games/competition/sports, raising family etc.
Oh sure, I didn't actually mean to describe it being idle as in sitting and literally doing nothing. I more meant idle in comparison to how much people work today and how much they think about or stress over work.
Take 7 hours out of the day because an LLM makes you that much more productive and I expect people wouldn't know what to do with themselves. That could be wrong, but I'd expect a lot more societal problems than we already have today if a year from now a large number of people only worked 4 or 5 hours a week.
That's not even getting to the Shopify CEO's ridiculous claim that employees will get 100x more work done [1].
What an absurd straw man. Moving the needle away from “large portions of the population are a few paychecks away from being homeless” does not constitute “the devil’s playground”.
Where’s all of the articles that HN loves about kids these days not being bored anymore? What about google’s famous 20% time?
Others have already said so, but the same is true for automation and anything else. We've had the technology to do less work for a long time, but it doesn't seem to be in our psychology. Not necessarily that we're intentionally choosing to work 40 hours for no reason. But, it feels like we're a bit stuck, and individuals who would try to work less just set themselves back compared to others, and so no one can move.
It’s Solow’s paradox: “You can see the computer age everywhere, except in productivity statistics.”
— Nobel Prize-winning American economist Robert Solow, in 1987
When it comes to programming, I would say AI has about doubled my productivity so far.
Yes, I spend time on writing prompts. Like "Never do this. Never do that. Always do this. Make sure to check that." To tell the AI my coding preferences. But those prompts are forever. And I wrote most of them months ago, so now I just capitalize on them.
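To illustrate the idea, a minimal sketch (the file name and rules here are invented examples, and any chat-completions client would do): keep the preferences in a file once, prepend them as the system message on every request.

```python
# Minimal sketch of reusable "forever" prompts. "coding_prefs.md" and its
# rules are invented for illustration; it might contain lines like:
#   - Never use global mutable state.
#   - Always add type hints.
#   - Make sure to check error paths.
from pathlib import Path

from openai import OpenAI  # pip install openai

prefs = Path("coding_prefs.md").read_text()

client = OpenAI()
response = client.chat.completions.create(
    model="o3",  # or whichever chat model you prefer
    messages=[
        {"role": "system", "content": prefs},  # the durable part
        {"role": "user", "content": "Implement the CSV export feature."},
    ],
)
print(response.choices[0].message.content)
```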
I'm always a little bit skeptical whenever people say that AI has resulted in anything more than a personal 50% increase in productivity.
Like, just stop and think about it for a second. You're saying that AI has doubled your productivity. So, you're actually getting twice as much done as you were before? Can you back this up with metrics?
I can believe AI can make you waaaaaaay more productive in selective tasks, like writing test conditions, making quick disposable prototypes, etc, but as a whole saying you get twice as much done as you did before is a huge claim.
It seems more likely that people feel more productive than they did before, which is why you have this discrepancy between people saying they're 2x-10x more productive vs workplace studies where the productivity gain is around 25% on the high end.
I'm surprised there are developers who seem to not get twice as much done with AI as they did without.
I see it happening right in front of my eyes. I tell the AI to implement a feature that would take me an hour or more to implement and after one or two tries with different prompts, I get a solution that is almost perfect. All I need to do is fine-tune some lines to my liking, as I am very picky when it comes to code. So the implementation time goes down from an hour to 10 minutes. That is something I see happening on a daily basis.
Have you actually tried? Spend some time to write good prompts, use state of the art models (o3 or gemini-2.5 pro) and let AI implement features for you?
Even if what you are saying is true, a significant part of a developer's time is not writing code, but doing other things like thinking about how to best solve a problem, thinking about the architecture, communicating with coworkers, and so on.
So, even if AI helps you write code twice as fast, it does not mean that it makes you twice as productive in your job.
Then again, maybe you really have a shitty job at a ticket factory where you just write boilerplate code all day. In which case, I'm sorry!
I've found that AI is incredibly valuable as a general thinking assistant for those tasks as well. You still need enough expertise to know when to reach for it, what to prompt it with, and how to validate the utility and correctness of its output, but none of that consumes as much time as the time saved in my experience.
I think of it like a sort of coprocessor that's dumber in some ways than my subconscious, but massively faster at certain tasks and with access to vastly more information. Like my subconscious, its output still needs to be processed by my conscious mind in order to be useful, but offloading as much compute as possible from my conscious mind to the AI saves a ton of time and energy.
That's before even getting into its value in generating content. Maybe the results are inconsistent, but when it works, it writes code much more quickly than any human could possibly type. Programming aside, I've objectively saved significant amounts of time and money by using AI to help not only review but also revise and write first drafts of legal documents before roping in lawyers. The latter is something I wouldn't have considered worthwhile to attempt in most cases without AI, but with AI I can go from "knowing enough to be dangerous" to quickly preparing a passable first draft on my own and having my lawyers review the language and tighten up some minor details over email. That's a massive efficiency improvement over the old process of blocking off an hour with lawyers to discuss requirements on the phone, then paying the hourly rate for them to write the first draft, and then going through Q&A/iteration with them over email. YMMV, and you still need to use your best judgement on whether trying this with a given legal task will be a productive use of time, but life is a lot easier with the option than without. Deep research is also pretty ridiculous when you find yourself with a use case for it.
In theory, there's not really anything in particular that I'd say AI lets me do that I couldn't do on my own*, given vastly more hours in the day. In practice, I find that I'm able to not only finish certain tasks more quickly, but also do additional useful things that I wouldn't otherwise have done. It's just a massive force multiplier. In my view, the release of ChatGPT has been about as big a turning point for knowledge work as computers and the Internet were.
*: Actually, that's not even strictly true. I've used AI to generate artwork, both for fun/personal reasons and for business, which I couldn't possibly have produced by hand. (I mean with infinite time I could develop artistic skills, but that's a little reductive.) Video generation is another obvious case like this, which isn't even necessarily just a matter of individual skill, but can also be a matter of having the means and justification to invest money in actors, costumes, props, etc.
> I'm surprised there are developers who seem to not get twice as much done with AI than they did without.
I think it depends a lot on what you work on. There are tasks that are super LLM friendly, and then there are things that have so many constraints that LLM can basically never get it right.
For example, at the moment we have some really complicated pieces of code that need to be carefully untangled and retangled to accommodate a change, and we have to be much more strategic about it to make sure we don't regress anything during the process.
Can you share a little bit about what your prompting is like, especially for large code bases? Do you typically restrict context to a single file/module or are you able to manage project wide changes? I'm struggling to do any large scale changes as it just eats through tokens and gets expensive very fast. And the quality of output also drops off as the context grows.
I mean, I don't disagree with you when you say that something that would take an hour or more to implement would only take 10 minutes or so with AI. That kind of aligns with my personal experience. If something takes an hour, it's probably something that the LLM can do, and I probably should have the LLM do it unless I see some value in doing it myself for knowledge retention or whatever.
But working on features that can fit within a timebox of "an hour or more" takes up very little of my time.
That's what I mean, there are certain contexts where it makes sense to say "yeah, AI made me 2x-10x more productive", but taken as a whole just how productive have you become? Actually being 2x productive as a whole would have a profound impact.
Would you be comfortable sharing a bit about the kind of work you do? I’m asking because I mostly write iOS code in Swift, and I feel like AI hasn’t been all that helpful in that area. It tends to confidently spit out incorrect code that, even when it compiles, usually produces bad results and doesn’t really solve the problem I’m trying to fix.
That said, when I had to write a Terraform project for a backend earlier this year, that’s when generative AI really shined for me.
For ios/swift the results reflect the quality of the information available to the LLM.
There is a lack of training data; Apple docs aren't great or really thorough, and much documentation is buried in WWDC videos and requires an understanding of how the APIs evolved over time to avoid confusion when following Stack Overflow posts, which confuses newcomers as well as code generators. Stack Overflow is also littered with incorrect or outdated solutions to iOS/Swift coding questions.
I can't comment on Swift, but I presume training data for it is less available online. Whereas with Python, which I use, in my anecdotal experience it can produce quite decent code, with some sparks of brilliance here and there. But I use it for boilerplate code I find boring, not the core stuff. I would say as time progresses and these models get more data, it may help with Swift too (though this may take a while; I remember a conversation with another person online who said the Swift code GPT-3.5 produced was bad, referencing libraries that did not exist).
Which LLMs have you used? Everything from o3-mini has been very useful to me. Currently I use o3 and gemini-2.5 pro.
I do full stack projects, mostly Python, HTML, CSS, Javascript.
I have two decades of experience. Not just my work time during these two decades but also much of my free time. As coding is not just my work but also my passion.
So seeing my productivity double over the course of a few months is quite something.
My feeling is that it will continue to double every few months from now on. In a few years we can probably tell the AI to code full projects from scratch, no matter how complex they are.
Depends on what one defines as the criteria for "better": getting something to run and work, or actually writing good, readable, mostly self-explanatory, maintainable, easily testable, parallelizable code.
LLMs are better at languages that are forgiving, like those two, because if something is not exactly right the interpreter will often be able to just continue on
> When it comes to programming, I would say AI has about doubled my productivity so far
For me it’s been up to 10-100x for some things, especially starting from scratch
Just yesterday, I did a big overhaul of some scrapers that would have taken me at least a week to get done manually (maybe doing 2-4 hrs/day for 5 days ~ 15hrs). With the help of ChatGPT, I was done in less than 2 hours
So not only was it less work, it was a way shorter delivery time
Most of the changes in the end were relatively straightforward, but I hadn’t read the code in over a year.
The code also implemented some features I don’t use super regularly, so it would’ve taken me a long time to load everything up in my head, to fully understand it enough, to confidently make the necessary changes
Without ai, it would have also required a lot of google searches finding documentation and instructions for setting up some related services that needed to be configured
And, it would have also taken a lot more communication with the people depending on these changes + having someone doing the work manually while the scrapers were down
So even though it might have been a reduction of 15hrs down to 1.5hrs for me, it saved many people a lot of time and stress
Not so sure, given how fast AI can understand the code already written
Personally, I do try to keep a comment at the top of every major file, with a comment with bullets points, explaining the main functionality implemented and the why
That way, when I pass the code to a model, it can better “understand” what the code is meant to do and can provide better answers
(A lot of times, when a chat session gets too long and seems like the model is getting stuck without good solutions, I ask it to create the comment, and then I start a new chat, passing the code that includes the comment, so it has better initial context for the task)
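For example, such a header might look like this (file name and details entirely made up, just to show the shape):

```python
# orders_scraper.py
#
# What this module does (kept current so a model gets context fast):
#   - Logs into the supplier portal and downloads order pages.
#   - Parses order status and ship dates into plain dicts.
#   - Retries with exponential backoff because the portal rate-limits.
#
# Why it is written this way:
#   - Regex-based parsing instead of an HTML parser, because the portal
#     emits malformed markup that breaks standard parsers.
#   - All network calls go through fetch() so they can be mocked in tests.
```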
Have you tested them across different models? It seems to me that even if you manage to cajole one particular model into behaving a particular way, a different model would end up in a different state with the same input, so it might need a completely different prompt. So all the prompts would become useless whenever the vendor updates the model.
What is it like to maintain the code? How long have they been in production? How many iterations (enhancements, refactoring, ...) cycles have you seen with this type of code?
GG, you do twice the work, twice the mental strain, for the same wage. And you spend time on writing prompts instead of mastering your skills, thus becoming less competitive as a professional (as anyone can use AI, that's a given level now).
Sounds like a total win.
I let the AI implement features on its own, then look at the commit diffs and then use VIM to finetune them.
I wrote my own tool for it. But I guess it is similar to cursor, aider and many other tools that do this. Also what Microsoft offers via the AI "edit" tool I have seen in GitHub codespaces. Maybe that is part of VScode?
No. Aren't there enough "Hey AI here is a codebase, please implement the following feature" tools out there yet?
I have not tried them, but I guess aider, cursor and others offer this? One I tried is copilot in "edit" mode on github codespaces. And it seems similar.
The past month or so I've been largely using Claude Code (similar to aider, which I haven't used in 6+ months, and OpenAI Codex I gather), from the CLI, for the "vibe coding" portion, and then hop into vim for regular coding when I need to "take the stick". I don't have any AI tools integrated into vim (though I do have LSP, etc). This method has been pretty effective, though I would like to have some AI built into the editor as well, my experiences with Cursor and Zed haven't been as rewarding as I'd like so I've iterated towards my current Claude Code. My first serious project, a fastapi-based replacement for an ancient Ruby on Rails project is just in "dev test" mode and going out to production probably in 2.5 weeks.
AI has certainly created new work for the GCC project. They had to implement scraper protection against the bots run by corporations who benefit for free from GCC but want to milk it even further.
The real problem is with lower skilled positions. Either people in easier roles or more junior people. We will end up with a significant percent of the population who are unemployable because we lack positions commensurate with their skills.
Yep, I'm talking about non-office jobs, such as in warehouses and retail. Why do you need sales associates when you can just ask an AI associate that knows everything?
But, the study is also about LLMs currently impacting wages and hours. We're still in the process of creating targeted models for many domains. It's entirely possible the customer representatives and clerks will start to be replaced in part by AI tools. It also seems that the current increase in work could mean that headcount is kept flat, which is great for a business, but bad for employment.
It’s been the same since industrialisation: it’s not that we have less work, we have less of some types of work.
The issue is that after automation the “old” jobs often don’t pay well, and the new jobs that do are (by virtue of the multiplier of technology) actually scarcer than the ones it replaced.
While in a craftsmanship society you had people painting plates for the well to do, factories started mass painting plates for everyone to own.
Now this solved the problem of scarcity, which is great. But it created a new problem which is all those craftsmen are now factory workers whose output is more replaceable. If you’re more replaceable your wages are lower due to increased competition.
Now for some things this is great, but Marx’s logic was that if technology kept making Capital able to use less and less Labour (increasing profits) then eventually a fairly small number of people would own almost everything.
Like most visionaries he was incredibly off on his timeline, and he didn’t predict a service economy after we had overabundance of goods.
So yet again Marx’s logic will be put to the test, and yet again we will see the results. I still find his logic fairly solid, although like many others I don’t agree with the solutions.
I wonder how well this will hold up against AI.
That's the story of all technology, and it's the argument that AI won't take jobs that pmarca etc. have been making for a while now. Our focus will be able to shift into ever narrower areas. Cinema was barely a thing 100 years ago. A hundred years from now we'll get some totally new industry thanks to freeing up labor.
Tough to say how it maps but with cinema, you have so many different skill sets needed for every single film. Costumes, builders for sets, audio engineers, the crews, the caterers, location scouts, composers, etc.
In live theater it would be mostly actors, some one-time set and costume work, and some recurring support staff.
But then again, there are probably more theaters and theater production by volume.
Also the nature of software is that the more software is written the more software needs to be written to manage, integrate, and make use of all the software that has been written.
AI automating software production could hugely increase demand for software.
The same thing happened as higher level languages replaced manual coding in assembly. It allowed vastly more software and more complex and interesting software to be built, which enlarged the industry.
> AI automating software production could hugely increase demand for software
Let's think this through
1: AI automates software production
2: Demand for software goes through the roof
3: AI has lowered the skill required to make software, so many more can do it with a 'good-enough' degree of success
4: People are making software for cheap because the supply of 'good enough' AI prompters still dwarfs the rising demand for software
5: The value of being a skilled software engineer plummets
6: The rich get richer, the middle class shrinks even further, and the poor continue to get poorer
This isn't just some kind of wild speculation. Look at any industry over the history of mankind. Look at textiles.
People used to make a good living crafting clothing, because it was a skill that took time to learn and master. Automation makes it so anyone can do it. Nowadays, automation has made it so people who make clothes are really just operating machines. Throughout my life, clothes have always been made by the cheapest overseas labour that capital could find. Sometimes it has even turned out that companies were using literal slaves or child labour.
Meanwhile the rich who own the factories have gotten insanely wealthy, the middle class has shrunk substantially, and the poor have gotten poorer
Do people really not see that this will probably be the outcome of "AI automates literally everything"?
Yes, there will be "more work" for people. Yes, overall society will produce more software than ever
McDonalds also produces more hamburgers than ever. The company makes tons of money from that. The people making the burgers usually earn the least they can legally be paid
The agricultural revolution did in fact reduce the amount of work in society by a lot though. That's why we can have weekends, vacation, retirement, and study instead of working non-stop from age 12 to death like we did 150 years earlier.
Reducing the amount of work done by humans is a good thing actually, though institutional structures must change to spread this reduction across society as a whole, instead of having mass unemployment plus no retirement before 70 and 50-hour work weeks for those who do work.
AI isn't a problem, unchecked capitalism can be one.
That's not really why (at least in the U.S.) - it was due to strong labor laws. Otherwise, post industrial revolution, you'd still have people working 12 hours a day, 7 days a week - though with minimum wage stagnation one could argue that many people have to do this anyway just to make ends meet.
The agricultural revolution has been very beneficial for feeding more people with less labor inputs, but I'm kind of skeptical of the claim that it led to weekends (and the 40hr workweek). Those seem to have come from the efforts of the labor movement on the manufacturing side of things (late 19th, early 20th century). Business interests would have continued to work people 12hrs a day 7 days a week (plus child labor) to maximize profits regardless of increasing agricultural efficiency.
Agricultural work is seasonal. For most of the year you aren't working in the fields. Yes, planting and harvesting can require longer hours because you need them done as fast as possible in order to maximize yield and reduce spoilage, but you aren't harvesting and planting the fields non-stop for the entire year. And even then, most people worked at their own pace; not every farm was as labor-productive as another, or even had to be. Some people valued their time and health and comfort, some valued being able to brew more beer with their 5% higher yield, some valued leisure time more, but it was a personal choice that people made. The industrial revolution is the outlier in making people work long, non-stop hours all the time. Living a subsistence farming lifestyle doesn't mean you are just hanging on a bare thread of survival the entire time like a lot of pop media likes to portray.
If you need a supercomputer to run your AGI then it's probably not worth it for any task that a human can do, because humans happen to be much cheaper than supercomputers.
Also, it's not clear that AGI would necessarily be better than existing AIs: a 3-year-old child has general intelligence indeed, but is far less helpful than even a sub-billion-parameter LLM for any task.
Is there any evidence that AGI is a meaningful concept? I don't want to call it "obviously" a fantasy, but it's difficult to paint the path towards AGI without also employing "fantasize".
No, we know planetary ecosystems can use energy gradients to sustain intelligent lifeforms. Intelligence is not a feature of the human brain, it's a feature of Earth. Without the ecosystem there are no cells, no organisms, no specialization, no neurons, no mammals. It isn't the human brain that achieved intelligence, it's the entire system capable of producing, sustaining and selecting brains.
Sure, but what does that have to do with AGI? I don't think anyone is proposing simulating an entire brain (yet, anyway).
Like you could have "AGI" if you simply virtualized the universe. I don't think we're any closer to that than we are to AGI; hell, something that looks like a human mouth output is a lot easier and cheaper to model than virtualize.
Unless you believe humans have something mystical like a soul, our brains are evidence that “general intelligence” is achievable in a relatively small, energy efficient form.
Ok, but very few people contest that consciousness is computable. It's basically just Penrose (and other folks without the domain knowledge to engage). This doesn't imply that at any point during all human existence will computing consciousness be economically feasible or worthwhile.
Actual AGI presumably implies a not-brain involved.
And this isn't even broaching the subject of "superintelligence", which I would describe as "superunbelievable".
We are literally talking about problem solving computers. They are goal to action mappers. It's reasonable to talk about goal to action mappers that are more general than the ones we have now. They might even become more general than the general intelligences we have now on message boards.
How would you pay for those robots without a job? Or do you think whoever makes them will give them to you for free? Maybe the AI overlord will, but I doubt it.
In the world of abundance you don’t have to pay for this.
If theres nothing for people to do a new economy will arise where government will supply you with whatever you need at least at basic level.
Or the wars will start and everything will burn.
Obviously if there are no jobs no one will sit on their ass starving. People will get food, clothes, housing etc. either via distribution or via force.
This reminds me of a thought I had about driver-less trucks. The truck drivers who get laid off will be re-employed as security guards to protect the automated trucks from getting robbed.
Only if that’s cheaper than making the trucks robbery-resistant. “Just” hiring security guards may be more cost-effective than “just” hardening the trucks.
If a truck has a lifetime of 20 years, that's 20 years' worth of paying a security guard for it.
You really think it could take 20 years' worth of human effort in labor and materials to make a truck more secure? The price of the truck itself in the first place doesn't even come close to that.
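Rough numbers, all assumed, just for scale (none of them from the thread):

```python
# All figures assumed for illustration; the point is the order of magnitude.
GUARD_COST_PER_YEAR = 50_000      # loaded cost of one guard, USD/year
TRUCK_LIFETIME_YEARS = 20
NEW_TRUCK_PRICE = 150_000         # rough price of a new semi tractor, USD

lifetime_guard_cost = GUARD_COST_PER_YEAR * TRUCK_LIFETIME_YEARS
print(lifetime_guard_cost)                     # 1000000 USD
print(lifetime_guard_cost / NEW_TRUCK_PRICE)   # ~6.7x the truck's own price
```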
I feel that I spend a lot more time looking out for hidden Easter eggs in code reviews. Easter eggs being small errors that look right and are hard to catch, but obvious to the one who wrote them. The problem is that the LLM wrote it, so we get no benefit from the code author during review or testing.
2023-24 models couldn’t be relied on at the small-step level thanks to hallucinations and poor instruction following; newer models are much better, and that trend will keep going. That low-level reliability allows models to be a building block for bigger systems. Check out the personal assistant built by Nate Herk, a YouTuber who builds automations with n8n.
It’s early. There are new skills everyone is just getting the hang of. If the evolution of AI was mapped to the evolution of computing we would be in the era of “check out this room-sized bunch of vacuum tubes that can do one long division at a time”.
But it’s already exciting, so just imagine how good things will get with better models and everyone skilled in the art of work automation!
This is what the "AI will be a normal technology" camp has been telling the "AI is going to put us all out of work!" camp all along. It's always been like this.
Wasn't this covered a few days ago? One point here is that the data is from late 2023, before LLMs were any good. Another point is that the data was collected from remaining workers after any layoffs.
""The adoption of these chatbots has been remarkably fast," Humlum told The Register about the study. "Most workers in the exposed occupations have now adopted these chatbots... But then when we look at the economic outcomes, it really has not moved the needle."
How does that comply with the GDPR? OpenAI now has all sensitive data?
The article markets the study as Danish. However, the working paper is from the Becker Friedman Institute of the University of Chicago.
It is no wonder that the Chicago School of Economics will not find any impact of AI on employment. Calling it Danish to imply some European "socialist" values is deceptive.
> Indeed, the reported productivity benefits were modest in the study. Users reported average time savings of just 2.8 percent of work hours (about an hour per week).
> Our main finding is that AI chatbots have had minimal impact on adopters’ economic outcomes. Difference-in-differences estimates for earnings, hours, and wages are all precisely estimated zeros, with confidence intervals ruling out average effects larger than 1%. At the occupation level, estimates are similarly close to zero, generally excluding changes greater than 6%.
All of our communications at my organization that have clearly been run through Copilot (as we seem to keep championing in some kind of bizarre wankfest) lead me to have to waste a significant sum of time to read and decipher the slop.
What could have been a single paragraph turns into five separate bulleted lists and explanations and fluff.
Is the communication for you or for other AI tools? Meaning is your eventual role just making sure it’s within reason and keeping the AI to AI ecosystem functioning properly? If the output is missing something or misrepresenting something, you update.
Your responsibility is now as an AI response mechanic. And someone else that’s ingesting your AI’s output is making sure their AI’s output on your output is reasonable.
This obviously doesn’t scale well but does move the “doing” out of human hands, replacing that time with a guardrail responsibility.
It’s somewhat exciting to see the commodification of AI models and hardware. At first I was concerned that the hyperscalers would just own the whole thing as a service that keeps squeezing you.
But if model development and self hosting become financially feasible for the majority of organizations then this might really be a “democratized” productivity boost.
The hyperscalers will always own the best models, and even if you're willing to excuse that, requiring organization-levels of funding to run a decent model locally hardly makes the tech "democratized". Sure, you'll always be able to run ${LOCAL_MODEL} on your personal hardware, but that might be akin to programming using Notepad if the gap with the best models in the market is wide enough.
> Indeed, the reported productivity benefits were modest in the study. Users reported average time savings of just 2.8 percent of work hours (about an hour per week).
Yeah, that finding about verification tasks eating the time savings makes total sense. Since AI output is probabilistic, you always need an independent human check, right? That also feels like a shifting bottleneck: maybe you speed up the coding part but then get bogged down in testing or integration, or the scope just expands to fill the saved time. Plus, how much AI actually helps seems super task-dependent and can vary quite a bit depending on what you are doing.
> Indeed, the reported productivity benefits were modest in the study. Users reported average time savings of just 2.8 percent of work hours (about an hour per week).
> Our main finding is that AI chatbots have had minimal impact on adopters’ economic outcomes. Difference-in-differences estimates for earnings, hours, and wages are all precisely estimated zeros, with confidence intervals ruling out average effects larger than 1%. At the occupation level, estimates are similarly close to zero, generally excluding changes greater than 6%.
No surprise here, same can be true of IT. I remember a time before PCs and most work was done on Mainframes and paper w/file cabinets.
Compared to now, the amount of work is about the same, or maybe a bit more than back then. But the big difference is the amount of data being processed and kept, that increased exponentially since then and is still increasing.
So I expect the same with AI, maybe the work is a bit different, but work will be the same or more as data increases.
> No surprise here, same can be true of IT. I remember a time before PCs and most work was done on Mainframes and paper w/file cabinets.
I understand your point but it lacks accuracy in that mainframes, paper and filing cabinets are deterministic tools. AI is neither deterministic nor a tool.
You keep repeating this in this thread, but as has been refuted elsewhere, this doesn't mean AI is not productive. A tool it definitely can be. Your handwriting is non-deterministic, yet you could write reports with it.
Unironically it's a form of occult divination.
I know it sounds crazy but it really is the synthesis of humans' collective works combined with some dice rolls.
I'm quite honestly surprised someone more superstitious than I am hasn't raised this point yet (that I've seen).
It’s just math; we tend to like to add and add, more and more. To think AI will take away all work for humans is likely false. Humans always find a problem. You solved your money problem? You're going to have another problem, like an existential crisis, and that creates more stuff. Just an extreme example.
> Indeed, the reported productivity benefits were modest in the study. Users reported average time savings of just 2.8 percent of work hours (about an hour per week).
> Our main finding is that AI chatbots have had minimal impact on adopters’ economic outcomes. Difference-in-differences estimates for earnings, hours, and wages are all precisely estimated zeros, with confidence intervals ruling out average effects larger than 1%. At the occupation level, estimates are similarly close to zero, generally excluding changes greater than 6%.
This has probably been true of all invention / automation: when we went from handwashing to using washing machines, did we start doing more leisurely things for the hours that were saved by that 'labour saving' device?
> Now it is true that the needs of human beings may seem to be insatiable. But they fall into two classes --those needs which are absolute in the sense that we feel them whatever the situation of our fellow human beings may be, and those which are relative in the sense that we feel them only if their satisfaction lifts us above, makes us feel superior to, our fellows. Needs of the second class, those which satisfy the desire for superiority, may indeed be insatiable; for the higher the general level, the higher still are they. But this is not so true of the absolute needs - a point may soon be reached, much sooner perhaps than we are all of us aware of, when these needs are satisfied in the sense that we prefer to devote our further energies to non-economic purposes.
[…]
> For many ages to come the old Adam will be so strong in us that everybody will need to do some work if he is to be contented. We shall do more things for ourselves than is usual with the rich to-day, only too glad to have small duties and tasks and routines. But beyond this, we shall endeavour to spread the bread thin on the butter - to make what work there is still to be done to be as widely shared as possible. Three-hour shifts or a fifteen-hour week may put off the problem for a great while. For three hours a day is quite enough to satisfy the old Adam in most of us!
* John Maynard Keynes, "Economic Possibilities for our Grandchildren" (1930)
An essay putting forward four hypothesized reasons why the above did not happen (we haven't spread the wealth around enough; people actually love working; there's no limit to human desires; leisure is expensive).
We probably have more leisure time (and fewer hours worked: five versus six days) in general, but it's still being filled (probably especially in the US where being "productive" is an unofficial religion).
One additional factor to consider is that in most cases those setting the leisure hours (i.e. employers) are not the same ones enjoying the leisure (i.e. employees). While the leisure/productivity tradeoff applies to an individual, an economically rational employer only really values productivity and will only offer as much leisure time as necessary to attract and retain employees. So while social forces do generally push for additional leisure over time, such as shorter work weeks, it's often challenging for people to find the type of employment situation where they have significant flexibility in trading off income for leisure time.
As an example, I have a pretty good paying, full-time white collar job. It would be much more challenging if not impossible to find an equivalent job making half as much working 20 hours a week. Of course I could probably find some way to apply the same skills half-time as a consultant or whatever, but that comes with a lot of tradeoffs besides income reduction and is less readily available to a lot of people.
Maybe the real exception here is at the top of the economic ladder, although at that point the mechanism is slightly different. Billionaires have pretty infinite flexibility on leisure time because their income is almost entirely disconnected from the amount of "labor" they put in.
Washing machines are deterministic. Automation is deterministic. AI is not deterministic. AI is not a tool. AI is destined to be what it is now: a parlor trick designed to pacify and amuse.
You will have to explain your logic to go from determinism to usefulness. Are you dismissing people's experiences because they don't fit in your analysis frame so they HAVE to be misled because your analysis HAS to be right?
Driving is not deterministic, yet commercial trucking is a core part of the US economy and definitely a productivity boost over trains, mules, and whatever else was before.
This is insane clickbait, and none of the comments seem to have read further than the title.
There are two metrics in the study:
> AI chatbots save time across all exposed occupations (for 64%–90% of users)
and
> AI chatbots have created new job tasks for 8.4% of workers
There's absolutely no indication anywhere in the study that the time saved is offset by the new work created. The percentages for the two metrics are so vastly different that it's fairly safe to assume it's not the case.
> Indeed, the reported productivity benefits were modest in the study. Users reported average time savings of just 2.8 percent of work hours (about an hour per week).
> Our main finding is that AI chatbots have had minimal impact on adopters’ economic outcomes. Difference-in-differences estimates for earnings, hours, and wages are all precisely estimated zeros, with confidence intervals ruling out average effects larger than 1%. At the occupation level, estimates are similarly close to zero, generally excluding changes greater than 6%.
Again, none of this, especially the calculation about economic output, indicates that the new work it generated offset the time it saved.
If people save an hour a week and use that to browse HackerNews, they've saved time but haven't produced any economic value, but it doesn't mean they didn't save time.
there's always more work to do. the workforce is always tied up in a few areas of work. once they're freed, they're able to work in new areas. the unemployment due to technological development isn't due to a reduction in work (as in quantity of work available and/or necessary). the more efficient we become, the more work areas we open up.
> Indeed, the reported productivity benefits were modest in the study. Users reported average time savings of just 2.8 percent of work hours (about an hour per week).
> Our main finding is that AI chatbots have had minimal impact on adopters’ economic outcomes. Difference-in-differences estimates for earnings, hours, and wages are all precisely estimated zeros, with confidence intervals ruling out average effects larger than 1%. At the occupation level, estimates are similarly close to zero, generally excluding changes greater than 6%.
I think we may be reaching a point where tech is better at almost everything. When I look at my workplace, there are only a few people who do stuff that’s truly creative. Everybody else does work that’s maybe difficult but fundamentally still very mechanical and in principle automatable.
Add to that progress in robotics and we may reach a point where humans are not needed anymore for most tasks. Then the capitalists will have fully automated factories but nobody who can buy their products.
Maybe capitalism had a good run for the last 200 years and a new economic system needs to arise. Whatever that will be.
Based on the history of technology, this is overwhelmingly the expected result of technology-enabled automation, despite pundits claiming, every time, "but this time it'll be different."
Seems like a pretty general pattern.
Between the lines, you highlight a tangential issue: execs like Zuckerberg think the easy/automatable stuff is 90%. People with skin in the game know it is much less (40% per your estimate). This isn't unique to LLMs. Overestimating the benefit of automation is a time-honored pastime.
Yeah I think I do already see this happening in my work. It's clearly very beneficial, but its benefit is also overestimated. This can lead to some disenchantment and even backlash where people conclude it's all useless.
But it isn't! It's very useful. Even if it isn't eliminating 90% of work, eliminating 40% is a huge benefit!
I’ve noticed this when trying to book a flight with American Airlines earlier this year. Their website booking was essentially broken, insisting that one of my flight segments was fully booked but giving no indication of which one and attempting alternate bookings which replaced each of the segments in turn still failed. They’d replaced most of their phone booking people with an AI system that also was nonfunctional and wanted to direct me to the website to book. After a great deal of effort, I managed to finally reach a human being who was able to place the booking in a couple minutes (and, it turned out, at a lower price than the website had been quoting).
I never call a customer service line unless the website doesn't work, but customer service robots try very hard to get me to hang up and go to the website.
It's super frustrating. These robots need to have an option like "I am technically savvy and I tried the website and it's broken."
This reminds me of how Klarna fired a large part of their customer support department to replace it with AI, only to eventually realize they couldn't do the job primarily with AI and had to rehire a ton of people.
That might have been their story, but Klarna is struggling to maintain their runway at the moment, and that may have been the bigger driver.
You're not buying toilet paper and doritos in 12 easy payments?
OT: I just googled that name; the info panel on the right, in my language settings, categorizes it as "金融の連鎖", or "cascading of finances". I'm not sure how to take that.
Pretty good description tbh
Their business model is an online payment provider (like PayPal or Apple Pay) that splits the payment into 3, 6, or 12 monthly payments, usually at 0% interest.
The idea being that, for the business, the loss in revenue from an interest-free loan is worth it if it causes an increase in sales.
But isn't it supposed to be more like "financing franchise"?
Klarna is basically loan sharking, but if you do it with an app it's legal. Opera, the browser company, has also moved into doing that.
Perhaps the value of believing in the 90% is the motivation it provides.
If you don’t believe in an exaggerated potential, you might never start exploiting it.
> A customer who wants to track the status of their order will tell you a story about how
I build NPCs for an online game. A non-trivial percentage of people are more than happy to tell these stories to anything that will listen, including an LLM. Some people will insist on a human, but an LLM that can handle small talk is going to satisfy more people than you might think.
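For the curious, the pattern is simple enough to sketch. This is a toy outline only; `chat` is a hypothetical stand-in for whatever chat-completion API you use, and the prompt and quest-state wiring are my assumptions, not how any shipping game does it:

```python
# Toy LLM-backed NPC: indulge the player's small talk, steer back to the task.
# `chat` is a hypothetical stand-in for a chat-completion API call.

SYSTEM_PROMPT = (
    "You are Mira, an innkeeper NPC. Chat warmly about whatever the player "
    "brings up, but steer the conversation back to the open quest within a "
    "few sentences. Never invent items or quest steps not in the state below."
)

def chat(system: str, history: list[dict], user_msg: str) -> str:
    """Stand-in for an LLM chat call; wire up your provider of choice here."""
    raise NotImplementedError

def npc_reply(history: list[dict], quest_state: str, user_msg: str) -> str:
    # The quest state rides along in the system prompt, so small talk can
    # always be steered back to something concrete.
    system = f"{SYSTEM_PROMPT}\n\nQuest state: {quest_state}"
    reply = chat(system, history, user_msg)
    history.append({"role": "user", "content": user_msg})
    history.append({"role": "assistant", "content": reply})
    return reply
```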
> But because of the quick response, the customer will write back to say they'd rather talk to a human
Is this implying it's because they want to wag their chins?
My recent experience with moving house was that most services I had to call had some problem the robots didn't address. Fibre was listed as available on the website, but the site crashed when I tried "I'm moving home" - it turns out it's available in the general area but not for my specific row of houses (I had to talk to a human to figure that out). With the water company, I had an account at house N-2, but at house N-1 water was included, so the system could not move me from my N-1 address (no water bills) to house N (water bill). Pretty sure there was something about power and council tax too. With the last one I just stopped bothering, figuring it's the one case where they would always find me when they're ready (they got in touch eventually).
The world is imperfect and we are pretty good at spotting the actual needle in the haystack of imperfection. We are also good at utilizing a whole range of disparate signals + past experience to make reasonably accurate decisions. It'll take some working for AI to be able to successfully handle such things at a large scale - this is all still frontier days of AI.
> this is all still frontier days of AI
That's why it annoys me how much effort they put into not talking to me, when it's clear that their machine cannot solve my problem.
They don’t care about you. You are a number on a screen that happens to pay their company money sometimes. But by using recorded voices, the company hopes to tap into the empathetic part of your human brain to subconsciously make excuses for their crappy service.
When I get stellar customer service these days, I'm happy and try to call it out, but I don't expect it anymore. My first expectation is always AI slop or a shitty phone tree. When I reframed it for myself, it was a lot easier not to get frustrated about something that I can't control and not blame a person who doesn't exist.
Zuck also said that AI is going to start replacing senior software engineers at Meta in 2025. His job isn’t to state objective facts but hype up his company’s products and share price.
Honestly I hope this is true. I recognize this is a risky thing to say, for my own employment prospects as a software engineer. But if companies like Facebook could run their operations with fewer engineers, and those people could instead start or join a larger diversity of smaller businesses, that would be a positive development.
I do think we're going to see less employment for "coding" but I remain optimistic that we're going to see more employment for "creating useful software".
Zuck is just bullshitting here, like most of what he says.
There is zero chance he wants to pay even a single person to sit and take calls from users.
He would eliminate every employee at Facebook if it were technically possible to automate what they do.
So would everyone who ever created a business. Nobody grows headcount if they don't have to. Why be responsible for other people's livelihoods if you can make it work with fewer people? Just more worries and responsibilities.
> Nobody grows headcount if they don't have to.
From my experience in corporations, this is a false statement. The goal of each manager is to grow their headcount: the more people under you, the more weight you have and the higher your position.
There is a difference between business owners (who don't want to spend money unless they have to) and managers (who want career growth and are not necessarily worried about the company's bottom line with respect to headcount).
A manager’s net worth is not tied to the valuation of the company. They get their salary regardless.
I think that once you have profit & loss responsibility that changes.
> Why be responsible for other people's livelihoods if you can make it work with less people?
Because he is the fourth richest man on the planet and that demands some responsibility, which he refuses to take.
He owns 162,000,000,000 dollars. Meta's net income in 2024 was 50,000,000,000 dollars.
Most major corporations have increased headcount in recent years when they didn't have to, via the creation of DEI roles. These positions might look good in the current cultural moment, but they add nothing to a company's bottom line and so are an unnecessary drain on resources.
I don’t know about you, but for me, one of the greatest joys in life is being able to hire people and give them good jobs.
This doesn't seem true to me at all. Humans are not rational drones that analyze the business and coldly determine the required number of people. I would be surprised if CEOs didn't keep people around because it felt good to be a boss.
Facebook might be able to operate with half the headcount, but then Zuckerberg wouldn't be the boss of as many people, and I think he likes being the boss.
> I would be surprised if CEOs didn't keep people around because it felt good to be a boss.
If you had hired people (and been responsible for their salaries and benefits and HR issues), you would definitely not say that.
He can definitely fire most people at Facebook. He just doesn't because it would be like not providing a simple defense against a pawn move on a Chess board. No point in not matching the opposition's move if you can afford it. They hire, we hire, they fire, we fire.
FB would be run into the ground on day one, if he fired most people (>50%) at FB.
Why?
Other than on-call roles like Production Engineers, whose absence there would make the company fail within a day?
Because things would happen on the platform that would be bad PR. Availability might even go down. Who knows what kind of automated things need to be kept in check daily.
The Twitter example shows it might not be true.
Twitter is dead now, a right-wing echo chamber. It basically ceased to exist in the way it did.
I will admit though, that it may be possible to continue existing in other ways, if he fired >50% of the people at FB.
You are showing your own biases here. Twitter did cease to exist the way it did. In its place is a platform mostly free of censorship and with new features added.
I’d rather see humanity in all of its good, bad, and ugly than have a feed sanitized for me by random Twitter employees who in many cases had their own agenda.
I would rather not see hate speech and incitement of violence online. If you think that Twitter in its current form doesn't have a hidden agenda... that is a very naive belief to hold. Censorship is not the only negative thing that can happen to information. We should all have learned that lesson by now.
This has more to do with Musk's policy, though. It's still up and running, so clearly the tech side wasn't as affected as people thought it would be.
Most major companies and politicians still use Twitter for communication. It sounds like you are the one in the "echo chamber"?
Sounds like his own job could be automated…
The company is his property; of course he won't fire himself.
> Most of the calls are going to be 'I forgot my password' and 'it's broken' anyways. So having a robot guide people along the FAQs in the 50+ languages is perfectly fine for ~90% (Zuck's number here) of the calls.
No it isn't. Attempts to do this are why I mash 0 repeatedly and chant "talk to an agent" after being in a phone tree for longer than a minute.
And you don't think this will improve with better bots?
> And you don't think this will improve with better bots?
Actually, now that I think about it, yeah.
The whole purpose of the bots is to deflect you from talking to a human. For instance: Amazon's chatbot. It's gotten "better": now when I need assistance, it tries three times to deflect me from a person after it's already agreed to connect me to one.
Anything they'll allow the bot to do can probably be done better by a customer-facing webpage.
Maybe for you, but not for most people. Most people have problems that are answered online, but knowledge sites are hard to navigate, and they can't solve their own problems.
A high quality bot to guide people through their poorly worded questions will be hugely helpful for a lot of people. AI is quickly getting to the point that a very high quality experience is possible.
The premise is also that the bots are what enable the people to exist. The status quo is no interactive customer service at all.
This sounds to me like something that's better solved by RAG than by an AI-manned call center.
Let's use Zuck's example, the lost password. Surely that's better solved with a form where you type things, such as your email address. If the problem is navigation, all we need to do is hook up a generative chatbot to the search function of the existing knowledge site. Then you can ask it how to reset your password, and it'll send you to the form and write up instructions. The equivalent over a phone call sounds worse than this to me.
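As a rough sketch of that wiring (`search_kb` and `complete` are hypothetical stand-ins for the site's existing search and an LLM call; none of this is any particular vendor's API):

```python
# RAG-style deflection sketch: answer from the knowledge base, always link
# the canonical article, and escalate when retrieval comes back empty.
# `search_kb` and `complete` are hypothetical stand-ins, not a real API.

from dataclasses import dataclass

@dataclass
class Article:
    title: str
    url: str
    body: str

def search_kb(query: str) -> list[Article]:
    """Stand-in for the knowledge site's existing search endpoint."""
    raise NotImplementedError

def complete(prompt: str) -> str:
    """Stand-in for any LLM completion call."""
    raise NotImplementedError

def answer(question: str) -> str:
    hits = search_kb(question)
    if not hits:
        return "I couldn't find this in the help center; routing you to a human."
    top = hits[0]
    prompt = (
        "Using ONLY the help article below, write short step-by-step "
        f"instructions answering this question.\n\nQuestion: {question}\n\n"
        f"Article ({top.url}):\n{top.body}"
    )
    # The model rewrites the article as instructions; the link keeps the
    # canonical form (e.g. the password-reset page) one click away.
    return f"{complete(prompt)}\n\nFull article: {top.url}"
```

The point being that the retrieval and the existing form do the real work; the model only rewrites and routes.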
I think Zuck is wrong that 90% of the problems people would call in for can easily be solved by an AI. I was stuck in a limbo with Instagram for about 18 months, where I was banned for no clear reason, there was no obvious way to contact them about it, and once I did find a way, we proceeded with a weird dance where I provided ID verification, they unbanned me, and then they rebanned me, and this happened a total of 4 times before the unban process actually worked. I don't see any AI agent solving this; the cause was obviously process and/or technical problems at Meta. This is the only thing I ever wanted to call Meta for.
And there is another big class of issue that people want to call any consumer-facing business for, which AI can't solve: loneliness. The person is retired and lives alone and just wants to talk to someone for 20 minutes, and uses a minor customer service request as a justification. This happens all the time. Actually an AI can address this problem, but it's probably not the same agent we would build for solving customer requests, and I say address rather than solve as AI will not solve society's loneliness epidemic.
I try to enunciate very clearly: "What would you like to do?" - "Speak to a fcuking human. Speak to a fcuking human. Speak to a fcuking human. Speak to a fcuking human."
Just say “fucking”
No wonder the AI couldn't understand.
Let's watch your mood when AI answers your call.
I like the analogy of the fractal boundaries.
But there's also consolidation happening: Not every branch that is initially explored is still meaningful a few years later.
(At least that's what I got from reading old mathematical texts: People really delved deeply into some topics that are nowadays just subsumed by more convenient - or maybe trendy - machinery)
Weird to find out that some people still believe a thing that guy says.
Sorry for the acidity, just training my patience while waiting for the mythical FB/AI call center.
Yeah, I was a little incredulous about what Zuck said there too.
Like, if AI is so good, then it'll just eat away at those jobs and get asymptotically close to 100% of the calls. If it's not that good, then you've got to loop in the product people and figure out why everyone is having a hard time with whatever it is.
Generally, I'd say that calls are just another feedback channel for the product. One that FB has thus far been fine without consulting, so I can't imagine its contribution can be all that high. (Zuck also goes on to talk about the experiments they run on people with FB/Insta/WA, and woah, it is crazy unethical stuff he casually throws out there to Dwarkesh)
Still, to the point here: I'm still seeing AI mostly as a tool/tech, not something that takes on an agency of its own. We, the humans, are still the thing that says 'go/do/start', the prime movers (to borrow a long-held and false bit of ancient physics). The AIs aren't initiating things, and it seems, to a large extent, we're not going to want them to. Not out of a sense of doom or lack-of-greed, but simply because we're more interested in working at the edge of the fractal.
Not to discredit anything you wrote, but:
"I'm still seeing Ai mostly as a tool/tech, not something that takes on an agency of it's own."
I find that to be a highly ironic thing. It basically says AI is not AI. Which we all know it is not yet, but then we can simply say it: The current crop of "AI" is not actually AI. It is not intelligence. It is a kind of huge encoded, non-transparent dictionary.
As someone who has been involved with customer support (on the in-house tech side), the vast majority of contacts to a CS team will be either very inane or extremely inane. If you can automate away the lowest tier of support with LLMs, you'll improve response times not just for the simple questions but also for the hard ones.
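That lowest tier is essentially intent triage. A minimal sketch, assuming a hypothetical `classify_intent` call and made-up intents and thresholds (illustrative only, not anyone's production stack):

```python
# Illustrative tier-1 triage: auto-answer known inane intents, escalate the
# rest. Intents, the threshold, and `classify_intent` are assumptions for
# the sketch, not a description of a real support system.

CANNED_ANSWERS = {
    "password_reset": "You can reset your password at /account/reset.",
    "order_tracking": "Tracking details are under /orders in your account.",
}

def classify_intent(message: str) -> tuple[str, float]:
    """Stand-in for an LLM or classic classifier; returns (intent, confidence)."""
    raise NotImplementedError

def route_to_human(message: str, intent: str) -> str:
    # Escalations keep the classifier's guess so the agent starts with context.
    return f"[queued for an agent; suspected intent: {intent}]"

def triage(message: str) -> str:
    intent, confidence = classify_intent(message)
    if intent in CANNED_ANSWERS and confidence >= 0.9:
        return CANNED_ANSWERS[intent]       # the inane tier, answered instantly
    return route_to_human(message, intent)  # humans now queue only for the rest
```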
I have had the problem with customer support that about 90% of the calls/chats I have placed should have been automated (on their side), and the remaining 10% needed escalation beyond the "customer service" escalation ladder. In America, sadly, that means one of two things: (1) you call a friend who works there or (2) you have your lawyer send a demand letter requesting something rather inane.
I agree with that common pattern but even without [current] AI there were ways to automate/improve the lowest tier: very often I don't find my basic questions in the typical corporation's FAQ.
I usually assume that's because they do not want to answer those basic questions, or want to hide the answers. For example, a shop with no answer in the FAQ about how refunds work. Instant sus.
Isn't this literally just "productivity growth"? You (and I think the article) are describing the ability to do more work with the same number of people, which seems like the economic definition of productivity.
We have an infinite capacity for 'making' work. It just shifts from real productivity to make-work overhead.
Fiefdoms and empires will be maintained.
This is like a mini parallel of the industrial revolution.
A lot of places start with a large and unskilled workforce, getting into e.g. the textile industry (which brings a better ROI than farming). Then the automation arrives, but it leaves a lot of people jobless (still being unskilled), while there are new jobs in maintaining the machinery etc.
Copilot found this based on your description:
https://impact.economist.com/projects/responsible-innovation...
I don't know about lawyering, but with engineering research, I can now ask ChatGPT's Deep Research to do a literature review on any topic. This used to take time and effort.
Without junior positions there are no future senior positions.
Which works great for current seniors trying to continue getting paid, eh?
The Productivity Paradox is officially a thing. Maybe that’s what you’re thinking of?
https://en.m.wikipedia.org/wiki/Productivity_paradox
Definitely. When computers came out, jobs increased. When the Internet became widely used, jobs increased. AI is simply another tool.
The sad part is: do you think we'll see this productivity gain as an opportunity to stop the culture of overworking? I don't think so. I think people will expect more from others because of AI.
If AI makes employees twice as efficient, do you think companies will decrease working hours or cut their employment in half? I don't think so. It's human nature to want more. If 2 is good, 4 is surely better.
So instead of reducing employment, companies will keep the same number of employees because that's already factored into their budget. Now they get more output to better compete with their competitors. To reduce staff would be to be at a disadvantage.
So why do we hear stories about people being let go? AI is currently a scapegoat for companies that were operating inefficiently and over-hired. It was already going to happen. AI just gave some of these larger tech companies a really good excuse. They weren't exactly going to admit they made a mistake and over-hired, now were they? Nope. AI was the perfect excuse.
Like all things, it's cyclical. Hiring will go up again. The AI boom will bust. On to the next thing. One thing is for certain, though: we all now have a fancy new calculator.
Well, I think productivity gains should correlate with stock price growth. If we want stock prices to increase exponentially, sales must also grow exponentially, which means we need to become exponentially more productive. We can stop that — or we can stop tying company profitability to stock prices, which is already happening to some extent. And when we talk about 'greedy shareholders,' remember that often means pension funds - essentially our savings and our hope for a decent retirement, assuming productivity continues to grow exponentially.
You either believe that companies are trying to grow as much as possible within their current budget, or not.
Automation is one way to do that.
When the economic reports say ‘gains were due to improvements in economic efficiency’ that is exactly what they are describing.
Bullshit Jobs, both the article and the subsequent book, explores this theme a lot.
https://libcom.org/article/phenomenon-bullshit-jobs-david-gr...
Read the article. The book is just a long, boring elongation of it without new content.
I do not agree; I think the book is much more interesting than the article. For example, the types of jobs, such as Box Tickers and Flunkies, as well as some really interesting anecdotes.
Loved the article. Bought the book. I put it down before halfway, as it was poorly written long-form page-filling. If it came out today, I would totally understand someone calling AI slop on it: "Here's my popular article, rewrite it at book length for me."
15 years ago I created my own LLC, got work experience from some contracts, and had a friend answer the reference checks.
I skipped over junior positions for the most part
I don’t see that not working now
I feel like people in the comments are misunderstanding the findings in the article. It’s not that people save time with AI and then turn that time to novel tasks; it’s that perceived savings from using AI are nullified by new work which is created by the usage of AI: verification of outputs, prompt crafting, cheat detection, debugging, whatever.
This seems observationally true in the tech industry, where the world’s best programmers and technologists are tied up fiddling with transformers and datasets and evals so that the world’s worst programmers can slap together temperature converters and insecure twitter clones, and meanwhile the quality of the consumer software that people actually use is in a nosedive.
I think the software quality nosedive significantly predates generative AI.
I think it's too early to say whether AI is exacerbating the problem (though I'm sympathetic to the view that it is) or improving it, or just maintaining the status quo.
The other night I was too tired to code so I decided to try vibe coding a test framework for the C/C++ API I help maintain. I've tried this a couple times so far with poor results but I wanted to try again. I used Claude 3.5 IIRC.
The AI was surprisingly good at filling in some holes in my specification. It generated a ton of valid C++ code that actually compiled (except it omitted the necessary #includes). I built and ran it and... the output was completely wrong.
OK, great. Now I have a few hundred lines of C++ I need to read through and completely understand to see why it's incorrect.
I don't think it will be a complete waste of time because the exercise spurred my thinking and showed me some interesting ways to solve the problem, but as far as saving me a bunch of time, no. In fact it may actually cost me more time trying to figure out what it's doing.
With all due respect to folks working on web and phone apps, I keep getting the feeling that AI is great for high level, routine sorts of problems and still mostly useless for systems programming.
> With all due respect to folks working on web and phone apps, I keep getting the feeling that AI is great for high level, routine sorts of problems and still mostly useless for systems programming.
As one of those folks, no it's pretty bad in that world as well. For menial crap it's a great time saver, but I'd never in a million years do the "vibe coding" thing, especially not with user-facing things or especially not for tests. I don't mind it as a rubber duck though.
I think the problem is that there are two groups of users: the technical ones like us, and then the managers and C-levels etc. They see it spit out a hundred lines of code in a second, and as far as they know (and care) it looks good, not realizing that someone now has to spend their time reviewing those 100 lines, plus carrying the burden of maintaining them into the future. But all they see is a way to get the pesky, expensive devs replaced, or at least a chance to squeeze more out of them. The system is so flashy and impressive-looking, and you can't even blame them for falling for the marketing and hype; after all, that's what all the AIs are being sold as: omnipotent and omniscient worker replacers.
Watching my non-technical CEO "build" things with AI was enlightening. He prompts it for something fairly simple, like a TODO List application. What it spits out works for the most part, but the only real "testing" he does is clicking on things once or twice and he's done and satisfied, now convinced that AI can solve literally everything you throw at it.
However if he were testing the solution as a proper dev would, he'd see that the state updates break after a certain amount of clicks, and that the list was glitching out sometimes, and that adding things breaks on scroll and overflows the viewport, and so on. These are all real examples of an "app" he made by vibe coding, and after playing around with it myself for all of 3 minutes I noticed all these issues and more in his app.
For esoteric config files (such as ntp or chrony) that would take me 10-15 mins to write and tweak, it gets done in seconds.
Over time, that adds up.
For simple utility programs and scripts, it also does a great job.
> With all due respect to folks working on web and phone apps, I keep getting the feeling that AI is great for high level, routine sorts of problems and still mostly useless for systems programming.
As someone working on routine problems in mainstream languages where training data is abundant, LLMs are not even great for that. Sure, they can output a bunch of code really quickly that on the surface appears correct, but on closer inspection it often uses nonexistent APIs, the logic is subtly wrong or convoluted for no reason, it does things you didn't tell it to do or ignores things you did, it has security issues and other difficult to spot bugs, and so on.
The experience is pretty much what you summed up. I've also used Claude 3.5 the most, though all the other SOTA models have the same issues.
From there, you can go into the loop of copy/pasting errors to the LLM or describing the issues you did see in the hopes that subsequent iterations will fix them, but this often results in more and different issues, and it's usually a complete waste of time.
You can also go in and fix the issues yourself, but if you're working with an unfamiliar API in an unfamiliar domain, then you still have to do the traditional task of reading the documentation and web searching, which defeats the purpose of using an LLM to begin with.
To be clear: I don't think LLMs are a useless technology. I've found them helpful at debugging specific issues, and implementing small and specific functionality (i.e. as a glorified autocomplete). But any attempts of implementing large chunks of functionality, having them follow specifications, etc., have resulted in much more time and effort spent on my part than if I had done the work the traditional way.
The idea of "vibe coding" seems completely unrealistic to me. I suspect that all developers doing this are not even checking whether the code does what they want to, let alone reviewing the code for any issues. As long as it compiles they consider it a success. Which is an insane way of working that will lead to a flood of buggy and incomplete applications, increasing the dissatisfaction of end users in our industry, and possibly causing larger effects not unlike the video game crash of 1983 or the dot-com bubble.
> The idea of "vibe coding" seems completely unrealistic to me.
That's what happens with "AI art" too. Any non-artist can create images in seconds, and they will look kind of valid or even good to them, much like those "vibe coded" things look to CEOs.
AI is great at generating crap really fast and efficiently. Not so good at generating stuff that anyone actually needs and which must actually work. But we're also discovering that a lot of what we consume can be crap and be acceptable. An endless stream of generated synthwave in the background while I work is pretty decent. People wanting to decorate their podcasts or tiktoks with something that nobody is going to pay attention to, AI art can do that.
For vibe coding, right now prototyping and functional mockups seem to be quite a viable use.
> With all due respect to folks working on web and phone apps, I keep getting the feeling that AI is great for high level, routine sorts of problems and still mostly useless for systems programming.
I agree. AI is great for stuff that's hard to figure out but easy to verify.
For example, I wanted to know how to lay out something a certain way in SwiftUI and asked Gemini. I copied what it suggested, ran it and the layout was correct. I would have spent a lot more time searching and reading stuff compared to this.
> OK, great. Now I have a few hundred lines of C++ I need to read through and completely understand to see why it's incorrect.
I think it's often better to just skip this and delete the code. The cool thing about those agents is that the cost of trying things out is extremely low, so you don't have to overthink it; if it looks incorrect, just revert it and try something else.
I've been experimenting with Junie for the past few days and have had a very positive experience. It wrote a bunch of tests for me that I'd been postponing for quite some time, and it was mostly correct from a single-sentence prompt. Sometimes it does something incorrect, but I usually just revert it and move on, then try something else later. There's definitely a sweet spot of tasks it does well, and you have to experiment a bit to find it.
> where the world’s best programmers and technologists are tied up fiddling with transformers and datasets and evals so that the world’s worst programmers can slap together temperature converters and insecure twitter clones
This statement is incredibly accurate
> slap together temperature converters and insecure twitter clones
because those "best programmers" don't want to be making temperature converters or twitter clones (unless they're paid megabucks). This enables the low-paid "worst" programmers to do those jobs for peanuts.
It's an acceptable outcome imho.
Let's assume that I'm closer to the best programmers than the worst, for a second; I definitely will build a temperature converter, at my usual hourly rate. I don't think we should consider any task "beneath us"; doing so detaches us from reality, makes us entitled, and ultimately stunts our growth.
But do we actually need more temperature converters? Maybe it would be better if they were hard to make such that people didn't waste their time, and the bad programmers went out and did some yard work.
Personally, having worked in professional enterprise software for ~7 years now I've come to a pretty hard conclusion.
Most software should not exist.
That's not even meant in the tasteful "it's a mess" way. From a purely money-making efficiency standpoint, upwards of 90% of the code I've written in this time has not meaningfully contributed back to the enterprise, and I've tried really hard to get that number lower. Mind you, this is professional software. If you consider the vibe-coder guys, I'd estimate that number MUCH higher.
It just feels like the whole way we've fit computing into the world is misaligned. We spend days building UIs that don't help the people we serve and that break at the first change to the process, and because of the support burden of that UI we never get to actually automate anything.
I still think computers are very useful to humanity, but we have forgotten how to use them.
> Upwards of 90% ... of software should not exist ... it has not meaningfully contributed back to the enterprise
This is Sturgeon's law. (1)
And yes, but it's hard or impossible to identify the useful 10% ahead of time. It emerges after the fact.
1) https://en.wikipedia.org/wiki/Sturgeon%27s_law
And not only that, but most >>changes<< to software shouldn't happen, especially if it's user facing. Half my dread in visiting support websites is that they've completely rearranged yet again, and the same thing I've wanted five times before requires another 30 minutes of figuring out where they put it.
>it’s that perceived savings from using AI are nullified by new work which is created by the usage of AI:
I mean, isn't that obvious looking at economic output and growth? The Shopify CEO recently published a memo in which he claimed that high achievers saw "100x growth". Odd that this isn't visible in the Shopify market cap. Did they fire 99% of their engineers instead? Maybe the memo was AI-written too.
Are there any 5 man software companies that do the work of 50? I haven't seen them. I wonder how long this can go on with the real world macro data so divorced from what people have talked themselves into.
the state of consumer software is already so bad, and LLMs are trained on a good chunk of that, so their output can possibly produce worse software, right? /s
Modern AI tools are amazing, but they're amazing like spell check was amazing when it came out. Does it help with menial tasks? Yes, but it creates a new baseline that everyone has and just moves the bar. There's scant evidence that we're all going to just sit on a beach while AI runs your company anytime soon.
There’s little sign of any AI company managing to build something that doesn’t just turn into a new baseline commodity. Most of these AI products are also horribly unprofitable, which is another reality that will need to be faced sooner rather than later.
It's got me wondering: does any of my hard work actually matter? Or is it all just pointless busy-work invented since the industrial revolution to create jobs for everyone, when in reality we would be fine if like 5% of society worked while the rest slacked off? Don't think we'd have as many videogames, but then again, we would have time to play, which I would argue is more valuable than games.
To paraphrase Lee Iacocca: we must stop and ask ourselves, how many videogames do we really need?
> It's got me wondering: does any of my hard work actually matter?
I recently retired from 40 years in software-based R&D and have been wondering the same thing. Wasn't it true that 95% of my life's work was thrown away after a single demo or a disappointingly short period of use?
And I think the answer is yes, but this is just the cost of working in an information economy. Ideas are explored and adopted only until the next idea replaces them or the surrounding business landscape shifts yet again. Unless your job is building products like houses or hammers (which evolve very slowly or are too expensive to replace), the cost of doing business today is a short lifetime for any product; products are replaced in increasingly fast cycles, useful only until they're no longer competitive. And this evanescent lifetime is especially the case for virtual products like software.
The essence of software is to prototype an idea for info processing that has utility only until the needs of business change. Prototypes famously don't last, and increasingly today, they no longer live long enough even to work out the bugs before they're replaced with yet another idea and its prototype that serves a new or evolved mission.
Will AI help with this? Only if it speeds up the cycle time or reduces development cost, and both of those have a theoretical minimum, given the time needed to design and review any software product has an irreducible minimum cost. If a human must use the software to implement a business idea then humans must be used to validate the app's utility, and that takes time that can't be diminished beyond some point (just as there's an inescapable need to test new drugs on animals since biology is a black box too complex to be simulated even by AI). Until AI can simulate the user, feedback from the user of new/revised software will remain the choke point on the rate at which new business ideas can be prototyped by software.
> Ideas are explored and adopted only until the next idea replaces them or the surrounding business landscape shifts yet again.
“Creative destruction is a concept in economics that describes a process in which new innovations replace and make obsolete older innovations.”
https://en.wikipedia.org/wiki/Creative_destruction
I think about this a lot with various devices I owned over the years that were made obsolete by smartphones. Portable DVD players and digital cameras are the two that stand out to me; each of them cost hundreds of dollars but only had a marketable life of about 5 years. To us these are just products on a shelf, but every one of them had a developer, an assembly line, and a logistics network behind them; all of these have to be redeployed whenever a product is made obsolete.
Most of a chef's meals are now poo. Memories of those meals survive, but eventually they will fade too.
There is a lot of value in being the stepping stone to tomorrow. Not everyone builds a pyramid.
I still have code running in production I wrote 20 years ago. Sure, it’s a small fraction, but arguably that’s the whole point.
So... to what extent is software a durable good?
Who said it's durable?
This is what makes software interesting. It theoretically works forever and has zero marginal production cost, but its durability is driven by business requirements and by hardware and OS changes. Some software might have a 20-year life. Some might get only 6 months.
A house is way more durable. My house is older than all software, and I expect it to outlive most software written (either today or ever). Except Voyager's, perhaps!
>does any of my hard work actually matter?
Yes... basically, in life you have to find a definition of "to matter" that you can strongly believe in. Otherwise everything feels aimless, life itself included.
The rest of what you ponder in your comment is the same. And I'd like to add that baselines have shifted a lot over the years of civilization. I like to think about one specific example: painkillers. Painkillers were not widely used during medical procedures until some 150 years ago, maybe even later. Now it's much less horrible to participate in those procedures, for everyone involved really, and the outcomes are better for this factor alone - because the patient moves around less while anesthetized.
But even this is up for debate. All in all, it really boils down to what the individual feels is a worthy life. Philosophy is not done yet.
Well, from a societal point of view, meaningful work would be work that is necessary to either maintain or push that baseline.
Perhaps my initial estimate of 5% of the workforce was a bit optimistic; say 20% of the current workforce is necessary to have food, healthcare, and maybe a few research facilities focused on improving all of the above?
I'm sure we could organize it if that would be the goal.
No, it would be impossible to organize. Planned economies have always failed at that scale, and always will. AI won't change that reality.
I'm pretty sure it's not impossible, but rather just improbable, because of how human nature works. In other words, we are not incentivized to do that, and that is why we don't do that, and even when we did, it always fell apart.
You are very right that AI will not change this. Neither did any other productivity improvement in the past (at least not directly).
Right? So what's the current goal, and why is it better than this one?
Power itself seems to be the goal, and the reason for it is human DNA, I think. I have doubts that we can build anything different from this (over a sufficiently long run).
Power might be a goal for individuals, but surely it's not the goal for society as a whole?
Does society as a whole even have a goal currently? I don't really think it does. Like do ideologists even exist today?
I wish society was working towards some kind of idea of utopia, but I'm not convinced we're even trying for that. Are we?
I don't feel like we have goals as a society either.
Unless you propose slavery, how are you going to choose the 5%?
Who in their right mind would work when 95 out of 100 people around them are slacking off all day? Unless you pay them really well. So well that they prefer working to slacking off. But then the slackers will want nicer things to do in their free time that only the workers can afford. And then you'd end up back at the start.
> when in reality we would be fine if like 5% of society worked while the rest slacked off?
If that were really true, who gets to decide which 5% get to do the work, while the rest leech off them?
Because I certainly would not want to be in that 5%.
Nope. The current system may be misdirecting 95% of labor, but until we have sufficiently modeled all of nature to provide perfect health and brought world peace, there is work to do.
> Don't think we'd have as many videogames, but then again, we would have time to play, which I would argue is more valuable than games.
Would we have fewer video games? If all our basic needs were met and we had a lot of free time, more people might come together to create games together for free.
I mean, look at how much free content (games, stories, videos, etc) is created now, when people have to spend more than half their waking hours working for a living. If people had more free time, some of them would want to make video games, and if they weren’t constrained by having to make money, they would be open source, which would make it even easier for someone else to make their own game based on the work.
Mine doesn't, and I am fine with that; I never needed such validation. I derive fulfillment from my personal life and the achievements and passions there, more than enough. Through that lens, office politics, the promotion rat race, and what people do in them just make me smile. I see otherwise smart folks ruin (or miss out on) their actual lives and families in pursuit of excellence in a very narrow direction, often underappreciated by employers and not rewarded adequately. I mean, at a certain point you either grok the game and optimize, or you don't.
The work brings modest wealth over time, allows me and my family to live in a long-term safe place (Switzerland), and builds a small reserve for bad times (or inheritance, early retirement, etc.; this is Europe, no need to save up for kids' education or potentially massive healthcare bills). Don't need more from life.
Agree. Now I watch the rat racers with bemusement while I put in just enough to get a paycheck. I have enough time and energy to participate deeply in my children’s upbringing.
I’m in America so the paychecks are very large, which helps with private school, nanny, stay at home wife, and the larger net worth needed (health care, layoff risk, house in a nicer neighborhood). I’ve been fortunate, so early retirement is possible now in my early 40s. It really helps with being able to detach from work, when I don’t even care if I lose my job. I worry for my kids though. It won’t be as easy for them. AI and relentless human resources optimization will make tech a harder place to thrive.
You're on the right path, don't fall back into the global gaslight. Go deeper.
http://youtube.com/watch?v=9lDTdLQnSQo
>It's got me wondering: does any of my hard work actually matter?
It mattered enough for someone to pay you money to do it, and that money put food on the table and clothes on your body and a roof over your head and allowed you to contribute to larger society through paying taxes.
Is it the same as discovering that E = mc² or Jonas Salk's contributions? No, but it's not nothing either.
Most work is redundant and unnecessary. Take, for example, the classic gas-station-on-every-corner situation. This turf war between gas providers (or, by proxy, the franchisees they licensed for this location) does not exist because three or four gas stations are operating at maximum capacity. No, this is 3 or 4 fishermen with a line in the river, made possible solely because the inputs (real estate, gas, labor, merchandise) are cheap enough that a gas station need never run even close to capacity and still return a profit for the fisherman.
Who benefits from the situation? You or I, who don't have to make a U-turn to get gas at this intersection, perhaps, but that is not much benefit compared to the opportunity cost of squandering three prime corner lots on the same single use. The clerk at the gas station, for having a job available? Perhaps, although maybe their labor in aggregate would have been employed in other, less redundant uses that could benefit our society more than selling smokes and putting $20 on pump 4 at 3am. The real beneficiary of this entire arrangement is the fisherman, the owner or shareholder who ultimately skims from all the pots, thanks to having what is effectively a modern version of the plantation sharecropper: spending all their money in the company store and on company housing, with a fig leaf of being able to choose from any number of minimum-wage jobs, spend their wages in any number of national chain stores, and rent any number of increasingly investor-owned properties. Quite literally all owned by the same shareholders, when you consider how people diversify their investments across these sectors.
We benefit because when there’s only one gas station, they can charge more than if there are four.
It's weird to read the same HN crowd that decries monopolies and extols the virtues of competition turn around and complain about job duplication and "bullshit jobs" like marketing and advertising that arise from competition.
It's only weird if you model HN as a hivemind.
I've been thinking similarly. Bertrand Russell once said: "there are two types of work. One, moving objects on or close to the surface of the Earth. Two, telling other people to do so". Most of us work in buildings that don't actually manufacture or process anything. Instead, we process information that describes manufacturing and transport. Or we create information for people to consume when they are not working (entertainment). Only a small fraction of human beings are actually producing things that are necessary for physiological survival. The rest of us are, at best, helping them optimize that process, or at worst, leeching off of them in the name of "management" of their work.
Spellcheck (and auto completion) is like AI - it solves one problem and creates another.
Now instead of misspelled words (which still happens all the time) we have incorrect words substituted in place of the correct ones.
Look at any long form article on any website these days and it will likely be riddled with errors, even on traditional news websites!
It's why executive types are all hyped about AI. Being able to code 2x more will mean they get 2x more things (roughly speaking), but the workers aren't going to get 2x the compensation.
Keep in mind that competitors will also produce 2x, so even executives won't get 2x compensation.
Indeed. And AI does its work without those productivity-hindering things like need for recreation and sleep, ethical treatment, and a myriad of others. It's a new resource to exploit, and that makes everyone excited who is building on some resource.
AI can't do our jobs today, but we're only 2.5 years from the release of ChatGPT. The performance of these models might plateau today, but we simply don't know. If they continue to improve at the current rate for 3-5 more years, it's hard for me to see how human input would be useful at all in engineering.
And if my plane keeps the take-off acceleration up for 7 months we'd be at 95% the speed of light by then.
They will never be creative, and creativity is a pretty big deal.
Most software engineering jobs aren't about creativity, but about taking requirements stated in a slightly vague fashion and actualizing them for the stakeholder to view and review (and adjust as needed).
The areas for which creativity is required are likely related to digital media software (like SFX in movies, games, and perhaps very innovative software). In these areas, surely the software developer working there will have the creativity required.
> but about putting some requirements stated in a slightly vague fashion, and actualizing it for the stakeholder to view and review
sounds like a form of creativity to me!
To the extent it’s measurable, LLMs are becoming more creative as the models improve. I think it’s a bold statement to say they’ll NEVER be creative. Once again, we’ll have to see. Creativity very well could be emergent from training on large datasets. But also it might not be. I recommend not speaking in such absolutes about a technology that is improving every day.
"To the extent it's measurable" is very load-bearing in the semantics here. A lot of "creativity" is very hard to measure.
I agree, and I think most people would say the current models would rank low on creativity metrics however we define them. But to the main point, I don’t see how the quality we call creativity is unique to biological computing machines vs electronic computing machines. Maybe one day we’ll conclusively declare creativity to be a human trait only, but in 2025 that is not a closed question - however it is measured.
We were talking about LLMs here, not computing machines in general. LLMs are trained to mimic, not to produce novel things, so a person can easily think LLMs won't get creative even though some future computer program could.
> LLMs are trained to mimic, not to produce novel things
Which LLM? That’s not the purpose of training for any model that I know of.
Training LLMs is literally finding sets of numbers that make them better at mimicking human language.
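That's a fair one-line description of the pretraining objective: choose numbers (parameters) that minimize next-token cross-entropy on human text. A toy illustration, with a character-level bigram table standing in for the transformer (counting instead of gradient descent, but the same "numbers that mimic" idea):

```python
# Toy illustration of the pretraining objective: pick numbers (here, bigram
# probabilities) that maximize the likelihood of the training text, i.e.
# mimic it. A real LLM swaps the count table for a transformer trained by
# gradient descent, but the objective is the same next-token prediction.

from collections import Counter, defaultdict
import math

def train_bigram(text: str) -> dict[str, dict[str, float]]:
    counts: dict[str, Counter] = defaultdict(Counter)
    for a, b in zip(text, text[1:]):
        counts[a][b] += 1
    # Normalize counts into per-character next-token probabilities.
    return {
        a: {b: n / sum(nexts.values()) for b, n in nexts.items()}
        for a, nexts in counts.items()
    }

def cross_entropy(model: dict[str, dict[str, float]], text: str) -> float:
    """Average negative log-likelihood per character; training lowers this."""
    nll = [
        -math.log(model.get(a, {}).get(b, 1e-12))
        for a, b in zip(text, text[1:])
    ]
    return sum(nll) / len(nll)

corpus = "the cat sat on the mat"
model = train_bigram(corpus)
print(cross_entropy(model, corpus))  # low on text the numbers have mimicked
```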
Difficult to measure, but trivial to define.
Creativity means play, as in not following rules, adding something of yourself.
Something a computer just can't do.
This is effectively Jevons paradox[1] in action.
The cost, in money or time, for getting certain types of work done decreases. People ramp up demand to fill the gap, "full utilization" of the workers.
It's a very old claim that the next technology will lead to a utopia where we don't have to work, or we work drastically less often. Time and again we prove that we don't actually want that.
My hypothesis (I'm sure it's not novel or unique) is that very few people know what to do with idle hands. We tend to keep stress levels high as a distraction, and tend to freak out in various ways if we find ourselves with low stress and nothing that "needs" to be done.
[1] https://en.m.wikipedia.org/wiki/Jevons_paradox
> It's a very old claim that the next technology will lead to a utopia where we don't have to work, or we work drastically less often. Time and again we prove that we don't actually want that.
It actually does, but due to the wrong distribution of the reward gained from that tech (automation), it does not work out for the common folks.
Let's take a simple example: you, me, and 8 other HN users work in Bezos' warehouse. We each work 8h/day. Suddenly new tech comes in which can do the same task we do, and each unit of the machine can do the work of 2-4 of us alone. If Bezos buys 4 of the units and sets each to work at x2 capacity, then 8 of us now have 8h/day x 5 days x 4 weeks = 160h of leisure a month.
The problem is, the 8 of us still need money to survive (food, rent, utilities, healthcare, etc.). So, according to the tech utopians, the 8 of us can now use those 160h of free time to focus on more important and rewarding work. (See, in the context of all the AI peddlers, how using AI will free us to do more important and rewarding work!) But to survive, my "rewarding work" turns out to be gig work, at the same effort or more hours.
So in theory, the owner controlling the automation gets more free time to attend interviews and political/social events, while the people automated away fall downward and have to work harder to maintain their survival. Of course, I hope our over-enthusiastic brethren who are paying LLM providers for the privilege of training their own replacements figure out the equation soon, and don't get sold on the "free time to do more meaningful work" line the same way Bezos' warehouse gave some of us some leisure while the automation was coming online and still needed a failsafe for a while. :)
I think a lot of people would be fine being idle if they had a guaranteed standard of living. When I was unemployed for a while, I was pretty happy in general but stressed about money running out. Without the money issue the last thing I would want to do is to sell my time to a soulless corporation. I have enough interests to keep me busy. Work just sucks up time I would love to spend on better things.
> a lot of people would be fine being idle if they had a guaranteed standard of living
just a lot of words for "lazy" - it's built into living organisms.
The whole economic system today is constructed to ensure that one would suffer from being "lazy". And this would be the case until post-scarcity.
I would have said lazy rather than idle if that's what I meant.
For most people, lazy implies there are things you really ought to get done, but you're choosing to avoid doing them to the point where it's a problem that the thing still isn't taken care of.
Idle just means you don't feel like you have anything that needs to be done; you aren't avoiding things to the point that it causes a problem.
Of course our economic system prefers people to be "fully utilized" rather than idle, but who cares? I don't owe an economic system anything, we could change the system whenever we want, and ultimately an economy is only useful to analyze the comparative output that already happened - it has nothing to do with the present or future.
Lazy is a pejorative term used to chastise people in a culture that promotes activity even if it’s pointless.
Oh for sure, I should have included that. I was thinking of people being idle by choice rather than circumstance.
Food production is a classic case where once productivity is high enough you simply get fewer farmers.
We are currently a long way from that kind of change as current AI tools suck by comparison to literally 1,000x increases in productivity. So, in well under 100 years programming could become extremely niche.
We are seeing an interesting limit in the food case though.
We increased production and needed fewer farmers, but we now have so few farmers that most people have very little idea of what food really is, where it comes from, or what it takes to run our food system.
Higher productivity is good to a point, but eventually it risks becoming too fragile.
100%. In fact, this exact scenario is playing out in the cattle industry.
Screwworm, a parasite that kills cattle in days, is making a comeback. And we are less prepared for it this time, because previously (the 1950s-1970s) we had a lot more labor in the industry to manually check each head of cattle. Bloomberg even called it out specifically:
Ranchers also said the screwworm would be much deadlier if it were to return, because of a lack of labor. “We can’t fight it like we did in the ’60s, we can’t go out and rope every head of cattle and put a smear on every open wound,” Schumann said.
https://www.bloomberg.com/news/features/2025-05-02/deadly-sc...
This sounds like the kind of labor problem that could quickly be solved by hiring more people. So really the worst case here is that beef will cost a little more for a little while. Hardly an existential threat.
Who will they hire though? Cattle operations are often in very rural areas, and we are putting up huge blockers for immigrants and migrant workers.
you're conflating a political issue with a problem having zero possible solutions.
A stroke of the pen will fix any and all political issues, if/when the political desire comes about.
Vs a problem with no solution - no pen will fix any of it, regardless of political will.
It's not just a problem needing a signature. We have the policies we have today because a lot of people want them, or at least agree with the general direction.
If the right person signs a change that magically fixes a labor shortage in a rural area we're right back to where we were, and much of the public would be up in arms about it.
(This doesn't actually reflect my opinion on immigration laws to be clear, just my view on where we are today in the US)
This doesn't sound like an accurate description of US agriculture. Just off the top of my head:
* The US is not producing enough food - it's now a net food importer
* The increasing problems we are seeing in the food supply chain are usually tied to producers cutting costs and padding margins
Matt Stoller has gone into this at length - https://www.thebignewsletter.com/p/is-america-losing-the-abi...
So I mean, it could depend on your definition of productivity: if anything that increases shareholder returns at the expense of a good product or a robust supply chain counts as more "productivity", then sure. Just as monopolies are the most "productive" businesses ever for their shareholders, but generally awful for everyone else, and not what most people would think of as productive.
The human definition of productivity is: fewer inputs producing more and better outputs.
The cartel doublespeak definition is: the product got worse and the margins improved, which seems to describe US Big Ag at present.
At least in US agriculture, when they speak of productivity they generally mean pounds per acre for crops. For livestock it's a bit less clear; they sometimes refer to pounds of feed per pound of final live weight. You generally have to schedule a slaughter day months out and estimate the final weight, and you don't get paid as well if you are too far off in either direction. It's less common generally, but in the cattle industry I've heard the accuracy of hitting those targets talked about as productivity.
I agree with you on the doublespeak, though. Really, I think it's just that the public doesn't understand the meaning the industry gives to "productive". The industry doesn't hide what it means by the word; most people just don't care about any version of "productive" that measures things like nutrient value, sustainability, soil health, animal welfare, etc.
> Food production is a classic case where once productivity is high enough you simply get fewer farmers.
Yes, but.
There are more jobs in other fields that are adjacent to food production, particularly in distribution. The middle class didn't exist back then, and retail workers are now a large percentage of workers in most parts of the world.
Sure, but when farmers were 90% of the labor force, many of the remaining 10% were also related to food distribution and production; a village blacksmith mostly worked in support of farming, salt production/transport for food storage, etc.
Food is just a smaller percentage of the economy overall.
Was there ever a time when 90% of labor was in farming and we had anything resembling an economy?
I would have assumed that if 90% of people are farming, it's largely subsistence, and any trade happened on a much more local scale, potentially without any proper currency involved.
Globally, perhaps not, as fishing and hunting were major food sources in antiquity, especially when you include North America etc. Similarly, slavery meant a significant portion of the population was in effect outside the economy.
That said, there’s been areas where 90% of the working population was at minimum helping with the harvest up until the Middle Ages.
Did you type in an additional zero there?
Nope, where a family might struggle to efficiently manage 50 acres under continuous cultivation even just a few hundred years ago, now it’s not uncommon to see single family farms with 20,000 acres each of which is several times more productive.
It’s somewhat arbitrary where you draw the line historically, but it’s not just about maximum productivity; it’s worth remembering crops used to fail from drought etc. far more frequently.
Small hobby farms are also a thing these days, but that’s a separate issue.
For those 20,000 acre farms, by what measure are they more productive?
In my experience they're very productive by poundage yield, but horribly unproductive when it comes to inputs required, chemicals used, biodiversity, soil health, etc.
We’re looking at hours of labor per lb of food.
The difference is so extreme vs. historic methods that you could skip pesticides, avoid harming soil health or biodiversity, etc., and still be talking 1,000x.
Though really growing crops for human consumption is something of a rounding error here. It’s livestock, biofuels, cotton, organic plastics, wood, flowers, etc that’s consuming the vast majority of output from farms.
If that's the metric, sure we have gotten very good at producing more pounds of food per human hour of labor.
Two things worth noting though: pounds of food say little about the nutritional value to consumers. I don't have good links handy, so I won't make any specific claims; it's just worth considering whether weight is the right metric.
As far as human labor hours go, we've gotten very good at outsourcing those costs. Farm labor hours ignore all the hours put into their off-farm inputs (machinery, pesticides and fertilizers, seed production, etc). We also leverage an astronomical amount of (mostly) diesel fuel to power all of it. The human labor hours are small, but I've seen estimates of a single barrel of oil being comparable to 25,000 hours of human labor, or 12.5 years of full employment. I'd be interested to do the math now, but I expect we have seen only a fraction of that 25,000x multiplier materialize in the reduction of farm hours worked over the last century (or back to the industrial revolution).
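For a rough sense of where an estimate like that comes from (my own back-of-the-envelope numbers, treating it purely as raw energy content):

    # Back-of-the-envelope: energy in a barrel of oil vs. sustained human labor.
    # Assumed values (mine): ~6.1 GJ per barrel, ~75 W sustained human output.
    barrel_joules = 6.1e9              # roughly 5.8 million BTU per barrel
    human_watts = 75                   # rough sustained mechanical output of a laborer
    hours = barrel_joules / (human_watts * 3600)
    print(f"{hours:,.0f} hours")       # ~22,600 hours, in the ballpark of 25,000

That's raw energy content, not useful work; conversion losses are a separate question.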
> pounds of food say little about the nutritional value to consumers
Nah, it's not 100% but it says a lot about the nutritional value.
> inputs
You can approximate those with price. A barrel of oil might be a couple hours.
A couple hours of what? We can do drastically more work with a barrel of oil compared to a couple hours of human labor.
You really can’t. Human labor is productive, a barrel of oil on its own isn’t going to accomplish crap.
You likely get less useful work out of a gallon of gas in your car than it took to extract, refine, transport, and distribute that gallon of gas. Just as an example gas pumps use electricity that isn’t coming from oil.
> The human labor hours are small, but I've seen estimates of a single barrel of oil being comparable to 25,000 hours of human labor
That’s just wildly wrong by several orders of magnitude, to the point I question your judgment to even consider it a valid possibility.
Not only would the price be inherently much higher, but even if everyone including infants worked 50 hours per week, we'd still produce less than 1/30th the current world's output of oil, and going back, we've been extracting oil at industrial scale for over 100 years.
To get even close to those numbers you’d need to assume 100% of human labor going back into prehistory was devoted purely to oil extraction.
What are you claiming is wildly wrong exactly? The estimate comparing the amount of energy in a barrel of oil and the average amount of energy a human can produce in an hour?
Ok, yep that’s 100% BS.
Burning food can produce more useful work in a heat engine than you get from humans doing labor so I’m baffled by what about this comparison seems to make sense to you.
Ignoring that you’re still off by more than an order of magnitude. 100% of the energy content of oil can’t even be turned directly into work without losses. You get about 10% of its nominal energy content as useful work, less if you’re including energy costs of production, refining, and transport.
Even if you look at an oil well fire, it's incomplete combustion and not useful work.
You were comparing amount of energy between human labor and a barrel of oil? That's such a baffling metric that neither they nor I realized that's what you meant. It's not like you can replace a human with a solar panel, but if you could that would be astoundingly impressive and not diminished toward "horribly unproductive" by the fact that the solar panel is delivering more watts to do the same thing.
I'm not sure where that confusion lies.
The earlier commenter was talking about the massive reduction in the amount of human labor required to cultivate land and the relative productivity of the land.
That comparison comes down to amount of work done. Whether that work is done by a human swinging a scythe or a human driving a diesel powered tractor is irrelevant, the work is measured in joules at the end of the day. We have drastically fewer human hours put into farm labor because we found a massive multiplier effect in fossil fuel energy.
I'm not sure where solar panels came in, but sure they can also be used to store watts and produce joules of work if that's your preferred source of energy.
The confusion lies in why we would measure the efficiency of human labor in joules per unit of work instead of hours of human effort per unit of work.
In particular, if we can make a machine that spends more joules than a human, but reduces the human effort by orders of magnitude, why would that be "horribly unproductive"? Most people would call that amazingly productive. And when they want to broaden the view to consider the inputs too, they're worried about the labor that goes into the inputs, not the joules.
(And if the worry is the limited amount of fossil fuels in particular, we can do the same with renewable energy.)
The point of standardizing on joules of work is to account for externalized costs. You can focus only on human effort, but at what cost?
I'm still not sure why renewables are being brought up here. An earlier comment referenced solar; I never mentioned solar or renewables.
It's a silly method of accounting for externalized costs. Joules don't hurt anyone.
I only mention renewables because I'm grasping at straws to figure out why joules would matter.
Joules are just a measure of work, and this all started by an attempt to say how productive we are because we need fewer farmers today. My argument is that we only need fewer farmers because we found a cheap source of energy and have been using that to replace farmers.
When looking at joules, it's an attempt to compare something like a human cutting a field with a scythe and a tractor cutting it with an implement. The tractor is way more efficient at cutting it when considering only the human hours of labor. But of course it is: a single barrel of oil has way more energy potential, and even a small tractor's fuel consumption is tracked in gallons per hour.
>technology will lead to a utopia where we don't have to work
I'm kind of ok with doing more work in the same time, though if I'm becoming way more effective I'll probably start pushing harder on my existing discussions with management about 4 day work weeks (I'm looking to do 4x10s, but I might start looking to negotiate it to "instead of a pay increase, let's keep it the same but a 4x8 week").
If AI lets me get more done in the same time, I'm ok with that. Though, on the other hand, my work is budgeting $30/mo for the AI tools, so I'm kind of figuring that any time that personally-purchased AI tools are saving me, I deduct from my work week. ;-)
>very few people know what to do with idle hands
"Millions long for immortality that don't know what to do with themselves on a rainy Sunday afternoon." -- Susan Ertz
I don’t think it’s the consequence of most individuals’ preferences. I think it’s just the result of disproportionate political influence held by the wealthy, who are heavily incentivized to maximize working hours. Since employers mostly have that incentive, and since the political system doesn’t explicitly forbid it, there aren’t a ton of good options for workers seeking shorter hours.
> there aren’t a ton of good options for workers seeking shorter hours.
But you do have that option, right? Work 20 hours a week instead of 40. You just aren't paid for the hours that you don't work. In a world where workers are exchanging their labor for wages, that's how it's supposed to work.
For there to be a "better option" (as in, you're paid money for not working more hours) what are you actually being paid to do?
For all the thoughts that come to mind when I say "work 20 hours a week instead of 40" -- that's where the individual's preference comes in. I work more hours because I want the money. Nobody pays me to not work.
> But you do have that option, right?
Not really. Lots of kinds of work don't hire part-timers in any volume, period. There are very few jobs where the only tradeoff for working fewer hours is a reduction in compensation, whether proportional to the reduction in hours or even disproportionate to it.
>nobody pays me not to work.
If you’re in the US, then in theory you’re getting overtime for going over 40hrs a week. That’s time and a half for doing nothing, correct? I’d expect your principles put you firmly against overtime pay.
>But you do have that option, right? Work 20 hours a week instead of 40. You just aren't paid for the hours that you don't work. In a world where workers are exchanging their labor for wages, that's how it's supposed to work.
Look the core of your opinion is the belief that market dynamics naturally lead to desirable outcomes always. I simply don’t believe that, and I think interference to push for desirable outcomes which violate principles of a free market is often good. We probably won’t be able to agree on this.
> I’d expect your principles put you firmly against overtime pay.
No.. if society wants to disincentivize overworking by introducing overtime, that's fine by me. I'm not making any moral judgment. You just seem to live in a fantasy world where people aren't exchanging their labor for money.
> Look the core of your opinion is the belief that market dynamics naturally lead to desirable outcomes always.
I didn't say that, and I don't believe that. If you're just going to hallucinate what I think, what's the point in replying?
>You just seem to live in a fantasy world where people aren't exchanging their labor for money.
Where did you get that? My entire contention centers around a lack of good options for workers seeking to work fewer hours. A logical assumption, then, would be that I want policies which would give said workers more options. Examples include stronger protections for unions, higher minimum wages, etc. Since I saw these as the logical extrapolations from what I'd said originally, I figured your issue was gov interference in the labor market itself, since you said things like
>In a world where workers are exchanging their labor for wages, that's how it's supposed to work.
>(as in, you're paid money for not working more hours)
You took issue with more money for the same hours, did you not? Why wouldn't overtime be an obvious example? The reason I assumed you were just a libertarian or something was because it doesn't seem like there's an obvious logical juncture to draw a line at. If you're fine with society altering the behavior of the labor market to achieve certain desirable results, then why would this be any different fundamentally?
At least in the US, part-time work is often not really a thing. A while ago I talked to HR about reducing to 32 hours and they didn't seem to get the idea at all. It's either all in or nothing. In the US there is also the health insurance question.
For my relatives in Germany going part time seems easier and more accepted by companies.
Thank you! I didn't know this had a name. I remember thinking something along these lines in seventh grade social studies when we learned that Eli Whitney's cotton gin didn't actually end up improving conditions for enslaved people.
I suspected this would be the case with AI too. A lot of people said things like "there won't be enough work anymore" and I thought, "are you kidding? Do you use the same software I use? Do you play the same games I've played? There's never enough time to add all of the features and all of the richness and complexity and all of the unit tests and all of the documentation that we want to add! Most of us are happy if we can ship a half-baked anything!"
The only real question I had was whether the tech sector would go through a prolonged, destructive famine before realizing that.
Econ 101: supply is finite, demand infinite. Increased efficiency of production means that demand will meet the new price point, not that demand will cease to exist.
There are probably plenty of goods that are counter examples, but time utilization isn't one of them, I don't think.
> Time and again we prove that we don't actually want that.
That's the capitalist system. Unions successfully fought to decrease the working day to 8 hrs.
I don't think we can so easily pin it on capitalism. Capitalism brings incentives that drive work hours and expectations up for sure, but that's not the only thing in play.
Workers are often looking to make more money, take more responsibility, or build some kind of name or reputation for themselves. There's absolutely nothing wrong with that, but those goals also incentivize them to work harder and longer.
There's no one size fits all description for workers, everyone's different. The same is true for the whole system though, it doesn't roll up to any one cause.
What you say is true, but the dominant effect in the system driving it towards more exertion than anyone would find desirable is the profit incentive of owners to drive their workers harder.
How do you narrow it down to capitalism as the root cause though? It seems like a reasonable guess, but our entire system is capitalist - we have no way to isolate or compare against to see how a roughly similar system would play out without capitalism.
We have seen other systems, socialist systems, that are much kinder to workers and give them more security and free time. The capitalists managed to destroy most competing examples and forced the remaining ones to somewhat liberalize via the IMF and other trade regimes, to make it appear as if there is only one choice. Not so!
Unions had early wins that mostly either didn't go anywhere, or the companies worked around. The real win that normalized it was for capitalistic reasons, when Henry Ford shortened the workday/week because he wanted his workers to buy (and have reason to buy) his cars. Combined with other changes, he figured he'd retain workers better and reduce mistakes from fatigue, and when he remained competitive others followed suit.
Yes in fact, to me it’s not a utopia that everyone’s going to paint landscapes, write poetry, or play musical instruments all day.
I worry more that an idle humanity will cause a lot more conflict. “An idle mind’s the devil’s playground” and all.
I wish people could handle an idle mind, I expect we'd all be better off. But yeah, realistically most people when idle would do a lot of damage.
It's always possible that risk would be transitional. Anyone alive today, at least in Western-style societies, likely doesn't know a life without high levels of stress and distraction. It makes sense that change would cause people to lash out; maybe people growing up in that new system would handle it better (if they had the chance).
I think distraction is doing a lot of work here. Many people could play video games 24/7 with no issues. The urge to be productive and/or make personal progress is something a lot of people feel (which is fantastic), but video games do a really good job of replacing those feelings along with other emotional experiences.
Many shows and movies can play a similar role.
I think we would/will see a lot more of that, even in transitional periods, as people can multitask more now that AI starts taking over moment-to-moment thinking.
> Many people could play video games 24/7 with no issues
I disagree pretty strongly here. I've known a few people who lived this sort of gamer rotting lifestyle and they were miserable.
Many people like this wind up suicidal, or are prime candidates to shoot up a school.
Videogames, movies, shows, whatever, they do not replace the need for meaningful interaction with the real world.
I don’t think people would be idle. They’d just be concerned with different things, like social dynamics, games/competition/sports, raising family etc.
Oh sure, I didn't actually mean to describe it being idle as in sitting and literally doing nothing. I more meant idle in comparison to how much people work today and how much they think about or stress over work.
Take 7 hours out of the day because an LLM makes you that much more productive, and I expect people wouldn't know what to do with themselves. That could be wrong, but I'd expect a lot more societal problems than we already have today if, a year from now, a large number of people only worked 4 or 5 hours a week.
That's not even getting to the Shopify CEO's ridiculous claim that employees will get 100x more work done [1].
[1] https://x.com/tobi/status/1909251946235437514
Tell that to any monk…
What an absurd straw man. Moving the needle away from “large portions of the population are a few paychecks away from being homeless” does not constitute “the devil’s playground”.
Where’s all of the articles that HN loves about kids these days not being bored anymore? What about google’s famous 20% time?
Idle time isn’t just important, it’s the point.
Others have already said so, but the same is true for automation and anything else. We've had the technology to do less work for a long time, but it doesn't seem to be in our psychology. Not necessarily that we're intentionally choosing to work 40 hours for no reason. But, it feels like we're a bit stuck, and individuals who would try to work less just set themselves back compared to others, and so no one can move.
My dad has a great quote on computers and automation:
"In the 1970s when office computers started to come out we were told:
'Computers will save you SO much effort you won't know what to do with all of your free time'.
We just ended up doing more things per day thanks to computers."
It’s Solow’s paradox: “You can see the computer age everywhere, except in productivity statistics.” — Nobel Prize-winning American economist Robert Solow, in 1987
I forget where I heard this but there was an interesting quote:
"In the early 1900s, 25% of the US population worked in agriculture.
Today it's 2%.
I would imagine that economists back then would be astounded by that change.
I should point out: there were also no pediatric oncologists back then."
Your dad sounds like a wise man!
When it comes to programming, I would say AI has about doubled my productivity so far.
Yes, I spend time on writing prompts. Like "Never do this. Never do that. Always do this. Make sure to check that." To tell the AI my coding preferences. But those prompts are forever. And I wrote most of them months ago, so now I just capitalize on them.
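To give a concrete flavor, a standing preferences file might look something like this (contents invented for illustration, not my actual prompts):

    # coding-preferences.md (illustrative)
    - Never use one-letter variable names outside loop indices.
    - Never add a new dependency without asking first.
    - Always add type hints to public functions.
    - Always handle the empty-input and None cases explicitly.
    - Make sure generated code matches the style of the file being edited.

The idea is that rules like these get prepended to every request, so the writing cost is paid once.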
I'm always a little bit skeptical whenever people say that AI has resulted in anything more than a personal 50% increase in productivity.
Like, just stop and think about it for a second. You're saying that AI has doubled your productivity. So, you're actually getting twice as much done as you were before? Can you back this up with metrics?
I can believe AI can make you waaaaaaay more productive in selective tasks, like writing test conditions, making quick disposable prototypes, etc, but as a whole saying you get twice as much done as you did before is a huge claim.
It seems more likely that people feel more productive than they did before, which is why you have this discrepancy between people saying they're 2x-10x more productive vs workplace studies where the productivity gain is around 25% on the high end.
I'm surprised there are developers who don't seem to get twice as much done with AI as they did without.
I see it happening right in front of my eyes. I tell the AI to implement a feature that would take me an hour or more to implement and after one or two tries with different prompts, I get a solution that is almost perfect. All I need to do is fine-tune some lines to my liking, as I am very picky when it comes to code. So the implementation time goes down from an hour to 10 minutes. That is something I see happening on a daily basis.
Have you actually tried? Spend some time to write good prompts, use state of the art models (o3 or gemini-2.5 pro) and let AI implement features for you?
Even if what you are saying is true, a significant part of a developer's time is not writing code, but doing other things like thinking about how to best solve a problem, thinking about the architecture, communicating with coworkers, and so on.
So, even if AI helps you write code twice as fast, it does not mean that it makes you twice as productive in your job.
Then again, maybe you really have a shitty job at a ticket factory where you just write boilerplate code all day. In which case, I'm sorry!
I've found that AI is incredibly valuable as a general thinking assistant for those tasks as well. You still need enough expertise to know when to reach for it, what to prompt it with, and how to validate the utility and correctness of its output, but none of that consumes as much time as the time saved in my experience.
I think of it like a sort of coprocessor that's dumber in some ways than my subconscious, but massively faster at certain tasks and with access to vastly more information. Like my subconscious, its output still needs to be processed by my conscious mind in order to be useful, but offloading as much compute as possible from my conscious mind to the AI saves a ton of time and energy.
That's before even getting into its value in generating content. Maybe the results are inconsistent, but when it works, it writes code much more quickly than any human could possibly type. Programming aside, I've objectively saved significant amounts of time and money by using AI to help not only review but also revise and write first drafts of legal documents before roping in lawyers. The latter is something I wouldn't have considered worthwhile to attempt in most cases without AI, but with AI I can go from "knowing enough to be dangerous" to quickly preparing a passable first draft on my own and having my lawyers review the language and tighten up some minor details over email. That's a massive efficiency improvement over the old process of blocking off an hour with lawyers to discuss requirements on the phone, then paying the hourly rate for them to write the first draft, and then going through Q&A/iteration with them over email. YMMV, and you still need to use your best judgement on whether trying this with a given legal task will be a productive use of time, but life is a lot easier with the option than without. Deep research is also pretty ridiculous when you find yourself with a use case for it.
In theory, there's not really anything in particular that I'd say AI lets me do that I couldn't do on my own*, given vastly more hours in the day. In practice, I find that I'm able to not only finish certain tasks more quickly, but also do additional useful things that I wouldn't otherwise have done. It's just a massive force multiplier. In my view, the release of ChatGPT has been about as big a turning point for knowledge work as computers and the Internet were.
*: Actually, that's not even strictly true. I've used AI to generate artwork, both for fun/personal reasons and for business, which I couldn't possibly have produced by hand. (I mean with infinite time I could develop artistic skills, but that's a little reductive.) Video generation is another obvious case like this, which isn't even necessarily just a matter of individual skill, but can also be a matter of having the means and justification to invest money in actors, costumes, props, etc.
> I'm surprised there are developers who seem to not get twice as much done with AI than they did without.
I think it depends a lot on what you work on. There are tasks that are super LLM-friendly, and then there are things with so many constraints that an LLM can basically never get them right.
For example, at the moment we have some really complicated pieces of code that need to be carefully untangled and retangled to accommodate a change, and we have to be much more strategic about it to make sure we don't regress anything during the process.
Can you share a little bit about what your prompting is like, especially for large code bases? Do you typically restrict context to a single file/module or are you able to manage project wide changes? I'm struggling to do any large scale changes as it just eats through tokens and gets expensive very fast. And the quality of output also drops off as the context grows.
There are specific subsets of work at which it can sometimes be a huge boost. That’s a far cry from making me 2x more productive at my job overall.
I mean, I don't disagree with you when you say that something that would take an hour or more to implement would only take 10 minutes or so with AI. That kind of aligns with my personal experience. If something takes an hour, it's probably something that the LLM can do, and I probably should have the LLM do it unless I see some value in doing it myself for knowledge retention or whatever.
But working on features that can fit within a timebox of "an hour or more" takes up very little of my time.
That's what I mean, there are certain contexts where it makes sense to say "yeah, AI made me 2x-10x more productive", but taken as a whole just how productive have you become? Actually being 2x productive as a whole would have a profound impact.
Would you be comfortable sharing a bit about the kind of work you do? I’m asking because I mostly write iOS code in Swift, and I feel like AI hasn’t been all that helpful in that area. It tends to confidently spit out incorrect code that, even when it compiles, usually produces bad results and doesn’t really solve the problem I’m trying to fix.
That said, when I had to write a Terraform project for a backend earlier this year, that’s when generative AI really shined for me.
For iOS/Swift, the results reflect the quality of the information available to the LLM.
There is a lack of training data; Apple docs aren't great or really thorough, much documentation is buried in WWDC videos, and following StackOverflow posts requires an understanding of how the APIs evolved over time, which confuses newcomers as well as code generators. StackOverflow is also littered with incorrect or outdated solutions to iOS/Swift coding questions.
I can't comment on Swift, but I presume training data for it might be less available online. Whereas with Python, which I use, in my anecdotal experience it can produce quite decent code, with some sparks of brilliance here and there. But I use it for boilerplate code I find boring, not the core stuff. I would say as time progresses and these models get more data it may help with Swift too (though this may take a while; I remember a conversation with another person online who said the Swift code GPT-3.5 produced was bad, referencing libraries that did not exist).
Which LLMs have you used? Everything from o3-mini has been very useful to me. Currently I use o3 and gemini-2.5 pro.
I do full stack projects, mostly Python, HTML, CSS, Javascript.
I have two decades of experience. Not just my work time during these two decades but also much of my free time. As coding is not just my work but also my passion.
So seeing my productivity double over the course of a few months is quite something.
My feeling is that it will continue to double every few months from now on. In a few years we can probably tell the AI to code full projects from scratch, no matter how complex they are.
I think LLMs are just better at Python and JS than other languages, probably because that's what they're more extensively trained on.
Depends on what one defines as the criteria for "better": getting something to run and work, or actually writing good, readable, mostly self-explanatory, maintainable, easily testable, parallelizable code.
LLMs are better at languages that are forgiving, like those two, because if something is not exactly right the interpreter will often be able to just continue on
As long as you're just rebuilding what already exists, yes.
I’ve found it to be really helpful with golang.
With swift it was somewhat helpful but not nearly as much. Eventually stopped using it for swift.
Sometimes it’s PEBCAK. You have to push back on bad code and it will do better. Also not specifying the model used is a red flag.
> When it comes to programming, I would say AI has about doubled my productivity so far
For me it’s been up to 10-100x for some things, especially starting from scratch
Just yesterday, I did a big overhaul of some scrapers, that would have taken me at least a week to get done manually (maybe doing 2-4 hrs/day for 5 days ~ 15hrs). With the help of ChatGPT, I was done in less than 2 hours
So not only it was less work, it was a way shorter delivery time
And a lot less stress
Agree! I love this aspect, coding feels so smooth and fun now
Yes! It has definitely brought back a lot of joy in coding for me
Is the code audit included in that 2 hours?
This didn’t require PRs
But, it did require passing tests
Most of the changes in the end were relatively straightforward, but I hadn’t read the code in over a year.
The code also implemented some features I don’t use super regularly, so it would’ve taken me a long time to load everything up in my head, to fully understand it enough, to confidently make the necessary changes
Without ai, it would have also required a lot of google searches finding documentation and instructions for setting up some related services that needed to be configured
And, it would have also taken a lot more communication with the people depending on these changes + having someone doing the work manually while the scrapers were down
So even though it might have been a reduction of 15hrs down to 1.5hrs for me, it saved many people a lot of time and stress
Who created those tests?
I did, years ago, before AI coding was a thing
But, from my experience now, I’d happily use AI to build the tests
At the end of the day: 1) a human is the ultimate evaluator of the code results anyway, 2) the thing either works or it doesn’t
Excellent question. Maybe people will use this newfound productivity to actually review, test, and document code. Maybe.
Not so sure given how fast ai can understand the code already written
Personally, I do try to keep a comment at the top of every major file, with a comment with bullets points, explaining the main functionality implemented and the why
That way, when I pass the code to a model, it can better “understand” what the code is meant to do and can provide better answers
(A lot of times, when a chat session gets too long and seems like the model is getting stuck without good solutions, I ask it to create the comment, and then I start a new chat, passing the code that includes the comment, so it has better initial context for the task)
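As a made-up example of the kind of header I mean (module and function names are invented):

    # orders_scraper.py
    # What this file does:
    # - Polls the vendor order API every 15 minutes and normalizes the results.
    # - Why: downstream reporting expects exactly one row per order, deduplicated.
    # - Gotcha: the API paginates inconsistently; see fetch_all_pages() for the workaround.

A handful of lines like that seems to be enough for the model to orient itself without re-reading the whole file.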
AI doesn’t understand; that’s the problem.
If the training data contains mistakes, it will likely reproduce them.
Unless there are preprogrammed rules to prevent them.
I’ve had really good results, but of course ymmv
As a side note, most good coding models now are also reasoning models, and spend a few seconds “thinking” before giving a reply
That’s by no means infallible, but they’ve come a long way even just in the last 12 months.
Lol.
> those prompts are forever
Have you tested them across different models? It seems to me that even if you manage to cajole one particular model into behaving a particular way, a different model would end up in a different state with the same input, so it might need a completely different prompt. So all the prompts would become useless whenever the vendor updates the model.
What is it like to maintain the code? How long have they been in production? How many iterations (enhancements, refactoring, ...) cycles have you seen with this type of code?
It's not different from code I write myself.
I read each line of the commit diff and change it, if it is not how I would have done it myself.
GG, you do twice the work, twice the mental strain, for the same wage. And you spend time on writing prompts instead of mastering your skills, thus becoming less competitive as a professional (as anyone can use AI, that's a given level now). Sounds like a total win.
And...? Does it result in a double salary, perhaps?
Obviously not because AI is available to everyone and salary isn't only a function of work completed.
Exactly that!
Do you use vscode?
No, why?
Trying to figure out how you can double productivity. I try to use AI with neovim and can't get more than a 5% boost from it.
The trick is to have a terrible baseline
I use normal VIM.
I let the AI implement features on its own, then look at the commit diffs and then use VIM to finetune them.
I wrote my own tool for it. But I guess it is similar to cursor, aider and many other tools that do this. Also what Microsoft offers via the AI "edit" tool I have seen in GitHub codespaces. Maybe that is part of VScode?
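For anyone curious, the core loop of a tool like that is pretty small. A minimal sketch of the idea (not my actual tool; the model name and file paths are placeholders):

    # Sketch: send a file plus a feature request to a model, write the result
    # back, then review the diff by hand in vim. Not production-grade.
    import pathlib
    import subprocess

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    source = pathlib.Path("app.py").read_text()
    task = "Add a --verbose flag that logs each request."

    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[
            {"role": "system",
             "content": "You are a coding assistant. Return only the full "
                        "updated file, with no markdown fences."},
            {"role": "user", "content": f"{task}\n\n{source}"},
        ],
    )
    pathlib.Path("app.py").write_text(resp.choices[0].message.content)
    subprocess.run(["git", "diff"])  # then fine-tune the changes in vim

A real tool needs repo-wide context, fence stripping, and retries, but reviewing the diff is the important part of the workflow.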
Hello fellow normal Vim user! Is your tool open source?
No. Aren't there enough "Hey AI here is a codebase, please implement the following feature" tools out there yet?
I have not tried them, but I guess aider, cursor and others offer this? One I tried is copilot in "edit" mode on github codespaces. And it seems similar.
I thought you were talking about a Vim plugin. Sorry for taking an interest.
The past month or so I've been largely using Claude Code (similar to aider, which I haven't used in 6+ months, and OpenAI Codex I gather) from the CLI for the "vibe coding" portion, and then I hop into vim for regular coding when I need to "take the stick". I don't have any AI tools integrated into vim (though I do have LSP, etc). This method has been pretty effective, though I would like to have some AI built into the editor as well; my experiences with Cursor and Zed haven't been as rewarding as I'd like, so I've iterated toward my current Claude Code setup. My first serious project, a FastAPI-based replacement for an ancient Ruby on Rails project, is just in "dev test" mode and going to production probably in 2.5 weeks.
No, I don't believe you. AI isn't deterministic. It isn't a tool. What you're describing doesn't sound credible to me.
AI has certainly created new work for the GCC project. They had to implement a scraper protection from the bots run by corporations who benefit for free from GCC but want to milk it even further:
https://gcc.gnu.org/pipermail/gcc/2025-April/245954.html
The real problem is with lower skilled positions. Either people in easier roles or more junior people. We will end up with a significant percent of the population who are unemployable because we lack positions commensurate with their skills.
Lower skilled than office clerks and customer service representatives? Because they were in the study.
Yep, I'm talking about non-office jobs, such as in warehouses and retail. Why do you need sales associates when you can just ask an AI associate that knows everything?
But, the study is also about LLMs currently impacting wages and hours. We're still in the process of creating targeted models for many domains. It's entirely possible the customer representatives and clerks will start to be replaced in part by AI tools. It also seems that the current increase in work could mean that headcount is kept flat, which is great for a business, but bad for employment.
And we used to have people who were illiterate, yet today almost everybody is literate.
I think skills in using ai to augment work will become just a new form of literacy.
Approximately 1 in 5 adults in the US are illiterate currently. I wouldn't call that "almost everybody".
It’s the same since industrialisation, it’s not that we have less work, we have less of some type of work.
The issue is that after automation the “old” jobs often don’t pay well, and the new jobs that do are (by virtue of the multiplier of technology) actually scarcer than the ones it replaced.
While in a craftsmanship society you had people painting plates for the well to do, factories started mass painting plates for everyone to own.
Now this solved the problem of scarcity, which is great. But it created a new problem which is all those craftsmen are now factory workers whose output is more replaceable. If you’re more replaceable your wages are lower due to increased competition.
Now for some things this is great, but Marx’s logic was that if technology kept making Capital able to use less and less Labour (increasing profits) then eventually a fairly small number of people would own almost everything.
Like most visionaries he was incredibly off on his timeline, and he didn’t predict a service economy after we had overabundance of goods.
So yet again Marx’s logic will be put to the test, and yet again we will see the results. I still find that his logic seems fairly solid, although like many others I don’t agree with the solutions.
I wonder how well this will hold up against AI.
That's the story of all technology, and it's the argument that AI won't take jobs that pmarca etc. have been making for a while now. Our focus will be able to shift into ever narrower areas. Cinema was barely a thing 100 years ago. A hundred years from now we'll get some totally new industry thanks to freeing up labor.
Cinema created jobs though, it didn't reduce them. Furthermore the value of film is obvious. You need to extremely hedge an LLM to pitch it to anyone.
> Cinema created jobs though, it didn't reduce them.
Is it that straightforward? What about theater jobs? Vaudeville?
Tough to say how it maps but with cinema, you have so many different skill sets needed for every single film. Costumes, builders for sets, audio engineers, the crews, the caterers, location scouts, composers, etc.
In live theater it would be mostly actors, some one time set and costume work, and some recurring support staff.
But then again, there are probably more theaters and theater production by volume.
Fair point, but that's hardly applicable to the llm metaphor. If you're ok with shit work you can just run a program.
Also the nature of software is that the more software is written the more software needs to be written to manage, integrate, and make use of all the software that has been written.
AI automating software production could hugely increase demand for software.
The same thing happened as higher level languages replaced manual coding in assembly. It allowed vastly more software and more complex and interesting software to be built, which enlarged the industry.
> AI automating software production could hugely increase demand for software
Let's think this through
1: AI automates software production
2: Demand for software goes through the roof
3: AI has lowered the skill ceiling required to make software, so many more can do it with a 'good-enough' degree of success
4: People are making software for cheap because the supply of 'good enough' AI prompters still dwarfs the rising demand for software
5: The value of being a skilled software engineer plummets
6: The rich get richer, the middle class shrinks even further, and the poor continue to get poorer
This isn't just some kind of wild speculation. Look at any industry over the history of mankind. Look at Textiles
People used to make a good living crafting clothing, because it was a skill that took time to learn and master. Automation makes it so anyone can do it. Nowadays, automation has made it so people who make clothes are really just operating machines. Throughout my life, clothes have always been made by the cheapest overseas labour that capital could find. Sometimes it has even turned out that companies were using literal slaves or child labour.
Meanwhile the rich who own the factories have gotten insanely wealthy, the middle class has shrunk substantially, and the poor have gotten poorer
Do people really not see that this will probably be the outcome of "AI automates literally everything"?
Yes, there will be "more work" for people. Yes, overall society will produce more software than ever
McDonalds also produces more hamburgers than ever. The company makes tons of money from that. The people making the burgers usually earn the least they can legally be paid
The agricultural revolution did in fact reduce the amount of work in society by a lot, though. That's why we can have weekends, vacation, retirement, and study, instead of working non-stop from age 12 to death like we did 150 years earlier.
Reducing the amount of work done by humans is a good thing actually, though the institutional structures must change to help spread this reduction to society as a whole instead of having mass unemployment + no retirement before 70 and 50 hours work week for those who work.
AI isn't a problem, unchecked capitalism can be one.
That's not really why (at least in the U.S.). It was due to strong labor laws; otherwise, post industrial revolution, you'd still have people working 12 hours a day, 7 days a week. Though with minimum wage stagnation, one could argue that many people have to do this anyway just to make ends meet.
https://firmspace.com/theproworker/from-strikes-to-labor-law...
That's exactly what I'm saying! You need labor laws so that you can lower the amount of work across the board and not just in average.
But you can't have labor laws that cut the amount worked by half if you have no way to increase productivity.
The agricultural revolution has been very beneficial for feeding more people with less labor inputs, but I'm kind of skeptical of the claim that it led to weekends (and the 40hr workweek). Those seem to have come from the efforts of the labor movement on the manufacturing side of things (late 19th, early 20th century). Business interests would have continued to work people 12hrs a day 7 days a week (plus child labor) to maximize profits regardless of increasing agricultural efficiency.
Please re-read my comment, it says exactly the same thing as you are.
Agricultural work is seasonal. For most of the year you aren't working in the fields. Yes, planting and harvesting can require longer hours, because you need them done as fast as possible to maximize yield and reduce spoilage, but you aren't harvesting and planting the fields non-stop the entire year. And even then, most people worked at their own pace; not every farm was as labor-productive as another, or even had to be. Some people valued their time and health and comfort, some valued being able to brew more beer with their 5% higher yield, some valued leisure time more, but it was a personal choice that people made. The industrial revolution is the outlier in making people work long non-stop hours all the time. Living a subsistence farming lifestyle doesn't mean you are just hanging on a bare thread of survival the entire time like a lot of pop media likes to portray.
Medical is our fastest growing employer and you could make the case that modern agriculture produced most of that demand:
Obesity, mineral depletion, pesticides, etc.
So in a way automation did make more work.
This assumes we won't achieve AGI. If we do, all bets are off. Perhaps neuromorphic hardware will get us there.
Let's achieve AGI first before making predictions.
When AGI is achieved, it will be able to make the predictions. And everything else. And off we the humans go to the reserve.
Only if it can do it in an affordable fashion.
If you need a supercomputer to run your AGI then it's probably not worth it for any task that a human can do, because humans happen to be much cheaper than supercomputers.
Also, it's not clear that AGI would necessarily be better than existing AIs: a 3-year-old child has general intelligence, indeed, but is far less helpful than even a sub-billion-parameter LLM for any task.
A child learns from experience, something still missing in LLMs.
Yep, but it won't deserve to be called AGI before it can learn too.
That doesn't seem sensible. We already have general intelligences. Let's infer possible outcomes before rushing head first.
Is there any evidence that AGI is a meaningful concept? I don't want to call it "obviously" a fantasy, but it's difficult to paint the path towards AGI without also employing "fantasize".
I mean, humans exist. We know a blob of fat is capable of thought.
No, we know planetary ecosystems can use energy gradients to sustain intelligent lifeforms. Intelligence is not a feature of the human brain, it's a feature of Earth. Without the ecosystem there are no cells, no organisms, no specialization, no neurons, no mammals. It isn't the human brain that achieved intelligence, it's the entire system capable of producing, sustaining and selecting brains.
The question is whether AGI makes sense as a concept without a moving, living, feeling body.
We have general intelligences that are bedridden.
Sure, but what does that have to do with AGI? I don't think anyone is proposing simulating an entire brain (yet, anyway).
Like you could have "AGI" if you simply virtualized the universe. I don't think we're any closer to that than we are to AGI; hell, something that looks like a human mouth output is a lot easier and cheaper to model than virtualize.
Unless you believe humans have something mystical like a soul, our brains are evidence that “general intelligence” is achievable in a relatively small, energy efficient form.
Ok, but very few people contest that consciousness is computable. It's basically just Penrose (and other folks without the domain knowledge to engage). This doesn't imply that at any point during all human existence will computing consciousness be economically feasible or worthwhile.
Actual AGI presumably implies a not-brain involved.
And this isn't even broaching the subject of "superintelligence", which I would describe as "superunbelievable".
Until you can create a definition of consciousness which can be tested externally from the tested object, the whole subject is moot.
It obviously isn't if people are casually bringing up AGI like it's feasible.
AGI has nothing to do with consciousness, AGI is just about intelligence. There is no C for "Consciousness" in the acronym.
There is no point in these ill-formed hypothetical untestable assumptions.
- Assuming god comes to earth tomorrow, earth will be heaven
- Assuming an asteroid strikes earth in the future we need settlements on mars
etc. Pointless discussion, gossip, and BS required for human bonding, like on this forum or in a Bierhaus.
We are literally talking about problem solving computers. They are goal to action mappers. It's reasonable to talk about goal to action mappers that are more general than the ones we have now. They might even become more general than the general intelligences we have now on message boards.
If we do, it will be able to grow food, build houses, etc. using humanoid or other robots.
We won’t need jobs so we would be just fine.
How would you pay for those robots without a job? Or do you think whoever makes them will give them to you for free? Maybe the AI overlord will, but I doubt it.
In the world of abundance you don’t have to pay for this.
If there's nothing for people to do, a new economy will arise where the government will supply you with whatever you need, at least at a basic level.
Or the wars will start and everything will burn.
Obviously if there are no jobs, no one will sit on their ass starving. People will get food, clothes, housing, etc. either via distribution or via force.
AI craze has been such an awful joke. We are burning Earth’s resources at an alarming rate for minimal gains at best.
This reminds me of a thought I had about driver-less trucks. The truck drivers who get laid off will be re-employed as security guards to protect the automated trucks from getting robbed.
That's an amusing idea, but won't happen. Trucks will just be made more secure, that much harder to open, if theft starts to increase.
Only if that’s cheaper than security guards. “Just” hiring security guards may be more cost-effective than “just” making trucks more robbery-resistant.
Of course it's going to be cheaper.
If a truck has a lifetime of 20 years, that's 20 years' worth of paying a security guard for it.
You really think it could take 20 years' worth of human effort in labor and materials to make a truck more secure? The price of the truck itself in the first place doesn't even come close to that.
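Rough numbers to make the point (all assumed):

    # Illustrative comparison: guarding one truck vs. hardening it.
    guard_cost_per_year = 40_000            # assumed loaded cost of one guard, USD
    truck_lifetime_years = 20
    print(guard_cost_per_year * truck_lifetime_years)   # 800,000 USD per truck

Even a heavily reinforced cargo box is a one-time cost well under that, so hardening wins unless one guard can somehow cover many trucks.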
Work will expand to fill the time available.
(I know this is not the commonly accepted meaning of Parkinson's law.)
I feel that I spend a lot more time looking out for hidden Easter eggs in code reviews. Easter eggs being small errors that look right and are hard to catch, but are obvious to the one who wrote the code. The problem is that the LLM wrote it, so we have no benefit of the code author during review or testing.
2023-24 models couldn’t be relied on at smaller levels thanks to hallucinations and poor instruction following; newer models are much better and that trend will keep going. That low level reliability allows models to be a building block for bigger systems. Check out this personal assistant done by Nate Herk, a youtuber who builds automations with n8n:
https://youtube.com/watch?v=ZP4fjVWKt2w
It’s early. There are new skills everyone is just getting the hang of. If the evolution of AI was mapped to the evolution of computing we would be in the era of “check out this room-sized bunch of vacuum tubes that can do one long division at a time”.
But it’s already exciting, so just imagine how good things will get with better models and everyone skilled in the art of work automation!
This is what the "AI will be a normal technology" camp is telling the "AI is going to put us all out of work!" camp all along. It's always been like this.
Wasn't this covered a few days ago? One point here is that the data is from late 2023, before LLMs were any good. Another point is that the data was collected from remaining workers after any layoffs.
Previous discussion https://news.ycombinator.com/item?id=43830613
""The adoption of these chatbots has been remarkably fast," Humlum told The Register about the study. "Most workers in the exposed occupations have now adopted these chatbots... But then when we look at the economic outcomes, it really has not moved the needle."
How does that comply with the GDPR? OpenAI now has all sensitive data?
The article markets the study as Danish. However, the working paper is from the Becker Friedman Institute of the University of Chicago:
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5219933
It is no wonder that the Chicago School of Economics will not find any impact of AI on employment. Calling it Danish to imply some European "socialist" values is deceptive.
Seems obvious: If AI lets you produce more of your product then there would be more work added as well. Sales, maintenance, etc.
> Indeed, the reported productivity benefits were modest in the study. Users reported average time savings of just 2.8 percent of work hours (about an hour per week).
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5219933
> Our main finding is that AI chatbots have had minimal impact on adopters’ economic outcomes. Difference-in-differences estimates for earnings, hours, and wages are all precisely estimated zeros, with confidence intervals ruling out average effects larger than 1%. At the occupation level, estimates are similarly close to zero, generally excluding changes greater than 6%.
All of our communications at my organization that have clearly been run through Copilot (as we seem to keep championing in some kind of bizarre wankfest) lead me to have to waste a significant sum of time to read and decipher the slop.
What could have been a single paragraph turns into five separate bulleted lists and explanations and fluff.
Is the communication for you or for other AI tools? Meaning is your eventual role just making sure it’s within reason and keeping the AI to AI ecosystem functioning properly? If the output is missing something or misrepresenting something, you update.
Your responsibility is now as an AI response mechanic. And someone else that’s ingesting your AI’s output is making sure their AI’s output on your output is reasonable.
This obviously doesn’t scale well but does move the “doing” out of human hands, replacing that time with a guardrail responsibility.
That's called a productivity increase. Finally. We were due for one.
It’s somewhat exciting to see the commodification of AI models and hardware. At first I was concerned that the hyperscalers would just own the whole thing as a service that keeps squeezing you.
But if model development and self hosting become financially feasible for the majority of organizations then this might really be a “democratized” productivity boost.
The hyperscalers will always own the best models, and even if you're willing to excuse that, requiring organization-levels of funding to run a decent model locally hardly makes the tech "democratized". Sure, you'll always be able to run ${LOCAL_MODEL} on your personal hardware, but that might be akin to programming using Notepad if the gap with the best models in the market is wide enough.
> Indeed, the reported productivity benefits were modest in the study. Users reported average time savings of just 2.8 percent of work hours (about an hour per week).
Yes. Companies aren’t going to allow you to relax with said new time
Yeah, that finding about verification tasks eating the time savings makes total sense. Since AI output is probabilistic, you always need an independent human check, right? That also feels like a shifting bottleneck: maybe you speed up the coding part but then get bogged down in testing or integration, or the scope just expands to fill the saved time. Plus, how much AI actually helps seems super task-dependent and can vary quite a bit depending on what you are doing.
As long as they can capture some of the productivity gains, this is good news for workers.
> Indeed, the reported productivity benefits were modest in the study. Users reported average time savings of just 2.8 percent of work hours (about an hour per week).
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5219933
> Our main finding is that AI chatbots have had minimal impact on adopters’ economic outcomes. Difference-in-differences estimates for earnings, hours, and wages are all precisely estimated zeros, with confidence intervals ruling out average effects larger than 1%. At the occupation level, estimates are similarly close to zero, generally excluding changes greater than 6%.
No surprise here; the same was true of IT. I remember a time before PCs, when most work was done on mainframes and on paper with file cabinets.
Compared to now, the amount of work is about the same, or maybe a bit more than back then. But the big difference is the amount of data being processed and kept; that has increased exponentially since then and is still increasing.
So I expect the same with AI: the work may be a bit different, but it will stay the same or grow as the data increases.
> No surprise here; the same was true of IT. I remember a time before PCs, when most work was done on mainframes and on paper with file cabinets.
I understand your point, but it's not quite accurate: mainframes, paper, and filing cabinets are deterministic tools. AI is neither deterministic nor a tool.
> AI is neither deterministic nor a tool
You keep repeating this in this thread, but as has been refuted elsewhere, this doesn't mean AI is not productive. A tool it definitely can be. Your handwriting is non-deterministic, yet you could write reports with it.
Yes, but the one thing computers had going for them over humans was determinism, and we just threw that out the window.
We did not throw that out the window. AI is a new capability on top of what the computer is already capable of.
If AI isn’t either of those things, then what is it?
Unironically it's a form of occult divination. I know it sounds crazy but it really is the synthesis of humans' collective works combined with some dice rolls. I'm quite honestly surprised someone more superstitious than I am hasn't raised this point yet (that I've seen).
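The "dice rolls" part is literal, for what it's worth: next-token generation is a weighted random draw over a probability distribution. A toy sketch of the idea (my own illustration, not any real model's code):

```python
import math
import random

def sample(scores: dict[str, float], temperature: float = 1.0) -> str:
    """Toy next-token sampling: softmax over scores, then a weighted dice roll."""
    weights = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    roll = random.uniform(0, sum(weights.values()))  # the dice roll
    for tok, w in weights.items():
        roll -= w
        if roll <= 0:
            return tok
    return tok  # floating-point edge case: fall back to the last token

print(sample({"work": 2.0, "leisure": 1.5, "divination": 0.1}))
```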
It's like when they widen a highway yet the traffic jam persists.
It’s just math; we tend to add and add, more and more. To think AI will take out all work for humans is likely false. Humans always find a problem. You solved your money problem? You’re going to have another problem, like an existential crisis, and that creates more stuff. Just an extreme example.
So this study says people are producing more profit. The important question is whether they get it or someone else does.
It does not.
> Indeed, the reported productivity benefits were modest in the study. Users reported average time savings of just 2.8 percent of work hours (about an hour per week).
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5219933
> Our main finding is that AI chatbots have had minimal impact on adopters’ economic outcomes. Difference-in-differences estimates for earnings, hours, and wages are all precisely estimated zeros, with confidence intervals ruling out average effects larger than 1%. At the occupation level, estimates are similarly close to zero, generally excluding changes greater than 6%.
This has probably been true of all invention / automation: when we went from handwashing to using washing machines, did we start doing more leisurely things with the hours that were saved by that 'labour saving' device?
> Now it is true that the needs of human beings may seem to be insatiable. But they fall into two classes--those needs which are absolute in the sense that we feel them whatever the situation of our fellow human beings may be, and those which are relative in the sense that we feel them only if their satisfaction lifts us above, makes us feel superior to, our fellows. Needs of the second class, those which satisfy the desire for superiority, may indeed be insatiable; for the higher the general level, the higher still are they. But this is not so true of the absolute needs--a point may soon be reached, much sooner perhaps than we are all of us aware of, when these needs are satisfied in the sense that we prefer to devote our further energies to non-economic purposes.
[…]
> For many ages to come the old Adam will be so strong in us that everybody will need to do some work if he is to be contented. We shall do more things for ourselves than is usual with the rich to-day, only too glad to have small duties and tasks and routines. But beyond this, we shall endeavour to spread the bread thin on the butter--to make what work there is still to be done to be as widely shared as possible. Three-hour shifts or a fifteen-hour week may put off the problem for a great while. For three hours a day is quite enough to satisfy the old Adam in most of us!
* John Maynard Keynes, "Economic Possibilities for our Grandchildren" (1930)
* http://www.econ.yale.edu/smith/econ116a/keynes1.pdf
An essay hypothesizing four reasons why the above did not happen (we haven't spread the wealth around enough; people actually love working; there's no limit to human desires; leisure is expensive):
* https://www.vox.com/2014/11/20/7254877/keynes-work-leisure
We probably do have more leisure time (and fewer hours worked: five-day weeks versus six) in general, but it's still being filled (probably especially in the US, where being "productive" is an unofficial religion).
One additional factor to consider is that in most cases those setting the leisure hours (i.e. employers) are not the same ones enjoying the leisure (i.e. employees). While the leisure/productivity tradeoff applies to an individual, an economically rational employer only really values productivity and will only offer as much leisure time as necessary to attract and retain employees. So while social forces do generally push for additional leisure over time, such as shorter work weeks, it's often challenging for people to find the type of employment situation where they have significant flexibility in trading off income for leisure time.
As an example, I have a pretty good paying, full-time white collar job. It would be much more challenging if not impossible to find an equivalent job making half as much working 20 hours a week. Of course I could probably find some way to apply the same skills half-time as a consultant or whatever, but that comes with a lot of tradeoffs besides income reduction and is less readily available to a lot of people.
Maybe the real exception here is at the top of the economic ladder, although at that point the mechanism is slightly different. Billionaires have pretty infinite flexibility on leisure time because their income is almost entirely disconnected from the amount of "labor" they put in.
What?? What do you think we’re doing instead of handwashing clothes exactly?
> What?? What do you think we’re doing instead of handwashing clothes exactly?
The average American spends almost 3 hours per day on social media. [1]
The average American spends 1.5 hours per day watching streaming media. [2]
That’s a lot of washed clothes right there.
[1] https://soax.com/research/time-spent-on-social-media
[2] https://www.nielsen.com/news-center/2024/time-spent-streamin...
Washing machines are deterministic. Automation is deterministic. AI is not deterministic. AI is not a tool. AI is destined to be what it is now: a parlor trick designed to pacify and amuse.
You'll have to explain the logic that takes you from determinism to usefulness. Are you dismissing people's experiences because they don't fit your analysis frame, so they HAVE to be misled because your analysis HAS to be right?
Depends.
If you run your own LLM with fixed weights and fixed decoding settings (greedy sampling, or a pinned seed), that IS deterministic.
And it is a powerful tool.
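For instance, with llama-cpp-python, fixed weights plus a pinned seed and temperature-0 decoding give repeatable runs on the same hardware and build (a sketch; the model path is a placeholder):

```python
from llama_cpp import Llama

# Fixed weights + pinned seed + greedy (temperature-0) decoding:
# the same prompt produces the same output on every run.
llm = Llama(model_path="./model.gguf", seed=42)  # placeholder path

out = llm("Summarize: the meeting moved to Tuesday.",
          max_tokens=32, temperature=0.0)
print(out["choices"][0]["text"])
```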
Driving is not deterministic, yet commercial trucking is a core part of the US economy and definitely a productivity boost over trains, mules, and whatever else came before.
Always has been
This is insane clickbait, and none of the comments seem to have read further than the title.
There are two metrics in the study:
> AI chatbots save time across all exposed occupations (for 64%–90% of users)
and
> AI chatbots have created new job tasks for 8.4% of workers
There's absolutely no indication anywhere in the study that the time saved is offset by the new work created. The percentages for the two metrics are so vastly different that it's fairly safe to assume it's not the case.
> Indeed, the reported productivity benefits were modest in the study. Users reported average time savings of just 2.8 percent of work hours (about an hour per week).
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5219933
> Our main finding is that AI chatbots have had minimal impact on adopters’ economic outcomes. Difference-in-differences estimates for earnings, hours, and wages are all precisely estimated zeros, with confidence intervals ruling out average effects larger than 1%. At the occupation level, estimates are similarly close to zero, generally excluding changes greater than 6%.
Again, none of this, especially the calculation about economic output, indicates that the new work it generated offset the time it saved.
If people save an hour a week and use it to browse Hacker News, they haven't produced any extra economic value, but that doesn't mean they didn't save time.
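A back-of-envelope check on the quoted figures (assuming a 40-hour week, which the study's "about an hour" implies):

```python
# Sanity-check the study's quoted numbers.
hours_per_week = 40                       # assumed full-time week
saved = 0.028 * hours_per_week            # 2.8% of work hours
print(f"time saved: {saved:.2f} h/week")  # ~1.12, i.e. "about an hour"

# And the two metrics cover very different shares of people:
print("saving time: 64%-90% of users; new tasks: 8.4% of workers")
```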
There's always more work to do. The workforce is always tied up in a few areas of work. Once they're freed, they're able to work in new areas. The unemployment due to technological development isn't due to a reduction in work (as in the quantity of work available and/or necessary). The more efficient we become, the more work areas we open up.
> There's always more work to do
Right on point
As shown by never-shrinking backlogs
Todo lists always grow
The crucial task ends up being prioritizing, i.e., figuring out what to focus on at the current moment.
That's why they say never give 110 percent, because they'll come to expect that all the time. Workload abhors a vacuum.
I think we may be reaching a point where tech is better at almost everything. When I look at my workplace, there are only a few people who do stuff that’s truly creative. Everybody else does work that’s maybe difficult but fundamentally still very mechanical and in principle automatable.
Add to that progress in robotics and we may reach a point where humans are not needed anymore for most tasks. Then the capitalists will have fully automated factories but nobody who can buy their products.
Maybe capitalism had a good run for the last 200 years and a new economic system needs to arise. Whatever that will be.
Based on the history of technology, this is overwhelmingly the expected result of technology-enabled automation, despite pundits claiming every time that "this time it'll be different."