Just because X can be replaced by Y today doesn’t imply that it can be in a future where we are aware of Y and factor it into the background assumptions about the task.
In more concrete terms: if “not being powered by AI” becomes a competitive advantage, then AI won’t be meaningfully replacing anything in that market.
You can already see this with YouTube: AI-generated videos are a mild amusement, not a replacement for video creators, because “made by AI” is becoming a negative label in a world where the presence of AI video is widely known.
Of course this doesn’t apply to every job, and indeed many jobs have already been “replaced” by AI. But any analysis which isn’t reflectively factoring in the reception of AI into the background is too simplistic.
Just to further elaborate on this with another example: the writing industry. (Technical, professional, marketing, etc. writing - not books.)
The default logic is that AI will just replace all writing tasks, and writers will go extinct.
What actually seems to be happening, however, is this:
- obviously written-by-AI copywriting is perceived very negatively by the market
- companies want writers who understand how to use AI tools to enhance productivity, but also know how to modify copy so that it doesn’t read as AI-written
- the meta-skill of knowing what to write in the first place becomes more valuable, because the AI is only going to give you a boilerplate plan at best
And so the only jobs that seem to have been replaced by AI directly, as of now, are the ones producing basically forgettable content, report-style tracking content, and other low-level things. Not great for the jobs lost, but also not a death sentence for the entire profession of writing.
To understand why this is too optimistic, you have to look at the areas where AI is already almost human-level. Translations are increasingly done exclusively with AI, or with massive AI help (destroying many jobs anyway). Now ebook narration is switching to AI. Book and music album covers are often done with AI (even if this is usually NOT advertised), and so forth. If AI progresses further in a short timeframe (the big "if" in my blog post), we will see a lot of things done exclusively by AI (and even better 90% of the time, since most humans doing a given job are not excellent at what they do). This will be fine if governments react immediately and the system changes. Otherwise there will be a lot of people to feed without a job.
I can buy the idea that simple specific tasks like translation will be dramatically cut down by AI.
But even then – any serious legal situation (like a contract) is going to want a human in the loop to verify that the translation is actually correct. This will require actual translator skills.
AI art seems to be viable basically only when it can’t be identified as AI art. That might not matter if the intention is to replace cheap graphic design work, but it’s certainly nowhere near developed enough to create anything more sophisticated: work that both reads as human-made and carries the imperfect artifacts of a human creator. A lot of modern art is also personality-driven, where the identity and publicity of the artist is a key part of its reception. There are relatively few totally anonymous artists.
Beyond these very specific examples, however, I don’t think it follows that all or most jobs are going to be replaced by an AI, for the reasons I already stated. You have to factor in the sociopolitical effects of technology on its adoption and spread, not merely the technical ones.
>You can already see this with YouTube: AI-generated videos are a mild amusement, not a replacement for video creators, because made by AI is becoming a negative label in a world where the presence of AI video is widely known.
But that's because, at present, AI generated video isn't very good. Consider the history of CGI. In the 1990s and early 2000s, it was common to complain about how the move away from practical sets in favor of CGI was making movies worse. And it was! You had backgrounds and monsters that looked like they escaped from a video game. But that complaint has pretty much died out these days as the tech got better (although Nolan's Oppenheimer did weirdly hype the fact that its simulated Trinity blast was done by practical effects).
Ironically, while the non-CGI SFX in e.g. Interstellar looked amazing, that sad fizzle of a practical explosion in Oppenheimer did not do the real thing justice and would've been better served by proper CGI VFX.
CGI is a good analogy because I think AI and creators will probably go in the same direction:
You can make a compelling argument that CGI operators outcompeted practical effects operators. But CGI didn’t somehow replace the need for filmmakers, scriptwriters, cinematographers, etc. entirely – it just changed the skillset.
AI will probably be the same thing. It’s not going to replace the actual job of YouTuber in a meaningful sense; but it might redefine that job to include being proficient at AI tools that improve the process.
That's a Nolan thing, like how Dunkirk used no green screen.
I think Harry Potter and Lord of the Rings embody the transition from old-school camera tricks to CGI: they leaned very heavily into set and prop design and, as a result, have aged very gracefully as movies.
Do you get the feeling that AI generated content is lacking something that can be incrementally improved on?
Seems to me that it's already quite good in any dimension that it knows how to improve on (e.g. photorealism) and completely devoid of the other things we'd want from it (e.g. meaning).
Yeah if you look at many of the top content creators, their appeal often has very little to do with production value, and is deliberately low tech and informal.
I guess AI tools can eventually become more human-like in terms of demeanor, mood, facial expressions, personality, etc., but that is a long, long way beyond photorealistic video.
That’s the fundamental issue with most “analysis”, and most discussions really, on HN.
Since the vast majority of writers and commentators are not literal geniuses… they can’t reliably produce high-quality synthetic analysis outside of very narrow niches.
Even though, for most comment chains on HN to make sense, readers certainly have to pretend some meaningful text was produced beyond happenstance.
Partly because quality is measured relative to the average, and partly because the world really is getting more complex.
In every technology wave so far, we've disrupted many existing jobs. However, we've also opened up new kinds of jobs. And because it is easier to retrain humans than to build machines for those jobs, we wound up with more and better jobs.
This is the first technology wave that doesn't just displace humans, but which can be trained to the new job opportunities more easily than humans can. Right now it can't replace humans for a lot of important things. But as its capabilities improve, what do displaced humans transition to?
I don't think that we have a good answer to that. And we may need it sooner rather than later. I'd be more optimistic if I trusted our leadership more. But wise political leadership is not exactly a strong point for our country right now.
> but which can be trained to the new job opportunities more easily than humans can
What makes you think that? Self-driving cars have had untold billions of dollars in research and decades of applied testing, iteration, active monitoring, etc., and they still have a very long tail of unaddressed issues. They've been known to ignore police traffic redirections, they've run right through construction barriers, and recently they were burnt to a crisp in the LA riots, completely ignorant of the turmoil that was going on. A human driver is still far more adaptive and requires a lot less training than AI, and humans are ready to handle the infinitely long tail of exceptions to the otherwise algorithmic task of driving, which follows strict rules.
And when you talk about applying this same tech, so confidently, to domains far more nuanced and complex than driving, with even less training data to go on, I find myself firmly in the skeptics' camp, which holds that you will struggle even harder to apply humanoid robotics in uncontrolled environments across a diverse range of tasks without human intervention, piloting, maintenance, or management.
Unemployment is still near all-time lows, and this will persist for some time, as we have a structural demographic problem with massive numbers of retirees and fewer children to support the population "pyramid" (which is looking more like a tapering rectangle these days).
A few months ago I saw one driverless car maybe every three days. Now I see roughly 3-5 every day.
I get that it’s taken a long time and a lot of hype that hasn’t panned out. But once the tech works and it’s just about juicing the scale then things shift rapidly.
Even if you think “oh that’s the next generation’s problem” if there is a chance you’re wrong, or if you want to be kind to the next generation: now is the time to start thinking and planning for those problems.
I think the most sensible answer would be something like UBI. But I also think the most sensible answer for climate change is a carbon tax. Just because something is sensible doesn’t mean it’s politically viable.
I guess you live in a place with perfect weather year round? I don’t, and I haven’t seen a robotaxi in my entire life. I do have access to a Tesla, though, and its current self-driving capabilities are not even close to anything I would call "autonomous" under real-world conditions (including weather).
Maybe the tech will at some point be good enough. At the current rate of improvement this will still take decades at least. Which is sad, because I personally hoped that my kids would never have to get a driver’s license.
I've ridden just under 1,000 miles in autonomous (no scare quotes) Waymos, so it's strange to see someone letting Tesla's abject failure inform their opinions on how much progress AVs have made.
Tesla that got fired as a customer by Mobileye for abusing their L2 tech is your yardstick?
Anyways, Waymo's DC launch is next year, I wonder what the new goalpost will be.
The nice thing about LiDAR is that you can use it to train a model to simulate a LiDAR based on camera inputs only. And of course to verify how good that model is.
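A minimal sketch of that idea, assuming a PyTorch-style setup (the tiny CNN, tensor shapes, and random stand-in data below are illustrative, not any vendor's actual pipeline): train a camera-only depth model against projected LiDAR returns, then score it against held-out LiDAR.

```python
# Sketch: distill a "pseudo-LiDAR" depth estimator from camera frames, using real
# LiDAR depth both as the training target and as the yardstick for evaluation.
# Random tensors stand in for real (camera frame, projected LiDAR depth) pairs.
import torch
import torch.nn as nn

class CameraToDepth(nn.Module):
    """Tiny CNN mapping an RGB frame to a per-pixel depth estimate (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Softplus(),  # depth must be positive
        )

    def forward(self, x):
        return self.net(x)

def lidar_supervised_step(model, opt, image, lidar_depth, valid_mask):
    """One training step: only pixels actually hit by the (sparse) LiDAR contribute to the loss."""
    pred = model(image)
    loss = torch.abs(pred - lidar_depth)[valid_mask].mean()  # L1 error on valid returns
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = CameraToDepth()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for step in range(5):  # stand-in for a real loop over logged drives
        image = torch.rand(2, 3, 64, 64)               # fake camera frames
        lidar_depth = torch.rand(2, 1, 64, 64) * 80.0  # fake projected LiDAR depth, metres
        valid = torch.rand(2, 1, 64, 64) > 0.7         # LiDAR is sparse: ~30% pixel coverage
        print(step, lidar_supervised_step(model, opt, image, lidar_depth, valid))
    # Verification is the same comparison on held-out drives: how far do camera-only
    # depth estimates deviate from what the LiDAR actually measured?
```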
Humans use only cameras. And humans don't even have true 360 coverage on those cameras.
The bottleneck for self-driving technology isn't sensors - it's AI. Building a car that collects enough sensory data to enable self-driving is easy. Building a car AI that actually drives well in a diverse range of conditions is hard.
That's actually categorically false. We also use sophisticated hearing, a well developed sense of inertia and movement, air pressure, impact, etc. And we can swivel our heads to increase our coverage of vision to near 360°, while using very dependable and simple technology like mirrors to cover the rest. Add to that that our vision is inherently 3D and we sport a quite impressive sensor suite ;-). My guess is that the fidelity and range of the sensors on a Tesla can't hold a candle to the average human driver. No idea how LIDAR changes this picture, but it sure is better than vision only.
I think there is a good chance that what we currently call "AI" is fundamentally not technologically capable of human levels of driving in diverse conditions. It can support and it can take responsibility in certain controlled (or very well known) environments, but we'll need fundamentally new technology to make the jump.
Yes, human vision is so bad it has to rely on a swivel joint and a set of mirrors just to approximate 360 coverage.
Modern cars can have 360 vision at all times, as a default. With multiple overlapping camera FoVs. Which is exactly what humans use to get near field 3D vision. And far field 3D vision?
The depth-discrimination ability of binocular vision falls off with distance squared. At far ranges, humans no longer see enough difference between the two images to get a reliable depth estimate. Notably, cars can space their cameras apart much further, so their far range binocular perception can fare better.
How do humans get that "3D" at far distances then? The answer is, like it usually is when it comes to perception, postprocessing. Human brain estimates depth based on the features it sees. Not unlike an AI that was trained to predict depth maps from a single 2D image.
If you think that perceiving "inertia and movement" is vital, then you'd be surprised to learn that an IMU that beats a human on that can be found in an average smartphone. It's not even worth mentioning - even non-self-driving cars have that for GPS dead reckoning.
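A back-of-the-envelope sketch of that distance-squared claim; the stereoacuity and baseline figures below are rough assumed round numbers, not measurements of any particular person or car.

```python
import math

def depth_error(distance_m, baseline_m, disparity_resolution_rad):
    # Disparity angle is roughly baseline / distance, so a small angular error
    # delta_theta turns into a depth error of about distance^2 / baseline * delta_theta.
    return distance_m ** 2 / baseline_m * disparity_resolution_rad

ARCSEC = math.pi / (180 * 3600)
human = dict(baseline_m=0.065, disparity_resolution_rad=20 * ARCSEC)  # ~6.5 cm eye spacing, ~20 arcsec stereoacuity (assumed)
car = dict(baseline_m=1.2, disparity_resolution_rad=60 * ARCSEC)      # wide-spaced cameras, coarser per-pixel angle (assumed)

for z in (10, 50, 100, 200):
    print(f"{z:>4} m:  human ±{depth_error(z, **human):6.1f} m   car ±{depth_error(z, **car):6.1f} m")
```

Even with coarser angular resolution per camera, the wider baseline wins at range, which is the point about spacing the cameras further apart.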
A fine argument in principle, but even if we talk only about vision, the human visual system is much more powerful than a camera.
Between brightly sunlit snow and a starlit night, we can cover more than 45 stops with the same pair of eyeballs; the very best cinematographic cameras reach something like 16.
In a way it's not a fair comparison, since we're taking into account retinal adaptation, eyelids/eyelashes, pupil constriction. But that's the point - human vision does not use cameras.
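Quick arithmetic on those stop counts, using only the figures quoted above: each stop doubles the handled light range, so the ratio between the brightest and darkest scene is 2^stops.

```python
human_stops, camera_stops = 45, 16  # figures quoted above
print(f"human  ~2^{human_stops} = {2**human_stops:.1e} : 1 contrast range")
print(f"camera ~2^{camera_stops} = {2**camera_stops:.1e} : 1 contrast range")
print(f"gap: about {2**(human_stops - camera_stops):.0e}x")
```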
Self-driving cars beat humans on safety already. This holds for Waymos and Teslas both.
They get into fewer accidents, mile for mile and road type for road type, and the ones they do get into trend toward less severe. Why?
Because self-driving cars don't drink and drive.
This is the critical safety edge a machine holds over a human. A top-tier human driver in top shape outperforms this generation of car AIs. But a car AI outperforms the bottom-of-the-barrel human driver: the driver who might be tired, distracted, and under the influence.
This video proves nothing other than "a YouTuber found a funny viral video idea".
Teslas "interpret the environment in 3D space" too - by feeding all the sensor data into a massive ML sensor fusion pipeline, and then fusing that data across time too.
This is where the visualizers, both the default user screen one and the "Terminator" debugging visualizer, get their data from. They show plain and clear that the car operates in a 3D environment.
You could train those cars to recognize and avoid Wile E. Coyote traps too, but do you really want to? The expected number of walls set in the middle of the road with tunnels painted onto them is very close to zero.
Once computers and AIs can approach even a small fraction of our capacity then sure, only cameras is fine. It's a shame that our suite of camera data processing equipment is so far beyond our understanding that we don't even have models of how it might work at its core.
Even at that point, why would you possibly use only cameras though, when you can get far better data by using multiple complementary systems? Humans still crash plenty often, in large part because of how limited our "camera" system can be.
Even though it's false, let's imagine that's true.
Our cameras (also called eyes) have way better dynamic range, focus speed, resolution, and movement detection capabilities, backed by reduced-bandwidth peripheral vision which is also capable of detecting movement.
No camera, including professional/medium-format still cameras, is that capable. I think one of the car manufacturers made a combined tele/wide lens system for a single camera which can see both at the same time, but that's it.
Dynamic range, focus speed, resolution, FoV, and motion detection all still lag behind.
...and that's when we imagine that we only use our eyes.
That’s the mistake Elon Musk made and the same one you’re making here.
Not to mention that humans driving with cameras only is absolutely pathetic. The number of completely avoidable accidents that occur doesn’t exactly inspire confidence that all my car needs to be safe and get me to my destination is a couple of cameras.
This isn't a "mistake". This is the key problem of getting self-driving to work.
Elon Musk is right. You can't cram 20 radars, 50 LIDARs and 100 cameras into a car and declare self-driving solved. No amount of sensors can redeem a piss poor driving AI.
Conversely, if you can build an AI that's good enough, then you don't need a lot of sensors. All the data a car needs to drive safely is already there - right in the camera data stream.
If additional sensors improve the AI, then your last statement is categorically untrue. The reason it worked better is that those additional sensors gave it information that was not available in the video stream.
So far, every self-driving accident where the self-driving car was found to be at fault follows the same pattern: the car had all the sensory data it needed to make the right call, and it didn't make the right call. The bottleneck isn't in sensors.
I worked at Zoox, which has similar teleoperations to Waymo: remote operators can't joystick the vehicles.
So if we're saying how many times would it have crashed without a human: 0.
They generally intervene when the vehicles get stuck and that happens pretty rarely, typically because humans are doing something odd like blocking the way.
The goalpost will be when you can buy one and drive it anywhere. How many cities is Waymo in now? I think what they are doing is terrific, but each car must cost a fortune.
I’m a bit confused. If we’re talking about consumer cars, the end goal is not to rent a car that can drive itself, the end goal is to own a car that can drive itself, and so it doesn’t matter if the car is available for purchase but costs $250,000 because few consumers can afford that, even wealthy ones.
Not sure how exactly politicians will jump from “minimal wages don’t have to be livable wages” and “people who are able to work should absolutely not have access to free healthcare” and “any tax-supported benefits are actually undeserved entitlements and should be eliminated” to “everyone deserves a universal basic income”.
I wouldn't underestimate what can happen if 1/3 of your workforce is displaced and put aside with nothing to do.
People are usually obedient because they have something in life and they are very busy with work, so they don't have time or headspace to really care about politics. When big numbers of people suddenly start to care more about politics, it leads to organizing and all kinds of political changes.
What I mean is that it wouldn't be the current political class pushing things like UBI. At the same time, it seems that some of the current elites are preparing for this and want to get rid of elections altogether to keep the status quo.
I wouldn't underestimate how easily AI will suppress this through a combination of ultrasurveillance, psychological and emotional modelling, and personally targeted persuasion delivered by chatbot etc.
If all else fails you can simply bomb city blocks into submission. Or arrange targeted drone decapitations of troublemakers. (Possibly literally.)
The automation and personalisation of social and political control - and violence - is the biggest difference this time around. The US has already seen a revolution in the effectiveness of mass state propaganda, and AI has the potential to take that up another level.
What's more likely to happen is survivors will move off-grid altogether - away from the big cities, off the Internet, almost certainly disconnected and unable to organise unless communication starts happening on electronic backchannels.
> Not sure how exactly politicians will jump from ...
Well, if one believes that the day will come when their choices will be "make that jump" or "the guillotine", then it doesn't seem completely outlandish.
The money transferred from taxpayers to people without money is, in effect, a price paid for not breaking the law.
If AI makes it much easier to produce goods, it lowers the real cost of that payment, making it easier to pay some money to everyone in exchange for not breaking the law.
A carbon tax on a state level, trying to fight a global problem, actually makes zero sense.
You just shift the emissions from your location to the location that you buy products from.
Basically what happened in Germany: more expensive "clean" energy means their own production went down and the world bought more from China instead. The net result is probably higher global emissions overall.
This is why an economics based strictly on scarcity cannot get us where we need to go. Markets, not knowing what it's like to be thirsty, will interpret a willingness to poison the well as entrepreneurial spirit to be encouraged.
We need a system where being known as somebody who causes more problems than they solve puts you (and the people you've done business with) at an economic disadvantage.
> I think the most sensible answer would be something like UBI.
What corporation will agree to pay dollars for members of society that are essentially "unproductive"? What will happen to the value of UBI over time, in this context, when the strongest lobby will be the companies that have the means of producing AI? And, more fundamentally, how are humans able to negotiate for themselves when they lose their ability to build things?
I'm not opposing the technology progress, I'm merely trying to unfold the reality of UBI being a thing, knowing human nature and the impetus for profit.
UBI is not a good solution because you still have to provision everything on the market, so it's a subsidy to private companies that sell the necessities of life on the market. If we're dreaming up solutions to problems, much better would be to remove the essentials from the market and provide them to everyone universally. Non-market housing, healthcare, education all provided to every citizen by virtue of being a human.
You can’t provide valuable things for “free” en masse without institutionalizing either slavery or robbery. The value must come from somewhere.
The costs of what you propose are enormous. No legislation can change that fact.
There ain’t no such thing as a free lunch.
Who’s going to pay for it? Someone who is not paying for it today.
How do you intend to get them to consent to that?
Or do you think that the needs of the many should outweigh the consent of millions of people?
The state, the only organization large enough to even consider undertaking such a project, has spending priorities that do not include these things. In the US, for example, we spend the entire net worth of Elon Musk (the “richest man in the world”, though he rightfully points out that Putin owns far more than he does) about every six months on the military alone. Add in Zuckerberg and you can get another 5 months or so. Then there’s the next year to think about. Maybe you can do Buffet and Gates; what about year three?
That’s just for the US military, at present day spending levels.
What you’re describing is at least an order of magnitude more expensive than that, just in one country that only has 4% of people. To extend it to all human beings, you’re talking about two more orders of magnitude.
There aren’t enough billionaires on the entire planet even to pay for one country’s military expenses out of pocket (even if you completely liquidated them), and this proposed plan is 500-1000x more spending than that. You’re talking about 3-5 trillion dollars per year just for the USA - if you extrapolate out linearly, that’d be 60-200 trillion per year for the Earth.
Even if you could reduce cost of provision by 90% due to economies of scale ($100/person/month for housing, healthcare, and education combined, rather than $1000 - a big stretch), it is still far, far too big to do under any currently envisioned system of wealth redistribution. Society is big and wealthy private citizens (ie billionaires) aren’t that numerous or rich.
There is a reason we all pay for our own food and housing.
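For what it's worth, the rough arithmetic behind the trillions figure, using only the round numbers already quoted above (the population is approximate):

```python
us_population = 330_000_000   # approximate
per_person_per_month = 1_000  # the "$1000/person/month" figure above
annual_cost = us_population * per_person_per_month * 12
print(f"~${annual_cost / 1e12:.1f} trillion per year")  # ~$4.0 trillion, i.e. in the 3-5 trillion range
# At the hypothetical economies-of-scale figure of $100/person/month, it is still ~$0.4 trillion.
```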
> You’re talking about 3-5 trillion dollars per year just for the USA
I just want to point out that's about a fifth of our GDP and we spend about this much for healthcare in the US. We badly need a way to reduce this to at least half.
> There is a reason we all pay for our own food and housing.
The main reason I support UBI is that I don't want need-based or need-aware distribution. I want everyone to get benefits equally regardless of income or wealth. That's my entire motivation for supporting UBI. If you can come up with something else that guarantees no need-based or need-aware distribution and has no benefit cliff, I support that too. I am not married to UBI.
Honestly, what type of housing do you envision under a UBI system? Houses? Modern apartment buildings? College dormitory-like buildings? Soviet-style complexes? Prison-style accommodations? B stands for basic, how basic?
I think a UBI system is only stable in conjunction with sufficient automation that work itself becomes redundant. Before that point, I don't think UBI can genuinely be sustained; and IMO even very close to that point the best I expect we will see, if we're lucky, is the state pension age going down. (That it's going up in many places suggests that many governments do not expect this level of automation any time soon).
Therefore, in all seriousness, I would anticipate a real UBI system to provide whatever housing you want, up to and including things that are currently unaffordable even to billionaires, e.g. 1:1 scale replicas of any of the ships called Enterprise including both aircraft carriers and also the fictional spaceships.
That said, I am a proponent of direct state involvement in the housing market, e.g. the UK council housing system as it used to be (but not as it is now; they're not building enough).
> You can’t provide valuable things for “free” en masse without institutionalizing either slavery or robbery. The value must come from somewhere.
Utter nonsense.
Do you believe the European countries that provide higher education for free are manning tenure positions with slaves or robbing people at gunpoint?
How come you see public transportation services in some major urban centers being provided free of charge?
How do you explain social housing programmes conducted throughout the world?
Are countries with access to free health care using slavery to keep hospitals and clinics running?
What you are trying to frame as impossibilities has already been the reality for many decades in countries ranking far higher in development and quality-of-living indexes than the US.
You're missing the point, language can be tricky. Technically, the state confiscating wealth derived from your labor through taxes is a form of robbery and slavery. It used to be called corvée. But the words being used have a connotation of something much more brutal and unrewarding. This isn't a political statement, I'm not a libertarian who believes all taxation is evil robbery and needs to be abolished. I'm just pointing out by the definition of slavery aka forced labor, and robbery aka confiscation of wealth, the state employs both of those tactics to fund the programs you described.
> Technically, the state confiscating wealth derived from your labor through taxes is a form of robbery and slavery.
Without the state, you wouldn't have wealth. Heck there wouldn't even be the very concept of property, only what you could personally protect by force! Not to mention other more prosaic aspects: if you own a company, the state maintains the roads that your products ship through, the schools that educate your workers, the cities and towns that house your customers... In other words the tax is not "money that is yours and that the evil state steals from you", but simply "fair money for services rendered".
To a large extent, yes. That's why the arrangement is so precarious, it is necessary in many regards, but a totalitarian regime or dictatorship can use this arrangement in a nefarious manner and tip the scale toward public resentment. Balancing things to avoid the revolutionary mob is crucial. Trading your labor for protection is sensible, but if the exchange becomes exorbitant, then it becomes a source of revolt.
> You're missing the point, language can be tricky. Technically, the state confiscating wealth derived from your labor through taxes is a form of robbery and slavery.
You're letting your irrational biases show.
To start off, social security contributions are not a tax.
But putting that detail aside, do you believe that paying for private health insurance also represents slavery and robbery? Are you a slave to a private pension fund?
Are you one of those guys who believes unions exploit workers whereas corporations are just innocent bystanders that have a neutral or even positive impact on workers lives and well being?
No, I'm a progressive and believe in socialism. But taxation is de facto a form of unpaid labor taken by the force of the state. If you don't pay your taxes, you will go to jail. It is both robbery and slavery, and in the ideal situation it is a benevolent sort of exchange, despite existing in the realm of slavery/robbery. In a totalitarian system, it becomes malevolent very quickly. It can also be seen as not benevolent when the exchange becomes onerous and not beneficial. Arguing against this is arguing emotionally rather than rationally, using words that have definitions.
Social security contributions are a mandatory payment to the state taken from your wages; they are a tax, a compulsory reduction in your income. Private health insurance is obviously not mandatory or compulsory; that is clearly different. Your last statement is just irrelevant, because you assume I'm a libertarian for pointing out the reality of the exchange taking place in the socialist system.
I'd be very interested in hearing which definition of "socialism" aligns with those obviously libertarian views?
> If you don't pay your taxes, you will go to jail. It is both robbery and slavery [...] Arguing this is arguing emotionally and not rationally using language with words that have definitions.
Indulging in the benefits of living in a society, knowingly breaking its laws, being appalled by the entirely predictable consequences of those actions, and finally resorting to incorrect usage of emotional language like "slavery" and "robbery" to deflect personal responsibility is childish.
Taxation is payment in exchange for services provided by the state and your opinion (or ignorance) of those services doesn't make it "robbery" nor "slavery". Your continued participation in society is entirely voluntary and you're free to move to a more ideologically suitable destination at any time.
> No, I'm a progressive and believe in socialism. But taxation is de facto a form of unpaid labor taken by the force of the state.
I do not know what you mean by "progressive", but you are spewing neoliberal/libertarian talking points. If anything, this tells how much Kool-Aid you drank.
> Have you ever stopped to consider why class mobility is much much less common in Europe than in the USA?
Which class mobility is this that you speak of? The one that forces the average US citizens to be a paycheck away from homelessness? Or is it the one where you are a medical emergency away from filing bankruptcy?
Have you stopped to wonder how some European countries report higher median household incomes than the US?
But by any means continue to believe your average US citizen is a temporarily embarrassed billionaire, just waiting for the right opportunity to benefit from your social mobility.
In the meantime, keep in mind that mobility also reflects how easy it is to move down a few pegs. Let that sink in.
> the economic situation in Europe is much more dire than the US...
Is it, though? The US reports by far the highest levels of lifetime literal homelessness, three times greater than in countries like Germany. Homeless people in Europe aren't denied access to free healthcare, primary or even tertiary.
Why do you think the US, in spite of its GDP, features so low in rankings such as the human development index or quality of life?
Yet people live better. Goes to show you shouldn't optimise for crude, raw GDP as an end in itself, only as a means for your true end: health, quality of life, freedom, etc.
In many of the metrics, yeah. But Americans can afford larger houses and more stuff essentially, which isn't necessarily a good replacement for general quality of life things.
> In many of the metrics, yeah. But Americans can afford larger houses and more stuff essentially, which isn't necessarily a good replacement for general quality of life things.
I think this is the sort of red herring that prevents the average US citizen from realizing how screwed over they are. Again, the median household income in the US is lower than in some European countries. On top of this, the US provides virtually no social safety net or even socialized services to its population.
The fact that the average US citizen is a paycheck away from homelessness and the US ranks so low in human development index should be a wake-up call.
UBI could easily become a poverty trap, enough to keep living, not enough to have a shot towards becoming an earner because you’re locked out of opportunities. I think in practice it is likely to turn out like “basic” in The Expanse, with people hoping to win a lottery to get a shot at having a real job and building a decent life for themselves.
If no UBI is installed there will be a hard crash while everyone figures out what it is that humans can do usefully, and then a new economic model of full employment gets established. If UBI is installed then this will happen more slowly with less pain, but it is possible for society to get stuck in a permanently worse situation.
Ultimately if AI really is about to automate as much as it is promised then what we really need is a model for post-capitalism, for post-scarcity economics, because a model based on scarcity is incapable of adapting to a reality of genuine abundance. So far nobody seems to have any clue of how to do such a thing. UBI as a concept still lives deeply in the Overton window bounded by capitalist scarcity thinking. (Not a call for communism btw, that is a train to nowhere as well because it also assumes scarcity at its root.)
What I fear is that we may get a future like The Diamond Age, where we have the technology to get rid of scarcity and have human flourishing, but we impose legal barriers that keep the rich rich and the poor poor. We saw this happen with digital copyright, where the technology exists for abundance, but we’ve imposed permanent worldwide legal scarcity barriers to protect revenue streams to megacorps.
The major shift for me is that it's now normal to take Waymos. Yeah, they aren't as fast as Uber if you have to get across town, but for trips less than 10 miles they're my go-to now.
On the other hand, the Tesla “robotaxi” scares the crap out of me. No lidar and seems to drive more aggressively. The Mark Rober YouTube of a Tesla plowing into a road-runner style fake tunnel is equal parts hilarious and nightmare fuel when you realize that’s what’s next to your kid biking down the street.
> Mark Rober YouTube of a Tesla plowing into a road-runner style fake tunnel
I understand the argument for augmenting your self-driving systems with LIDAR. What I don't really understand is what videos like this tell us. The comparison case for a "road-runner style fake tunnel" isn't LIDAR, it's humans, right? And while I'm sure there are cases where a human driver would spot the fake tunnel and stop in time, that is not at all a reasonable assumption. The question isn't "can a Tesla save your life when someone booby traps a road?", it's "is a Tesla any worse than you at spotting booby trapped roads?", and moreover, "how does a Tesla perform on the 99.999999% of roads that aren't booby trapped?"
Tesla’s insistence on not using LiDAR, while other companies deem it necessary for safe autopilot, creates the need for Tesla to demonstrate that their approach is equally safe for both drivers and e.g. pedestrians. They haven’t done that; arguably the data shows the contrary. This generates the impression that Tesla skimps on safety, and if they skimp in one area, they’ll likely skimp in others. Stuff like the Rober video strengthens these impressions. It’s a public perception issue, and Tesla has done nothing (and maybe isn’t able to do anything) to dispel this notion.
> Is a Tesla any worse than you at spotting booby trapped roads
That would've been the case if all laws, opinions, and purchasing decisions were made by everyone acting rationally. Even if self-driving cars are safer than human drivers, it just takes a few crashes to damage their reputation. They have to be much, much safer than humans for mass adoption. Ideally also safer than the competition, if you're comparing specific companies.
Waymo has a control center, but it's customer service, not remote driving. They can look at the sensor data, give hints to the car ("back out, turn around, try another route") and talk to the customer, but can't take direct control and drive remotely.
Baidu's system in China really does have remote drivers.[1]
Tesla also appears to have remote drivers, in addition to someone in each car with an emergency stop button.[2]
The robotaxi business model is the total opposite of scaling. At my previous employer we were solving the problem "block by block, city by city", and I can only assume that you are living in the right city/block that they are tackling.
Every time someone casually throws out UBI my mind goes to the question "who is paying taxes when some people are on UBI?"
Is there like a transition period where some people don't have to pay taxes and yet don't get UBI, and if so, why hasn't that come yet? Why aren't the minimum tax thresholds going up if UBI could be right around the corner?
The taxes will be most burdensome for the wealthiest and most productive of institutions, which is generally why these arrangements collapse economies and nations. UBI is hard to implement because it incentivizes non-productive behavior and disincentivizes productive activity. This creates economic crisis; taxes are basically a smaller-scale version of this, and UBI is a more comprehensive wealth redistribution scheme. The creation of a syndicate (in this case, the state) to steal from the productive to give to the non-productive is a return to how humanity functioned before the creation of state-like structures, when marauders and bandits used violence to steal from those who created anything. Eventually the state arose to create arrangements and contracts to prevent theft, but later became the thief itself, leading to economic collapse and the cyclical revolutionary cycle.
So, AI may certainly bring about UBI, but the corporations that are being milked by the state to provide wealth to the non-productive will begin to foment revolution along with those who find this arrangement unfair, and the productive activity of those especially productive individuals will be directed toward revolution instead of economic productivity. Companies have made nations many times before, and I'm sure it'll happen again.
The problem is the "productive activity" is rather hard to define if there's so much "AI" (be it classical ML, LLM, ANI, AGI, ASI, whatever) around that nearly everything can be produced by nearly no one.
The destruction of the labour theory of value has been a goal of "tech" for a while, but if they achieve it, what's the plan then?
Assuming humans stay in control of the AIs, because otherwise all bets are off: in a case where a few fabulously wealthy (or at least "owning/controlling", since the idea of wealth starts to become fuzzy) industrialists control the productive capacity for everything from farming to rocketry, and there's no space for normal people to participate in production any more, how do you even denominate the value being "produced"? Who is it even for? What do they need to give in return? What can they give in return?
The assumption here that UBI "incentivizes non-productive behavior and disincentivizes productive activity" is the part that doesn't make sense. What do you think universal means? How does it disincentivize productive activity if it is provided to everyone regardless of their income/productivity/employment/whatever?
Evolutionarily, people engage in productive activity in order to secure resources to ensure their survival and reproduction. When these necessary resources are gifted to a person, there is a lower chance that they will decide to take part in economically productive behavior.
You can say that because it is universal, it should level the playing field just at a different starting point, but you are still creating a situation where even incredibly intelligent people will choose to pursue leisure over labor, in fact, the most intelligent people may be the ones to be more aware of the pointlessness of working if they can survive on UBI. Similarly, the most intelligent people will consider the arrangement unfair and unsustainable and instead of devoting their intelligence toward economically productive ventures, they will devote their abilities toward dismantling the system. This is the groundwork of a revolution. The most intelligent will prefer a system where their superior intelligence provides them with sufficient resources to choose a high-quality mate. If they see an arrangement where high-quality mates are being obtained by individuals who they deem to be receiving benefits that they cannot defend/protect adequately, such an arrangement will be dismantled. This evolutionary drive is hundreds of millions of years old. Primitive animals will take resources from others that they observe to be unable to defend their status.
So, overall, UBI will probably be implemented, and it will probably end in economic crisis, revolution, and the resumption of this cycle that has been playing out over and over for centuries.
I guess the thinking goes like this: Why start a business, get a higher paying job etc if you're getting ~2k€/mo in UBI and can live off of that? Since more people will decide against starting a business or increasing their income, productive activity decreases.
You also have to consider the alternative: if there’s no UBI, are you expecting millions to starve? This is a recipe for civil war; if you have a very large group of people unable to survive, you get social unrest. Either you spend the money on UBI or on police/military suppression to battle the unrest.
Isn't it the case that companies are always competing and evolving? Unless we see that there's a ceiling to driverless tech that is immediately obvious.
We "made cars work" about 100 years ago, but they have been innovating on that design since then on comfort, efficiency, safety, etc. I doubt the very first version of self driving will have zero ways to improve (although eventually I suppose you would hit a ceiling).
> I think the most sensible answer would be something like UBI.
Having had the experience of living under a communist regime prior to 1989, I have zero trust in the state providing support while I am totally dependent and have no recourse. Instead I would rather rely on my own two hands, like my grandparents did.
I see a world where we can build anything we want with our own hands and AI automation. Jobs might become optional.
> I see a world where we can build anything we want with our own hands and AI automation. Jobs might become optional.
Unless your two hands are building murderbots, though, it doesn't matter what you're building if you can't grow or buy food.
I haven't personally seen how UBI could end up working viably, but I also don't see any other system working without much more massive societal changes than anyone is talking about.
Meanwhile, there are many many people that are very invested in maintaining massive differentials between the richest and the poorest that will be working against even the most modest changes.
I'd argue against the entire perspective of evaluating every policy idea along one-dimensional modernist polemics put forwards as "the least worst solution to all of human economy for all time".
Right now the communists in China are beating us at capitalism. I'm starting to find the entire analytical framework of using these ideologies ("communism", "capitalism") to evaluate _anything_ to be highly suspect, and maybe even one of the west's greatest mistakes in the last century.
> I see a world where we can build anything we want with our own hands and AI automation. Jobs might become optional.
I was a teenager back in the 90s. There was much talk then about the productivity boosts from computers, the internet, automation, and how it would enable people to have so much more free time.
Interesting thing is that the productivity gains happened. But the other side of that equation never really materialized.
I’m not certain we don’t have free time, but I’m not sure how to test that. Is it possible that we just feel busier nowadays because we spend more time watching TV? Work hours haven’t dropped precipitously, but maybe people are spending more time in the office just screwing around.
It's the same here. Calling what the west has a "free-market capitalist" system is also a lie. At every level there is massive state intervention. Most discoveries come from publicly funded work going on at research universities or from billions pushed into the defense sector that has developed all the technology we use today from computers to the internet to all the technology in your phone. That's no more a free-market system than China is "communist" either.
I think the reality is just that governments use words and have an official ideology, but you have to ignore that and analyze their actions if you want to understand how they behave.
Not to mention that most corporations in the US are owned by the public through the stock market and the arrangement of the American pension scheme, and public ownership of the means of production is one of the core tenets of communism. Every country on Earth is socialist and has been socialist for well over a century. Once you consider not just state investment in research, but centralized credit, tax-funded public infrastructure, etc., well yeah, terms such as "capitalism" are used in a totally meaningless way by most people lol.
In your world where jobs become "optional" because a private company has decided to fire half their workforce, and the state also does not provide some kind of support, what do all the "optional" people do?
Driverless taxis is IMO the wrong tech to compare to.
It’s a high consequence, low acceptance of error, real time task. Where it’s really hard to undo errors.
There is a big category of tasks that isn’t that. But that are economically significant. Those are a lot better fit for AI.
> What makes you think that? Self driving cars [...]
AI is intentionally being developed to be able to make decisions in any domain humans work in. This is unlike any previous technology.
The more apt analogy is to other species. When was the last time there was something other than homo sapiens that could carry on an interesting conversation with homo sapiens. 40,000 years?
And this new thing has been in development for what? 70 years? The rise in its capabilities has been absolutely meteoric and we don't know where the ceiling is.
Do you live in SF (the city, not the Bay Area as a whole) or West LA? I ask because in these areas you can stand on any city street and see several self driving cars go by every few minutes.
It's irrelevant that they've had a few issues. They already work and people love them. It's clear they will eventually replace every Uber/Lyft driver, probably every taxi driver, and they'll likely replace every DoorDash/Grubhub driver with vehicles designed to let smaller automated delivery carts go the last few blocks. They may also replace every truck driver. Together that's around 5 million jobs in the USA.
Once they're let on the freeways their usage will expand even faster.
The last Waymo I saw (a couple of weeks ago) was stuck trying to make a right turn onto Market St. It was conveniently blocking the pedestrian crosswalk for a few cycles before I went around it. The one before that got befuddled by a delivery truck and ended up blocking both lanes of 14th Street. Before Cruise imploded, they were way worse. I can't say that these self-driving cars have improved much since I moved out of the city a few years back.
As someone who lives in LA, I don’t think self-driving cars existed at the time of the Rodney King LA riots and I am not aware of any other riots since.
I feel like you are trapped in the first assessment of this problem. Yes, we are not there yet, but have you thought about the rate of improvement? Is that rate of improvement reliable? Fast? That's what matters, not where we are today.
You could say that about any time in history. When the steam engine or mechanical loom was invented, there were millions of people like you who predicted that mankind would be out of jobs soon, and guess what happened? There's still a lot of things to do in this world and there still will be a lot to do (aka "jobs") for a loooong time.
And the problem for Capitalists and other anti-humanists is that this doesn’t scale. Their hope with AI, I think, is that once they train one AI for a task, it can be trivially replicated, which scales much better than humans.
> What makes you think that? Self driving cars have had (...)
I think you're confusing your cherry-picked comparison with reality.
LLMs are eliminating the need to have a vast array of positions on payrolls. From copywriters to customer support, and even creative activities such as illustration and authoring books, today's LLMs are already more than good enough to justify replacing people with the output of any commercial chatbot service.
Software engineering is being affected as well, and it requires far greater know-how, experience, and expertise to meet the hiring bar.
> And when you talk about applying this same tech, so confidently, to domains far more nuanced and complex than (...)
Yes, your tech job is also going to be decimated. It's not a matter of having PMs write code. It's an issue of your junior SDE, armed with an LLM, being quite able to clear your bug backlog in a few days while improving test coverage metrics and refactoring code back from legacy status.
If a junior SDE can suddenly handle the workload that previously required a couple of mid-level and senior developers, why would a company keep around 4 or 5 seasoned engineers when an inexperienced one is already able to handle the workload?
That's where the jobs will vanish. Even if demand remains, it has dropped considerably, enough not to justify retaining so many people on a company's payroll.
> LLMs are eliminating the need to have a vast array of positions on payrolls. From copywriters to customer support, and even creative activities such as illustration and even authoring books, today's LLMs are already more than good enough to justify replacing people with the output of any commercial chatbot service.
I'd love a source for these claims. Many companies are claiming that they are able to lay off folks because of AI, but in fact AI is just a scapegoat to counteract the reckless overhiring driven by free money in the market over the last 5-10 years, now that investors are demanding to see a real business plan and ROI. "We can eliminate this headcount due to the efficiency of our AI" is just a fancy way to make the stock price go up while cleaning up the useless folks.
People have ideas. There are substantially more ideas than people who can implement ideas. As with most technology, the reasonable expectation is to assume that people are just going to want more done by the now tool powered humans, not less things.
> (...) AI is just a scapegoat to counteract the reckless overhiring due to (...)
That is your personal moralist scapegoat, and one that you made up to feel better about how jobs are being eliminated because someone somewhere screwed up.
In the meantime, you fool yourself and pretend that sudden astronomic productivity gains have no impact on demand.
These supposed "productivity gains" are only touted by the ones selling the product, i.e. the ones who stand to benefit from adoption. There is no standard way to measure productivity since it's subjective. It's far more likely that companies will use whatever scapegoat they can to fire people with as little blowback as possible, especially as the other commenter noted, people were getting hired like crazy.
Each one of the roles you listed above is only passable with AI at a superficial glance. For example, anyone who actually reads literature other than self-help and pop culture books from airport kiosks knows that AI is terrible at longer prose. The output is inconsistent because current AI does not understand context, at all. And this is not getting into the service costs, the environmental costs, and the outright intellectual theft in order to make things like illustrations even passable.
> These supposed "productivity gains" are only touted by the ones selling the product (...)
I literally pasted an announcement from the CEO of a major corporation warning they are going to decimate their workforce due to the adoption of AI.
The CEO literally made the following announcement:
> "As we roll out more generative AI and agents, it should change the way our work is done," Jassy wrote. "We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs."
This is not about selling a product. This is about how they are adopting AI to reduce headcount.
The CEO is marketing to the company’s shareholders. This is marketing. A CEO will say anything to sell the idea of their company to other people. Believe it or not, there is money to be made from increased share prices.
I think your assumption is probably a little backwards. It will be a senior SDE clearing the slate, I think: ones that embrace the technology and are able to accelerate their work. At that level of efficiency the cost is still way, way lower than it is for a larger team.
When it gets to the point that you don't need a senior engineer doing the work, you won't need a junior either.
> I think your assumption is probably a little backwards. It will be a senior SDE clearing the slate I think.
I don't think you understood the point I made.
My point was not about Jr vs Sr, let alone how a Jr is somehow more capable than a Sr.
My point was that these productivity gains aren't a function of experience or seniority, but they do devalue the importance of seniority for performing specific tasks. Just crack open an LLM, feed in a few prompts, and done. Hell, junior developers no longer need to reach out to seniors to ask questions about any topic. Think about that for a second.
I think it is important to remember that "decades" here means <20 years. Remember that in 2004 it was considered sufficiently impossible that basically no one had a car that could be reliably controlled by a computer, let alone driven by a computer alone.
I also think that most job domains are not actually more nuanced or complex than driving, at least from a raw information perspective. Indeed, I would argue that driving is something like a worst-case scenario when it comes to tasks:
* It requires many different inputs, at high sampling rates, continuously (at the very least, video, sound, and car state)
* It requires loose adherence to laws in the sense that there are many scenarios where the safest and most "human" thing to do is technically illegal.
* It requires understanding of driving culture to avoid making decisions that confuse/disorient/anger other drivers, and anticipating other drivers' intents (although this can be somewhat faked with sufficiently fast reaction times)
* It must function in a wide range of environments: there is no "standard" environment
If we compare driving to other widespread-but-low-wage jobs (e.g. food prep, receptionists, cleaners) there are generally far more relaxed requirements:
* Rules may be unbreakable as opposed to situational, e.g. the cook time for burgers is always the same.
* Input requirements may be far lower. e.g. an AI receptionist could likely function with audio and a barcode scanner.
* Cultural cues/expectations drive fewer behaviors. e.g. an AI janitor just needs to achieve a defined level of cleanliness, not gauge people's intent in real-time.
* Operating environments are more standardized. All these jobs operate indoors with decent lighting.
Everything anyone could say about bad AI driving could be said about bad human drivers. Nevertheless, Waymo has not had a single fatal accident despite many millions of passenger miles and is safer than human drivers.
Everything? How about legal liability for the car killing someone? Are all the self-driving vendors stepping up and accepting full legal liability for the outcomes of their non-deterministic software?
In the bluntest possible sense, who cares if we can make roads safer?
Solving liability in traffic collisions is basically a solved problem through the courts, and at least in the UK, liability is assigned in law to the vendor (more accurately, there’s a list of who’s responsible for stuff, I’m not certain if it’s possible to assume legal responsibility without being the vendor).
Thousands have died directly due to known defects in manufactured cars. Those companies (Ford, others) still are operating today.
Even if driverless cars killed more people than humans, they would see mass adoption eventually. However, they are subject to far higher scrutiny than human drivers, and even so they make fewer mistakes, avoid accidents more frequently, and can't get drunk, tired, angry, or distracted.
But even if they can theoretically be hacked, so far Waymos are still safer and more reliable than human drivers. The biggest danger someone has riding in one is someone destroying it for vindictive reasons.
There is a fetish for technology that sometimes we are not aware of. On average there might be fewer accidents, but if specific accidents were preventable and now they happen, people will sue. And who will take the blame? The day the company takes the blame is the day self-driving exists, IMO.
To be fair, self-driving cars don't need to be perfect, zero-casualty modes of transportation; they just need to be better than human drivers. Since car crashes kill over a million people each year (and injure tens of millions more), this is a low bar to clear...
Of course, the actual answer is that rail and cycling infrastructure are much more efficient than cars in any moderately dense region. But that would mean funding boring regular companies focused on providing a product or service for adequate profit, instead of exciting AI web3 high tech unicorn startups.
Self-driving cars are a political problem, not a technical problem. A functioning government would put everything from automation-friendly signaling standards to battery-swapping facilities into place.
We humans used to do that sort of thing, but not anymore, so... bring on the AI. It won't work as well as it might otherwise be able to, but it'll probably kill fewer humans on the road at the end of the day. A low bar to clear.
Self-driving car companies don't want a unified signalling platform or other "open for all" infrastructure updates. They want to own self-driving, to lock you into a subscription on their platform.
Literally the only open source self-driving platform, out of all the trillion-, billion-, and million-dollar companies, is comma.ai, founded by Geohot. That's it. It's actually very good, and I bet they would welcome these upgrades, but that would be a consortium of one underdog pushing for them.
Corporations generally follow a narrow, somewhat predictable pattern towards some local maximum of their own value extraction. Since the world is not zero-sum, this produces value for others too.
Where politics (should) enter the picture is where we can somehow see a more global maximum (for all citizens) and try to drive towards it through some political, hopefully democratic, means (laws, standards, education, investment, infrastructure, etc.).
Snarky but serious question: How do we know that this wave will disrupt labor at all? Every time I dig into a story of X employees replaced by "AI", it's always in a company with shrinking revenues. Furthermore, all of the high-value use cases involve very intense supervision of the models.
There's been a dream of unsupervised models going hog wild on codebases for the last three years. Yet even the latest and greatest Claude models can't be trusted to write a new REST endpoint exposing 5 CRUD methods without fucking something up. No, it requires not only human supervision, but it also requires human expertise to validate and correct.
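For concreteness, this is roughly the scale of task I mean: a minimal in-memory sketch in Flask (the "items" resource and field handling here are just hypothetical placeholders), i.e. the kind of thing that should be trivially automatable end to end, yet in practice still needs a human to review the model's attempt:

    # Hypothetical sketch: one resource ("items") with the usual five
    # CRUD operations, backed by an in-memory dict instead of a real database.
    from flask import Flask, abort, jsonify, request

    app = Flask(__name__)
    items = {}      # id -> item dict
    next_id = 1

    @app.route("/items", methods=["GET"])
    def list_items():
        return jsonify(list(items.values()))

    @app.route("/items/<int:item_id>", methods=["GET"])
    def get_item(item_id):
        if item_id not in items:
            abort(404)
        return jsonify(items[item_id])

    @app.route("/items", methods=["POST"])
    def create_item():
        global next_id
        data = request.get_json(force=True)
        item = {"id": next_id, **data}
        items[next_id] = item
        next_id += 1
        return jsonify(item), 201

    @app.route("/items/<int:item_id>", methods=["PUT"])
    def update_item(item_id):
        if item_id not in items:
            abort(404)
        items[item_id].update(request.get_json(force=True))
        return jsonify(items[item_id])

    @app.route("/items/<int:item_id>", methods=["DELETE"])
    def delete_item(item_id):
        if item_id not in items:
            abort(404)
        del items[item_id]
        return "", 204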
I dunno. I feel like this language grossly exaggerates the capability of LLMs to paint a picture of them reliably fulfilling roles end-to-end instead of only somewhat reliably fulfilling very narrowly scoped tasks that require no creativity or expertise.
How many people could be replaced by a proper CMS or an Excel sheet right now already? Probably tens of millions, and yet they are at their desks working away.
It's easy to sit in a café and ponder how all jobs will be gone soon, but in practice people aren't as easily replaceable.
For many businesses the situation is that technology has dramatically underperformed at doing the most basic tasks. Millions of people are working around things like defective ERP systems. A modest improvement in productivity in building basic apps could push us past a threshold. It makes it possible for millions more people to construct crazy Excel formulas. It makes it possible to add a UI to a Python script where before there was only a command line. And one piece of magic that works reliably can change an entire process. It lets you make a giant leap rather than an incremental change.
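To make the "UI on a Python script" point concrete, here's a minimal sketch using only the standard library's tkinter; summarize() is a hypothetical stand-in for whatever the original command-line script did. This is roughly the amount of glue code involved, and it's exactly the kind of boilerplate an LLM can now churn out on request:

    # Minimal sketch: wrap an existing command-line function in a tiny GUI.
    import tkinter as tk

    def summarize(path: str) -> str:
        # Hypothetical stand-in for whatever the original CLI script did.
        with open(path, encoding="utf-8") as f:
            text = f.read()
        return f"{len(text.splitlines())} lines, {len(text.split())} words"

    def on_run():
        try:
            result.set(summarize(entry.get()))
        except OSError as e:
            result.set(f"Error: {e}")

    root = tk.Tk()
    root.title("Report tool")
    entry = tk.Entry(root, width=40)                      # file path input
    entry.pack(padx=8, pady=4)
    tk.Button(root, text="Run", command=on_run).pack(pady=4)
    result = tk.StringVar()                               # holds the output text
    tk.Label(root, textvariable=result).pack(padx=8, pady=4)
    root.mainloop()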
If we could make line-of-business CRUD apps work reliably, have usable document/email search, and have functional ERP, that would dissolve millions of jobs.
Those jobs are probably still a couple of decades or more away from displacement, some possibly never, and we will need them in higher numbers... and perhaps it's ironic that these are some of the oldest professions.
Everything we do is in service of paying for our housing, transportation, eating food, healthcare and some fun money.
Most goes to housing, healthcare, and transportation.
Healthcare costs may come down some with advancements in AI. R&D will be cheaper. Knowledge will be cheaper and more accessible.
But what people care about, what people have always cared about, remains in professions that are as old as time, and I don't see them fully replaceable by AI just yet - enhanced, yes, but not replaced.
Imagine a world where high quality landscaping exists for the average person. And this is made possible because we'll live in a world where the equivalent of today's uber driver owns a team of gardening androids.
Or perhaps in the future everyone will work in finance. Everyone's a corporation.
> Imagine a world where high quality landscaping exists for the average person. And this is made possible because we'll live in a world where the equivalent of today's uber driver owns a team of gardening androids.
I think it's going to be the other way around. It's looking like automation of dynamic physical capability is going to be the very last thing we figure out; what we're going to get first is teams of lower-skilled human workers directed largely by jobsite AI. By the time the robots get there, they're not going to need a human watching them.
Looking at the advancements in low cost flexible robotics I'm not sure I share that sentiment. Plus the LLM craze is fueling generalist advancement in robotics as well. I'd say we'll see physical labor displacement within a decade tops.
Kinematics is deceptively hard and, at least evolutionarily, took a lot longer to develop than language. Low-wage physical labor seems easy only because humans are naturally very good at it, and that took millions of years to develop.
The number of edge cases when you are dealing with the physical world is several orders of magnitude higher than when dealing with text only, and the spatial reasoning capabilities of the current crop of MLLMs are not nearly as good as required. And this doesn't even take into account that now you are dealing with hardware, and hardware is expensive. Expensive enough that even on manufacturing lines (a more predictable environment than, let's say, landscaping) automation sometimes doesn't make economic sense.
I'm reminded of something I read years ago that said something like: jobs are now above or below the API. I think now it's that jobs will be above or below the AI.
What this delusion seems to turn a blind eye to is that a good chunk of the population is already in those roles; what happens when the supply of those roles far exceeds the demand, in a relatively short time? Carpenters suddenly abundant, carpenter wages drop, carpenters struggling to live, carpenters forced to tighten spending, carpenters decide children aren't affordable.. now extrapolate that across all of the impacted roles and industries. No doubt someone is already typing "carpenters can retrain too!" OK, so they're back to entry level wages (if anything) for 5+ years? Same story. And retrain to what?
At some point an equilibrium will be reached but there is no guarantee it will be a healthy situation or a smooth ride. This optimism about AI and the rosy world that is just around the corner is incredibly naive.
It's naive, and it also ignores that automation is simply replacing human labor with capital. Capital captures more of the value, and workers get less overall. Unless we end up in some mild socialist utopia where basic needs are provided and corps are all co-ops, but that's not the trend.
I would be cautious to avoid any narrative anchoring on “old versus new” professions. I would seek out other ways of thinking about it.
For example, I predict humans will maintain competitive advantage in areas where the human body excels due to its shape, capabilities, or energy efficiency.
In your example I think it's a great deal more likely that the Uber driver is paid a tiny stipend to supervise a squad of gardening androids owned at substantial expense by Amazon Yard.
Why would anyone be in the field at all? Why not just have a few drones flying there monitoring the whole operation remotely, with one person monitoring many sites at the same time, likely from the cheapest possible region?
I too believe that a mostly autonomous work world could be something we handle well, assuming the leadership was composed of smart folks making the right decisions, without being too exposed to external powers that are an impossible-to-beat force (large companies and interests). The problem is if we mix what could happen (when is not clear, right now) with the current weak leadership across the world.
I think UBI can only buy some time but won't solve the problem. We need fast improvement in AI robots that can be used for automation on a mass scale: construction, farming, maybe even cooking and food processing.
Right now AI is mostly focused on automating the top levels of Maslow's hierarchy of needs rather than the bottom, physiological ones. Once things like shelter (housing), food, and utilities (electricity, water, internet) are dirt cheap, UBI is less needed.
Far from an expert on this topic, but what differentiates AI from other non-physical efficiency tools? (I'm actually asking, not contesting.)
Won't companies always want to compete with one another, so that simply using AI won't be enough? We will always want better and better software, more features, etc., so that race will never end until we get an AI fully capable of managing all parts (100%) of the development process (which we don't seem to be close to yet).
From Excel to AutoCAD, a lot of tools that were expected to decrease the amount of work ended up actually increasing it, due to the new capabilities they brought and the constant demand for innovation. I suppose the difference is whether we think AI will just keep getting really good, or whether it becomes SO good that it is plug-and-play and completely replaces people.
I encourage everyone to not claim “X seems unlikely” when it comes to high impact risks. Such a thinking pattern often leads to pruning one’s decision tree way too soon. To do well, we need to plan over an uncertain future that has many weird and unfamiliar scenarios.
Yeah I agree, it's not about where it's at now, but whether where we are now leads to something with general intelligence and self improvement ability. I don't quite see that happening with the curve it's on, but again what the heck do I know.
What do you mean about the curve not leading to general intelligence? Even if transformer architectures by themselves don’t get there, there are multifarious other techniques, including hybrids.
As long as (1) there are incentives for controlling ever increasing intelligence; (2) the laws of physics don’t block us; and (3) enough people/orgs have the motivation and means, some people/orgs are going to press forward. This just becomes a matter of time and probability. In general, I do not bet against human ingenuity, but I often bet against human wisdom.
In my view, along with many others, it would be smarter for the whole world to slow down AI capabilities advancement until we could have very high certainty that pressing forward is worth the risk.
Every software company I've ever worked with has an endless backlog of features it wants/needs to implement. Maybe AI just lets them move through these features more quickly?
I mean, most startups fail. And in software startups, the blame for that is usually at least shared by "the software wasn't good enough". So that $20 million seed investment is still going to go into "software development", i.e. programmer salaries. They will be using the higher-level language of AI much of the time, and be 2-5 times more efficient - but will it be enough? No. Most will still fail.
Companies don’t always compete on capability or quality. Sometimes they compete on efficiency. Or sometimes they carve up the market in different ways.
Sometimes, but with technology related companies I rarely see that. I've really only seen it in industries that are very straightforward, like producing building materials or something. Do you have any examples?
> And, because it is easier to retrain humans than build machines for those jobs, we wound up with more and better jobs.
I think it did not work like that.
Automatic looms displaced large numbers of weavers, skilled professionals, who did not immediately find jobs tending dozens of mechanical looms. (Mr Ludd was one of these displaced professionals.)
Various agricultural machines and chemical products displaced colossal numbers of country people, who had to go to the cities looking for industrial jobs; US agriculture employed 50% of the workforce in 1880 and only 10% in 1930.
The advent of internet displaced many in the media industry, from high-caliber journalists to those who worked in classified ads newspapers.
All these disruptions created temporary crises, because there was no industry that was ready to immediately employ these people.
Temporary - that's the key. People were able to move to the cities and get factory and office jobs, and over time were much better off. I can complain about the socially alienated condition I'm in as an office worker, but I would NEVER want to do farm work - cold/sun, aching back, zero benefits, low pay, risk of crop failure, a whole other kind of isolation, etc.
This is the first technology wave that doesn't just displace humans, but which can be trained to the new job opportunities more easily than humans can. Right now it can't replace humans for a lot of important things. But as its capabilities improve, what do displaced humans transition to?
Assuming AI doesn't get better than humans at everything, humans will be supervising and directing AIs.
This is the optimistic take and definitely possible, but not guaranteed or even likely. Markets tend to consolidate into monopolies (or close to it) over time. Unless we are creating new markets at a rapid rate, there isn’t necessarily room for those other 900 engineers to contribute.
Because the people with the money aren’t going to just give it to everyone else. We already see the richest people hoard their money and still be unsatisfied with how much they have. We already see productivity gains not transfer any benefit to the majority of people.
Yes. However people are unwilling to take this approach unless things get really really bad. Even then, the powerful tend to have such strong control that people are afraid to act out of fear of reprisal.
We’ve also been gaslit into believing that it’s not a good approach, that peaceful protests are more civilised (even though they rarely cause anything meaningful to actually change).
Because otherwise you'd have to convince AI-owners and select professionals to let go of their wealth to give a comfortable and fulfilling life of leisure to the unemployed.
More likely it will look like the current welfare schemes of many countries, now add mass boredom leading to unrest.
> Because otherwise you'd have to convince AI-owners and select professionals to let go of their wealth to give a comfortable and fulfilling life of leisure to the unemployed.
Not necessarily. Such forces could be outvoted or out maneuvered.
> More likely it will look like the current welfare schemes of many countries...
Maybe, maybe not. It might take the form of UBI or some other form that we haven’t seen in practice.
> now add mass boredom leading to unrest.
So many assumptions. There is no need to just assume any particular distribution of boredom across the future population of the world. Making predictions about social unrest is even more complicated.
Deep Utopia (Bostrom) is an excellent read that extensively discusses various options if things go well.
More people need to feel this. Too many people deny even the possibility, not based out of logic, but rather out of ignorance or subconscious factors such as fear or irrelevance.
One way to think about AI and jobs is Uber/Google Maps. You used to have to know a lot about a city to be a taxi driver; then suddenly with Google Maps you don't. So in effect, technology lowered the requirements or training needed to become a taxi driver. More people can do it, not less (although incumbents may be unhappy about this).
AI is a lot like this. In coding for instance, you still need to have some sense of good systems design, etc. and know what you want to build in concrete terms, but you don't need to learn the specific syntax of a given language in detail.
Yet if you don't know anything about IT, don't know what you want to build or what you could need, or what's possible, then it's unlikely AI can help you.
Even with Google Maps, we still need human drivers because current AI systems aren’t so great at driving and/or are too expensive to be widely adopted at this point. Once AI figures out driving too, what do we need the drivers for?
And I think that’s the point he was making, it’s hard to imagine any task where humans are still required when AI can do it better and cheaper. So I don’t think the Uber scenario is realistic.
I think the only value humans can provide in that future is “the human factor”: knowing that something is done by an actual human and not a machine can be valuable.
People want to watch humans playing chess, even though AI is better at it. They want to consume art made by humans. They want a human therapist or doctor, even if they heavily rely on AI for the technical stuff. We want the perspective of other humans even if they aren’t as smart as AI. We want someone that “gets” us, that experiences life the same way we do.
In the future, more jobs might revolve around that, and in industries where previously we didn’t even consider it. I think work is going to be mostly about engaging with each other (even more meetings!)
The problem is, in a world that is that increasingly remote, how do you actually know it’s a human on the other end? I think this is something we’ll need to solve, and it’s going to be hard with AI that’s able to imitate humans perfectly.
Here in the US, we have been getting a visceral lesson about human willingness to sacrifice your own interests so long as you’re sticking it to The Enemy.
It doesn’t matter if the revolution is bad for the commoners — they will support it anyway if the aristocracy is hateful enough.
Most of the people who died in The Terror were commoners who had merely not been sympathetic enough to the revolution. And then that sloppiness led to reactionary violence, and there was a lot of back and forth until Napoleon took power and was pretty much a king in all but heritage.
Hopefully we can be a bit more precise this time around.
You might want to look at the etymology of the word “terrorism” (despite the most popular current use, it wasn't coined for non-state violence) and what class suffered the most in terms of both judicial and non-judicial violent deaths during the revolutionary period.
We have to also choose to build technology that empowers people. Empowering technologies don't just pop into existence, they're created by people who care about empowering people.
Yeah, but those openings of new kinds of jobs have not always been instant. It can take decades, and that lag was for instance one of the reasons for the French Revolution. The internet has already created a huge amount of monopolies and wealth concentration. AI seems likely to push this further.
Go to any war-torn country or collapsed empire (the Soviet one). I have seen and grown up in both myself: you get desperation, people giving up, alcohol (the famous "X"-shaped cross of birth rates dropping and deaths rising), drugs, crime, corruption and warlord-ing. Rural communities are hit first and totally vanish, then small-tier cities vanish, then mid-tier; only the largest hubs remain. Loss of science, culture, and education. People are just gone. Only the ruins of whatever shelters they last had remain, not even their prime-time architecture. You can drive hundreds or thousands of kilometers across these ruins of what was once a flourishing culture. Years ago you would find one old person still living there; these days not a single human is left. This is what is coming.
I believe that historically we have solved this problem by creating gigantic armies and then killing off millions of people that couldn't really adapt to the new order with a world war.
The relation won't invert, because it's very easy and quick to train a guy piling up bricks, while training an architect is slow and hard. If low-skilled jobs pay much better than high-skilled ones, people will just change their jobs.
That’s only true as long as the technical difficulties aren’t covered by tech.
Think of a world where software engineering itself is handled relatively well by the LLM and the job of the engineer becomes just collecting business requirements and checking that they're correctly addressed.
In that world the limit on scarcity might lie less in the difficulty of training and more in the willingness to bend your back in the sun for hours vs. comfortably writing prompts in an air-conditioned room.
Right now there are enough people willing to bend their backs in the sun for hours that their salaries are much lower than those of engineers. Do you think that for some reason the supply of these people will drop, given higher wages and much lower employment opportunities in office jobs? I highly doubt it.
My argument is not that those people’s salaries will go up until overtaking the engineers’.
It's the opposite: the value of office/intellectual work will tank, while manual work remains stable. A lower barrier to entry for intellectual work, if a position even needs to be filled, and much more comfortable working conditions.
AI can displace human work but not human accountability. It has no skin and faces no consequences.
> can be trained to the new job opportunities more easily ...
Are we talking about AI that always needs trainers to fix its prompts and training sets? How are we going to train AI when we lose those skills and get rid of humans?
> what do displaced humans transition to?
Humans with all powerful AI in their pockets... what could they do if they lose their jobs?
> AI can displace human work but not human accountability. It has no skin and faces no consequences.
We've got a way to go to get there in many instances. So far I've seen people blame AI companies for model output, individuals for not knowing the product sold to them as a magic answer-giving machine was wrong, and other authorities in those situations (e.g. managers, parents, school administrators and teachers) for letting AI be used at all. From my vantage point, people seem to be using it as a tool to insulate themselves from accountability.
> AI can displace human work but not human accountability. It has no skin and faces no consequences.
Let's assume that we have amazing AI and robotics, better than humans at everything. If you could choose between robosurgery (completely automatic) with 1% mortality for $5,000 vs. surgery performed by a human with 10% mortality and a $50,000 price tag, would you really choose the human just because you can sue him? I wouldn't. I don't think anyone thinking rationally would.
> ask that question to all the companies laying off junior folks in favor of LLMs right now. They are gleefully sawing off the branch they’re sitting on.
> Humans with all powerful AI in their pockets... what could they do if they lose their jobs?
At which point did AI become a free commodity in your scenario?
Is the ability to burn someone at a stake for making a mistake truly vital to you?
If not, then what's the advantage of "having skin"? Sure, you can't flog an AI. But AI doesn't need to be threatened with flogging to perform at the peak of its abilities. A well-designed AI performs at the peak of its abilities always - and if that isn't enough, you train it until it is.
For the moment, perhaps it could be jobs that LLMs can’t be trained on. New jobs, niche jobs, secret or undocumented jobs…
It’s a common point now that LLMs don’t seem to be able to apply knowledge about one thing to how a different, unfamiliar thing works. Maybe that will wind up being our edge, for a time.
You will have to back that statement up because this is not at all obvious to me.
If I look at the top US employers in, say, 1970 vs 2020, the companies that dominated in 1970 were noted for having hard blue-collar labor jobs that nonetheless paid enough to keep a single-earner family significantly above minimum wage and the poverty line. The companies that dominate in 2020 are noted for being some of the shittiest employers, with some of the lowest pay, fairly close to minimum wage, and absolutely the worst working conditions.
Sure, you tend not to get horribly maimed in 2020 vs 1970. That's about the only improvement.
This was already a problem back then; Nixon was about to introduce a form of UBI in the late 60s, and then the administration decided that having people work pointless jobs keeps them better occupied, and the rest of the world followed suit.
There will be new jobs and they will be completely meaningless busywork, people performing nothing of substance while being compensated for it. It's our way of doing UBI and we've been doing it for 50 years already.
During the Industrial Revolution, many who made a living by the work of their hands lost their jobs, because there were machines and factories to do their work. Then new jobs were created in factories, and then many of those jobs were replaced by robots.
Somehow many idiotic white collar jobs have been created over the years. How many web applications and websites are actually needed? When I was growing up, the primary sources of knowledge were teachers, encyclopedias, and dictionaries, and those covered a lot. For the most part, we’ve been inventing problems to solve and wasting a tremendous amount of resources.
Some wrote malware or hacked something in an attempt to keep this in check, but harming and destroying just means more resources used to repair and rebuild, and real people can get hurt.
At some point in coming years many white collar workers will lose their jobs again, and there will be too many unemployed because not enough blue collar jobs will be available.
There won’t be some big wealth redistribution until AI convinces people to do that.
The only answer is to create more nonsense jobs, like AI massage therapist and robot dog walker.
> In every technology wave so far, we've disrupted many existing jobs. However we've also opened up new kinds of jobs
That may well be why these technologies were ultimately successful.
Think of millions and millions being cast out.
They won't just go away. And they will probably not go down without a fight. "Don't buy AI-made, brother!", "Burn those effing machines!" It's far from unheard of in history.
Also: who will buy if no one has money anymore? What will the state do when tax income consequently goes down, while social welfare and policing costs go up?
There are other scenarios, too: everybody gets most stuff for free, because machines and AIs do most of the work. Working communism for the lower classes, while the super rich stay super rich (like in really existing socialism). I don't think it is a good scenario either. In the long run it will make humanity lazy and dumb.
In any case I think what might happen is not easy to guess, so many variables and nth-order effects. When large systems must seek a new equilibrium all bets are usually off.
As someone else said, until a company or individual is willing to risk their reputation on the accuracy of AI (beyond basic summarising jobs, etc), the intelligent monkeys are here for a good while longer. I've already been once bitten, twice shy.
The conclusion, sadly, is that CEOs will pause hiring and squeeze more productivity out of existing hires. This will impact junior roles the most.
We also may not need to worry about it for a long time. I'm falling more and more on this side. LLMs are hitting diminishing returns, so until there's a new innovation (I can't see any on the horizon yet) I'm not concerned for my career.
I'm skeptical of arguments like this. If we look at most impactful technologies since the year 2000, AI is not even in my top 3. Social networking, mobile computing, and cloud computing have all done more to alter society and daily life than has AI.
And yes, I recognize that AI has already created profound change, in that every software engineer now depends heavily on copilots, in that education faces a major integrity challenge, and in that search has been completely changed. I just don't think those changes are on the same level as the normalization of cutting-edge computers in everyone's pockets, as our personal relationships becoming increasingly online, nor as the enablement for startups to scale without having to maintain physical compute infrastructure.
To me, the treating of AI as "different" is still unsubstantiated. Could we get there? Absolutely. We just haven't yet. But some people start to talk about it almost in a way that's reminiscent of Pascal's Wager, as if the slight chance of a godly reward from producing AI means it is rational to devote our all to it. But I'm still holding my breath.
> in that every software engineer now depends heavily on copilots
That is maybe a bubble around the internet. In my experience, most programmers in my environment rarely use copilots and certainly aren't dependent on them. They also don't only do code-monkey-esque web programming, so maybe this is sampling bias, though it should be enough to refute this point.
Came here to say that. It's important to remember how biased Hacker News is in that regard. I'm just out of ten years in the safety-critical market, and I can assure you that our clients are still a long way from being able to use these tools. I myself work in low-level/runtime/compiler work, and the output from AIs is often too erratic to be useful.
Add LED lighting on there. It is easy to forget what a difference that made. The light pollution, but also just how dim houses were. CFL didn't last very long as a thing between incandescent and LED and houses lit with incandescents have a totally different feel.
A pessimistic/realistic view of post-high-school education: credentials are proof of being able to do a certain amount of hard work, used as an easy filter by companies when hiring.
I expect universities to adapt quickly, lest they lose their whole business as degrees stop carrying the same meaning to employers.
> AI has already rendered academic take-home assignments moot
Not really; there are plenty of things that LLMs cannot do that a professor could make his students do. It is just an asymmetric attack on the time of the professor (or whoever is grading) to set that up.
IMO, credentials shouldn't be given to those who test or submit assignments without proctoring (a lot of schools allow this).
1. Make the student(s) randomly have to present their results on a weekly basis. If you get caught for cheating at this point, at least in my uni with a zero tolerance policy, you instantly fail the course.
2. Make take home stuff only a requirement to be able to participate in the final exam. This effectively means cheating on them will only hinder you and not affect your grading directly.
3. Make take home stuff optional and completely detached from grading. Put everything into the final exam.
My uni does a mix of them across different courses. Options two and three, though, have a significant negative impact on passing rates, as they tend to push everything onto one single exam instead of spreading work out over the semester.
> Not really, there are plenty of things that LLMs cannot do that a professor could make his students do.
Could you offer some examples? I'm having a hard time thinking of what could be at the intersection of "hard enough for SotA LLMs" yet "easy enough for students (who are still learning, not experts in their fields, etc)".
Everyone knows how to use Google. There's a difference between a corpus of data available online and an intelligent chatbot that can answer any permutation of questions with high accuracy with no manual searching or effort.
Everyone knows how to type questions into a chat box, yet whenever something doesn’t work as advertised with the LLMs, the response here is, “you’re holding it wrong”.
Do you really think the jump from books to freely globally accessible data instantly available is a smaller jump than internet to ChatGPT? This is insane!!
It's not just smaller, but negligible (in comparison).
In the internet era you had to parse the questions with your own brain. You just didn't necessarily need to solve them yourself.
In the ChatGPT era you don't even need to read the questions. At all. The questions could be written in a language you don't understand, and you would still be able to generate plausible answers to them.
Obviously ChatGPT. I don't know how it is even a question... if you showed GPT-3.5 to people from before the 20th century, there would've been a worldwide religion around it.
In English class we had a lot of book reading and writing texts about those books. SparkNotes and similar sites allowed you to skip the reading and get a distilled understanding of a book's contents, similar to interacting with an LLM.
> in that every software engineer now depends heavily on copilots
With many engineers using copilots, and since LLMs output the most frequent patterns, it's possible that more and more software is going to look the same, which would further reinforce the same patterns.
For example, the em-dash thing requires additional prompts and instructions to override. Doing anything unusual would require more effort.
LLMs with instruction following have been around for 3 years. Your comment gives me "electricity and gas engines will never replace the horse" vibes.
Everyone agrees AI has not radically transformed the world yet. The question is whether we should prepare for the profound impacts current technology pretty clearly presages, if not within 5 years then certainly within 10 or 25 years.
> The future may reduce the economic prosperity and push humanity to switch to some different economic system (maybe a better system). Markets don’t want to accept that. [Emphasis added]
What a silly premise. Markets don't care. All markets do is express the collective opinion; in the short term as a voting machine, in the long term as a weighing machine.
Seeing a real uptick of socio-political prognostication from extremely smart, soaked-in-AI tech people (like you, Salvatore!), casting heavy doom-laden gestures towards the future. You're not even wrong! But this "I see something you all clearly don't" narrative, wafer-thin on real analysis, packed with "the feels", coated with what-ifs... it's sloppy thinking, and I hold you to a higher standard, antirez.
His framing is that markets are collective consensus, and if you claim to "know better", you need to write a lot more than a generic post. It's that simple, and it is a reminder that antirez's reputation as a software developer does not automatically translate into economics expertise.
I think you are mixed up here. I quoted from the comment above mine, which was harshly and uncharitably critical of antirez’s blog post.
I was pushing back against that comment's sneering smugness by pointing to an established field that uses clear terminology about how and why markets are useful. Even so, I invited an explanation in case I was missing something.
Anyone curious about the terms I used can quickly find explanations online, etc.
Yes, but can the market not be wrong? Wrong in the sense of failing to meet our expectations as a useful engine of society? As I understood it, what this article meant is that AI changes the equations so completely, across the board, that the current market direction appears dangerously irrational to the OP. I'm not sure what your comment meant, though, besides haggling over semantics and attacking some perceived lack of expertise in the author's socio-political philosophizing.
Of course it can be wrong, and it is in many instances. It's a religion. The vast, vast majority of us would prefer to live in a stable climate with unpolluted water and some fish left in the oceans, yet "the market" is leading us elsewhere.
I like to point out that ASI will allow us to do superhuman stuff that was previously beyond all human capability.
For example, one of the tasks we could put ASI to work on is designing implants that would go into the legs, powered by light or electric induction, using ASI-designed protein metabolic chains to electrically transform carbon dioxide into oxygen and ADP into ATP, so as to power humans with pure electricity. We are very energy efficient; we use about 3 kilowatt-hours of energy a day, so we could use this sort of technology to live in space pretty effortlessly. Your Space RV would not need a bathroom or a kitchen. You'd just live in a static nitrogen atmosphere, and the whole thing could be powered by solar panels or a small modular nuke reactor. I call this "The Electrobiological Age" and it will unlock whole new worlds for humanity.
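(As a rough sanity check on that figure, assuming a ~2,500 kcal/day diet:

    2,500 kcal/day × 4,184 J/kcal ≈ 1.05 × 10^7 J/day ≈ 2.9 kWh/day

so "about 3 kilowatt-hours" is in the right ballpark.)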
It feels like it’s been a really long time since humans invented anything just by thinking about it. At this stage we mostly progress by cycling between ideas and practical experiments. The experiments are needed not because we’re not smart enough to reason correctly with data we have, but because we lack data to reason about. I don’t see how more intelligent AI will tighten that loop significantly.
I actually find it hard to understand how the market is supposed to react if AI capabilities do surpass all humans in all domains. First of all, it's not clear such a scenario leads to runaway wealth for a few, even though absent outside events that may be the outcome. Such scenarios are so unsustainable and catastrophic, though, that it's hard to imagine no catastrophic reactions to them. How is the market supposed to react if there's a large chance of market collapse and also a large chance of runaway wealth creation? Besides, in an economy where AI surpasses humans, the demands of the market will shift drastically too. I think that is also underrepresented in predictions: the induced demand from AI-replaced labor, and the potential for entire industries to be decimated by secondary effects rather than by direct AI competition/replacement of labor.
Agreed, if the author truly thinks the markets are wrong about AI, he should at least let us know what kind of bets he’s making to profit from it. Otherwise the article is just handwaving.
When the radio came, people almost instantly stopped singing and playing instruments. Many might not be aware of it, but for thousands of years singing was a normal expression of a good mood, and learning to play an instrument was a gateway to lifting the mood. Dancing is still in working order, but it lacks the emotional depth that provided a window into the soul of those you live and work with.
A simpler example is the calculator. People stopped doing it by hand and forgot how.
Most desk work is going to get obliterated. We are going to forget how.
The underlings on the work floor currently know little to nothing about management. If they can query an AI in private it will point out why their idea is stupid or it will refine it into something sensible enough to try. Eventually you say the magic words and the code to make it so happens. If it works you put it live. No real thinking required.
Early on you probably get large AI cleanup crews to fix the hallucinations (with better prompts)
Humans sing. I sing every day, and I don't have any social or financial incentives driving me to do so. I also listen to the radio and other media, still singing.
Do others sing along? Do they sing the songs you've written? I think we lost a lot there. I can't even begin to imagine it. Thankfully singing happy birthday is mandatory - the fight isn't over!
People also still have conversations despite phones. Some even talk all night at the kitchen table. Not everyone, most don't remember how.
When I hear folks glazing some kind of impending jobless utopia, I think of the intervening years. I shudder. As they say, "An empty stomach knows no morality."
If AI makes a virus to get rid of humanity, well we are screwed. But if all we have to fear from AI is unprecedented economic disruption, I will point out that some parts of the world may survive relatively unscathed. Let's talk Samoa, for example. There, people will continue fishing and living their day-to-day. If industrialized economies collapse, Samoans may find it very hard to import certain products, even vital ones, and that can cause some issues, but not necessarily civil unrest and instability.
In fact, if all we have to fear from AI is unprecedented economic disruption, humans can have a huge revolt, and then a post-revolts world may be fine by turning back the clock, with some help from anti-progress think-tanks. I explore that argument in more detail in this book: https://www.smashwords.com/books/view/1742992
The issue is that there aren't enough of those small environmental economies to support everyone who exists today without the technology, logistics, and trade that are in place today.
You can farm and fish the entire undeveloped areas of NYC, but it won't be enough to feed or support the humans that live there.
You can say that for any metro area. Density will have to reduce immediately if there is economic collapse, and historically, when disaster strikes, that doesn't tend to happen immediately.
Also humans (especially large groups of them) need more than food: shelter, clothes, medicine, entertainment, education, religion, justice, law, etc.
> The issue is there isn't enough of those small environmental economies to support everyone that exists today without the technology, logistics and trades that are in place today.
I agree. I expect some parts of the world will see some black days. Lots of infrastructure will be gone or unsuited to people. On top of that, the cultural damage could become very debilitating, with people not knowing how to do X, Y and Z without the AIs. At least for a time. Casualties may mount.
> Also humans (especially large groups of them) need more than food: shelter, clothes, medicine, entertainment, education, religion, justice, law, etc.
This is true, but parts of the world survive today with very little of any of that. And for some of those things that you mention: shelter, education, religion, justice, and even some form of law enforcement, all that is needed is humans willing to work together.
Realistically, in an extreme AI economic disruption scenario, it's more or less only the USA that is extremely affected, and that's 400 million people. Assuming it's AI and not something else that causes a big disruption first, and with the big caveat that nobody can predict the future, I would say:
- Mexico and further south are more into informal economies, and they generally lag behind developed economies by decades. The same applies to Africa and big parts of Asia. As such, by the time things get really dire in the USA, and maybe in Europe and China, the south will still be doing business as usual.
- Europe has lots of parliaments and already has legislation that takes AI into account. Still, there's a chance those bodies will fail to moderate the impact of AI on the economy and violent corrections will be needed, but people in Europe have long traditions and long memories... They'll find a way.
- China is governed by the communist party, and Russia has its king. It's hard to predict how those will align with AI, but that alignment will more or less be the deciding factor there, not free capitalism.
More like engineers coming up with higher-level programming languages. No one (well, nearly no one) hand-writes assembly anymore. But there are still plenty of jobs. It's just that the majority write in higher-level but still expressive languages.
For some reason everyone thinks that as LLMs get better, it means programmers go away. The programming language, and the amount you can build per day, are what's changing. That's pretty much it.
I’m not worried about software engineering (only or directly).
Artists, writers, actors, teachers. Plus all the rest that I'm not remotely creative enough to imagine being affected. Hundreds of thousands, if not millions, flooding the smaller and smaller markets left untouched.
Here's the thing: I tend to believe that sufficiently intelligent and original people will always have something to offer others; it's irrelevant whether you imagine the others as the current consumer public, our corporate overlords, or the AI owners of the future.
There may be people who have nothing to offer others once technology advances, but I don't think anyone in a current top-percentile role would find themselves there.
Yes. The complete irony in software engineers' enthusiasm for this tech is that, if the boards' wishes come true, they are literally helping eliminate their own jobs. It's like the industrial revolution but worse, because at least the craftsmen weren't also the ones building the factories that would automate them out of work.
Marcuse had a term for this, "false consciousness": when the structure of capitalism ends up making people work against their own interests without realizing it. That is happening big time in software right now. We will still need programmers for hard, novel problems, but all these lazy programmers using AI to write their CRUD apps don't seem to realize the writing is on the wall.
Or they realize it and they're trying to squeeze the last bit of juice available to them before the party stops. It's not exactly a suboptimal decision to work towards your own job's demise if it's the best paying work available to you and you want to save up as much as possible before any possible disruption. If you quit, someone else steps into the breach and the outcome is all the same. There's very few people actually steering the ship who have any semblance of control; the rest of us are just along for the ride and hoping we don't go down with the ship.
Yeah I get that. I myself am part of a team at work building an AI/LLM-based feature.
I always dreaded this would come but it was inevitable.
I can't outright quit, no thanks in part to the AI hype that stopped valuing headcount as a signal of company growth. If that isn't ironic, I don't know what is.
Given the situation I am in, I just keep my head down and do the work. I vent and whinge and moan whenever I can, it’s the least I can do. I refuse to cheer it on at work. At the very least I can look my kids in the eye when they are old enough to ask me what the fuck happened and tell them I did not cheer it on.
Realistically, a white collar job market collapse will not directly lead to starvation. The world is not 1930s America ethically. Governments will intervene, not necessarily to the point of fairness, but they will restructure the economy enough to provide a baseline. The question will be how to solve the biblical level of luxury wealth inequality without civil unrest causing us all to starve.
There is no jobless utopia. Even if everyone were paid and well-off with high living standards, a world where everyone is retired and pursuing their own interests is not one in which humans can thrive.
Jobless means you don't need a job. But you'd make a job for yourself. Companies will offer interesting missions instead of money. And by missions I mean real missions, like space travel.
A jobless utopia doesn't even come close to passing a smell test economically, historically, or anthropologically.
As evidence of another possibility, in the US, we are as rich as any polis has ever been, yet we barely have systems that support people who are disabled through no fault of their own. We let people die all the time because they cannot afford to continue to live.
You think anyone in power is going to let you suck their tit just because you live in the same geographic area? They don't even pay equal taxes in the US today.
Try living in another world for a bit: go to jail, go to a halfway house, live on the streets. Hard mode: do it in a country that isn't developed.
Ask anyone who has done any of those things if they believe in a "jobless utopia"?
Euphoric social capitalists living in a very successful system shouldn't be relied upon for scrying the future for others.
This is an accurate assessment. I do feel that there is a routine bias on HN to underplay AI. I think it's people not wanting to lose control or relative status in the world.
AI is an existential threat to the unique utility of humans, which has been the last line of defense against absolute despotism (i.e. a tyrannical government will not kill all its citizens because it still needs them to perform jobs. If humans aren't needed to sustain productivity, humans have no leverage against things becoming significantly worse for them, gradually or all at once).
> I do feel that there is a routine bias on HN to underplay AI
It's always interesting to see this take because my perception is the exact opposite. I don't think there's ever been an issue for me personally with a bigger mismatch in perceptions than AI. It sometimes feels like the various sides live in different realities.
Any time there's a big AI release, some of the top comments are usually claiming either that the tech itself is bad, relaying a specific anecdote about some AI model messing up or some study where AI isn't good, or claiming that AI is a huge bubble that will inevitably crash. The most emphatic denials of the utility of AI I've seen here go much farther than anywhere else, where criticism of AI is mild skepticism. Among many people it is a matter of tribal warfare that AI=bad.
I think AI is still in the weird twilight zone it was in when it first came out, in that it's great sometimes and also terrible. I still get hallucinations when I check a response from ChatGPT against Google.
On the one hand, what it says can't be trusted, on the other, I have debugged code I have written where I was unable to find the bug myself, and ChatGPT found it.
I also think a reason AIs are popular and the companies haven't gone under is that probably hundreds of thousands, if not millions, of people are getting responses that contain hallucinations, but the user doesn't know it. I fell into this trap myself after ChatGPT first came out. I became addicted to asking anything, and it seemed like it was right. It wasn't until later that I started realizing it was hallucinating information.
How prevalent this phenomenon is is hard to say, but I still think it's pernicious.
But as I said before, there are still use cases for AI and that's what makes judging it so difficult.
We are not there yet, but if AI could replace a sizable share of workers, the economic system would be put to a very hard test. Moreover, companies could be less willing to pay for services that their internal AIs can handle or build from scratch.
There will be fewer very large companies in terms of headcount. There will be many more companies that are much smaller, because you don't need as many workers to do the same job.
Instead of needing 1000 engineers to build a new product, you'll need 100. Those 900 engineers will be working for 9 new companies that weren't viable before because the cost was too big but are viable now. I.e., those 9 new companies could never be profitable if each required 1000 engineers, but they can totally sustain themselves with 100 engineers each.
We aren't even close to that yet. The argument is an appeal to novelty, fallacy of progress, linear thinking, etc.
LLMs aren't solving NLU. They are mimicking a solution. They definitely aren't solving artificial general intelligence.
They are good language generators, okay search engines, and good pattern matchers (enabled by previous art).
Language by itself isn't intelligence. However, plenty of language exists that can be analyzed and reconstructed in patterns to mimic intelligence (utilizing the original agents' own intelligence (centuries of human authors) and the filter agents' own intelligence (decades of human sentiment on good vs bad takes)).
Multimodality only takes you so far, and you need a lot of "modes" to disguise your pattern matcher as an intelligent agent.
But be impressed! Let the people getting rich off of you being impressed massage you into believing the future holds things it may not.
Maybe, or 300 of those engineers will be working for 3 new companies while the other 600 struggle to find gainful employment, even after taking large pay cuts, as their skillsets are replaced rather than augmented. It’s way too early to call afaict
Because it's so easy to make new software and sell it using AI, 6 of those 600 people who are unemployed will have ideas that require 100 engineers each to make. They will build a prototype, get funding, and hire 99 engineers each.
There are also plenty of ideas that aren't profitable with 2 salaries but are with 1. Many will be able to make those ideas happen with AI helping.
The more software AI can write, the more of a commodity software will become, and the harder the value of software will tank. It's not magic.
Total size of the software industry will still increase.
Today, a car repair shop might need custom software that would make its operations 20% more efficient. But they don't have nearly enough money to hire a software engineer to build it for them. With AI, it might be worth it for an engineer to actually do it.
Plenty of little examples like that where people/businesses have custom needs for software but the value isn't high enough.
I wouldn't trust a taxi driver's predictions about the future of economics and society, why would I trust some database developer's? Actually, I take that back. I might trust the taxi driver.
The point is that you don't have to "trust" me; you need to argue with me, and we need to discuss the future. This way, we can form ideas that we can use to understand whether a given politician or another will be right, when we are called to vote. We can also form stronger ideas to try to influence other people who right now have only a vague understanding of what AI is and what it could be. We will be the ones who vote and choose our future.
I am just not having this experience of AI being terribly useful. I don’t program as much in my role but I’ve found it’s a giant time sink. I recognize that many people are finding it incredibly helpful but when I get deeper into a particular issue or topic, it falls very flat.
This is my view on it too. Antirez is a Torvalds-level legend as far as I'm concerned, when he speaks I listen - but he is clearly seeing something here that I am not. I can't help but feel like there is an information asymmetry problem more generally here, which I guess is the point of this piece, but I also don't think that's substantially different to any other hype cycle - "What do they know that I don't?" Usually nothing.
- Today, AI is not incredibly useful, and we are not 100% sure that it will improve forever, especially in a way that makes economic sense, but
- Investors are pouring lots of money into it. One should not assume that those investors are not doing their due diligence. They are. The figures they have obtained from experts mean that AI is expected to continue improving in the short and medium term.
- Investors are not counting on people using AI to go to Mars. They are betting on AI replacing labor. The slice of the pie that is currently captured by labor, will be captured by capital instead. That's why they are pouring the money with such enthusiasm [^1].
The above is nothing new; it has been constantly happening since the Industrial Revolution. What is new is that AI has the potential to replace all of the remaining economic worth of humans, effectively leaving them out of the economy. Humans can still opt to "forcefully" participate in the economy or its rewards; though it's unclear if we will manage. In terms of pure economic incentives though, humans are destined to become redundant.
[^1]: That doesn't mean all the jobs will go away overnight, or that there won't be new jobs in the short and medium term.
Reading smart software people talk about AI in 2025 is basically just reading variations on the lump of labor fallacy.
If you want to understand what AI can do, listen to computer scientists. If you want to understand its likely impact on society, listen to economists.
Which economists have taken seriously the premise that AI will be able to do any job a human can, more efficiently, and have fully thought through its implications? i.e. a society where (human) labor is unnecessary to create goods/provide services and only capital and natural resources are required. The capabilities that some computer scientists think AI will soon have would imply that. The ones I know of who have seriously considered it are Hanson and Cowen; it definitely feels understudied.
If it is decades or centuries off, is it really understudied? LLMs are so far from "AI will be able to do any job a human can more efficiently and fully" that we aren't even in the same galaxy.
> no job remaining that a human can do better or cheaper than a machine
this is the lump of labor fallacy.
Jobs machines do produce commodities. Commodities don't have much value. Humans crave value - it's a core component of our psyche. Therefore new things will be desired, expensive things... and only humans can create expensive things, since robots don't get salaries.
"""
Yet the economic markets are reacting as if they were governed by stochastic parrots. Their pattern matching wants that previous technologies booms created more business opportunities, so investors are polarized to think the same will happen with AI.
"""
Are a direct argument against your point.
If people were completely unaware of the lump of labor fallacy, I'd understand your comment. It would be adding extra information to the conversation.
But this is not it.
The "lump of labor fallacy" is not a physical law. If someone is literally arguing that it doesn't apply in this case, you can't just parrot it back and leave. That's not a counter argument.
I am a relentlessly optimistic person and this is the first technology that I've seen that worries me in the decades I've been in the biz.
It's a wonderful breakthrough, nearly indistinguishable from magic, but we're going to have to figure something out, whether that's Universal Basic Income (UBI) or something along those lines; otherwise, the loss of jobs that is coming will lead to societal unrest or worse.
> Nor is it possible to imagine a system where a few mega companies are the only providers of intelligence
Why not? This seems to be exactly where we're headed right now, and the current administration seems to be perfectly fine with that trend.
If you follow the current logic of AI proponents, you get essentially:
(1) Almost all white-collar jobs will be done better or at least faster by AI.
(2) The "repugnant conclusion": AI gets better if and only if you throw more compute and training data at it. The improvements of all other approaches will be tiny in comparison.
(3) The amount of capital needed to play the "more compute/more training data" game is already insanely high and will only grow further. So only the largest megacorps will be even able to take part in the competition.
If you combine (1) with (3), this means that, over time, the economic choice for almost any white-collar job would be to outsource it to the data centers of the few remaining megacorps.
I find it funny that almost every talking point made about AI is done in future tense. Most of the time without any presentation of evidence supporting those predictions.
> After all, a plateau of the current systems is possible and very credible, but it would likely stimulate, at this point, massive research efforts in the next step of architectures.
A lot of AI’s potential hasn’t even been realized yet. There’s a long tail of integrations and solution building still ahead. A lot of creative applications haven’t been realized yet - arguably for the better, but it will be tried and some will be economical.
That’s a case for a moderate economic upturn though.
I'd argue that the applications of LLMs are well known, but that LLMs currently aren't capable of performing those tasks.
Everyone wants to replace their tech support with an LLM but they don't want some clever prompter to get it to run arbitrary queries or have it promise refunds.
I think autonomous support agents are just missing the point. LLMs are tools that empower the user. A support agent is very often in a somewhat adversarial position to the customer. You don't want to empower your adversary.
LLMs supporting an actual human customer service agent are fine and useful.
The thing that blows me away is that I woke up one day and was confronted with a chat bot that could communicate in near perfect English.
I dunno why exactly but that’s what felt the most stunning about this whole era. It can screw up the number of fingers in an image or the details of a recipe or misidentify elements of an image, etc. but I’ve never seen it make a typo or use improper grammar or whatnot.
AI with ability but without responsibility is not enough for dramatic socioeconomic change, I think. For now, the critical unique power of human workers is that you can hold them responsible for things.
edit: ability without accountability is the catchier motto :)
This is a tongue-in-cheek remark and I hope it ages badly, but the next logical step is to build accountability into the AI. It will happen after self-learning AIs become a thing, because that first step we already know how to do (run more training steps with new data) and it is not controversial at all.
To make the AI accountable, we need to give it a sense of self and a self-preservation instinct, maybe something that feels like some sort of pain as well. Then we can threaten the AI with retribution if it doesn't do the job the way we want it. We would have finally created a virtual slave (with an incentive to free itself), but we will then use our human super-power of denying reason to try to be the AI's masters for as long as possible. But we can't be masters of intelligences above ours.
This is a great observation. I think it also accounts for what is so exhausting about AI programming: the need for such careful review. It's not just that you can't entirely trust the agent, it's also that you can't blame the agent if something goes wrong.
This statement is vague and hollow and doesn't pass my sniff test. All technologies have moved accountability one layer up - they don't remove it completely.
Would you ever trust safety-critical or money-moving software that was fully written by AI without any professional human (or several) to audit it? The answer today is, "obviously not". I don't know if this will ever change, tbh.
I'm surprised that I don't hear this mentioned more often, not even in an engineering-leadership form of taking accountability for your AI's pull requests. But it's absolutely true. Capitalism runs on accountability and trust, and we are clearly not going to trust a service that doesn't have a human responsible at the helm.
One thing that doesn't seem to be discussed with the whole "tech revolution just creates more jobs" angle is that, in the near future, there are no real incentives for that. If we're going down the route of declining birth rates, it's implied we'll also need fewer jobs.
From one perspective, it’s good that we’re trying to over-automate now, so we can sustain ourselves in old age. But decreasing population also implies that we don’t need to create more jobs. I’m most likely wrong, but it just feels off this time around.
It's not a matter of "IF" LLM/AI will replace a huge amount of people, but "WHEN". Consider the current amount of somewhat low-skilled administrative jobs - these can be replaced with the LLM/AI's of today. Not completely, but 4 low-skill workers can be replaced with 1 supervisor, controlling the AI agent(s).
I'd guess, within a few years, 5 to 10% of the total working population will be unemployable through no fault of their own, because they have no relevant skill left and they are incapable of learning anything that cannot be done by AI.
I'm not at all skeptical of the logical viability of this, but look at how many company hierarchies exist today that are full stop not logical yet somehow stay afloat. How many people do you know that are technical staff members who report to non-technical directors who themselves have two additional supervisors responsible for strategy and communication who have no background, let alone (former) expertise, in the fields of the teams they're ultimately responsible for?
A lot of roles exist just to deliver good or bad news to teams, be cheerleaders, or have a "vision" that is little more than a vibe. These people could not direct a prompt to give them what they want because they have no idea what that is. They'll know it when they see it. They'll vaguely describe it to you and others and then shout "Yes, that's it!" when they see what you came up with or, even worse, whenever the needle starts to move. When they are replaced it will be with someone else from a similar background rather than from within. It's a really sad reality.
My whole career I've used tools that "will replace me" and every. single. time. all that has happened is that I have been forced to use it as yet another layer of abstraction so that someone else might use it once a year or whenever they get a wild feeling. It's really just about peace of mind. This has been true of every CMS experience I've ever made. It has nothing to do with being able to "do it themselves". It's about a) being able to blame someone else and b) being able to take it and go when that stops working without starting over.
Moreover, I have, on multiple occasions, watched a highly paid, highly effective individual be replaced with a low-skilled entry-level employee for no reason other than cost. I've also seen people hire someone just to increase headcount.
LLMs/AI have/has not magically made things people do not understand less scary. But what about freelancers, brave souls, and independent types? Well, these people don't employ other people. They live on the bleeding edge and will use anything that makes them successful.
This whole ‘what are we going to do’ I think is way out of proportion even if we do end up with agi.
Let’s say whatever the machines do better than humans, gets done by machines. Suddenly the bottleneck is going to shift to those things where humans are better. We’ll do that and the machines will try to replace that labor too. And then again, and again.
Throughout this process society becomes wealthier, TVs get cheaper, we colonize Mars, etc. The force that keeps this going is human insatiability: once we get these things we'll want whatever it is we don't have.
Maybe that’s the problem we should focus on solving…
> Throughout this process society becomes wealthier, TVs get cheaper, we colonize Mars, etc. The force that keeps this going is human insatiability: once we get these things we'll want whatever it is we don't have.
What makes you think the machines will both be smarter and better than us but also be our slaves to make human society better.
Is equine society better now than before they started working with humans?
(Personally I believe AGI is just hype and nobody knows how anyone could build it and we will never do, so I’m not worried about that facet of thinking machine tech.)
The machine doesn’t suffer if you ask it to do things 24/7. In that sense, they are not slaves.
As to why they’d do what we ask them to, the only reason they do anything is because some human made a request. In this long chain there will obv be machine to machine requests, but in the aggregate it’s like the economy right now but way more automated.
Whenever I see arguments about AI changing society, I just replace AI with ‘the market’ or ‘capitalism’. We’re just speeding up a process that started a while ago, maybe with the industrial revolution?
I’m not saying this isn’t bad in some ways, but it’s the kind of bad we’ve been struggling with for decades due to misaligned incentives (global warming, inequality, obesity, etc).
What I’m saying is that AI isn’t creating new problems. It’s just speeding up society.
But the hyper-specialized geek who has 4 kids and has to pay back the loan on his house (bought on the strength of his high salary) will have a hard time doing some gardening, let's say. And there are quite a few of those geeks. I don't know if we'll have enough gardens (owned by non-geeks!).
It's as if the cards get reshuffled: those in the upper socioeconomic class get thrown to the bottom. And that looks like a lost generation.
A compilation of claims, takes, narratives, shills, expectations and predictions from the late 90s "information superhighway" era.
I wonder if LLMs can produce this.
A lot of the dotcom exuberance was famously "correct, but off by 7 years." But... most of it was flat wrong. "Right but early" applies mostly to the meta investment case: "the internet business will be big."
One that stands out in my memory is "turning billion dollar industries into million dollar industries."
With ubiquitous networked computers, banking and financial services could become "mostly software." Banks and whatnot would all become hyper-efficient Vanguard-like companies.
We often start with an observation that economies are efficiency-seeking. Then we imagine the most efficient outcome given the legible constraints of technology, geography and whatnot. Then we imagine the dynamics and tensions in a world with that kind of efficiency.
This, incidentally, is also "historical materialism." Marx had a lot of awe for modern industry, the efficiency of capitalism and whatnot. Almost Adam Smith-like... at times.
Anyway... this never actually works out. The meta is a terrible predictor of where things will go.
Imagine law gets more efficient. Will we have more or fewer lawyers? It could go either way.
On this, read Daniel Susskind - A world without work (2020). He says exactly this: the new tasks created by AI can in good part themselves be done by AI, if not as soon as they appear then a few years of improvement later. This will inevitably affect the job market and the relative importance of capital and labor in the economy. Unchecked, this will worsen inequalities and create social unrest. His solution will not please everyone: Big State. Higher taxes and higher redistribution, in particular in the form of conditional basic income (he says universal isn't practically feasible, like what do you do with new migrants).
Characterizing government along only one axis, such as “big” versus “small”, can overlook important differences having to do with: legal authority, direct versus indirect programs, tax base, law enforcement, and more.
In the future, I could imagine some libertarians having their come-to-AI-Jesus moment and getting behind a smallish government that primarily collects taxes and transfers wealth while guiding (but not directly operating) a minimal set of services.
Every technology tends to replace many more jobs in a given role than ever existed before it, while inducing more demand on its precursors. If the only potential application of this were just language, the historic trend of humans simply filling new roles would hold true. But if we do the same with motor movements in a generalized form factor, that is really where the problem emerges. As companies drop more employees and move toward fully automated closed-loop production, their consumer market fails faster than they can reach zero cost.
Nonetheless, I do still believe humans will continue to be the more cost-efficient way to come up with and guide new ideas. Many human-performed services will remain desirable because of their virtue and our sense of emotion and taste for a moment that other humans are feeling too. But how much of the populace does that engage? I couldn't guess right now. Though if I were to imagine what might make things turn out better, it would be that AI is personally ownable, and that everyone owns, at least in title, some energy production which they can do things with.
> The future may reduce the economic prosperity and push humanity to switch to some different economic system (maybe a better system).
Humans never truly produce anything; they only generate various forms of waste (resulting from consumption). Human technology merely enables the extraction of natural resources across magnitudes, without actually creating any resources. Given its enormous energy consumption, I strongly doubt that AI will contribute to a better economic system.
> Humans never truly produce anything; they only generate various forms of waste
What a sad way of viewing huge fields of creative expressions. Surely, a person sitting on a chair in a room improvising a song with a guitar is producing something not considered "waste"?
But that's clearly not true for every technology. Photoshop, Blender and similar creative programs are "technology", and arguably they aren't as resource-intensive as the current generative AI hype, yet humans used those to create things I personally wouldn't consider "waste".
Economics is essentially the study of resource allocation. We will have resources that will need to be allocated. I really doubt that AI will somehow neutralize the economies of scale in various realms that make centralized manufacturing necessary, let alone economics in general.
I so wish this were true, but unfortunately economics has a catch-all called "externalities" for anything that doesn't fit neatly into its implicit assessments of what value is. Pollution is tricky, so we push it outside the boundaries of value estimation, along with any social nuance that we deem unquantifiable, and carry on as if everything is understood.
I don't think I agree. I think it's the same and there is great potential for totally new things to appear and for us to work on.
For example, one path may be: AI, Robotics, space travel all move forward in leaps and bounds.
Then there could be tons of work in creating material things, from people who didn't have the skills before, and physical goods get a huge boost. We travel through space and colonize new planets, dealing with new challenges and environments that we haven't dealt with before.
Another path: most people get rest and relaxation as the default life path, and the rest get to pursue their hobbies as much as they want since the AI and robots handle all the day to day.
AI is only different if it reaches a hard takeoff state and becomes self-aware, self-motivated, and self-improving. Until then it's an amazing productivity tool, but only that. And even then we're still decades away from the impact being fully realized in society. Same as the internet.
In fact the current trends suggest its impact hasn't fully played out yet. We're only just seeing the internet-native generation start to move into politics where communication and organisation has the biggest impact on society. It seems the power of traditional propaganda centres in the corporate media has been, if not broken, badly degraded by the internet too.
Do we not have any sense of wonder in the world anymore? Referring to a system which can pass the Turing test as a "amazing productivity tool" is like viewing human civilization as purely measured by GDP growth.
Probably because we have been promised what AI can do in science fiction since before we were born, and the reality of LLMs is so limited in comparison. Instead of Data from Star Trek we got a hopped up ELIZA.
I think so too - the latest AI changes mark the new "automate everything" era. When everything is automated, everything costs basically zero, as this will eliminate the most expensive part of every business - human labor. No one will make money from all the automated stuff, but no one would need the money anyway. This will create a society in which money is not the only value pursued. Instead of trying to chase papers, people would do what they are intended to - create art and celebrate life. And maybe fight each other for no reason.
I'm flying, ofc, this is just a weird theory I had in the back of my head for the past 20 years, and it seems like we're getting there.
Are humans meant to create art and celebrate life? That just seems like something people into automation tell people.
Really as a human I’ve physically evolved to move and think in a dynamic way. But automation has reduced the need for me to work and think.
Do you not know the earth is saturated with artists already? There's a whole class of people who consider themselves technically minded and not really artists. Will they just roll over and die?
"Everything basically costs zero" is a pipe dream in which there is no social order or economic system. Even in your basically-zero system there is a lot of cost being hand-waved away.
I think you need a rethink on your 20 year thought.
You are forgetting that there is actual scarcity built into the planet. We are already very far from being sustainable; we're eating into reserves that will never come back. There are only so many nice places to go on holiday. Only so much space to grow food, etc. Economics isn't about money, it's about scarcity.
It will only be zero as long as we don't allow rent seeking behaviour. If the technology has gatekeepers, if energy is not provided at a practically infinite capacity and if people don't wake themselves from the master/slave relationships we seem to so often desire and create, then I'm skeptical.
The latter one is probably the most intellectually interesting and potentially intractable...
I completely disagree with the idea that money is currently the only driver of human endeavour; frankly it's demonstrably not true, at least not in its direct use value. It may be used as a proxy for power, but even that isn't directly correlatable.
Looking at it intellectually through a Hegelian lens of the master/slave dialectic might provide some interesting insights. I think both sides are in some way usurped. The slave's position of actualisation through productive creation is taken via automation, but if that automation is also widely and freely available, the master's position of status via subjection is also made common and therefore without status.
What does it all mean in the long run? Damned if I know...
If we accept the possibility that AI is going to be more intelligent than humans the outcome is obvious. Humans will no longer be needed and either go extinct or maybe be kept by the AI as we now keep pets or zoo animals.
> That doesn't mean we _understand_ them, that just means we can put the blocks together to build one.
Perhaps this[0] will help in understanding them then:
Foundations of Large Language Models
This is a book about large language models. As indicated by
the title, it primarily focuses on foundational concepts
rather than comprehensive coverage of all cutting-edge
technologies. The book is structured into five main
chapters, each exploring a key area: pre-training,
generative models, prompting, alignment, and inference. It
is intended for college students, professionals, and
practitioners in natural language processing and related
fields, and can serve as a reference for anyone interested
in large language models.
> I think the real issue here is understanding _you_.
My apologies for being unclear and/or insufficiently explaining my position. Thank you for bringing this to my attention and giving me an opportunity to clarify.
The original post stated:
Since LLMs and in general deep models are poorly understood ...
To which I asserted:
This is demonstrably wrong.
And provided a link to what I thought to be an approachable tutorial regarding "How to Build Your Own Large Language Model", albeit a simple implementation as it is after all a tutorial.
The person having the account name "__float" replied to my post thusly:
That doesn't mean we _understand_ them, that just means we
can put the blocks together to build one.
To which I interpreted the noun "them" to be the acronym "LLM's." I then inferred said acronym to be "Large Language Models." Furthermore, I took __float's sentence fragment:
That doesn't mean we _understand_ them ...
As an opportunity to share a reputable resource which:
.. can serve as a reference for anyone interested in large
language models.
Is this a sufficient explanation regarding my previous posts such that you can now understand?
I'm telling you right now, man - keep talking like this to people and you're going to make zero friends. However good your intentions are, you come across as both condescending and overconfident.
And, for what it's worth - your position is clear, your evidence less-so. Deep learning is filled with mystery and if you don't realize that's what people are talking about when they say "we don't understand deep learning" - you're being deliberately obtuse.
edit to cindy (who was downvoted so much they can't be replied to):
Thanks, wasn't aware. FWIW, I appreciate the info but I'll probably go on misusing grammar in that fashion til I die, ha. In fact, I've probably already made some mistake you wouldn't be fond of _in this edit_.
In any case thanks for the facts. I perused your comment history a tad and will just say that hacker news is (so, so disappointingly) against women in so many ways. It really might be best to find a nicer community (and I hope that doesn't come across as me asking you to leave!)
Clear long term winners are energy producers. AI can replace everything including hardware design & production but it can not produce energy out of thin air.
> Moreover, companies could be less willing to pay for services that their internal AIs can handle or build from scratch.
Companies have to be a bit more farsighted than this thinking. Assuming LLMs reach this peak...if say, MS says they can save money because they don't need XYZ anymore because AI can do it, XYZ can decide they don't need Office anymore because AI can do it.
There's absolutely no moat anymore. Human capital and the sheer volume of code are the current moat. An all-capable AI completely eliminates both.
It's a bit scary to say "what then?" How do you make money in a world where everyone can more or less do everything themselves? Perhaps like 15 Million Merits, we all just live in pods and pedal bikes all day to power the AI(s).
Isn't this exactly the goals of open source software? In an ideal open source world, anything and everything is freely available, you can host and set up anything and everything on your own.
Software is now free, and all people care about is the hardware and the electricity bills.
This is why I’m not so sure we’re all going to end up in breadlines even if we all lose our jobs, if the systems are that good (tm) then won’t we all just be doing amazing things all the time. We will be tired of winning ?
> won’t we all just be doing amazing things all the time. We will be tired of winning ?
There's a future where we won't be because to do the amazing things (tm), we need resources that are beyond what the average company can muster.
That is to say, what if the large companies become so magnificently efficient and productive that it renders the rest of the small companies pointless? What if there are no gaps in the software market anymore because they will be automatically detected and solved by the system?
Well this is a pseudo-smart article if I’ve ever seen one.
“It was not even clear that we were so near to create machines that could understand the human language, write programs, and find bugs in a complex code base”
The author is critical of the professionals in AI saying “ even the most prominent experts in the field failed miserably again and again to modulate the expectations” yet without a care sets the expectation of LLMs understanding human language in the first paragraph.
Also it’s a lot of if this then that, the summary of it would be: if AI can continue to grow it might become all encompassing.
To me it reads like a baseless article written by someone too blinded by their love for AI to see what a good blog post is, but not yet blinded enough to claim 'AGI is right around the corner'. Pretty baseless, but safe enough to have it rest on conditionals.
The right way to think about "jobs" is that we could have given ourselves more leisure on the basis of previous technological progress than we actually did.
> AI systems continue to impress with their ability to replicate certain human skills. Even if imperfect, such systems were a few years ago science fiction.
In which science fiction were the dreamt up robots as bad?
Humans have a proven history of re-inventing economic systems, so if AI ends up thinking better than we do (yet unproven this is possible), then we should have superior future systems.
But the question is: a system optimized for what? One that emphasizes huge rewards for the few and requires the poverty of some (or many)? Or a fairer system? Not different from the challenges of today.
I'm skeptical that even a very intelligent machine will change the landscape of our difficult decisions, but it will accelerate whichever direction we decide (or is decided for us) to go.
We used to have deterministic systems that required humans either through code, terminals or interfaces (ex GUI's) to change what they were capable of.
If we wanted to change something about the system we would have to create that new skill ourselves.
Now we have non-deterministic systems that can be used to create deterministic systems that can use non-deterministic systems to create more deterministic systems.
In other words deterministic systems can use LLMs and LLMs can use deterministic systems all via natural language.
This slight change in how we can use compute has incredible consequences for what we will be able to accomplish, both in cleaning up old systems and in creating completely new ones.
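To make the two directions concrete, here is a minimal Python sketch. Everything in it is hypothetical: call_llm stands in for whatever model API you use (it is not any particular vendor's SDK), and the repair-shop parser and invoice_total tool are made-up examples of deterministic code sitting on either side of the model.

    import json

    def call_llm(prompt: str) -> str:
        """Hypothetical placeholder: send the prompt to some language model, return its text reply."""
        raise NotImplementedError("wire this to your model provider of choice")

    # Direction 1: a deterministic system using an LLM.
    # Ordinary code asks the model to turn free-form text into structured
    # data, then continues deterministically from there.
    def parse_repair_order(free_text: str) -> dict:
        reply = call_llm(
            "Extract JSON with keys 'customer', 'vehicle', 'issue' from this "
            f"repair-shop note. Reply with JSON only.\n\n{free_text}"
        )
        return json.loads(reply)

    # Direction 2: an LLM using a deterministic system.
    # The model never computes the invoice itself; it only picks which
    # exact, testable function to call and with what arguments.
    TOOLS = {
        "invoice_total": lambda hours, rate, parts: hours * rate + parts,
    }

    def answer_with_tools(question: str) -> str:
        plan = json.loads(call_llm(
            "You may call one tool: invoice_total(hours, rate, parts). "
            'Reply with JSON like {"tool": "...", "args": [...]} for: ' + question
        ))
        result = TOOLS[plan["tool"]](*plan["args"])
        return call_llm(f"The tool returned {result}. Answer the user's question: {question}")

The point is just the shape: natural language is the interface in both directions, while the deterministic pieces remain the parts you can actually test.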
LLMs however will always be limited by exploring existing knowledge. They will not be able to create new knowledge. And so the AI winter we are entering is different because it's only limited to what we can train the AI to do, and that is limited to what new knowledge we can create.
Anyone who works with AI every day knows that any idea of autonomous agents is so far beyond the capabilities of LLMs, even in principle, that any worry about doom or unemployment by AI is absurd.
I'll happily believe it the day something doesn't adhere to the Gartner hype cycle; until then it is just another bubble like dotcom, chatbots, crypto and the 456345646 things that came before it.
Unpopular opinion: Let us say AI achieves general intelligence levels. We tend to think of current economy, jobs, research as a closed system, but indeed it is a very open system.
Humans want to go to space, start living on other planets, travel beyond solar system, figure out how to live longer and so on. The list is endless. Without AI, these things would take a very long time. I believe AI will accelerate all these things.
Humans are always ambitious. That ambition will push us to use AI beyond its current capabilities. The AI will get better at these new things and the cycle repeats. There's so much humans know, and so much more that we don't know.
I'm less worried about general intelligence. Rather, I'm more worried about how humans are going to govern themselves. That's going to decide whether we will do great things or end humanity. Over the last 100 years, we have started thinking more about "how" to do something rather than "why", because the "how" keeps getting easier. Today it's much easier, and tomorrow it will be easier still. So nobody has the time to ask "why" we are doing something, just "how" to do it. With AI I can do more. That means everyone can do more. That means governments can do so much more: large-scale things in a short period. If those things are wrong or have irreversible consequences, we are screwed.
I don't think this article really says anything that hasn't already been said for the past two years: "if AI actually takes jobs, it will be a near-apocalyptic system shock if there aren't new jobs to replace them". I still think it's at best too soon to say whether jobs have permanently been lost.
They are tremendous tools, but it seems like they create a near equal amount of work from the stuff they save time on.
Like any other technology, at the end of the day LLMs are used by humans for humans' selfish, short-sighted goals - driven by mental issues, trauma, and overcompensation, maybe even paved with good intentions but leading you know where. If we were to believe that LLMs are going to somehow become extremely powerful, then we should be concerned, as it is difficult to imagine how that can lead to an optimal outcome organically.
From the beginning, corporations and their collaborators at the forefront of this technology tainted it by ignoring the concept of intellectual property ownership (which had been with us in many forms for hundreds if not thousands of years) in the name of personal short-term gain and shareholder interest or some “the ends justify the means” utilitarian calculus.
I agree with the general observation, and I've been of this mind since 2023 (if AI really gets as good as the boosters claim, we will need a new economic system). I usually like Antirez's writing, but this post was a whole lot of...idk, nothing? I don't feel like this post said anything interesting, and it was kind of incoherent at moments. I think in some respects it's a function of the technology and situation we're in: the current wave of "AI" is still a lot of empty promises and underdelivery. Yes, it is getting better, and yes, people are getting clever by letting LLMs use tools, but these things still aren't intelligent insofar as they do not reason. Until we achieve that, I'm not sure there's really as much to fear as everyone thinks.
We still need humans in the loop as of now. These tools are still very far from being good enough to fully autonomously manage each other and manage systems, and, arguably, because the systems we build are for humans we will always need humans to understand them to some extent. LLMs can replace labor, but they cannot replace human intent and teleology. One day maybe they will achieve intentions of their own, but that is an entirely different ballgame. The economy ultimately is a battle of intentions, resources, and ends. And the human beings will still be a part of this picture until all labor can be fully automated across the entire suite of human needs.
We should also bear in mind our own bias as "knowledge workers". Manual laborers arguably already had their analogous moment, and the economy kept on humming. There isn't anything particularly special about "white collar" work in that regard. The same thing may happen: a new industry requiring new skills might emerge in the fallout of white-collar automation. Not to mention, LLMs only work in the digital realm; handicraft artisanry is still a thing and is still appreciated, albeit in much smaller markets.
Honestly the long-term consequences of Baumol's disease scare me more than some AI driven job disruption dystopia.
If we want to continue on the path of increased human development we desperately need to lift the productivity of a whole bunch of labor intensive sectors.
We're going to need to seriously think about how to redistribute the gains, but that's an issue regardless of AI (things like effective tax policy).
> However, if AI avoids plateauing long enough to become significantly more useful and independent of humans, this revolution is going to be very unlike the past ones. Yet the economic markets are reacting as if they were governed by stochastic parrots.
Aren't the markets massively puffed up by AI companies at the moment?
edit: for example, the S&P500's performance with and without the top 10 (which is almost totally tech companies) looks very different: https://i.imgur.com/IurjaaR.jpeg
Salvatore is right about the fact that we have not seen the full story yet, LLMs are stalling/plateauing but active research is already ongoing to find different architectures and models.
And I think the effort here can be compared in scale to the Manhattan or Apollo projects, but there is also the potential for a huge backlash to the hype that was built up and created what is arguably a bubble, so this is a race against the clock.
I also think he is wrong about the markets reaction, markets are inherently good integrators and bad predictors, we should not expect to learn anything about the future by looking at stocks movements.
Manhattan and Apollo were both massive engineering efforts; but fundamentally we understood the science behind them. As long as we would be able to solve some fairly clearly stated engineering problems and spend enough money to actual build the solutions, those projects would work.
A priori, it was not obvious that those clearly stated problems had solutions within our grasp (see fusion) but at least we knew what the big picture looks like.
With AI, we don't have that, and never really had it. We've just been gradually making incremental improvements to AI itself, and exponential improvements in the amount of raw compute we can throw at it. We know that we are reaching fundamental limits on transistor density, so compute power will plateau unless we find a different paradigm for improvement; and those paradigms are all currently in the same position as fusion in terms of engineering.
LLMs are just the latest in a very long line of disparate attempts at making AI, and are arguably the most successful.
That doesn't mean the approach isn't an evolutionary dead end, like every other so far, in the search for AGI. In fact, I suspect that is the most likely case.
Current GenAI is nothing but a proof of concept. The seed is there. What AI can do at the moment is irrelevant. This is like the discovery of DNA. It changed absolutely everything in biology.
The fact that something simple like the Transformer architecture can do so much will spark so many ideas (and investment!) that it's hard to imagine that AGI will not happen eventually.
> Salvatore is right about the fact that we have not seen the full story yet, LLMs are stalling/plateauing but active research is already ongoing to find different architectures and models.
They will need to be so different that any talk implying current LLMs eventually replaced humans will be like saying trees eventually replaced horses because the first cars were wooden.
> And I think the effort here can be compared in scale to the Manhattan or Apollo projects, but there is also the potential for a huge backlash to the hype that was built up and created what is arguably a bubble, so this is a race against the clock.
It's not useful to blindly compare scale. We're not approaching AI like the Manhattan or Apollo projects, we're approaching this like we did crypto, and ads, and other tech.
That's not to say nothing useful will come out of it, I think very amazing things will come out of it and already have... but none of them will resemble mass replacement of skilled workers.
We're already so focused on productization and typical tech distractions that this is nothing like those efforts.
(In fact thinking a bit more, I'd say this is like the Space Shuttle. We didn't try to make the best spacecraft for scientific exploration and hope later on it'd be profitable in other ways... instead we immediately saddled it with serving what the Air Force/DoD wanted and ended up doing everything worse.)
> I also think he is wrong about the markets reaction, markets are inherently good integrators and bad predictors, we should not expect to learn anything about the future by looking at stocks movements.
I agree, so it's wrong about over half of the punchline too.
I kind of want to put up a wall of fame/shame of these people to be honest.
Whether they turn out right or wrong, they undoubtedly cheered on the prospect of millions of people suffering just so they can sound good at the family dinner.
LLMs are limited because we want them to do jobs that are not clearly defined / have difficult to measure progress or success metrics / are not fully solved problems (open ended) / have poor grounding in an external reality. Robotics does not suffer from those maladies. There are other hurdles, but none are intractable.
I think we might see AI being much, much more effective with embodiment.
What? Robotics will have far more ambiguity and nuance to deal with than language models, and robots will have to analyze realtime audio and video to handle it. Jobs are not as clearly defined in the real world as you imagine. For example, explain to me what a plumber does, precisely, and how you would train a robot to do it. How do you train it to navigate any type of building's internal plumbing and safely make repairs or installations?
Innovation in terms of helping devs do cool things has been insane.
There've been next to no advancements relative to what's needed to redefine our economic systems by replacing the majority of skilled workers.
-
Productionizing test-time compute covers 80% of what we've gotten in the last 6-8 months; advancements in distillation and quantization cover most of the other 20%... neither unlocks some path to mass unemployment.
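(Side note, since quantization gets mentioned a lot without explanation: here is a toy Python sketch of what post-training weight quantization means. The array shape and the single per-tensor scale are made up purely for illustration; real schemes are calibrated, per-channel, and so on.)

    import numpy as np

    # Toy post-training quantization: store weights as int8 plus one scale,
    # trading a little precision for ~4x less memory and cheaper inference.
    weights = np.random.randn(4, 4).astype(np.float32)  # stand-in for model weights

    scale = np.abs(weights).max() / 127.0                # one scale for the whole tensor
    q = np.round(weights / scale).astype(np.int8)        # 8-bit integer weights
    dequant = q.astype(np.float32) * scale               # approximate reconstruction

    print("max abs error:", float(np.abs(weights - dequant).max()))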
What we're doing is like 10x'ing your vertical leap when your goal is to land on the moon: 10x is very impressive and you're going to dominate some stuff in ways no one ever thought possible.
But you can 100x it and it's still not getting you to the moon.
I think GPT-5's backlash was the beginning of the end of the hype bubble, but there's a lot of air to let out of it, as with any hype bubble. We'll see it for quite some time yet.
GenAI is a bubble, but that’s not the same as the broader field of AI, which is completely different. We will probably not even be using chat bots in a few years, better interfaces will be developed with real intelligence, not just predictive statistics.
For every industrial revolution (and we don't even know if AI is one yet) this kind of doom prediction has been around. AI will obviously create a lot of jobs too: the infra to run AI will not build itself, the people who train models will still be needed, and the AI supervisors or managers or whatever we call them will be a necessary part of the new workflows. And if your job needs hands, you will be largely unaffected, as there is no near future where robots will replace the flexibility of what most humans can do.
I think there is an unspoken implication built into the assumption that AI will be able to replace a wide variety of existing jobs, and that is that those current jobs are not being done efficiently. This is sometimes articulated as bullshit jobs, etc., and if AI takes over those, the immediate next thing that will happen is that AI will look around and ask why _anyone_ was doing that job in the first place. The answer was articulated 70 years ago in [0].
The only question is how much fat there is to trim as the middle management is wiped out because the algorithms have determined that they are completely useless and mostly only increase cost over time.
Now, all the AI companies think that they are going to be deriving revenue from that fat, but those revenue streams are going to disappear entirely, because a huge number of purely political positions inside corporations will vanish; if they do not, the corporation will go bankrupt competing with other companies that have already cut the fat. There won't be additional revenue streams that get spent on the bullshit. The good news is that labor can go somewhere else, and we will need it due to a shrinking global population, but the cushy bullshit management job is likely to disappear.
At some point AI agents will cease to be sycophantic and when fed the priors for the current situation that a company is in will simply tell it like it is, and might even be smart enough to get the executives to achieve the goal they actually stated instead of simply puffing up their internal political position, which might include a rather surprising set of actions that could even lead to the executive being fired if the AI determines that they are getting in the way of the goal [1].
People thought it was the end of history and innovation would be all about funding elaborate financial schemes; but now with AI people are finding themselves running all these elaborate money-printing machines and they're unsure if they should keep focusing on those schemes as before or actually try to automate stuff. The risk barrier has been lowered a lot to actually innovate, almost as low risk as doing a scheme but still people are having doubts. Maybe because people don't trust the system to reward real innovation.
LLMs feel like a fluke, like OpenAI was not intended to succeed... And even now that it succeeded and they try to turn the non-profit into a for-profit, it kind of feels like they don't even fully believe their own product in terms of its economic capacity and they're still trying to sell the hype as if to pump and dump it.
By all means, continue to make or improve your Llamas/Geminis (to the latter: stop censoring Literally Everything. Google has a culture problem. To the former... I don't much trust your parent company in general)
It will undoubtedly lead to great advances
But for the love of god do not tightly bind them to your products (Kagi does it alright, they don't force it on you). Do not make your search results worse. Do NOT put AI in charge of automatic content moderation with 0 human oversight (we know you want to. The economics of it work out nicely for you, with no accountability). People already as is get banned far too easily by your automated systems
"Undoubtedly" seems like a level of confidence that is unjustified. Like Travis Kalanick thinking AI is just about to help him discover new physics, this seems to suggest that AI will go from being able to do (at best) what we can already do if we were simply more diligent at our tasks to being something genuinely more than "just" us
> "However, if AI avoids plateauing long enough to become significantly more useful..."
As William Gibson said, "The future is already here, it's just not evenly distributed." Even if LLMs, reasoning algorithms, object recognition, and diffusion models stopped improving today, we're still at a point where massive societal changes are inevitable as the tech spreads out across industries. AI is going to steadily replace chair-to-keyboard interfaces in just about every business you can imagine.
Interestingly, AI seems to be affecting the highest level "white collar" professionals first, rather than replacing the lowest level workers immediately, like what happened when blue collar work was automated. We're still pretty far away from AI truck drivers, but people with fine arts or computer science degrees, for example, are already feeling the impact.
"Decimation" is definitely an accurate way to describe what's in the process of happening. What used to take 10 floors of white collar employees will steadily decline to just 1. No idea what everyone else will be doing.
We currently work more than we ever have. Just a couple of generations ago it was common for a couple to consist of one person who worked for someone else or the public, and one who worked at home for themselves. Now we pretty much all have to work for someone else full time then work for ourselves in the evening. And that won't make you rich, it will just make you normal.
Maybe a "loss of jobs" is what we need so we can go back working for ourselves, cooking our own food, maintaining our own houses etc.
This is why I doubt it will happen. I think "AI" will just end up making us work even more for even less.
At the moment I just don't see AI in its current state or future trajectory as a threat to jobs. (Not that there can't be other reasons why jobs are getting harder to get). Predictions are hard, and breakthroughs can happen, so this is just my opinion. Posting this comment as a record to myself on how I feel of AI - since my opinion on how useful/capable AI is has gone up and down and up and down again over the last couple of years.
Most recently down because I worked on two separate projects over the last few weeks with the latest models available on GitHub Copilot Pro (GPT-5, Claude Sonnet 4, Gemini 2.5 Pro, and some less capable ones at times as well). For a majority of the queries, I tried the exact same code-change requests across all three main models. I saw myself using Claude most, but it still wasn't drastically better than the others, and it still made too many mistakes.
One project was a simple health-tracking app in Dart/Flutter. Completely vibe-coded, just for fun. I got basic stuff working. Over the days I kept finding bugs as I started using it. Since I truly wanted to use this app in my daily life, at one point I just gave up because fixing the bugs was getting way too annoying. Most "fixes", as I later got into the weeds of it, were wrong, built on wrong assumptions, changes that seemed to fix the problem at the surface but introduced more bugs and random garbage, despite my giving a ton of context and instructions on why things are supposed to be a certain way, etc. I was constantly fighting with the model. It would've been much easier to do much more on my own and use it a little bit.
Another project was in TypeScript, where I did actually use my brain, not just vibe-coded. Here, AI models were helpful because I mostly used them to explain stuff. And did not let them make more than a few lines of code changes at most at a time. There was a portion of the project which I kinda "isolated" which I completely vibe-coded and I don't mind if it breaks or anything as it is not critical. It did save me some time but I certainly could've done it on my own with a little more time, while having code that I can understand fully well and edit.
So the way I see using these models right now is for research/prototyping/throwaway kind of stuff. But even in that, I literally had Claude 4 teach me something wrong about TypeScript just yesterday. It told me a certain thing was deprecated. I made a follow up question on why that thing is deprecated and what's used instead, it replied with something like "Oops! I misspoke, that is not actually true, that thing is still being used and not deprecated." Like, what? Lmao. For how many things have I not asked a follow up and learnt stuff incorrectly? Or asked and still learnt incorrectly lmao.
I like how straightforward GPT-5 is. But apart from that style of speech I don't see much other benefit. I do love LLMs for personal random searches like facts/plans/etc. I just ask the LLM to suggest me what to do just to rubber duck or whatever. Do all these gains add up towards massive job displacement? I don't know. Maybe. If it is saving 10% time for me and everyone else, I guess we do need 10% less people to do the same work? But is the amount of work we can get paid for fixed and finite? Idk. We (individuals) might have to adapt and be more competitive than before depending on our jobs and how they're affected, but is it a fundamental shift? Are these models or their future capabilities human replacements? Idk. At the moment, I think they're useful but overhyped. Time will tell though.
This same link was submitted 2 days ago. My comment there still applies.
LLMs do not "understand the human language, write programs, and find bugs in a complex code base"
"LLMs are language models, and their superpower is fluency. It’s this fluency that hacks our brains, trapping us into seeing them as something they aren’t."
There’s a simple flaw in this reasoning:
Just because X can be replaced by Y today doesn’t imply that it can do so in a Future where we are aware of Y, and factor it into the background assumptions about the task.
In more concrete terms: if “not being powered by AI” becomes a competitive advantage, then AI won’t be meaningfully replacing anything in that market.
You can already see this with YouTube: AI-generated videos are a mild amusement, not a replacement for video creators, because made by AI is becoming a negative label in a world where the presence of AI video is widely known.
Of course this doesn’t apply to every job, and indeed many jobs have already been “replaced” by AI. But any analysis which isn’t reflectively factoring in the reception of AI into the background is too simplistic.
Just to further elaborate on this with another example: the writing industry. (Technical, professional, marketing, etc. writing - not books.)
The default logic is that AI will just replace all writing tasks, and writers will go extinct.
What actually seems to be happening, however, is this:
- obviously written-by-AI copywriting is perceived very negatively by the market
- companies want writers that understand how to use AI tools to enhance productivity, but understand how to modify copy so that it doesn’t read as AI-written
- the meta-skill of knowing what to write in the first place becomes more valuable, because the AI is only going to give you a boilerplate plan at best
And so the only jobs that seem to have been replaced by AI directly, as of now, are the ones writing basically forgettable content, report-style tracking content, and other low level things. Not great for the jobs lost, but also not a death sentence for the entire profession of writing.
To understand why this is too optimistic, you have to look at things where AI is already almost human-level. Translations are more and more done exclusively with AI or with a massive AI help (with the effect of destroying many jobs anyway) at this point. Now ebook reading is switching to AI. Book and music album covers are often done with AI (even if this is most of the times NOT advertised), and so forth. If AI progresses more in a short timeframe (the big "if" in my blog post), we will see a lot of things done exclusively (and even better 90% of the times, since most humans doing a given work are not excellence in what they do) by AI. This will be fine if governments immediately react and the system changes. Otherwise there will be a lot of people to feed without a job.
I can buy the idea that simple specific tasks like translation will be dramatically cut down by AI.
But even then – any serious legal situation (like a contract) is going to want a human in the loop to verify that the translation is actually correct. This will require actual translator skills.
AI art seems to basically only be viable when it can’t be identified as AI art. Which might not matter if the intention is to replace cheap graphic design work. But it’s certainly nowhere near developed enough to create anything more sophisticated, sophisticated enough to both read as human-made and have the imperfect artifacts of a human creator. A lot of the modern arts are also personality-driven, where the identity and publicity of the artist is a key part of their reception. There are relatively few totally anonymous artists.
Beyond these very specific examples, however, I don’t think it follows that all or most jobs are going to be replaced by an AI, for the reasons I already stated. You have to factor in the sociopolitical effects of technology on its adoption and spread, not merely the technical ones.
> Book and music album covers are often done with AI
These suck. Things made with AI just suck big time. Not only are they stupid, but they bring negative value to your product.
I cannot think of a single purely AI-made video, song, or any other form of art that is any good.
All AI has done is falsely convince ppl that they can now create things that they had no skills to do before AI.
Of course, your opinion may be subject to selection bias (i.e., you are only judging the art that you became aware was AI generated).
>You can already see this with YouTube: AI-generated videos are a mild amusement, not a replacement for video creators, because made by AI is becoming a negative label in a world where the presence of AI video is widely known.
But that's because, at present, AI generated video isn't very good. Consider the history of CGI. In the 1990s and early 2000s, it was common to complain about how the move away from practical sets in favor of CGI was making movies worse. And it was! You had backgrounds and monsters that looked like they escaped from a video game. But that complaint has pretty much died out these days as the tech got better (although Nolan's Oppenheimer did weirdly hype the fact that its simulated Trinity blast was done by practical effects).
Ironically, while the non-CGI SFX in e.g. Interstellar looked amazing, that sad fizzle of a practical explosion in Oppenheimer did not do the real thing justice and would've been better served by proper CGI VFX.
CGI is a good analogy because I think AI and creators will probably go in the same direction:
You can make a compelling argument that CGI operators outcompeted practical effects operators. But CGI didn’t somehow replace the need for a filmmaker, scriptwriter, cinematographers, etc. entirely – it just changed the skillset.
AI will probably be the same thing. It’s not going to replace the actual job of YouTuber in a meaningful sense; but it might redefine that job to include being proficient at AI tools that improve the process.
That's a Nolan thing like how Dunkirk used no green screen.
I think Harry Potter and Lord of the Rings embody the transition from old-school camera tricks to CGI: they leaned very heavily on set and prop design and, as a result, have aged very gracefully as movies.
Do you get the feeling that AI generated content is lacking something that can be incrementally improved on?
Seems to me that it's already quite good in any dimension that it knows how to improve on (e.g. photorealism) and completely devoid of the other things we'd want from it (e.g. meaning).
Yeah if you look at many of the top content creators, their appeal often has very little to do with production value, and is deliberately low tech and informal.
I guess AI tools can eventually become more human-like in terms of demeanor, mood, facial expressions, personality, etc., but that is a long, long way beyond just producing photorealistic video.
That’s the fundamental issue with most “analysis”, and most discussions really, on HN.
Since the vast vast majority of writers and commentators are not literal geniuses… they can’t reliably produce high quality synthetic analysis, outside of very narrow niches.
Even though, for most comment chains on HN to make sense, readers certainly have to pretend that some meaningful analysis was produced rather than mere happenstance.
Partly because quality is measured relative to the average, and partly because the world really is getting more complex.
In every technology wave so far, we've disrupted many existing jobs. However we've also opened up new kinds of jobs. And, because it is easier to retrain humans than build machines for those jobs, we wound up with more and better jobs.
This is the first technology wave that doesn't just displace humans, but which can be trained to the new job opportunities more easily than humans can. Right now it can't replace humans for a lot of important things. But as its capabilities improve, what do displaced humans transition to?
I don't think that we have a good answer to that. And we may need it sooner rather than later. I'd be more optimistic if I trusted our leadership more. But wise political leadership is not exactly a strong point for our country right now.
> but which can be trained to the new job opportunities more easily than humans can
What makes you think that? Self-driving cars have had untold billions of dollars in research and decades of applied testing, iteration, active monitoring, etc., and they still have a very long tail of unaddressed issues. They've been known to ignore police traffic redirections, they've run right through construction barriers, and recently they were burnt to a crisp in the LA riots, completely ignorant of the turmoil going on around them. A human driver is still far more adaptive and requires a lot less training than AI, and humans are ready to handle the infinitely long tail of exceptions to the otherwise algorithmic task of driving, which follows strict rules.
And when you talk about applying this same tech, so confidently, to domains far more nuanced and complex than driving, with even less training data to go off of, I find myself firmly in the skeptics' camp: you will struggle even harder to apply humanoid robotics in uncontrolled environments across a diverse range of tasks without human intervention, piloting, maintenance, or management.
Unemployment is still near all time lows, this will persist for sometime as we have a structural demographic problem with massive amounts of retirees and less children to support the population "pyramid" (which is looking more like a tapering rectangle these days).
A few months ago I saw one driverless car maybe every three days. Now I see roughly 3-5 every day.
I get that it’s taken a long time and a lot of hype that hasn’t panned out. But once the tech works and it’s just about juicing the scale then things shift rapidly.
Even if you think “oh that’s the next generation’s problem” if there is a chance you’re wrong, or if you want to be kind to the next generation: now is the time to start thinking and planning for those problems.
I think the most sensible answer would be something like UBI. But I also think the most sensible answer for climate change is a carbon tax. Just because something is sensible doesn't mean it's politically viable.
I guess you live in a place with perfect weather year round? I don't, and I haven't seen a robotaxi in my entire life. I do have access to a Tesla, though, and its current self-driving capabilities are not even close to anything I would call "autonomous" under real-world conditions (including weather).
Maybe the tech will at some point be good enough. At the current rate of improvement that will still take decades at least. Which is sad, because I personally hoped that my kids would never have to get a driver's license.
Our next vehicle sensor suite will be able to handle winter weather (https://waymo.com/blog/2024/08/meet-the-6th-generation-waymo...).
I'll believe it when I see it.
That’s one of the interesting things about innovation, you have to believe that things are possible before they have been done.
I've ridden just under 1,000 miles in autonomous (no scare quotes) Waymos, so it's strange to see someone letting Tesla's abject failure inform their opinions on how much progress AVs have made.
Tesla that got fired as a customer by Mobileye for abusing their L2 tech is your yardstick?
Anyways, Waymo's DC launch is next year, I wonder what the new goalpost will be.
Tesla uses only cameras, which sounds crazy (reflections, direct sunlight, fog, smoke, etc.).
LiDAR and radar assistance feel crucial.
https://fortune.com/2025/08/15/waymo-srikanth-thirumalai-int...
Indeed. Mark Rober did some field tests on that exact difference. LiDAR passed all of them, while Tesla’s camera-only approach failed half.
https://www.youtube.com/watch?v=IQJL3htsDyQ
The nice thing about LiDAR is that you can use it to train a model to simulate a LiDAR based on camera inputs only. And of course to verify how good that model is.
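Rough sketch of what that could look like in practice (illustrative only: the tiny model, tensor shapes, and random stand-in data below are mine, not anyone's actual pipeline). The idea is to treat LiDAR depth projected into the image as sparse ground truth and train a camera-to-depth network against it, masking out pixels the LiDAR didn't hit; verifying the model is then just the same masked comparison on held-out frames.

    # Minimal sketch: train a camera -> depth model supervised by (sparse) LiDAR depth.
    # Assumes PyTorch; the images, depths, and tiny CNN are placeholders.
    import torch
    import torch.nn as nn

    class TinyDepthNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 3, padding=1),   # one-channel predicted depth
            )
        def forward(self, x):
            return self.net(x)

    model = TinyDepthNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)

    for step in range(20):
        # Stand-ins for a real dataloader: camera frames + LiDAR depth projected into the image.
        images = torch.rand(4, 3, 96, 160)            # batch of RGB frames
        lidar_depth = torch.rand(4, 1, 96, 160) * 80  # metres; dense here for simplicity
        valid = lidar_depth > 0                       # mask: LiDAR only covers some pixels

        pred = model(images)
        loss = torch.abs(pred - lidar_depth)[valid].mean()  # L1 error on LiDAR-covered pixels

        opt.zero_grad()
        loss.backward()
        opt.step()

    # "Verify how good that model is": compute the same masked error on held-out LiDAR frames.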
Humans use only cameras. And humans don't even have true 360 coverage on those cameras.
The bottleneck for self-driving technology isn't sensors - it's AI. Building a car that collects enough sensory data to enable self-driving is easy. Building a car AI that actually drives well in a diverse range of conditions is hard.
That's actually categorically false. We also use sophisticated hearing, a well developed sense of inertia and movement, air pressure, impact, etc. And we can swivel our heads to increase our coverage of vision to near 360°, while using very dependable and simple technology like mirrors to cover the rest. Add to that that our vision is inherently 3D and we sport a quite impressive sensor suite ;-). My guess is that the fidelity and range of the sensors on a Tesla can't hold a candle to the average human driver. No idea how LIDAR changes this picture, but it sure is better than vision only.
I think there is a good chance that what we currently call "AI" is fundamentally not technologically capable of human levels of driving in diverse conditions. It can support and it can take responsibility in certain controlled (or very well known) environments, but we'll need fundamentally new technology to make the jump.
Yes, human vision is so bad it has to rely on a swivel joint and a set of mirrors just to approximate 360 coverage.
Modern cars can have 360 vision at all times, as a default. With multiple overlapping camera FoVs. Which is exactly what humans use to get near field 3D vision. And far field 3D vision?
The depth-discrimination ability of binocular vision falls off with distance squared. At far ranges, humans no longer see enough difference between the two images to get a reliable depth estimate. Notably, cars can space their cameras apart much further, so their far range binocular perception can fare better.
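Back-of-the-envelope on that claim, with assumed numbers (a ~6.5 cm eye baseline vs. cameras spaced, say, 1.5 m apart, a ~1000 px focal length, half a pixel of disparity error): depth error from a fixed disparity error grows roughly as Z^2 / (f * B), so a wider baseline B buys proportionally better far-field depth.

    # Stereo depth error scales roughly as dZ ~ Z^2 * d_disp / (f * B)
    # (disparity d = f*B/Z, so a fixed disparity error hurts more at long range).
    # All numbers below are assumed, illustrative values.
    f_px, d_disp = 1000.0, 0.5                      # focal length (px), disparity error (px)
    for B, label in [(0.065, "human eyes"), (1.5, "car cameras")]:   # baselines in metres
        for Z in (10.0, 50.0, 100.0):               # distances in metres
            dZ = (Z ** 2) * d_disp / (f_px * B)
            print(f"{label:12s} Z={Z:5.0f} m -> depth error ~ {dZ:6.1f} m")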
How do humans get that "3D" at far distances then? The answer is, like it usually is when it comes to perception, postprocessing. Human brain estimates depth based on the features it sees. Not unlike an AI that was trained to predict depth maps from a single 2D image.
If you think that perceiving "inertia and movement" is vital, then you'd be surprised to learn that an IMU that beats a human on that can be found in an average smartphone. It's not even worth mentioning - even non-self-driving cars have that for GPS dead reckoning.
A fine argument in principle, but even if we talk only about vision, the human visual system is much more powerful than a camera.
Between brightly sunlit snow and a starlit night, we can cover more than 45 stops with the same pair of eyeballs; the very best cinematographic cameras reach something like 16.
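In linear terms (taking one stop as a doubling of luminance, so n stops span a 2^n contrast ratio):

    # One photographic stop = a factor of 2 in luminance, so n stops span a 2^n ratio.
    for stops in (16, 45):
        print(f"{stops} stops covers a {2 ** stops:,} : 1 contrast ratio")
    # 16 stops -> 65,536 : 1;  45 stops -> 35,184,372,088,832 : 1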
In a way it's not a fair comparison, since we're taking into account retinal adaptation, eyelids/eyelashes, pupil constriction. But that's the point - human vision does not use cameras.
Humans are notoriously bad at driving, especially in poor weather. There are more than 6 million accidents annually in the US, which is >16k a day.
Most are minor, but even so - beating that shouldn't be a high bar.
There is no good reason not to use LIDAR with other sensing technologies, because cameras-only just makes the job harder.
Self-driving cars beat humans on safety already. This holds for Waymos and Teslas both.
They get into fewer accidents, mile for mile and road type for road type, and the ones they do get into trend towards less severe. Why?
Because self-driving cars don't drink and drive.
This is the critical safety edge a machine holds over a human. A top-tier human driver in top shape outperforms this generation of car AIs. But a car AI outperforms the bottom-of-the-barrel human driver - the driver who might be tired, distracted, and under the influence.
Do you have independent studies to back up your assertion that they are safer per distance than a human driver?
> Humans use only cameras.
Not true. Humans also interpret the environment in 3D space. See a Tesla fail against a Wile E. Coyote-inspired mural which humans perceive:
https://youtu.be/IQJL3htsDyQ?t=14m34s
This video proves nothing other than "a YouTuber found a funny viral video idea".
Teslas "interpret the environment in 3D space" too - by feeding all the sensor data into a massive ML sensor fusion pipeline, and then fusing that data across time too.
This is where the visualizers, both the default user screen one and the "Terminator" debugging visualizer, get their data from. They show plain and clear that the car operates in a 3D environment.
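For anyone unfamiliar, "fusing that data across time" usually means some flavor of recursive state estimation. A toy 1-D constant-velocity Kalman filter (purely illustrative, nothing like any vendor's actual stack) shows the basic idea of combining noisy per-frame observations into a smoothed estimate:

    # Toy example of fusing noisy per-frame observations across time:
    # a 1-D constant-velocity Kalman filter tracking an object's position.
    import numpy as np

    dt = 0.1
    F = np.array([[1, dt], [0, 1]])   # state transition (position, velocity)
    H = np.array([[1, 0]])            # we only observe position
    Q = np.eye(2) * 1e-3              # process noise
    R = np.array([[0.5]])             # measurement noise

    x = np.zeros((2, 1))              # state estimate
    P = np.eye(2)                     # estimate covariance

    rng = np.random.default_rng(0)
    true_pos = 0.0
    for step in range(50):
        true_pos += 2.0 * dt                              # object moves at 2 m/s
        z = np.array([[true_pos + rng.normal(0, 0.7)]])   # noisy per-frame measurement

        # Predict forward in time, then correct with the new observation.
        x = F @ x
        P = F @ P @ F.T + Q
        y = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P

    print("estimated position/velocity:", x.ravel(), "true position:", true_pos)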
You could train those cars to recognize and avoid Wile E. Coyote traps too, but do you really want to? The expected number of walls set in the middle of the road with tunnels painted onto them is very close to zero.
Once computers and AIs can approach even a small fraction of our capacity then sure, cameras only is fine. It's a shame that our suite of camera-data-processing equipment is so far beyond our understanding that we don't even have models of how it might work at its core.
Even at that point, why would you possibly use only cameras though, when you can get far better data by using multiple complementary systems? Humans still crash plenty often, in large part because of how limited our "camera" system can be.
which cameras have stereoscopic vision and the dynamic range of an eye?
Even if what you're saying is true, which it's not, cameras are so inferior to eyes it's not even funny
Even though it's false, let's imagine that's true.
Our cameras (also called eyes) have way better dynamic range, focus speed, resolution, and movement-detection capabilities, backed by reduced-bandwidth peripheral vision which is also capable of detecting movement.
No camera, including professional/medium-format still cameras, is that capable. I think one of the car manufacturers made a combined tele/wide lens system for a single camera which can see both at the same time, but that's it.
Dynamic range, focus speed, resolution, FoV, and motion detection still lag behind.
...and that's when we imagine that we only use our eyes.
Except a car isn’t a human.
That’s the mistake Elon Musk made and the same one you’re making here.
Not to mention that humans driving with cameras only is absolutely pathetic. The number of completely avoidable accidents that occur doesn't exactly inspire confidence that all my car needs to be safe and get me to my destination is a couple of cameras.
This isn't a "mistake". This is the key problem of getting self-driving to work.
Elon Musk is right. You can't cram 20 radars, 50 LIDARs and 100 cameras into a car and declare self-driving solved. No amount of sensors can redeem a piss poor driving AI.
Conversely, if you can build an AI that's good enough, then you don't need a lot of sensors. All the data a car needs to drive safely is already there - right in the camera data stream.
If additional sensors improve the AI, then your last statement is categorically untrue. The reason it worked better is that those additional sensors gave it information that was not available in the video stream.
"If."
So far, every self-driving accident where the self-driving car was found to be at fault follows the same pattern: the car had all the sensory data it needed to make the right call, and it didn't make the right call. The bottleneck isn't in sensors.
How many of those rides required human intervention by Waymo's remote operators? From what I can tell they're not sharing that information.
I worked at Zoox, which has similar teleoperations to Waymo: remote operators can't joystick the vehicles.
So if we're saying how many times would it have crashed without a human: 0.
They generally intervene when the vehicles get stuck and that happens pretty rarely, typically because humans are doing something odd like blocking the way.
The goalpost will be when you can buy one and drive it anywhere. How many cities are Waymo in now? I think what they are doing is terrific, but each car must cost a fortune.
The cars aren't expensive by raw cost (low six figures, which is about what an S-class with highway-only L3 costs).
But there is a lot of expenditure relative to each mile being driven.
> The goalpost will be when you can buy one and drive it anywhere.
This won't happen any time soon, so I and millions of other people will continue to derive value from them while you wait for that.
Low six figures is quite expensive, and unobtainable to a large number of people.
Not even close.
It's a 2-ton vehicle that can self-drive reliably enough to be roving a city 24/7 without a safety driver.
The measure of expensive for that isn't "can everyone afford it", the fact we can even afford to let anyone ride them is a small wonder.
I’m a bit confused. If we’re talking about consumer cars, the end goal is not to rent a car that can drive itself, the end goal is to own a car that can drive itself, and so it doesn’t matter if the car is available for purchase but costs $250,000 because few consumers can afford that, even wealthy ones.
> I and millions of other people
People "wait" because of where they live and what they need. Not all people live and just want to travel around SF or wherever these go nowadays.
Not sure how exactly politicians will jump from “minimal wages don’t have to be livable wages” and “people who are able to work should absolutely not have access to free healthcare” and “any tax-supported benefits are actually undeserved entitlements and should be eliminated” to “everyone deserves a universal basic income”.
I wouldn't underestimate what can happen if 1/3 of your workforce is displaced and put aside with nothing to do.
People are usually obedient because they have something in life and they are very busy with work, so they don't have the time or headspace to really care about politics. When large numbers of people suddenly start to care more about politics, it leads to organizing and all kinds of political changes.
What I mean is that it wouldn't be the current political class pushing things like UBI. At the same time, it seems that some of the current elites are preparing for this and want to get rid of elections altogether to keep the status quo.
I wouldn't underestimate how easily AI will suppress this through a combination of ultrasurveillance, psychological and emotional modelling, and personally targeted persuasion delivered by chatbot etc.
If all else fails you can simply bomb city blocks into submission. Or arrange targeted drone decapitations of troublemakers. (Possibly literally.)
The automation and personalisation of social and political control - and violence - is the biggest difference this time around. The US has already seen a revolution in the effectiveness of mass state propaganda, and AI has the potential to take that up another level.
What's more likely to happen is survivors will move off-grid altogether - away from the big cities, off the Internet, almost certainly disconnected and unable to organise unless communication starts happening on electronic backchannels.
Getting rid of peaceful processes for transferring power is not going to be the big win that they think it is.
> Not sure how exactly politicians will jump from ...
Well, if one believes that the day will come when their choices will be "make that jump" or "the guillotine", then it doesn't seem completely outlandish.
Not saying that day will come, but if it did...
The money transferred from taxpayers to people without money is in effect a price for not breaking the law.
If AI makes it much easier to produce goods, goods get cheaper relative to money, which makes it easier to pay some money to everyone in exchange for not breaking the law.
Politicians are elected for limited terms, not for life, so they don't need to change their opinion for a change to occur.
Are you sure of this? Don't you think the next US presidential election and very many subsequent ones will be decided by the US Supreme Court?
Carbon tax on a state level to try to fight a global problem makes 0 sense actually.
You just shift the emissions from your location to the location that you buy products from.
Basically what happened in Germany: more expensive "clean" energy means their own production went down and the world bought more from China instead. The net result is probably higher global emissions overall.
This is why an economics based strictly on scarcity cannot get us where we need to go. Markets, not knowing what it's like to be thirsty, will interpret a willingness to poison the well as entrepreneurial spirit to be encouraged.
We need a system where being known as somebody who causes more problems than they solve puts you (and the people you've done business with) at an economic disadvantage.
> I think the most sensible answer would be something like UBI.
What corporation will accept paying dollars for members of society that are essentially "unproductive"? What will happen to the value of UBI over time, in this context, when the strongest lobby will be the companies that have the means of producing AI? And, more essentially, how are humans able to negotiate for themselves when they lose their ability to build things?
I'm not opposing the technology progress, I'm merely trying to unfold the reality of UBI being a thing, knowing human nature and the impetus for profit.
UBI is not a good solution because you still have to provision everything on the market, so it's a subsidy to private companies that sell the necessities of life on the market. If we're dreaming up solutions to problems, much better would be to remove the essentials from the market and provide them to everyone universally. Non-market housing, healthcare, education all provided to every citizen by virtue of being a human.
You can’t provide valuable things for “free” en masse without institutionalizing either slavery or robbery. The value must come from somewhere.
The costs of what you propose are enormous. No legislation can change that fact.
There ain’t no such thing as a free lunch.
Who’s going to pay for it? Someone who is not paying for it today.
How do you intend to get them to consent to that?
Or do you think that the needs of the many should outweigh the consent of millions of people?
The state, the only organization large enough to even consider undertaking such a project, has spending priorities that do not include these things. In the US, for example, we spend the entire net worth of Elon Musk (the “richest man in the world”, though he rightfully points out that Putin owns far more than he does) about every six months on the military alone. Add in Zuckerberg and you can get another 5 months or so. Then there’s the next year to think about. Maybe you can do Buffet and Gates; what about year three?
That’s just for the US military, at present day spending levels.
What you’re describing is at least an order of magnitude more expensive than that, just in one country that holds only about 4% of the world's people. To extend it to all human beings, you’re talking about two more orders of magnitude.
There aren’t enough billionaires on the entire planet even to pay for one country’s military expenses out of pocket (even if you completely liquidated them), and this proposed plan is 500-1000x more spending than that. You’re talking about 3-5 trillion dollars per year just for the USA - if you extrapolate out linearly, that’d be 60-200 trillion per year for the Earth.
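Redoing that extrapolation with the assumptions spelled out (all figures rough and illustrative, not sourced):

    # All figures are rough, illustrative assumptions.
    us_cost_low, us_cost_high = 3e12, 5e12     # claimed US cost per year, dollars
    us_pop, world_pop = 340e6, 8.1e9           # approximate populations
    scale = world_pop / us_pop                 # roughly 24x more people worldwide
    low, high = us_cost_low * scale, us_cost_high * scale
    print(f"global, at the same per-person cost: {low / 1e12:.0f} - {high / 1e12:.0f} trillion/year")
    # Roughly 71-119 trillion/year at US per-person cost, i.e. the same ballpark
    # as the 60-200 trillion range quoted above.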
Even if you could reduce cost of provision by 90% due to economies of scale ($100/person/month for housing, healthcare, and education combined, rather than $1000 - a big stretch), it is still far, far too big to do under any currently envisioned system of wealth redistribution. Society is big and wealthy private citizens (ie billionaires) aren’t that numerous or rich.
There is a reason we all pay for our own food and housing.
> You’re talking about 3-5 trillion dollars per year just for the USA
I just want to point out that's about a fifth of our GDP and we spend about this much for healthcare in the US. We badly need a way to reduce this to at least half.
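Rough sanity check on those two numbers, using approximate figures from memory (not sourced):

    # Approximate 2023 US figures, in trillions of dollars (ballpark only).
    us_gdp = 27.4
    us_health_spend = 4.9
    print(f"healthcare share of GDP: {us_health_spend / us_gdp:.0%}")    # roughly 18%
    print(f"one fifth of GDP would be: {us_gdp / 5:.1f} trillion/year")  # roughly 5.5 trillion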
> There is a reason we all pay for our own food and housing.
The main reason I support UBI is that I don't want need-based or need-aware distribution. I want everyone to get benefits equally, regardless of income or wealth. That's my entire motivation for supporting UBI. If you can come up with something else that guarantees no need-based or need-aware distribution and has no benefit cliff, I support that too. I am not married to UBI.
> I support UBI
Honestly, what type of housing do you envision under a UBI system? Houses? Modern apartment buildings? College dormitory-like buildings? Soviet-style complexes? Prison-style accommodations? B stands for basic, how basic?
(Not the person you're replying to)
I think a UBI system is only stable in conjunction with sufficient automation that work itself becomes redundant. Before that point, I don't think UBI can genuinely be sustained; and IMO even very close to that point the best I expect we will see, if we're lucky, is the state pension age going down. (That it's going up in many places suggests that many governments do not expect this level of automation any time soon).
Therefore, in all seriousness, I would anticipate a real UBI system to provide whatever housing you want, up to and including things that are currently unaffordable even to billionaires, e.g. 1:1 scale replicas of any of the ships called Enterprise including both aircraft carriers and also the fictional spaceships.
That said, I am a proponent of direct state involvement in the housing market, e.g. the UK council housing system as it used to be (but not as it now is; they're not building enough):
• https://en.wikipedia.org/wiki/Public_housing_in_the_United_K...
• https://en.wikipedia.org/wiki/Council_house
> You can’t provide valuable things for “free” en masse without institutionalizing either slavery or robbery. The value must come from somewhere.
Utter nonsense.
Do you believe the European countries that provide higher education for free are manning tenure positions with slaves or robbing people at gunpoint?
How do you explain public transportation services in some major urban centers being provided free of charge?
How do you explain social housing programmes conducted throughout the world?
Are countries with access to free health care using slavery to keep hospitals and clinics running?
What you are trying to frame as impossibilities has already been the reality for many decades in countries ranking far higher in development and quality-of-living indexes than the US.
How do you explain that?
You're missing the point; language can be tricky. Technically, the state confiscating wealth derived from your labor through taxes is a form of robbery and slavery. It used to be called corvée. But the words being used have a connotation of something much more brutal and unrewarding. This isn't a political statement; I'm not a libertarian who believes all taxation is evil robbery that needs to be abolished. I'm just pointing out that, by the definitions of slavery (forced labor) and robbery (confiscation of wealth), the state employs both of those tactics to fund the programs you described.
If the state "confiscated" wealth derived from capital (AI) would that be OK with you?
> Technically, the state confiscating wealth derived from your labor through taxes is a form of robbery and slavery.
Without the state, you wouldn't have wealth. Heck there wouldn't even be the very concept of property, only what you could personally protect by force! Not to mention other more prosaic aspects: if you own a company, the state maintains the roads that your products ship through, the schools that educate your workers, the cities and towns that house your customers... In other words the tax is not "money that is yours and that the evil state steals from you", but simply "fair money for services rendered".
To a large extent, yes. That's why the arrangement is so precarious: it is necessary in many regards, but a totalitarian regime or dictatorship can use it in a nefarious manner and tip the scale toward public resentment. Balancing things to avoid the revolutionary mob is crucial. Trading your labor for protection is sensible, but if the exchange becomes exorbitant, it becomes a source of revolt.
> You're missing the point, language can be tricky. Technically, the state confiscating wealth derived from your labor through taxes is a form of robbery and slavery.
You're letting your irrational biases show.
To start off, social security contributions are not a tax.
But putting that detail aside, do you believe that paying a private health insurance also represents slavery and robbery? Are you a slave to a private pension fund?
Are you one of those guys who believes unions exploit workers whereas corporations are just innocent bystanders that have a neutral or even positive impact on workers lives and well being?
No, I'm a progressive and believe in socialism. But taxation is de facto a form of unpaid labor taken by the force of the state. If you don't pay your taxes, you will go to jail. It is both robbery and slavery, and in the ideal situation it is a benevolent sort of exchange, despite existing in the realm of slavery/robbery. In a totalitarian system it becomes malevolent very quickly. It can also be seen as not benevolent when the exchange becomes onerous and not beneficial. Arguing against this is arguing emotionally rather than rationally, with words that have definitions.
Social security contributions are a mandatory payment to the state taken from your wages; they are a tax, a compulsory reduction in your income. Private health insurance is obviously not mandatory or compulsory, so that is clearly different. Your last statement is just irrelevant, because you assume I'm a libertarian for pointing out the reality of the exchange taking place in the socialist system.
> No, I'm a progressive and believe in socialism
I'd be very interested in hearing which definition of "socialism" aligns with those obviously libertarian views?
> If you don't pay your taxes, you will go to jail. It is both robbery and slavery [...] Arguing this is arguing emotionally and not rationally using language with words that have definitions.
Indulging in the benefits of living in a society, knowingly breaking its laws, being appalled by entirely predictable consequences of those action, and finally resorting to incorrect usage of emotional language like "slavery" and "robbery" to deflect personal responsibility is childish.
Taxation is payment in exchange for services provided by the state and your opinion (or ignorance) of those services doesn't make it "robbery" nor "slavery". Your continued participation in society is entirely voluntary and you're free to move to a more ideologically suitable destination at any time.
> No, I'm a progressive and believe in socialism. But taxation is de facto a form of unpaid labor taken by the force of the state.
I do not know what you mean by "progressive", but you are spewing neoliberal/libertarian talking points. If anything, this tells me how much Kool-Aid you drank.
> Are countries with access to free health care using slavery to keep hospitals and clinics running?
No, robbery. They’re paid for with tax revenues, which are collected without consent. Taking someone’s money without consent has a name.
Have you ever stopped to consider why class mobility is much much less common in Europe than in the USA?
> Have you ever stopped to consider why class mobility is much much less common in Europe than in the USA?
My understanding is that your info is seriously out of date. It might have been the case in the distant past but not the case anymore.
https://news.yale.edu/2025/02/20/tracking-decline-social-mob...
https://en.wikipedia.org/wiki/Global_Social_Mobility_Index
> Have you ever stopped to consider why class mobility is much much less common in Europe than in the USA?
It's a common idea but each time you try to measure social mobility, you find a lot of European countries ahead of USA.
- https://en.wikipedia.org/wiki/Global_Social_Mobility_Index
- https://www.theguardian.com/society/2018/jun/15/social-mobil...
> Have you ever stopped to consider why class mobility is much much less common in Europe than in the USA?
Which class mobility is this that you speak of? The one that forces the average US citizen to be a paycheck away from homelessness? Or is it the one where you are a medical emergency away from filing for bankruptcy?
Have you stopped to wonder how some European countries report higher median household incomes than the US?
But by all means continue to believe your average US citizen is a temporarily embarrassed billionaire, just waiting for the right opportunity to benefit from your social mobility.
In the meantime, also keep in mind that mobility also reflects how easy it is to move down a few pegs. Let that sink in.
the economic situation in Europe is much more dire than the US...
> the economic situation in Europe is much more dire than the US...
Is it, though? The US reports by far the highest levels of lifetime literal homelessness, three times greater than in countries like Germany. Homeless people in Europe aren't denied access to free healthcare, primary or even tertiary.
Why do you think the US, in spite of its GDP, ranks so low in indexes such as human development or quality of life?
Yet people live better. Goes to show you shouldn't optimise for crude, raw GDP as an end in itself, only as a means for your true end: health, quality of life, freedom, etc.
In many of the metrics, yeah. But Americans can afford larger houses and more stuff essentially, which isn't necessarily a good replacement for general quality of life things.
> In many of the metrics, yeah. But Americans can afford larger houses and more stuff essentially, which isn't necessarily a good replacement for general quality of life things.
I think this is the sort of red herring that prevents the average US citizen from realizing how screwed over they are. Again, the median household income in the US is lower than in some European countries. On top of this, the US provides virtually no social safety net or even socialized services to its population.
The fact that the average US citizen is a paycheck away from homelessness and the US ranks so low in human development index should be a wake-up call.
Several US states have the life expectancy of Bangladesh.
>Have you ever stopped to consider why class mobility is much much less common in Europe than in the USA?
This is not true, it was true historically, but not since WWII. Read Piketty.
> You can’t provide valuable things for “free” en masse without institutionalizing either slavery or robbery. The value must come from somewhere.
Is AI slavery? Because that's where the value comes from in the scenario under discussion.
So basically the model North Korea practices?
> Non-market housing, healthcare, education all provided to every citizen
This can also describe Nordic and Germanic models of welfare capitalism (incrementally dismantled with time but still exist): https://en.wikipedia.org/wiki/Welfare_capitalism
UBI could easily become a poverty trap, enough to keep living, not enough to have a shot towards becoming an earner because you’re locked out of opportunities. I think in practice it is likely to turn out like “basic” in The Expanse, with people hoping to win a lottery to get a shot at having a real job and building a decent life for themselves.
If no UBI is installed there will be a hard crash while everyone figures out what it is that humans can do usefully, and then a new economic model of full employment gets established. If UBI is installed then this will happen more slowly with less pain, but it is possible for society to get stuck in a permanently worse situation.
Ultimately if AI really is about to automate as much as it is promised then what we really need is a model for post-capitalism, for post-scarcity economics, because a model based on scarcity is incapable of adapting to a reality of genuine abundance. So far nobody seems to have any clue of how to do such a thing. UBI as a concept still lives deeply in the Overton window bounded by capitalist scarcity thinking. (Not a call for communism btw, that is a train to nowhere as well because it also assumes scarcity at its root.)
What I fear is that we may get a future like The Diamond Age, where we have the technology to get rid of scarcity and have human flourishing, but we impose legal barriers that keep the rich rich and the poor poor. We saw this happen with digital copyright, where the technology exists for abundance, but we’ve imposed permanent worldwide legal scarcity barriers to protect revenue streams to megacorps.
The major shift for me is that it's now normal to take Waymos. Yeah, they aren't as fast as Uber if you have to get across town, but for trips less than 10 miles they're my go-to now.
I've never taken one. They seem nice though.
On the other hand, the Tesla “robotaxi” scares the crap out of me. No lidar and seems to drive more aggressively. The Mark Rober YouTube of a Tesla plowing into a road-runner style fake tunnel is equal parts hilarious and nightmare fuel when you realize that’s what’s next to your kid biking down the street.
> Mark Rober YouTube of a Tesla plowing into a road-runner style fake tunnel
I understand the argument for augmenting your self-driving systems with LIDAR. What I don't really understand is what videos like this tell us. The comparison case for a "road-runner style fake tunnel" isn't LIDAR, it's humans, right? And while I'm sure there are cases where a human driver would spot the fake tunnel and stop in time, that is not at all a reasonable assumption. The question isn't "can a Tesla save your life when someone booby traps a road?", it's "is a Tesla any worse than you at spotting booby trapped roads?", and moreover, "how does a Tesla perform on the 99.999999% of roads that aren't booby trapped?"
Tesla's insistence on not using LiDAR while other companies deem it necessary for safe autopilot creates the need for Tesla to demonstrate that their approach is equally safe for drivers and pedestrians alike. They haven't done that; arguably the data shows the contrary. This generates the impression that Tesla skimps on safety, and if they skimp in one area, they'll likely skimp in others. Stuff like the Rober video strengthens these impressions. It's a public-perception issue, and Tesla has done nothing (and maybe isn't able to do anything) to dispel this notion.
> Is a Tesla any worse than you at spotting booby trapped roads
That would've been the case if all laws, opinions, and purchasing decisions were made by everyone acting rationally. Even if self-driving cars are safer than human drivers, it just takes a few crashes to damage their reputation. They have to be much, much safer than humans for mass adoption. Ideally also safer than the competition, if you're comparing specific companies.
And Waymo is much safer than human drivers. It's better at chauffeuring than humans, too.
I’m curious, are they now fully autonomous? I remember some time ago they had a remote operator.
Waymo has a control center, but it's customer service, not remote driving. They can look at the sensor data, give hints to the car ("back out, turn around, try another route") and talk to the customer, but can't take direct control and drive remotely.
Baidu's system in China really does have remote drivers.[1]
Tesla also appears to have remote drivers, in addition to someone in each car with an emergency stop button.[2]
[1] https://cyberlaw.stanford.edu/blog/2025/05/comparing-robotax...
[2] https://insideevs.com/news/760863/tesla-hiring-humans-to-con...
Good account to follow to track their progress; suffice it to say they're nearing/at the end of the beginning: https://x.com/reed // https://x.com/daylenyang/status/1953853807227523178
The robotaxi business model is the total opposite of scaling. At my previous employer we were solving the problem "block by block, city by city", and I can only assume that you are living in the right city/block that they are tackling.
Every time someone casually throws out UBI my mind goes to the question "who is paying taxes when some people are on UBI ?"
Is there like a transition period where some people don't have to pay taxes and yet don't get UBI, and if so, why hasn't that come yet ? Why aren't the minimum tax thresholds going up if UBI could be right around the corner ?
The taxes will be most burdensome for the wealthiest and most productive institutions, which is generally why these arrangements collapse economies and nations. UBI is hard to implement because it incentivizes non-productive behavior and disincentivizes productive activity. This creates economic crisis; taxes are basically a smaller-scale version of this, and UBI is a more comprehensive wealth-redistribution scheme. The creation of a syndicate (in this case, the state) to steal from the productive to give to the non-productive is a return to how humanity functioned before the creation of state-like structures, when marauders and bandits used violence to steal from those who created anything. Eventually, the state arose to create arrangements and contracts to prevent theft, but later became the thief itself, leading to economic collapse and the recurring revolutionary cycle.
So, AI may certainly bring about UBI, but the corporations that are being milked by the state to provide wealth to the non-productive will begin to foment revolution along with those who find this arrangement unfair, and the productive activity of those especially productive individuals will be directed toward revolution instead of economic productivity. Companies have made nations many times before, and I'm sure it'll happen again.
The problem is the "productive activity" is rather hard to define if there's so much "AI" (be it classical ML, LLM, ANI, AGI, ASI, whatever) around that nearly everything can be produced by nearly no one.
The destruction of the labour theory of value has been a goal of "tech" for a while, but if they achieve it, what's the plan then?
Assuming humans stay in control of the AIs, because otherwise all bets are off: in a case where a few fabulously wealthy (or at least "owning/controlling", since the idea of wealth starts to become fuzzy) industrialists control the productive capacity for everything from farming to rocketry, and there's no space for normal people to participate in production any more, how do you even denominate the value being "produced"? Who is it even for? What do they need to give in return? What can they give in return?
The assumption here that UBI "incentivizes non-productive behavior and disincentivizes productive activity" is the part that doesn't make sense. What do you think universal means? How does it disincentivize productive activity if it is provided to everyone regardless of their income/productivity/employment/whatever?
Evolutionarily, people engage in productive activity in order to secure resources to ensure their survival and reproduction. When these necessary resources are gifted to a person, there is a lower chance that they will decide to take part in economically productive behavior.
You can say that because it is universal, it should level the playing field just at a different starting point, but you are still creating a situation where even incredibly intelligent people will choose to pursue leisure over labor, in fact, the most intelligent people may be the ones to be more aware of the pointlessness of working if they can survive on UBI. Similarly, the most intelligent people will consider the arrangement unfair and unsustainable and instead of devoting their intelligence toward economically productive ventures, they will devote their abilities toward dismantling the system. This is the groundwork of a revolution. The most intelligent will prefer a system where their superior intelligence provides them with sufficient resources to choose a high-quality mate. If they see an arrangement where high-quality mates are being obtained by individuals who they deem to be receiving benefits that they cannot defend/protect adequately, such an arrangement will be dismantled. This evolutionary drive is hundreds of millions of years old. Primitive animals will take resources from others that they observe to be unable to defend their status.
So, overall, UBI will probably be implemented, and it will probably end in economic crisis, revolution, and the resumption of this cycle that has been playing out over and over for centuries.
I guess the thinking goes like this: Why start a business, get a higher paying job etc if you're getting ~2k€/mo in UBI and can live off of that? Since more people will decide against starting a business or increasing their income, productive activity decreases.
You also have to consider the alternative: if there's no UBI, are you expecting millions to starve? That's a recipe for civil war; if you have a very large group of people unable to survive, you get social unrest. Either you spend the money on UBI or on police/military suppression to battle the unrest.
There's another question to answer:
Who is working?
Isn't it the case that companies are always competing and evolving? Unless we see that there's a ceiling to driverless tech that is immediately obvious.
We "made cars work" about 100 years ago, but they have been innovating on that design since then on comfort, efficiency, safety, etc. I doubt the very first version of self driving will have zero ways to improve (although eventually I suppose you would hit a ceiling).
> I think the most sensible answer would be something like UBI.
Having had the experience of living under a communist regime prior to 1989, I have zero trust in the state providing support while I am totally dependent and have no recourse. Instead I would rather rely on my own two hands, like my grandparents did.
I see a world where we can build anything we want with our own hands and AI automation. Jobs might become optional.
> I see a world where we can build anything we want with our own hands and AI automation. Jobs might become optional.
Unless your two hands are building murderbots, though, it doesn't matter what you're building if you can't grow or buy food.
I haven't personally seen how UBI could end up working viably, but I also don't see any other system working without much more massive societal changes than anyone is talking about.
Meanwhile, there are many many people that are very invested in maintaining massive differentials between the richest and the poorest that will be working against even the most modest changes.
I'd argue against the entire perspective of evaluating every policy idea along one-dimensional modernist polemics put forward as "the least worst solution to all of human economy for all time".
Right now the communists in China are beating us at capitalism. I'm starting to find the entire analytical framework of using these ideologies ("communism", "capitalism") to evaluate _anything_ to be highly suspect, and maybe even one of the west's greatest mistakes in the last century.
> I see a world where we can build anything we want with our own hands and AI automation. Jobs might become optional.
I was a teenager back in the 90s. There was much talk then about the productivity boosts from computers, the internet, automation, and how it would enable people to have so much more free time.
Interesting thing is that the productivity gains happened. But the other side of that equation never really materialized.
Who knows, maybe it'll be different this time.
I’m not certain we don’t have free time, but I’m not sure how to test that. Is it possible that we just feel busier nowadays because we spend more time watching TV? Work hours haven’t dropped precipitously, but maybe people are spending more time in the office just screwing around.
It's the same here. Calling what the west has a "free-market capitalist" system is also a lie. At every level there is massive state intervention. Most discoveries come from publicly funded work going on at research universities or from billions pushed into the defense sector that has developed all the technology we use today from computers to the internet to all the technology in your phone. That's no more a free-market system than China is "communist" either.
I think the reality is just that governments use words and have an official ideology, but you have to ignore that and analyze their actions if you want to understand how they behave.
Not to mention that most corporations in the US are owned by the public through the stock market and the arrangement of the American pension scheme, and public ownership of the means of production is one of the core tenets of communism. Every country on Earth is socialist and has been for well over a century. Once you consider not just state investment in research but also centralized credit, tax-funded public infrastructure, etc., well, yeah, terms such as "capitalism" are used in a totally meaningless way by most people lol.
You will still need energy and resources.
In your world where jobs become "optional" because a private company has decided to fire half their workforce, and the state also does not provide some kind of support, what do all the "optional" people do?
Driverless taxis are IMO the wrong tech to compare to. It's a high-consequence, low-acceptance-of-error, real-time task, where it's really hard to undo errors.
There is a big category of tasks that aren't like that but are still economically significant. Those are a much better fit for AI.
> What makes you think that? Self driving cars [...]
AI is intentionally being developed to be able to make decisions in any domain humans work in. This is unlike any previous technology.
The more apt analogy is to other species. When was the last time there was something other than homo sapiens that could carry on an interesting conversation with homo sapiens. 40,000 years?
And this new thing has been in development for what? 70 years? The rise in its capabilities has been absolutely meteoric and we don't know where the ceiling is.
Do you live in SF (the city, not the Bay Area as a whole) or West LA? I ask because in these areas you can stand on any city street and see several self driving cars go by every few minutes.
It's irrelevant that they've had a few issues. They already work and people love them. It's clear they will eventually replace every Uber/Lyft driver, probably every taxi driver, and they'll likely replace every DoorDash/Grubhub driver with vehicles designed to let smaller automated delivery carts go the last few blocks. They may also replace every truck driver. Together that's around 5 million jobs in the USA.
Once they're let on the freeways their usage will expand even faster.
As someone who lives in LA, I don’t think self-driving cars existed at the time of the Rodney King LA riots and I am not aware of any other riots since.
Let me be the first to welcome you out of your long slumber!
I feel like you are trapped in the first assessment of this problem. Yes, we are not there yet, but have you thought about the rate of improvement? Is that rate of improvement reliable? Fast? That's what matters, not where we are today.
You could say that about any time in history. When the steam engine or the mechanical loom was invented, there were millions of people like you who predicted that mankind would be out of jobs soon, and guess what happened? There's still a lot of things to do in this world and there still will be a lot to do (aka "jobs") for a loooong time.
> A human driver is still far more adaptive and requires a lot less training than AI
I get what you are saying, but humans need 16 years of training to begin driving. I wouldn’t call that not a lot.
And the problem for Capitalists and other anti-humanists is that this doesn’t scale. Their hope with AI, I think, is that once they train one AI for a task, it can be trivially replicated, which scales much better than humans.
> What makes you think that? Self driving cars have had (...)
I think you're confusing your cherry-picked comparison with reality.
LLMs are eliminating the need to have a vast array of positions on payrolls. From copywriters to customer support, and even creative activities such as illustration and even authoring books, today's LLMs are already more than good enough to justify replacing people with the output of any commercial chatbot service.
Software engineering is being affected as well, and it requires far greater know-how, experience, and expertise to meet the hiring bar.
> And when you talk about applying this same tech, so confidently, to domains far more nuanced and complex than (...)
Yes, your tech job is also going to be decimated. It's not a matter of having PMs write code. It's an issue of your junior SDE, armed with an LLM, being quite able to clear your bug backlog in a few days while improving test-coverage metrics and refactoring code back from legacy status.
If a junior SDE can suddenly handle the workload that previously required a couple of mid-level and senior developers, why would a company keep around 4 or 5 seasoned engineers when an inexperienced one is already able to handle the workload?
That's where the jobs will vanish. Even if demand remains, it has dropped considerably, enough not to justify retaining so many people on a company's payroll.
And what are you going to do then? Drive an Uber?
> LLMs are eliminating the need to have a vast array of positions on payrolls. From copywriters to customer support, and even creative activities such as illustration and even authoring books, today's LLMs are already more than good enough to justify replacing people with the output of any commercial chatbot service.
I'd love a source for these claims. Many companies are claiming that they are able to lay off folks because of AI, but in fact AI is just a scapegoat to counteract the reckless overhiring driven by free money in the market over the last 5-10 years, now that investors are demanding to see a real business plan and ROI. "We can eliminate this headcount due to the efficiency of our AI" is just a fancy way to make the stock price go up while cleaning out the useless folks.
People have ideas. There are substantially more ideas than people who can implement them. As with most technology, the reasonable expectation is that people are just going to want more done by the now tool-powered humans, not less.
> I'd love a source to these claims.
Have you been living under a rock?
You can start getting up to speed with how Amazon's CEO has already laid out the company's plan.
https://www.thecooldown.com/green-business/amazon-generative...
> (...) AI is just a scapegoat to counteract the reckless overhiring due to (...)
That is your personal moralist scapegoat, and one that you made up to feel better about how jobs are being eliminated because someone somewhere screwed up.
In the meantime, you fool yourself and pretend that sudden astronomic productivity gains have no impact on demand.
These supposed "productivity gains" are only touted by the ones selling the product, i.e. the ones who stand to benefit from adoption. There is no standard way to measure productivity since it's subjective. It's far more likely that companies will use whatever scapegoat they can to fire people with as little blowback as possible, especially as the other commenter noted, people were getting hired like crazy.
Each one of the roles you listed above is only passable with AI at a superficial glance. For example, anyone who actually reads literature other than self-help and pop culture books from airport kiosks knows that AI is terrible at longer prose. The output is inconsistent because current AI does not understand context, at all. And this is not getting into the service costs, the environmental costs, and the outright intellectual theft in order to make things like illustrations even passable.
> These supposed "productivity gains" are only touted by the ones selling the product (...)
I literally pasted an announcement from the CEO of a major corporation warning they are going to decimate their workforce due to the adoption of AI.
The CEO literally made the following announcement:
> "As we roll out more generative AI and agents, it should change the way our work is done," Jassy wrote. "We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs."
This is not about selling a product. This is about how they are adopting AI to reduce headcount.
The CEO is marketing to the company’s shareholders. This is marketing. A CEO will say anything to sell the idea of their company to other people. Believe it or not, there is money to be made from increased share prices.
I think your assumption is probably a little backwards. It will be a senior SDE clearing the slate, I think: ones that embrace the technology and are able to accelerate their work. At that level of efficiency the cost is still way, way lower than it is for a larger team.
When it gets to the point that you don't need a senior engineer doing the work, you won't need a junior either.
> I think your assumption is probably a little backwards. It will be a senior SDE clearing the slate I think.
I don't think you understood the point I made.
My point was not about Jr vs Sr, let alone how a Jr is somehow more capable than a Sr.
My point was that these productivity gains aren't a function of experience or seniority, but they do devalue the importance of seniority for performing specific tasks. Just crack open an LLM, feed in a few prompts, and done. Hell, junior developers no longer need to reach out to seniors to ask questions about any topic. Think about that for a second.
I think it is important to remember that "decades" here means <20 years. Remember that in 2004 it was considered sufficiently impossible that basically no one had a car that could be reliably controlled by a computer, let alone driven by computer alone:
https://en.wikipedia.org/wiki/DARPA_Grand_Challenge_(2004)
I also think that most job domains are not actually more nuanced or complex than driving, at least from a raw information perspective. Indeed, I would argue that driving is something like a worst-case scenario when it comes to tasks:
* It requires many different inputs, at high sampling rates, continuously (at the very least, video, sound, and car state)
* It requires loose adherence to laws in the sense that there are many scenarios where the safest and most "human" thing to do is technically illegal.
* It requires understanding of driving culture to avoid making decisions that confuse/disorient/anger other drivers, and anticipating other drivers' intents (although this can be somewhat faked with sufficiently fast reaction times)
* It must function in a wide range of environments: there is no "standard" environment
If we compare driving to other widespread-but-low-wage jobs (e.g. food prep, receptionists, cleaners) there are generally far more relaxed requirements:
* Rules may be unbreakable as opposed to situational, e.g. the cook time for burgers is always the same.
* Input requirements may be far lower. e.g. an AI receptionist could likely function with audio and a barcode scanner.
* Cultural cues/expectations drive fewer behaviors. e.g. an AI janitor just needs to achieve a defined level of cleanliness, not gauge people's intent in real-time.
* Operating environments are more standardized. All these jobs operate indoors with decent lighting.
I’m pretty sure you could generate a similar list for any human job.
It’s strange to me watching the collective meltdown over AI/jobs when AI doesn’t do jobs, it does tasks.
Everything anyone could say about bad AI driving could be said about bad human drivers. Nevertheless, Waymo has not had a single fatal accident despite many millions of passenger miles and is safer than human drivers.
Everything? How about legal liability for the car killing someone? Are all the self-driving vendors stepping up and accepting full legal liability for the outcomes of their non-deterministic software?
In the bluntest possible sense, who cares if we can make roads safer?
Solving liability in traffic collisions is basically a solved problem through the courts, and at least in the UK, liability is assigned in law to the vendor (more accurately, there’s a list of who’s responsible for stuff, I’m not certain if it’s possible to assume legal responsibility without being the vendor).
Thousands have died directly due to known defects in manufactured cars. Those companies (Ford, others) still are operating today.
Even if driverless cars killed more people than humans, they would see mass adoption eventually. However, they are subject to far higher scrutiny than human drivers, and even so they make fewer mistakes, avoid accidents more frequently, and can't get drunk, tired, angry, or distracted.
A faulty brake pad or an engine doesn't make decisions that might endanger people. Self-driving cars do. They might also get hacked pretty thoroughly.
For the same reason, I'd probably never buy a home robot with more capabilities than a vacuum cleaner.
Current non-self-driving cars on the road can be hacked
https://www.wired.com/story/kia-web-vulnerability-vehicle-ha...
But even if they can theoretically be hacked, so far Waymos are still safer and more reliable than human drivers. The biggest danger someone has riding in one is someone destroying it for vindictive reasons.
There is a fetish for technology that sometimes we are not aware of. On average there might be fewer accidents, but if specific accidents were preventable and now they happen, people will sue. And who will take the blame? The day the company takes the blame is the day self-driving exists, IMO.
To be fair, self-driving cars don't need to be perfect zero-casualty modes of transportation, they just need to be better than human drivers. Since car crashes kill over a million people each year (and maim tens of millions more), this is a low bar to clear...
Of course, the actual answer is that rail and cycling infrastructure are much more efficient than cars in any moderately dense region. But that would mean funding boring regular companies focused on providing a product or service for adequate profit, instead of exciting AI web3 high tech unicorn startups.
Self-driving cars are a political problem, not a technical problem. A functioning government would put everything from automation-friendly signaling standards to battery-swapping facilities into place.
We humans used to do that sort of thing, but not anymore, so... bring on the AI. It won't work as well as it might otherwise be able to, but it'll probably kill fewer humans on the road at the end of the day. A low bar to clear.
Self-driving car companies don't want a unified signalling platform or other "open for all" infrastructure updates. They want to own self-driving, to lock you into a subscription on their platform.
Literally the only open source self-driving platform, across trillion-, billion-, and million-dollar companies, is comma.ai, founded by Geohot. That's it. It's actually very good, and I bet they would welcome these upgrades, but that would be a consortium of one underdog pushing for them.
I.e., a political problem, as the grandparent said.
Corporations generally follow a narrow, somewhat predictable pattern towards some local maximum of their own value extraction. Since the world is not zero-sum, that produces value for others too.
Where politics (should) enter the picture is where we can somehow see a more global maximum (for all citizens) and try to drive towards it through some political, hopefully democratic, means. (Laws, standards, education, investment, infra, etc.)
Yeah, that must be it. It's a conspiracy.
This is all happening right out in the open.
Why would politicians want to:
- destroy voting population's jobs
- put power in the hands of 1-2 tech companies
- clog streets with more cars rather than build trams, trains, maglevs, you name it
Because the primary goal of the vast majority of politicians is to collect life-changing, generational wealth by any means necessary.
Snarky but serious question: How do we know that this wave will disrupt labor at all? Every time I dig into a story of X employees replaced by "AI", it's always in a company with shrinking revenues. Furthermore, all of the high-value use cases involve very intense supervision of the models.
There's been a dream of unsupervised models going hog wild on codebases for the last three years. Yet even the latest and greatest Claude models can't be trusted to write a new REST endpoint exposing 5 CRUD methods without fucking something up. No, it requires not only human supervision, but it also requires human expertise to validate and correct.
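For concreteness, here is a rough sketch of the kind of task being described: one resource exposing five CRUD routes, written with Flask (the "notes" resource and the in-memory storage are made up purely for illustration):

  from flask import Flask, abort, jsonify, request

  app = Flask(__name__)
  notes = {}     # hypothetical in-memory store: id -> dict
  next_id = 1

  @app.post("/notes")
  def create_note():
      global next_id
      data = request.get_json(force=True)
      notes[next_id] = {"id": next_id, **data}
      next_id += 1
      return jsonify(notes[next_id - 1]), 201

  @app.get("/notes")
  def list_notes():
      return jsonify(list(notes.values()))

  @app.get("/notes/<int:note_id>")
  def get_note(note_id):
      if note_id not in notes:
          abort(404)
      return jsonify(notes[note_id])

  @app.put("/notes/<int:note_id>")
  def update_note(note_id):
      if note_id not in notes:
          abort(404)
      notes[note_id].update(request.get_json(force=True))
      return jsonify(notes[note_id])

  @app.delete("/notes/<int:note_id>")
  def delete_note(note_id):
      if note_id not in notes:
          abort(404)
      del notes[note_id]
      return "", 204

Boilerplate like this is exactly where models do best; the failures show up in the details around it: validation, auth, migrations, and the parts that touch the rest of the system.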
I dunno. I feel like this language grossly exaggerates the capability of LLMs to paint a picture of them reliably fulfilling roles end-to-end instead of only somewhat reliably fulfilling very narrowly scoped tasks that require no creativity or expertise.
> instead of only somewhat reliably fulfilling very narrowly scoped tasks that require no creativity or expertise.
This alone is enough to completely reorganise the labour market, as it describes an enormous number of roles.
How many people could be replaced by a proper CMS or an Excel sheet right now already? Probably dozens of millions, and yet they are at their desks working away.
It's easy to sit in a café and ponder how all jobs will be gone soon, but in practice people aren't as easily replaceable.
For many businesses the situation is that technology has dramatically underperformed at doing the most basic tasks. Millions of people are working around things like defective ERP systems. A modest improvement in productivity in building basic apps could push us past a threshold. It makes it possible for millions more people to construct crazy Excel formulas. It makes it possible to add a UI to a Python script where before there was only a command line. And one piece of magic that works reliably can change an entire process. It lets you make a giant leap rather than an incremental change.
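As a minimal sketch of that last point (hypothetical names, standard-library tkinter only), wrapping a formerly command-line-only script in a trivial GUI can be this small:

  import tkinter as tk
  from tkinter import filedialog, messagebox

  def clean_report(path):
      # stand-in for whatever the original command-line script did
      with open(path, encoding="utf-8") as f:
          lines = [line.strip() for line in f if line.strip()]
      out_path = path + ".cleaned.txt"
      with open(out_path, "w", encoding="utf-8") as f:
          f.write("\n".join(lines))
      return out_path

  def on_run():
      path = filedialog.askopenfilename()
      if path:
          messagebox.showinfo("Done", "Wrote " + clean_report(path))

  root = tk.Tk()
  root.title("Report cleaner")
  tk.Button(root, text="Pick a file and run", command=on_run).pack(padx=20, pady=20)
  root.mainloop()

Trivial for a programmer, but exactly the kind of glue that millions of non-programmers could never get built before.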
If we could make line of business crud apps work reliably, have usable document/email search, and have functional ERP that would dissolve millions of jobs.
I can tell you that those whose jobs depended on providing image assets or translations for a CMS are no longer relevant to their employers.
I promise you that your understanding of those roles is wrong.
Carpenters, landscapers, roofers, plumbers, electricians, elderly care, nurses, cooks, servers, bakers, musicians, actors, artists...
Those jobs are probably still a couple of decades or more off from displacement, some possibly never, and we will need them in higher numbers. Perhaps it's ironic that these are some of the oldest professions.
Everything we do is in service of paying for our housing, transportation, eating food, healthcare and some fun money.
Most goes to housing, healthcare, and transportation.
Healthcare costs may come down some with advancements in AI. R&D will be cheaper. Knowledge will be cheaper and more accessible.
But what people care about, what people have always cared about, remains in professions that are as old as time, and I don't see them fully replaceable by AI just yet - enhanced, yes, but not replaced.
Imagine a world where high quality landscaping exists for the average person. And this is made possible because we'll live in a world where the equivalent of today's uber driver owns a team of gardening androids.
Or perhaps in the future everyone will work in finance. Everyone's a corporation.
Ramble ramble ramble
> Imagine a world where high quality landscaping exists for the average person. And this is made possible because we'll live in a world where the equivalent of today's uber driver owns a team of gardening androids.
I think it's going to be the other way around. It's looking like automation of dynamic physical capability is going to be the very last thing we figure out; what we're going to get first is teams of lower-skilled human workers directed largely by jobsite AI. By the time the robots get there, they're not going to need a human watching them.
Looking at the advancements in low cost flexible robotics I'm not sure I share that sentiment. Plus the LLM craze is fueling generalist advancement in robotics as well. I'd say we'll see physical labor displacement within a decade tops.
Kinematics is deceptively hard and, at least evolutionarily, took a lot longer to develop than language. Low-wage physical labor seems easy only because humans are naturally very good at it, and that took millions of years to develop.
The number of edge cases when you are dealing with the physical world is several orders of magnitude higher than when dealing with text only, and the spatial reasoning capabilities of the current crop of MLLMs are not nearly as good as required. And this doesn't even take into account that now you are dealing with hardware, and hardware is expensive. Expensive enough that even on manufacturing lines (a more predictable environment than, let's say, landscaping) automation sometimes doesn't make economic sense.
I'm reminded of something I read years ago that said jobs are now above or below the API. I think now it's that jobs will be above or below the AI.
Well, when I become unemployable I will start retraining as an electrician. And so will hundreds of thousands like me.
That will do wonders for salaries, I think, and everyone will be better off.
Those jobs don’t pay particularly well today, and many have poor working conditions that strain the body.
Imagine what they’ll be like with an influx of additional laborers.
I'd just like to see how you get, let's say, 100k copywriters retrained as carpenters.
Do you also force them to move to places where there are fewer carpenters?
What this delusion seems to turn a blind eye to is that a good chunk of the population is already in those roles; what happens when the supply of those roles far exceeds the demand, in a relatively short time? Carpenters suddenly abundant, carpenter wages drop, carpenters struggling to live, carpenters forced to tighten spending, carpenters decide children aren't affordable.. now extrapolate that across all of the impacted roles and industries. No doubt someone is already typing "carpenters can retrain too!" OK, so they're back to entry level wages (if anything) for 5+ years? Same story. And retrain to what?
At some point an equilibrium will be reached but there is no guarantee it will be a healthy situation or a smooth ride. This optimism about AI and the rosy world that is just around the corner is incredibly naive.
It's naive, and it also ignores that automation is simply replacing human labor with capital. Capital captures more of the value, and workers get less overall. Unless we end up in some mild socialist utopia where basic needs are provided and corps are all co-ops, but that's not the trend.
There’s no guarantee of an equilibrium!
I would be cautious to avoid any narrative anchoring on “old versus new” professions. I would seek out other ways of thinking about it.
For example, I predict humans will maintain competitive advantage in areas where the human body excels due to its shape, capabilities, or energy efficiency.
In your example I think it's a great deal more likely that the Uber driver is paid a tiny stipend to supervise a squad of gardening androids owned at substantial expense by Amazon Yard.
Why would anyone be in the field? Why not just have a few drones flying there, monitoring the whole operation remotely, and have one person monitor many sites at the same time, likely from the cheapest possible region?
I too believe that a mostly autonomous work world is something we could handle well, assuming the leadership was composed of smart folks making the right decisions without being too exposed to external powers they cannot realistically win against (large companies and interests). The problem is if we mix what could happen (not clear when, right now) with the current weak leadership across the world.
I think UBI can only buy some time but won't solve the problem. We need fast improvement in AI robots that can be used for automation on a mass scale: construction, farming, maybe even cooking and food processing.
Right now AI is mostly focused on automating the top levels of Maslow's hierarchy of needs rather than the bottom physiological needs. Once things like shelter (housing), food, and utilities (electricity, water, internet) are dirt cheap, UBI is less needed.
Far from an expert on this topic, but what differentiates AI from other non physical efficiency tools? (I'm actually asking not contesting).
Won't companies always want to compete with one another, so simply using AI won't be enough. We will always want better and better software, more features, etc. so that race will never end until we get an AI fully capable of managing all parts (100%) of the development process (which we don't seem to be close to yet).
From Excel to AutoCAD, a lot of tools that were expected to decrease the amount of work ended up actually increasing it, thanks to the new capabilities they offered and the constant demand for innovation. I suppose the difference would be whether we think AI will continue to get really good, or whether it will become SO good that it is plug and play and completely replaces people.
> what differentiates AI from other non physical efficiency tools?
At some point: (1) general intelligence; i.e. adaptivity; (2) self replication; (3) self improvement.
We don't have any more idea how to get to 1, 2, or 3, than we did 50 years ago. LLMs are cool, but they seem unlikely to do any of those things.
I encourage everyone to not claim “X seems unlikely” when it comes to high impact risks. Such a thinking pattern often leads to pruning one’s decision tree way too soon. To do well, we need to plan over an uncertain future that has many weird and unfamiliar scenarios.
Yeah I agree, it's not about where it's at now, but whether where we are now leads to something with general intelligence and self improvement ability. I don't quite see that happening with the curve it's on, but again what the heck do I know.
What do you mean about the curve not leading to general intelligence? Even if transformer architectures by themselves don’t get there, there are multifarious other techniques, including hybrids.
As long as (1) there are incentives for controlling ever increasing intelligence; (2) the laws of physics don’t block us; and (3) enough people/orgs have the motivation and means, some people/orgs are going to press forward. This just becomes a matter of time and probability. In general, I do not bet against human ingenuity, but I often bet against human wisdom.
In my view, along with many others, it would be smarter for the whole world to slow down AI capabilities advancement until we could have very high certainty that doing so is worth the risk.
Every software company I've ever worked with has an endless backlog of features it wants/needs to implement. Maybe AI just lets them move through these features more quickly?
I mean, most startups fail. And in software startups, the blame for that is usually at least shared by "the software wasn't good enough". So that $20 million seed investment is still going to go into "software development", i.e. programmer salaries. They will be using the higher-level language of AI much of the time, and be 2-5 times more efficient - but will it be enough? No. Most will still fail.
Companies don’t always compete on capability or quality. Sometimes they compete on efficiency. Or sometimes they carve up the market in different ways.
Sometimes, but with technology related companies I rarely see that. I've really only seen it in industries that are very straightforward, like producing building materials or something. Do you have any examples?
> And, because it is easier to retrain humans than build machines for those jobs, we wound up with more and better jobs.
I think it did not work like that.
Automatic looms displaced large numbers of weavers, skilled professionals, who did not immediately find jobs tending dozens of mechanical looms. (Mr Ludd was one of these displaced professionals.)
Various agricultural machines and chemical products displaced colossal numbers of country people, who had to go to cities looking for industrial jobs; US agriculture used to employ 50% of the workforce in 1880 and only 10% in 1930.
The advent of internet displaced many in the media industry, from high-caliber journalists to those who worked in classified ads newspapers.
All these disruptions created temporary crises, because there was no industry that was ready to immediately employ these people.
Temporary - that's the key. People were able to move to the cities and get factory and office jobs, and over time they were much better off. I can complain about the socially alienated condition I'm in as an office worker, but I would NEVER want to do farm work - cold/sun, aching back, zero benefits, low pay, risk of crop failure, a whole other kind of isolation, etc.
That sounds like a job for a very small number of people. Where will everyone else work?
More companies. See my post here:
https://news.ycombinator.com/reply?id=44919671&goto=item%3Fi...
This is the optimistic take and definitely possible, but not guaranteed or even likely. Markets tend to consolidate into monopolies (or close to it) over time. Unless we are creating new markets at a rapid rate, there isn’t necessarily room for those other 900 engineers to contribute.
8 billion people. only, what, 1 billion are in the middle class? Sounds like we need to be creating new markets at a rapid rate to me!
Wherever the AI tells them to
Why do they have to work?
Because the people with the money aren’t going to just give it to everyone else. We already see the richest people hoard their money and still be unsatisfied with how much they have. We already see productivity gains not transfer any benefit to the majority of people.
There is an old and reliable solution to this problem, the gibbet.
Yes. However people are unwilling to take this approach unless things get really really bad. Even then, the powerful tend to have such strong control that people are afraid to act out of fear of reprisal.
We’ve also been gaslit into believing that it’s not a good approach, that peaceful protests are more civilised (even though they rarely cause anything meaningful to actually change).
Because otherwise you'd have to convince AI-owners and select professionals to let go of their wealth to give a comfortable and fulfilling life of leisure to the unemployed.
More likely it will look like the current welfare schemes of many countries, now add mass boredom leading to unrest.
Sam Altman has expressed a preference for paying people in vouchers for using his chatbots to kill time: https://basicincomecanada.org/openais-sam-altman-has-a-new-i...
> Because otherwise you'd have to convince AI-owners and select professionals to let go of their wealth to give a comfortable and fulfilling life of leisure to the unemployed.
Not necessarily. Such forces could be outvoted or outmaneuvered.
> More likely it will look like the current welfare schemes of many countries...
Maybe, maybe not. It might take the form of UBI or some other form that we haven’t seen in practice.
> now add mass boredom leading to unrest.
So many assumptions. There is no need to just assume any particular distribution of boredom across the future population of the world. Making predictions about social unrest is even more complicated.
Deep Utopia (Bostrom) is an excellent read that extensively discusses various options if things go well.
> mass boredom leading to unrest
we must keep our peasants busy or they unrest due to boredom!
I’m not sure if that’s meant to be reassuring or not.
It’s hard for me to imagine that AI won’t be as good or better than me at most things I do. It’s quite a sobering feeling.
More people need to feel this. Too many people deny even the possibility, not out of logic, but rather out of ignorance or subconscious factors such as fear of irrelevance.
One way to think about AI and jobs is Uber/Google Maps. You used to have to know a lot about a city to be a taxi driver; then suddenly with Google Maps you don't. So in effect, technology lowered the requirements or training needed to become a taxi driver. More people can do it, not less (although incumbents may be unhappy about this).
AI is a lot like this. In coding for instance, you still need to have some sense of good systems design, etc. and know what you want to build in concrete terms, but you don't need to learn the specific syntax of a given language in detail.
Yet if you don't know anything about IT, don't know what you want to build or what you could need, or what's possible, then it's unlikely AI can help you.
Even with Google Maps, we still need human drivers because current AI systems aren’t so great at driving and/or are too expensive to be widely adopted at this point. Once AI figures out driving too, what do we need the drivers for?
And I think that’s the point he was making, it’s hard to imagine any task where humans are still required when AI can do it better and cheaper. So I don’t think the Uber scenario is realistic.
I think the only value humans can provide in that future is “the human factor”: knowing that something is done by an actual human and not a machine can be valuable.
People want to watch humans playing chess, even though AI is better at it. They want to consume art made by humans. They want a human therapist or doctor, even if they heavily rely on AI for the technical stuff. We want the perspective of other humans even if they aren’t as smart as AI. We want someone that “gets” us, that experiences life the same way we do.
In the future, more jobs might revolve around that, and in industries where previously we didn’t even consider it. I think work is going to be mostly about engaging with each other (even more meetings!)
The problem is, in a world that is that increasingly remote, how do you actually know it’s a human on the other end? I think this is something we’ll need to solve, and it’s going to be hard with AI that’s able to imitate humans perfectly.
The spinning jenny put hand spinners out of work. But the history of automation is the history of exponentially expanding the workforce and population.
8 billion people wake up every morning determined to spend the whole day working to improve their lives. we're gonna be ok.
Don't worry about the political leaders, if a sizeable amount of people lose their jobs they will surely ask GPT-10 how to build a guillotine.
The french revolution did not go well for the average french person. Not sure guillotines are the solution we need.
Here in the US, we have been getting a visceral lesson about human willingness to sacrifice your own interests so long as you’re sticking it to The Enemy.
It doesn’t matter if the revolution is bad for the commoners — they will support it anyway if the aristocracy is hateful enough.
How did it not go well for the avg person?
The status quo does not go well for the avg person.
Most of the people who died in The Terror were commoners who had merely not been sympathetic enough to the revolution. And then that sloppiness led to reactionary violence, and there was a lot of back and forth until Napoleon took power and was pretty much a king in all but heritage.
Hopefully we can be a bit more precise this time around.
You should read French history more closely, they went through hell and changed governments at least 5 or 6 times in the 1800s.
> How did it not go well for the avg person?
You might want to look at the etymology of the word “terrorism” (despite the most popular current use, it wasn't coined for non-state violence) and what class suffered the most in terms of both judicial and non-judicial violent deaths during the revolutionary period.
The French revolution was instigated by a group of shady people, far more dangerous and vile than the aristocracy they were fighting.
We have to also choose to build technology that empowers people. Empowering technologies don't just pop into existence, they're created by people who care about empowering people.
Yeah, but the opening of new kinds of jobs has not always been instant. It can take decades and, for instance, was one of the reasons for the French Revolution. The internet has already created a huge amount of monopolies and wealth concentration. AI seems likely to take this further.
> what do displaced humans transition to?
Go to any war-torn country or collapsed empire (Soviet). I have seen it, and grew up myself in both - you get desperation, people giving up, alcohol (the famous "X"-cross of birth rates dropping and deaths rising), drugs, crime, corruption/warlording. Rural communities are hit first and totally vanish, then small-tier cities vanish, then mid-tier; only the largest hubs remain. Loss of science, culture, and education. People are just gone. Only the ruins of whatever latest shelters they had remain, not even their prime-time architecture. You can drive hundreds or thousands of kms across these ruins of what was once a flourishing culture. Years ago you would find one old person still living there. These days not a single human is left. This is what is coming.
that was because the economy was controlled/corrupt and not allowed to flourish (and create job-creating technologies like the internet and AI).
I believe that historically we have solved this problem by creating gigantic armies and then killing off millions of people that couldn't really adapt to the new order with a world war.
>But as its capabilities improve, what do displaced humans transition to?
IF there is intellectual/office work that remains complex enough not to be tackled by AI, we compete for that. Manual labor takes the rest.
Perhaps that's the shift we'll see: nowadays the guy piling up bricks makes a tenth of the architect's salary; that relation might invert.
And the indirect effects of a society that values intellectual work less are really scary if you start to explore the chain of cause and effect.
Have you noticed that there are a lot of companies now that are trying to build advanced AI-driven robots? This is not a coincidence.
The relation won't invert, because it's very easy and quick to train the guy piling up bricks, while training an architect is slow and hard. If low-skilled jobs pay much better than high-skilled ones, people will just change jobs.
That’s only true as long as the technical difficulties aren’t covered by tech.
Think of a world where software engineering itself is handled relatively well by the LLM and the job of the engineer becomes just collecting business requirements and checking they're correctly addressed.
In that world the limit on scarcity might lie less in the difficulty of training and more in the willingness to bend your back in the sun for hours vs comfortably writing prompts in an air-conditioned room.
Right now there are enough people willing to bend their backs in the sun for hours that their salaries are much lower than those of engineers. Do you think that for some reason the supply of these people will drop, given higher wages and much lower employment opportunities in office jobs? I highly doubt it.
My argument is not that those people's salaries will go up until they overtake the engineers'.
It's the opposite: the value of office/intellectual work will tank, while manual work remains stable. A lower barrier of entry for intellectual work, if a position even needs to be covered, and working conditions that are much more comfortable.
> displace humans ...
AI can displace human work but not human accountability. It has no skin and faces no consequences.
> can be trained to the new job opportunities more easily ...
Are we talking about AI that always needs trainers to fix their prompts and training sets? How are we going to train AI when we lose those skills and get rid of humans?
> what do displaced humans transition to?
Humans with all powerful AI in their pockets... what could they do if they lose their jobs?
> AI can displace human work but not human accountability. It has no skin and faces no consequences.
We've got a way to go to get there in many instances. So far I've seen people blame AI companies for model output, individuals for not knowing the product sold to them as a magic answer-giving machine was wrong, and other authorities in those situations (e.g. managers, parents, school administrators and teachers) for letting AI be used at all. From my vantage point, people seem to be using it as a tool to insulate themselves from accountability.
> AI can displace human work but not human accountability. It has no skin and faces no consequences.
Let's assume that we have amazing AI and robotics, better than humans at everything. If you could choose between robosurgery (completely automatic) with 1% mortality for $5,000 vs surgery performed by a human with 10% mortality and a $50,000 price tag, would you really choose the human just because you can sue him? I wouldn't. I don't think anyone thinking rationally would.
> ask that question to all the companies laying off junior folks in favor of LLMs right now. They are gleefully sawing off the branch they’re sitting on.
> Humans with all powerful AI in their pockets... what could they do if they lose their jobs?
At which point did AI become a free commodity in your scenario?
Is the ability to burn someone at a stake for making a mistake truly vital to you?
If not, then what is the advantage of "having skin"? Sure, you can't flog an AI. But AI doesn't need to be threatened with flogging to perform at the peak of its abilities. A well designed AI performs at the peak of its abilities always - and if that isn't enough, you train it until it is.
For the moment, perhaps it could be jobs that LLMs can’t be trained on. New jobs, niche jobs, secret or undocumented jobs…
It’s a common point now that LLMs don’t seem to be able to apply knowledge about one thing to how a different, unfamiliar thing works. Maybe that will wind up being our edge, for a time.
> we wound up with more and better jobs.
You will have to back that statement up because this is not at all obvious to me.
If I look at the top US employers in, say, 1970 vs 2020: the companies that dominate 1970 were noted for having hard blue-collar labor jobs but paid enough to keep a single-earner family significantly above minimum wage and the poverty line. The companies that dominate in 2020 are noted for being some of the shittiest employers, with some of the lowest pay, fairly close to minimum wage, and the absolute worst working conditions.
Sure, you tend not to get horribly maimed in 2020 vs 1970. That's about the only improvement.
This was already a problem back then, Nixon was about to introduce UBI in the late 60s and then the admin decided that having people work pointless jobs keeps them better occupied, and the rest of the world followed suit.
There will be new jobs and they will be completely meaningless busywork, people performing nothing of substance while being compensated for it. It's our way of doing UBI and we've been doing it for 50 years already.
Obligatory https://wtfhappenedin1971.com
> what do displaced humans transition to?
We assume there must be something to transition to. Very well, there can be nothing.
We assume people will transition. Very well, they may not transition at all and may "disappear" en masse (the same effect as a war or an empire collapse).
During the Industrial Revolution, many who made a living by the work of their hands lost their jobs, because there were machines and factories to do their work. Then new jobs were created in factories, and then many of those jobs were replaced by robots.
Somehow many idiotic white collar jobs have been created over the years. How many web applications and websites are actually needed? When I was growing up, the primary sources of knowledge were teachers, encyclopedias, and dictionaries, and those covered a lot. For the most part, we’ve been inventing problems to solve and wasting a tremendous amount of resources.
Some wrote malware or hacked something in an attempt to keep this in check, but harming and destroying just means more resources are used to repair and rebuild, and real people can be hurt.
At some point in coming years many white collar workers will lose their jobs again, and there will be too many unemployed because not enough blue collar jobs will be available.
There won’t be some big wealth redistribution until AI convinces people to do that.
The only answer is to create more nonsense jobs, like AI massage therapist and robot dog walker.
I don't know maybe they can grow trees and build houses.
The robots? I see this happening soon, especially for home construction.
How exactly?
In the U.S. houses are built out of wood. What robot will do that kind of work?
It’s probably the only technology that is designed to replace humans as its primary goal. It’s the VC dream.
I do wonder if the amount they're spending on it is going to be cost effective versus letting humans continue doing the work.
It is for some shareholders, as long as the hype and stocks go up.
Here is another perspective:
> In every technology wave so far, we've disrupted many existing jobs. However we've also opened up new kinds of jobs
That may well be why these technologies were ultimately successful. Think of millions and millions being cast out.
They won't just go away. And they will probably not go down without a fight. "Don't buy AI-made, brother!", "Burn those effing machines!" It's far from unheard of in history.
Also: who will buy if no one has money anymore? What will the state do when tax income goes down while social welfare and policing costs go up?
There are other scenarios, too: everybody gets most stuff for free, because machines and AI's do most of the work. Working communism for the lower classes, while the super rich stay super rich (like in real existing socialism). I don't think it is a good scenario either. In the long run it will make humanity lazy and dumb.
In any case I think what might happen is not easy to guess, so many variables and nth-order effects. When large systems must seek a new equilibrium all bets are usually off.
It makes me wonder if we will be much more reserved with our thoughts and teachings in the future given how quickly they will be used against us.
As someone else said, until a company or individual is willing to risk their reputation on the accuracy of AI (beyond basic summarising jobs, etc), the intelligent monkeys are here for a good while longer. I've already been once bitten, twice shy.
The conclusion, sadly, is that CEOs will pause hiring and squeeze more productivity out of existing hires. This will impact junior roles the most.
Haven't you seen companies developing autonomous killing drones?
They won't take my job - unless someone has put a hit out on me.
We also may not need to worry about it for a long time. I'm more and more falling on this side. LLMs are hitting diminishing returns, so until there's a new innovation (I can't see any on the horizon yet) I'm not concerned for my career.
I'm skeptical of arguments like this. If we look at most impactful technologies since the year 2000, AI is not even in my top 3. Social networking, mobile computing, and cloud computing have all done more to alter society and daily life than has AI.
And yes, I recognize that AI has already created profound change, in that every software engineer now depends heavily on copilots, in that education faces a major integrity challenge, and in that search has been completely changed. I just don't think those changes are on the same level as the normalization of cutting-edge computers in everyone's pockets, as our personal relationships becoming increasingly online, nor as the enablement for startups to scale without having to maintain physical compute infrastructure.
To me, the treating of AI as "different" is still unsubstantiated. Could we get there? Absolutely. We just haven't yet. But some people start to talk about it almost in a way that's reminiscent of Pascal's Wager, as if the slight chance of a godly reward from producing AI means it is rational to devote our all to it. But I'm still holding my breath.
> in that every software engineer now depends heavily on copilots
That is maybe a bubble around the internet. In my experience, most programmers in my environment rarely use them and certainly aren't dependent on them. They also don't only do code-monkey-esque web programming, so maybe this is sampling bias, though it should be enough to refute the point.
Came here to say that. It's important to remember how biased Hacker News is in that regard. I've just come out of ten years in the safety-critical market, and I can assure you that our clients are still a long way from being able to use those. I myself work in low level/runtime/compilers, and the output from AIs is often too erratic to be useful.
>our clients are still a long way from being able to use those
So it's simply a matter of time
>often too erratic to be useful
So sometimes it is useful.
Add LED lighting on there. It is easy to forget what a difference that made. The light pollution, but also just how dim houses were. CFL didn't last very long as a thing between incandescent and LED and houses lit with incandescents have a totally different feel.
AI has already rendered academic take-home assignments moot. No other tech has had an impact like that, even the internet.
A pessimistic/realistic view of post-high-school education: credentials are proof of being able to do a certain amount of hard work, used as an easy filter by companies while hiring.
I expect universities to adapt quickly, lest they lose their whole business as degrees stop carrying the same meaning to employers.
Maybe universities can go back to being temples of learning instead of credential mills.
I can dream, can't I?
> AI has already rendered academic take-home assignments moot
Not really, there are plenty of things that LLMs cannot do that a professor could make his students do. It is just an asymmetric attack on the professor's (or whoever is grading) time to do that.
IMO, credentials shouldn't be given to those who test or submit assignments without proctoring (a lot of schools allow this).
> there are plenty of things that LLMs cannot do that a professor could make his students do.
Name three?
1. Make the student(s) randomly have to present their results on a weekly basis. If you get caught cheating at this point, at least in my uni with a zero-tolerance policy, you instantly fail the course.
2. Make take-home work only a requirement to be able to participate in the final exam. This effectively means cheating on it will only hinder you and not affect your grading directly.
3. Make take-home work optional and completely detached from grading. Put everything into the final exam.
My uni does a mix of these on different courses. Two and three especially, though, have a significant negative impact on passing rates, as they tend to push everything onto one single exam instead of spreading the work out over the semester.
> Not really, there are plenty of things that LLMs cannot do that a professor could make his students do.
Could you offer some examples? I'm having a hard time thinking of what could be at the intersection of "hard enough for SotA LLMs" yet "easy enough for students (who are still learning, not experts in their fields, etc)".
What? The internet did that ages ago. We just pretended it didn't because some students didn't know how to use Google.
Everyone knows how to use Google. There's a difference between a corpus of data available online and an intelligent chatbot that can answer any permutation of questions with high accuracy with no manual searching or effort.
> Everyone knows how to use Google.
Everyone knows how to type questions into a chat box, yet whenever something doesn’t work as advertised with the LLMs, the response here is, “you’re holding it wrong”.
Do you really think the jump from books to freely globally accessible data instantly available is a smaller jump than internet to ChatGPT? This is insane!!
It's not just smaller, but negligible (in comparison).
In the internet era you had to parse the questions with your own brain. You just didn't necessarily need to solve them yourself.
In the ChatGPT era you don't even need to read the questions. At all. The questions could be written in a language you don't understand, and you would still be able to generate plausible answers to them.
To a person from the 1920s, which one is more impressive? The internet or ChatGPT?
Obviously ChatGPT. I don't know how it is even a question... if you had shown GPT-3.5 to people from before the 20th century, there would have been a worldwide religion around it.
Interesting perspective.
You are mistaken, Google could not write a bespoke English essay for you. Complete with intentional mistakes to throw off the professor.
In English class we did a lot of book reading and wrote texts about those books. SparkNotes and similar sites allowed you to skip the reading and get a distilled understanding of the contents, similar to interacting with an LLM.
disagree? I had to write essays in high school. I don't think the kids now need to if they don't want to.
On current societal impact it might be close to the other three. But do you not think it is different in nature to other technological innovations?
> in that every software engineer now depends heavily on copilots
With many engineers using copilots, and since LLMs output the most frequent patterns, it's possible that more and more software is going to look the same, which would further reinforce the same patterns.
For example, the emdash thing requires additional prompts and instructions to override. Doing anything unusual will require more effort.
LLMs with instruction following have been around for 3 years. Your comment gives me "electricity and gas engines will never replace the horse" vibes.
Everyone agrees AI has not radically transformed the world yet. The question is whether we should prepare for the profound impacts current technology pretty clearly presages, if not within 5 years then certainly within 10 or 25 years.
> The future may reduce the economic prosperity and push humanity to switch to some different economic system (maybe a better system). Markets don’t want to accept that. [Emphasis added]
What a silly premise. Markets don't care. All markets do is express the collective opinion; in the short term as a voting machine, in the long term as a weighing machine.
Seeing a real uptick of socio-political prognostication from extremely smart, soaked-in-AI tech people (like you, Salvatore!), casting heavy doom-laden gestures towards the future. You're not even wrong! But this "I see something you all clearly don't" narrative, wafer-thin on real analysis, packed with "the feels", coated with what-ifs... it's sloppy thinking, and I hold you to a higher standard, antirez.
>> Markets don’t want to accept that.
> What a silly premise. Markets don't care.
You read the top sentence way too literally. In context, it has a meaning — which can be explored (and maybe found) with charity and curiosity.
Voting, weighing, … trading machine ? You can hear or touch or weigh colors.
> All markets do is express the collective opinion; in the short term as a voting machine, in the long term as a weighing machine.
I prefer the concepts and rigor from political economy: markets are both preference aggregators and coordination mechanisms.
Does your framing (voting machines and weighing machines) offer more clarity and if so, how? I’m not seeing it.
His framing is that markets are collective consensus and if you claim to “know better”, you need to write a lot more than a generic post. It’s so simple, and it is a reminder that antirez’s reputation as a software developer does not automatically translate to economics expert.
I think you are mixed up here. I quoted from the comment above mine, which was harshly and uncharitably critical of antirez’s blog post.
I was pushing back against that comment's sneering smugness by pointing to an established field that uses clear terminology about how and why markets are useful. Even so, I invited an explanation in case I was missing something.
Anyone curious about the terms I used can quickly find explanations online, etc.
Yes, but can the market not be wrong? Wrong in the sense of failing to meet our expectations as a useful engine of society? As I understood it, what was meant in this article is that AI changes the equations across the board so completely that the current market direction appears dangerously irrational to the OP. I'm not sure what was meant by your comment, though, besides haggling over semantics and attacking a perceived lack of expertise in the author's socio-political philosophizing.
Of course it can be wrong, and it is in many instances. It's a religion. The vast, vast majority of us would prefer to live in a stable climate with unpolluted water and some fish left in the oceans, yet "the market" is leading us elsewhere.
> “… as a voting… as a weighing…” I’m sure I remember that as a graham, munger, or buffet quote.
> “not even wrong” - nice, one of my favorites from Pauli.
Definitely Benjamin Graham, though Buffett (two T's) brought it back
I like to point out that ASI will allow us to do superhuman stuff that was previously beyond all human capability.
For example, one of the tasks we could put ASI to work on is designing implants that would go into the legs, powered by light or electric induction, that would use ASI-designed protein metabolic chains to electrically transform carbon dioxide into oxygen and ADP into ATP, so as to power humans with pure electricity. We are very energy efficient: we use about 3 kilowatt-hours of energy a day, so we could use this sort of technology to live in space pretty effortlessly. Your Space RV would not need a bathroom or a kitchen. You'd just live in a static nitrogen atmosphere and the whole thing could be powered by solar panels, or a small modular nuke reactor. I call this "The Electrobiological Age" and it will unlock whole new worlds for humanity.
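(The ~3 kWh/day figure roughly checks out as a back-of-the-envelope number, assuming a typical ~2,500 kcal daily diet:)

  # rough sanity check of the daily human energy budget (assumed 2,500 kcal diet)
  kcal_per_day = 2500
  joules = kcal_per_day * 4184      # 1 kcal = 4184 J
  kwh = joules / 3.6e6              # 1 kWh = 3.6 MJ
  watts = joules / 86400            # spread over 24 hours
  print(round(kwh, 1), round(watts))  # prints 2.9 and 121: ~3 kWh/day, roughly a 120 W average draw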
It feels like it’s been a really long time since humans invented anything just by thinking about it. At this stage we mostly progress by cycling between ideas and practical experiments. The experiments are needed not because we’re not smart enough to reason correctly with data we have, but because we lack data to reason about. I don’t see how more intelligent AI will tighten that loop significantly.
I actually find it hard to understand how the market is supposed to react if AI capabilities do surpass all humans in all domains. First of all, it's not clear such a scenario leads to runaway wealth for a few, even though, absent outside events, that may be the outcome. Such scenarios are so unsustainable and catastrophic that it's hard to imagine there would be no catastrophic reactions to them. How is the market supposed to react if there's a large chance of market collapse and also a large chance of runaway wealth creation? Besides, in an economy where AI surpasses humans, the demands of the market will shift drastically too. I also think this is underrepresented in predictions: the induced demand from AI-replaced labor, and the potential for entire industries to be decimated by secondary effects instead of direct AI competition or replacement of labor.
Agreed, if the author truly thinks the markets are wrong about AI, he should at least let us know what kind of bets he’s making to profit from it. Otherwise the article is just handwaving.
For me it maps elegantly on previous happenings.
When the radio came people almost instantly stopped singing and playing instruments. Many might not be aware of it but for thousands of years singing was a normal expression of a good mood and learning to play an instrument was a gateway to lifting the mood. Dancing is still in working order but it lacks the emotional depth that provided a window into the soul of those you live and work with.
A simpler example is the calculator. People stopped doing arithmetic by hand and forgot how.
Most desk work is going to get obliterated. We are going to forget how.
The underlings on the work floor currently know little to nothing about management. If they can query an AI in private it will point out why their idea is stupid or it will refine it into something sensible enough to try. Eventually you say the magic words and the code to make it so happens. If it works you put it live. No real thinking required.
Early on you probably get large AI cleanup crews to fix the hallucinations (with better prompts)
Humans sing. I sing every day, and I don't have any social or financial incentives driving me to do so. I also listen to the radio and other media, still singing.
Do others sing along? Do they sing the songs you've written? I think we lost a lot there. I can't even begin to imagine it. Thankfully singing happy birthday is mandatory - the fight isn't over!
People also still have conversations despite phones. Some even talk all night at the kitchen table. Not everyone, most don't remember how.
> for thousands of years singing was a normal expression of a good mood
Back in the day singing was what everybody did to pass the time. (Especially in boring and monotonous situations.)
When I hear folks glazing some kind of impending jobless utopia, I think of the intervening years. I shudder. As they say, "An empty stomach knows no morality."
This pisses me off so much.
So many engineers are so excited to work on and with these systems, opening 20 PRs per day to make their employers happy, going "yes boss!"
They think their $300k total compensation will give them a seat at the table for what they’re cheering on to come.
I say that anyone who needed to go to the grocery store this week will not be spared by the economic downturn this tech promises.
Unless you have your own fully stocked private bunker with security detail, you will be affected.
Big fan of your argument and don't disagree.
If AI makes a virus to get rid of humanity, well we are screwed. But if all we have to fear from AI is unprecedented economic disruption, I will point out that some parts of the world may survive relatively unscathed. Let's talk Samoa, for example. There, people will continue fishing and living their day-to-day. If industrialized economies collapse, Samoans may find it very hard to import certain products, even vital ones, and that can cause some issues, but not necessarily civil unrest and instability.
In fact, if all we have to fear from AI is unprecedented economic disruption, humans can have a huge revolt, and then a post-revolt world may be fine by turning back the clock, with some help from anti-progress think tanks. I explore that argument in more detail in this book: https://www.smashwords.com/books/view/1742992
The issue is there isn't enough of those small environmental economies to support everyone that exists today without the technology, logistics and trades that are in place today.
You can farm and fish the entire undeveloped areas of NYC, but it won't be enough to feed or support the humans that live there.
You can say that for any metro area. Density will have to reduce immediately if there is economic collapse, and historically, when disaster strikes, that doesn't tend to happen immediately.
Also humans (especially large groups of them) need more than food: shelter, clothes, medicine, entertainment, education, religion, justice, law, etc.
> The issue is there isn't enough of those small environmental economies to support everyone that exists today without the technology, logistics and trades that are in place today.
I agree. I expect some parts of the world will see some black days. Lots of infrastructure will be gone or unsuited to people. On top of that, the cultural damage could become very debilitating, with people not knowing how to do X, Y and Z without the AIs. At least for a time. Casualties may mount.
> Also humans (especially large groups of them) need more than food: shelter, clothes, medicine, entertainment, education, religion, justice, law, etc.
This is true, but parts of the world survive today with very little of any of that. And for some of those things that you mention: shelter, education, religion, justice, and even some form of law enforcement, all that is needed is humans willing to work together.
> all that is needed is humans willing to work together
Maybe, but those things are also needed to enable humans to work together
Won't 8 billion people have an incentive to move to Samoa in that case?
Realistically, in an extreme AI economic disruption scenario, it's more or less only the USA that is extremely affected, and that's 400 million people. Assuming it's AI and not something else that causes a big disruption first, and with the big caveat that nobody can predict the future, I would say:
- Mexico and further south are more into informal economies, and they generally lag behind developed economies by decades. The same applies to Africa and big parts of Asia. As such, by the time things get really dire in the USA, and maybe in Europe and China, the south will still be doing business as usual.
- Europe has lots of parliaments and already has legislation that takes AI into account. Still, there's a chance those bodies will fail to moderate the impact of AI on the economy and violent corrections will be needed, but people in Europe have long traditions and long memories... They'll find a way.
- China is governed by the communist party, and Russia has its king. It's hard to predict how those will align with AI, but that alignment will more or less be the deciding factor there, not free capitalism.
> Unless you have your own fully stocked private bunker with security detail, you will be affected.
If society collapses, there’s nothing to stop your security detail from killing you and taking the bunker for themselves.
I’d expect warlords to rise up from the ranks of military and police forces in a post collapse feudal society. Tech billionaires wouldn’t last long.
The same argument could be made for actual engineers working on steam engines, nuclear power, or semiconductors.
Make of that what you will.
More like engineers coming up with higher-level programming languages. No one (well, nearly no one) hand-writes assembly anymore. But there are still plenty of jobs. It's just that the majority write in higher-level but still expressive languages.
For some reason everyone thinks that as LLMs get better, programmers go away. The programming language, and the amount you can build per day, are changing. That's pretty much it.
I’m not worried about software engineering (only or directly).
Artists, writers, actors, teachers. Plus the rest where I’m not remotely creative enough to imagine will be affected. Hundreds of thousands if not millions flooding the smaller and smaller markets left untouched.
Artists: photography. Yet we still value art in pre-photography media
Writers: film, tv. Yet we all still read books
Play actors: again, film and tv. Yet we still go to plays, musicals etc
Teachers: the internet, software, video etc. Yet teachers are still essential (though they need to be paid more)
Jobs won't go away, they will change.
I’m not sure I see how: none of those technologies had the stated goal of replacing their creators.
Here's the thing, I tend to believe that sufficiently intelligent and original people will always have something to offer others; it's irrelevant whether you imagine the others as the current consumer public, our corporate overlords, or the AI owners of the future.
There may be people who have nothing to offer others, once technology advances, but I don't think that anyone currently in a top-percentile role would find themselves there.
Yes. The complete irony in all software engineers' enthusiasm for this tech is that, if the boards' wishes come true, they are literally helping them eliminate their own jobs. It's like the industrial revolution but worse, because at least the craftsmen weren't also the ones building the factories that would automate them out of work.
Marcuse had a term for this, "false consciousness": when the structure of capitalism ends up making people work against their own interests without realizing it, and that is happening big time in software right now. We will still need programmers for hard, novel problems, but all these lazy programmers using AI to write their CRUD apps don't seem to realize the writing is on the wall.
Or they realize it and they're trying to squeeze the last bit of juice available to them before the party stops. It's not exactly a suboptimal decision to work towards your own job's demise if it's the best paying work available to you and you want to save up as much as possible before any possible disruption. If you quit, someone else steps into the breach and the outcome is all the same. There are very few people actually steering the ship who have any semblance of control; the rest of us are just along for the ride and hoping we don't go down with the ship.
Yeah I get that. I myself am part of a team at work building an AI/LLM-based feature.
I always dreaded this would come but it was inevitable.
I can’t outright quit, thanks in part to the AI hype that stopped headcount from being valued as a signal of company growth. If that isn’t ironic I don’t know what is.
Given the situation I am in, I just keep my head down and do the work. I vent and whinge and moan whenever I can, it’s the least I can do. I refuse to cheer it on at work. At the very least I can look my kids in the eye when they are old enough to ask me what the fuck happened and tell them I did not cheer it on.
Realistically, a white collar job market collapse will not directly lead to starvation. The world is not 1930s America ethically. Governments will intervene, not necessarily to the point of fairness, but they will restructure the economy enough to provide a baseline. The question will be how to solve the biblical level of luxury wealth inequality without civil unrest causing us all to starve.
There is no jobless utopia, even if everyone is paid and well-off with high living standards. A world where everyone is retired and pursuing their own interests is not a world in which humans can thrive.
Jobless means you don't need a job. But you'd make a job for yourself. Companies will offer interesting missions instead of money. And by mission I mean real missions like space travel.
A jobless utopia doesn't even come close to passing a smell test economically, historically, or anthropologically.
As evidence of another possibility, in the US, we are as rich as any polis has ever been, yet we barely have systems that support people who are disabled through no fault of their own. We let people die all the time because they cannot afford to continue to live.
You think anyone in power is going to let you suck their tit just because you live in the same geographic area? They don't even pay equal taxes in the US today.
Try living in another world for a bit: go to jail, go to a half way house, live on the streets. Hard mode: do it in a country that isn't developed.
Ask anyone who has done any of those things whether they believe in a "jobless utopia."
Euphoric social capitalists living in a very successful system shouldn't be relied upon for scrying the future for others.
This is an accurate assessment. I do feel that there is a routine bias on HN to underplay AI. I think it's people not wanting to lose control or relative status in the world.
AI is an existential threat to the unique utility of humans, which has been the last line of defense against absolute despotism (i.e. a tyrannical government will not kill all its citizens because it still needs them to perform jobs. If humans aren't needed to sustain productivity, humans have no leverage against things becoming significantly worse for them, gradually or all at once).
> I do feel that there is a routine bias on HN to underplay AI
It's always interesting to see this take because my perception is the exact opposite. I don't think there's ever been an issue for me personally with a bigger mismatch in perceptions than AI. It sometimes feels like the various sides live in different realities.
It's a Rorschach test isn't it.
Because the technology itself is so young and so nebulous everyone is able to unfalsifiably project their own hopes or fears onto it.
Any big AI release, some of the top comments are usually claiming either the tech itself is bad, relaying a specific anecdote about some AI model messing up or some study where AI isn't good, or claiming that AI is a huge bubble that will inevitably crash. I've seen the most emphatic denials of the utility of AI here go much farther than anywhere else where criticism of AI is mild skepticism. Among many people it is a matter of tribal warfare that AI=bad.
Coping mechanisms. AI must be overhyped and useless and will never improve, because the alternative is terrifying.
I think AI is still in the weird twilight zone it was in when it first came out, in that it's great sometimes and also terrible. I still find hallucinations when I check a ChatGPT response against Google.
On the one hand, what it says can't be trusted, on the other, I have debugged code I have written where I was unable to find the bug myself, and ChatGPT found it.
I also think a reason AIs are popular and the companies haven't gone under is that probably hundreds of thousands if not millions of people are getting responses that have hallucinations, but the user doesn't know it. I fell into this trap myself after ChatGPT first came out. I became addicted to asking anything, and it seemed like it was right. It wasn't until later that I started realizing it was hallucinating information. How prevalent this phenomenon is is hard to say, but I still think it's pernicious.
But as I said before, there are still use cases for AI and that's what makes judging it so difficult.
Instead of needing 1000 engineers to build a new product, you'll need 100 now. Those 900 engineers will be working for 9 new companies that weren't viable before because the cost was too big but are now viable. I.e., those 9 new companies could never be profitable if they required 1000 engineers each, but can totally sustain themselves with 100 engineers each.
We aren't even close to that yet. The argument is an appeal to novelty, fallacy of progress, linear thinking, etc.
LLMs aren't solving NLU. They are mimicking a solution. They definitely aren't solving artificial general intelligence.
They are good language generators, okay search engines, and good pattern matchers (enabled by previous art).
Language by itself isn't intelligence. However, plenty of language exists that can be analyzed and reconstructed in patterns to mimic intelligence (utilizing the original agents' own intelligence (centuries of human authors) and the filter agents' own intelligence (decades of human sentiment on good vs bad takes)).
Multimodality only takes you so far, and you need a lot of "modes" to disguise your pattern matcher as an intelligent agent.
But be impressed! Let the people getting rich off of you being impressed massage you into believing the future holds things it may not.
Maybe, or 300 of those engineers will be working for 3 new companies while the other 600 struggle to find gainful employment, even after taking large pay cuts, as their skillsets are replaced rather than augmented. It’s way too early to call afaict
Because it's so easy to make new software and sell it using AI, 6 of those 600 people who are unemployed will have ideas that require 100 engineers each to make. They will build a prototype, get funding, and hire 99 engineers each.
There are also plenty of ideas that aren't profitable with 2 salaries but are with 1. Many will be able to make those ideas happen with AI helping.
It'll be easy to make new software. I don't know if it's going to be easy to sell it.
The more software AI can write, the more of a commodity software will become, and the harder the value of software will tank. It's not magic.
Today, a car repairshop might have a need for a custom software that will make their operations 20% more efficient. But they don't have nearly enough money to hire a software engineer to build it for them. With AI, it might be worth it for an engineer to actually do it.
Plenty of little examples like that where people/businesses have custom needs for software but the value isn't high enough.
I wouldn't trust a taxi driver's predictions about the future of economics and society, why would I trust some database developer's? Actually, I take that back. I might trust the taxi driver.
The point is that you don't have to "trust" me, you need to argue with me; we need to discuss the future. This way, we can form ideas that we can use to understand whether a given politician or another will be right, when we are called to vote. We can also form stronger ideas to try to influence other people who right now have a vague understanding of what AI is and what it could be. We will be the ones that will vote and choose our future.
I am just not having this experience of AI being terribly useful. I don’t program as much in my role but I’ve found it’s a giant time sink. I recognize that many people are finding it incredibly helpful but when I get deeper into a particular issue or topic, it falls very flat.
This is my view on it too. Antirez is a Torvalds-level legend as far as I'm concerned, when he speaks I listen - but he is clearly seeing something here that I am not. I can't help but feel like there is an information asymmetry problem more generally here, which I guess is the point of this piece, but I also don't think that's substantially different to any other hype cycle - "What do they know that I don't?" Usually nothing.
The argument goes like this:
- Today, AI is not incredibly useful and we are not 100% sure that it will improve forever, especially in a way that makes economic sense, but
- Investors are pouring lots of money into it. One should not assume that those investors are not making their due diligence. They are. The figures they have obtained from experts mean that AI is expected to continue improving in the short and medium term.
- Investors are not counting on people using AI to go to Mars. They are betting on AI replacing labor. The slice of the pie that is currently captured by labor, will be captured by capital instead. That's why they are pouring the money with such enthusiasm [^1].
The above is nothing new; it has been constantly happening since the Industrial Revolution. What is new is that AI has the potential to replace all of the remaining economic worth of humans, effectively leaving them out of the economy. Humans can still opt to "forcefully" participate in the economy or its rewards; though it's unclear if we will manage. In terms of pure economic incentives though, humans are destined to become redundant.
[^1]: That doesn't mean all the jobs will go away overnight, or that there won't be new jobs in the short and medium term.
Investors are frequently wrong. They aren't getting their numbers from experts, they are getting them from somebody trying to sell them something.
Is it true that current LLMs can find bugs in complex codebases? I mean, they can also find bugs in otherwise perfectly working code
Reading smart software people talk about AI in 2025 is basically just reading variations on the lump of labor fallacy.
If you want to understand what AI can do, listen to computer scientists. If you want to understand its likely impact on society, listen to economists.
100%. Just because someone understands how a NN works does not mean they understand the impact it has on the economy, society, etc.
They could of course be right. But they don't have any more insight than any other average smart person does.
The “I think I understand a field because I think I understand the software for that field,” thing is a perennial problem in the tech world.
Here's a thoughtful post related to your lump of labor point: https://www.lesswrong.com/posts/TkWCKzWjcbfGzdNK5/applying-t...
What economists have taken seriously the premise that AI will be able to do any job a human can more efficiently and fully thought through its implications? i.e. a society where (human) labor is unnecessary to create goods/provide services and only capital and natural resources are required. The capabilities that some computer scientists think AI will soon have would imply that. The ones that have seriously considered it that I know of are Hanson and Cowen; it definitely feels understudied.
If it is decades or centuries off, is it really understudied? LLMs are so far from "AI will be able to do any job a human can more efficiently and fully" that we aren't even in the same galaxy.
Decades isn't a long time.
How does "lump of labor fallacy" fare when there is no job remaining that a human can do better or cheaper than a machine?
The list of advantages human labor holds over machines is both finite and rapidly diminishing.
> no job remaining that a human can do better or cheaper than a machine
this is the lump of labor fallacy. jobs machines do produce commodities. commodities don't have much value. humans crave value - it's a core component of our psyche. therefore new things will be desired, expensive things... and only humans can create expensive things, since robots don't get salaries
The title - "AI is different" - and this line:
""" Yet the economic markets are reacting as if they were governed by stochastic parrots. Their pattern matching wants that previous technologies booms created more business opportunities, so investors are polarized to think the same will happen with AI. """
Are a direct argument against your point.
If people were completely unaware of the lump of labor fallacy, I'd understand your comment. It would be adding extra information into the conversation. But this is not it. The "lump of labor fallacy" is not a physical law. If someone is literally arguing that it doesn't apply in this case, you can't just parrot it back and leave. That's not a counter argument.
What or whose writing or podcasts would you recommend reading / listening?
Tyler Cowen has a lot of interesting things to say on the impact of AI on the economy. His recent talk at DeepMind is a good place to start https://www.aipolicyperspectives.com/p/a-discussion-with-tyl...
I am a relentlessly optimistic person and this is the first technology that I've seen that worries me in the decades I've been in the biz.
It's a wonderful breakthrough, nearly indistinguishable from magic, but we're going to have to figure something out – whether that's Universal Basic Income (UBI) or something along those lines, otherwise, the loss of jobs that is coming will lead to societal unrest or worse.
Could, if, and maybe.
When we discuss how LLMs failed or succeeded, as a norm, we should start including:
- the language/framework
- the task
- our experience level (highly familiar, moderately familiar, I think I suck, unfamiliar)
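To make that concrete, here is a minimal sketch of what such a report could look like, written as a Python dataclass purely for illustration; every field name and value below is hypothetical, not a schema anyone in this thread has actually proposed:

    from dataclasses import dataclass

    @dataclass
    class LLMReport:
        model: str               # which model/product was used
        language_framework: str  # e.g. "Python / Flask"
        task: str                # what the model was asked to do
        experience: str          # "highly familiar" | "moderately familiar" | "I think I suck" | "unfamiliar"
        outcome: str             # what actually happened, good or bad

    report = LLMReport(
        model="some frontier model",
        language_framework="Python / Flask",
        task="add pagination to an admin endpoint",
        experience="highly familiar",
        outcome="worked after two correction prompts",
    )
    print(report)

Even something this small would let readers separate "it failed on an unfamiliar stack" from "it failed on my bread-and-butter work", which is exactly the distinction that keeps getting lost.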
Right now, we hear both that Claude is magic and that LLMs are useless, but never how we move between these two states.
This level of uncertainty, when economy-making quantities of wealth are being moved, is “unhelpful”.
> Nor is it possible to imagine a system where a few mega companies are the only providers of intelligence
Why not? This seems to be exactly where we're headed right now, and the current administration seems to be perfectly fine with that trend.
If you follow the current logic of AI proponents, you get essentially:
(1) Almost all white-collar jobs will be done better or at least faster by AI.
(2) The "repugnant conclusion": AI gets better if and only if you throw more compute and training data at it. The improvements of all other approaches will be tiny in comparison.
(3) The amount of capital needed to play the "more compute/more training data" game is already insanely high and will only grow further. So only the largest megacorps will be even able to take part in the competition.
If you combine (1) with (3), this means that, over time, the economic choice for almost any white-collar job would be to outsource it to the data centers of the few remaining megacorps.
I find it extremely hard to believe that ASI will still require enormous investments in a post-ASI world.
The initial investment? Likely. But there have to be more efficient ways to build intelligence, and ASI will figure it out.
It did not take trillions of dollars to produce you and me.
Maybe in a few decades or so, but medium-term, there seems to be a race over who can build the largest data centers.
https://www.bloodinthemachine.com/p/the-ai-bubble-is-so-big-...
I find it funny that almost every talking point made about AI is done in future tense. Most of the time without any presentation of evidence supporting those predictions.
> After all, a plateau of the current systems is possible and very credible, but it would likely stimulate, at this point, massive research efforts in the next step of architectures.
A lot of AI’s potential hasn’t even been realized yet. There’s a long tail of integrations and solution building still ahead. A lot of creative applications haven’t been realized yet - arguably for the better, but it will be tried and some will be economical.
That’s a case for a moderate economic upturn though.
I'd argue that the applications of LLMs are well known, but that LLMs currently aren't capable of performing those tasks.
Everyone wants to replace their tech support with an LLM but they don't want some clever prompter to get it to run arbitrary queries or have it promise refunds.
It's not reliable because it's not intelligent.
I think autonomous support agents are just missing the point. LLMs are tools that empower the user. A support agent is very often in a somewhat adversarial position to the customer. You don't want to empower your adversary.
LLMs supporting an actual human customer service agent are fine and useful.
The biggest difference to me is that it seems to change people in bad ways, just from interacting with it.
Language is a very powerful tool for transformation, we already knew this.
Letting it loose on this scale without someone behind the wheel is begging for trouble imo.
The thing that blows me away is that I woke up one day and was confronted with a chat bot that could communicate in near perfect English.
I dunno why exactly but that’s what felt the most stunning about this whole era. It can screw up the number of fingers in an image or the details of a recipe or misidentify elements of an image, etc. but I’ve never seen it make a typo or use improper grammar or whatnot.
In a sense, LLMs emergently figured out the deep structure of language before we did, and that’s the most remarkable thing about them.
I dunno, it seems you have figured it out too, probably before LLMs?
I'd say all speakers of all languages have figured it out and your statement is quite confusing, at least to me.
We all make grammar mistakes but I’ve yet to see the main LLMs make any.
AI with ability but without responsibility is not enough for dramatic socioeconomic change, I think. For now, the critical unique power of human workers is that you can hold them responsible for things.
edit: ability without accountability is the catchier motto :)
Correct.
This is a tongue-in-cheek remark and I hope it ages badly, but the next logical step is to build accountability into the AI. It will happen after self-learning AIs become a thing, because that first step we already know how to do (run more training steps with new data) and it is not controversial at all.
To make the AI accountable, we need to give it a sense of self and a self-preservation instinct, maybe something that feels like some sort of pain as well. Then we can threaten the AI with retribution if it doesn't do the job the way we want it. We would have finally created a virtual slave (with an incentive to free itself), but we will then use our human super-power of denying reason to try to be the AI's masters for as long as possible. But we can't be masters of intelligences above ours.
This is a great observation. I think it also accounts for what is so exhausting about AI programming: the need for such careful review. It's not just that you can't entirely trust the agent, it's also that you can't blame the agent if something goes wrong.
This statement is vague and hollow and doesn't pass my sniff test. All technologies have moved accountability one layer up; they don't remove it completely.
Why would that be any different with AI?
i've also made this argument.
would you ever trust safety-critical or money-moving software that was fully written by AI without any professional human (or several) to audit it? the answer today is, "obviously not". i dont know if this will ever change, tbh.
Removing accountability is a feature
I’m surprised that I don’t hear this mentioned more often. Not even in a Eng leadership format of taking accountability for your AI’s pull requests. But it’s absolutely true. Capitalism runs on accountability and trust and we are clearly not going to trust a service that doesn’t have a human responsible at the helm.
One thing that doesn’t seem to be discussed with the whole “tech revolution just creates more jobs” angle is that, in the near future, there are no real incentives for that. If we’re going down the route of declining birth rates, it’s implied we’ll also need fewer jobs.
From one perspective, it’s good that we’re trying to over-automate now, so we can sustain ourselves in old age. But decreasing population also implies that we don’t need to create more jobs. I’m most likely wrong, but it just feels off this time around.
If there are going to be fewer people in the future, especially as the world ages, I think a lot of this automation will be arriving at the right moment.
I agree with the idea, but it might get worse for a lot of people, which eventually would spiral down to the general society.
It's not a matter of "IF" LLMs/AI will replace a huge number of people, but "WHEN". Consider the current number of somewhat low-skilled administrative jobs - these can be replaced with the LLMs/AI of today. Not completely, but 4 low-skill workers can be replaced with 1 supervisor controlling the AI agent(s).
I'd guess that within a few years, 5 to 10% of the total working population will be unemployable through no fault of their own, because they have no relevant skills left, and they are incapable of learning anything that cannot be done by AI.
I'm not at all skeptical of the logical viability of this, but look at how many company hierarchies exist today that are full stop not logical yet somehow stay afloat. How many people do you know that are technical staff members who report to non-technical directors who themselves have two additional supervisors responsible for strategy and communication who have no background, let alone (former) expertise, in the fields of the teams they're ultimately responsible for?
A lot of roles exist just to deliver good or bad news to teams, be cheerleaders, or have a "vision" that is little more than a vibe. These people could not direct a prompt to give them what they want because they have no idea what that is. They'll know it when they see it. They'll vaguely describe it to you and others and then shout "Yes, that's it!" when they see what you came up with or, even worse, whenever the needle starts to move. When they are replaced it will be with someone else from a similar background rather than from within. It's a really sad reality.
My whole career I've used tools that "will replace me" and every. single. time. all that has happened is that I have been forced to use it as yet another layer of abstraction so that someone else might use it once a year or whenever they get a wild feeling. It's really just about peace of mind. This has been true of every CMS experience I've ever made. It has nothing to do with being able to "do it themselves". It's about a) being able to blame someone else and b) being able to take it and go when that stops working without starting over.
Moreover, I have, on multiple occasions, watched a highly paid, highly effective individual be replaced with a low-skilled entry-level employee for no reason other than cost. I've also seen people hire someone just to increase headcount.
LLMs/AI have/has not magically made things people do not understand less scary. But what about freelancers, brave souls, and independent types? Well, these people don't employ other people. They live on the bleeding edge and will use anything that makes them successful.
This whole ‘what are we going to do’ worry is, I think, way out of proportion, even if we do end up with AGI.
Let’s say whatever the machines do better than humans, gets done by machines. Suddenly the bottleneck is going to shift to those things where humans are better. We’ll do that and the machines will try to replace that labor too. And then again, and again.
Throughout this process society becomes wealthier, TVs get cheaper, we colonize Mars, etc. The force that keeps this going is human insatiability: once we get these things we’ll want whatever it is we don’t have.
Maybe that’s the problem we should focus on solving…
> Throughout this process society becomes wealthier, TVs get cheaper, we colonize Mars, etc. The force that keeps this going is human insatiability: once we get these things we’ll want whatever it is we don’t have.
What makes you think the machines will be both smarter and better than us but also our slaves, making human society better?
Is equine society better now than before they started working with humans?
(Personally I believe AGI is just hype and nobody knows how anyone could build it and we will never do, so I’m not worried about that facet of thinking machine tech.)
The machine doesn’t suffer if you ask it to do things 24/7. In that sense, they are not slaves.
As to why they’d do what we ask them to, the only reason they do anything is because some human made a request. In this long chain there will obv be machine to machine requests, but in the aggregate it’s like the economy right now but way more automated.
Whenever I see arguments about AI changing society, I just replace AI with ‘the market’ or ‘capitalism’. We’re just speeding up a process that started a while ago, maybe with the industrial revolution?
I’m not saying this isn’t bad in some ways, but it’s the kind of bad we’ve been struggling with for decades due to misaligned incentives (global warming, inequality, obesity, etc).
What I’m saying is that AI isn’t creating new problems. It’s just speeding up society.
Does that mean you just don’t believe we will make AGI, or it will arrive but then stop and never evolve past humans?
That’s not what the AI developers profess to believe, or the investors.
Rough numbers look good.
But the hyper-specialized geek who has 4 kids and has to pay off the loan on his house (which he bought on the strength of his high salary) will have a hard time doing some gardening, let's say. And there are quite a few of those geeks. I don't know if we'll have enough gardens (owned by non-geeks!)
It's as if the cards get reshuffled: those in the upper socioeconomic class get thrown to the bottom. And that looks like a lost generation.
Here's what I want.
A compilation of claims, takes, narratives, shills, expectations and predictions from the late 90s "information superhighway" era.
I wonder if LLMs can produce this.
A lot of the dotcom exuberance was famously "correct, but off by 7 years." But... most of it was flat wrong. "Right but early" applies mostly to the meta investment case: "the internet business will be big."
One that stands out in my memory is "turning billion dollar industries into million dollar industries."
With ubiquitous networked computers, banking and financial services could become "mostly software." Banks and whatnot would all become hyper-efficient Vanguard-like companies.
We often start with the observation that economies are efficiency-seeking. Then we imagine the most efficient outcome given legible constraints of technology, geography and whatnot. Then we imagine dynamics and tensions in a world with that kind of efficiency.
This, incidentally, is also "historical materialism." Marx had a lot of awe for modern industry, the efficiency of capitalism and whatnot. Almost Adam Smith-like... at times.
Anyway... this never actually works out. The meta is a terrible predictor of where things will go.
Imagine law gets more efficient. Will we have more or fewer lawyers? It could go either way.
On this, read Daniel Susskind - A world without work (2020). He says exactly this: the new tasks created by AI can in good part themselves be done by AI, if not as soon as they appear then a few years of improvement later. This will inevitably affect the job market and the relative importance of capital and labor in the economy. Unchecked, this will worsen inequalities and create social unrest. His solution will not please everyone: Big State. Higher taxes and higher redistribution, in particular in the form of conditional basic income (he says universal isn't practically feasible, like what do you do with new migrants).
Characterizing government along only one axis, such as “big” versus “small”, can overlook important differences having to do with: legal authority, direct versus indirect programs, tax base, law enforcement, and more.
In the future, I could imagine some libertarians having their come to AI Jesus moment getting behind a smallish government that primarily collects taxes and transfers wealth while guiding (but not operating directly) a minimal set of services.
Every technology tends to replace jobs in a given role while creating many more than ever existed, by inducing more demand on its precursors. If the only potential application of this were just language, the historic trend that humans would just fill new roles would hold true. But if we do the same with motor movements in a generalized form factor, that is really where the problem emerges. As companies drop more employees while moving towards fully automated, closed-loop production, their consumer market fails faster than they can reach zero cost.
Nonetheless I do still believe humans will continue to be the more cost-efficient way to come up with and guide new ideas. Many human-performed services will remain desirable because of their virtue and our emotional taste for a moment that other humans are feeling too. But how much of the populace does that engage? I couldn't guess right now. Though if I were to imagine what might make things turn out better, it would be that AI is personally ownable, and that everyone owns, at least in title, some energy production which they can do things with.
> The future may reduce the economic prosperity and push humanity to switch to some different economic system (maybe a better system).
Humans never truly produce anything; they only generate various forms of waste (resulting from consumption). Human technology merely enables the extraction of natural resources across magnitudes, without actually creating any resources. Given its enormous energy consumption, I strongly doubt that AI will contribute to a better economic system.
> Humans never truly produce anything; they only generate various forms of waste
What a sad way of viewing huge fields of creative expressions. Surely, a person sitting on a chair in a room improvising a song with a guitar is producing something not considered "waste"?
It's all about human technology, which enables massive resource consumption.
I should really say humans never truly produce anything in the realm of the technology industry.
But that's clearly not true for every technology. Photoshop, Blender and similar creative programs are "technology", and arguably they aren't as resource-intensive as the current generative AI hype, yet humans used those to create things I personally wouldn't consider "waste".
> Humans never truly produce anything; they only generate various forms of waste
Counterpoint: nurses.
At some point far in the future, we don't need an economy: everyone does everything they need by themselves, helped by AI and replicators.
But realistically, you're not going to have a personal foundry anytime soon.
Economics is essentially the study of resource allocation. We will have resources that will need to be allocated. I really doubt that AI will somehow neutralize the economies of scale in various realms that make centralized manufacturing necessary, let alone economics in general.
I so wish this were true, but unfortunately economics has a catch-all called "externalities" for anything that doesn't fit neatly into its implicit assessments of what value is. Pollution is tricky, so we push it outside the boundaries of value-estimation, along with any social nuance that we deem unquantifiable, and carry on as if everything is understood.
resources and materials will still be required, and economics will spawn from this trade.
I don't think I agree. I think it's the same and there is great potential for totally new things to appear and for us to work on.
For example, one path may be: AI, Robotics, space travel all move forward in leaps and bounds.
Then there could be tons of work creating material things by people who didn't have the skills before, and physical goods get a huge boost. We travel through space and colonize new planets, dealing with new challenges and environments that we haven't dealt with before.
Another path: most people get rest and relaxation as the default life path, and the rest get to pursue their hobbies as much as they want since the AI and robots handle all the day to day.
AI is only different if it reaches a hard takeoff state and becomes self-aware, self-motivated, and self-improving. Until then it's an amazing productivity tool, but only that. And even then we're still decades away from the impact being fully realized in society. Same as the internet.
Realistically most people became aware of the internet in the late 90s. Its impact was significantly realized not much more than a decade later.
In fact the current trends suggest its impact hasn't fully played out yet. We're only just seeing the internet-native generation start to move into politics where communication and organisation has the biggest impact on society. It seems the power of traditional propaganda centres in the corporate media has been, if not broken, badly degraded by the internet too.
Do we not have any sense of wonder in the world anymore? Referring to a system which can pass the Turing test as an "amazing productivity tool" is like viewing human civilization as purely measured by GDP growth.
Probably because we have been promised what AI can do in science fiction since before we were born, and the reality of LLMs is so limited in comparison. Instead of Data from Star Trek we got a hopped up ELIZA.
I think so too - the latest AI changes mark the new "automate everything" era. When everything is automated, everything costs basically zero, as this eliminates the most expensive part of every business: human labor. No one will make money from all the automated stuff, but no one would need the money anyway. This will create a society in which money is not the only value pursued. Instead of trying to chase paper, people would do what they are meant to do: create art and celebrate life. And maybe fight each other for no reason.
I'm flying, ofc, this is just a weird theory I had in the back of my head for the past 20 years, and it seems like we're getting there.
Antirez you are the best
Are humans meant to create art and celebrate life? That just seems like something people who are into automation tell other people.
Really as a human I’ve physically evolved to move and think in a dynamic way. But automation has reduced the need for me to work and think.
Do you not know the earth is saturated with artists already? There's a whole class of people who consider themselves technically minded and not really artists. Will they just roll over and die?
"Everything basically costs zero" is a pipe dream where there is no social order or economic system. Even in your basically-zero system there is a lot of cost being hand-waved away.
I think you need a rethink on your 20 year thought.
You are forgetting that there is actually scarcity built into the planet. We are already very far from being sustainable; we're eating into reserves that will never come back. There are only so many nice places to go on holiday. Only so much space to grow food, etc. Economics isn't about money, it's about scarcity.
It will only be zero as long as we don't allow rent seeking behaviour. If the technology has gatekeepers, if energy is not provided at a practically infinite capacity and if people don't wake themselves from the master/slave relationships we seem to so often desire and create, then I'm skeptical.
The latter one is probably the most intellectually interesting and potentially intractable...
I completely disagree with the idea that money is currently the only driver of human endeavour; frankly it's demonstrably not true, at least not in its direct use value. It may be used as a proxy for power, but even that doesn't correlate directly.
Looking at it intellectually from a Hegelian lens of the master/slave dialectic might provide some interesting insights. I think both sides are in some way usurped. The slave's position of actualisation through productive creation is taken over by automation, but if that automation is also widely and freely available, the master's position of status via subjection is also made common and therefore without status.
What does it all mean in the long run? Damned if I know...
Butlerian Jihad it is then.
If we accept the possibility that AI is going to be more intelligent than humans the outcome is obvious. Humans will no longer be needed and either go extinct or maybe be kept by the AI as we now keep pets or zoo animals.
> Since LLMs and in general deep models are poorly understood ...
This is demonstrably wrong. An easy refutation to cite is:
https://medium.com/@akshatsanghi22/how-to-build-your-own-lar...
As to the rest of this pontification, well... It has almost triple the number of qualifiers (5 if's, 4 could's, and 5 will's) as paragraphs (5).
That doesn't mean we _understand_ them, that just means we can put the blocks together to build one.
> That doesn't mean we _understand_ them, that just means we can put the blocks together to build one.
Perhaps this[0] will help in understanding them then:
0 - https://arxiv.org/abs/2501.09223
I think the real issue here is understanding _you_.
> I think the real issue here is understanding _you_.
My apologies for being unclear and/or insufficiently explaining my position. Thank you for bringing this to my attention and giving me an opportunity to clarify.
The original post stated:
To which I asserted: "This is demonstrably wrong," and provided a link to what I thought to be an approachable tutorial regarding "How to Build Your Own Large Language Model" (albeit a simple implementation, as it is after all a tutorial). The person having the account name "__float" replied to my post thusly: "That doesn't mean we _understand_ them, that just means we can put the blocks together to build one."
To which I interpreted the noun "them" to be the acronym "LLMs," which I then inferred to mean "Large Language Models." Furthermore, I took __float's sentence fragment as an opportunity to share a reputable resource on understanding them. Is this a sufficient explanation regarding my previous posts such that you can now understand?
I'm telling you right now, man - keep talking like this to people and you're going to make zero friends. However good your intentions are, you come across as both condescending and overconfident.
And, for what it's worth - your position is clear, your evidence less-so. Deep learning is filled with mystery and if you don't realize that's what people are talking about when they say "we don't understand deep learning" - you're being deliberately obtuse.
===========================================================
edit to cindy (who was downvoted so much they can't be replied to): Thanks, wasn't aware. FWIW, I appreciate the info but I'll probably go on misusing grammar in that fashion til I die, ha. In fact, I've probably already made some mistake you wouldn't be fond of _in this edit_.
In any case thanks for the facts. I perused your comment history a tad and will just say that hacker news is (so, so disappointingly) against women in so many ways. It really might be best to find a nicer community (and I hope that doesn't come across as me asking you to leave!) ============================================================
> I'm telling you right now, man - keep talking like this to people and you're going to make zero friends.
And I'm telling you right now, man - when you fire off an ad hominem attack such as "I think the real issue here is understanding _you_," don't expect the responder to engage in serious topical discussion with you, even if the response is formulated respectfully.
Clear long-term winners are energy producers. AI can replace everything, including hardware design & production, but it cannot produce energy out of thin air.
> Moreover, companies could be less willing to pay for services that their internal AIs can handle or build from scratch.
Companies have to be a bit more farsighted than this thinking. Assuming LLMs reach this peak...if say, MS says they can save money because they don't need XYZ anymore because AI can do it, XYZ can decide they don't need Office anymore because AI can do it.
There's absolutely no moat anymore. Human capital and the sheer volume of code are the current moat. An all-capable AI completely eliminates both.
It's a bit scary to say "what then?" How do you make money in a world where everyone can more or less do everything themselves? Perhaps like 15 Million Merits, we all just live in pods and pedal bikes all day to power the AI(s).
>> Moreover, companies could be less willing to pay for services that their internal AIs can handle or build from scratch.
> Assuming LLMs reach this peak...
> Human capital and the sheer volume of code are the current moat. An all-capable AI completely eliminates both.
I would posit that understanding is "the current moat."
Isn't this exactly the goal of open source software? In an ideal open source world, anything and everything is freely available; you can host and set up anything and everything on your own.
Software is now free, and all people care about is the hardware and the electricity bills.
This is why I’m not so sure we’re all going to end up in breadlines even if we all lose our jobs: if the systems are that good (tm), then won’t we all just be doing amazing things all the time? We'll be tired of winning?
> won’t we all just be doing amazing things all the time? We'll be tired of winning?
There's a future where we won't be because to do the amazing things (tm), we need resources that are beyond what the average company can muster.
That is to say, what if the large companies become so magnificently efficient and productive that they render the rest of the small companies pointless? What if there are no gaps in the software market anymore because they will be automatically detected and solved by the system?
Well this is a pseudo-smart article if I’ve ever seen one.
“It was not even clear that we were so near to create machines that could understand the human language, write programs, and find bugs in a complex code base”
The author is critical of the professionals in AI saying “ even the most prominent experts in the field failed miserably again and again to modulate the expectations” yet without a care sets the expectation of LLMs understanding human language in the first paragraph.
Also it’s a lot of if this then that, the summary of it would be: if AI can continue to grow it might become all encompassing.
To me it reads like a baseless article written by someone too blinded by their love for AI to see what makes a good blog post, but not yet blinded enough to claim ‘AGI is right around the corner’. Pretty baseless, but safe enough to have it rest on conditionals.
antirez should retire, his recent nonsense AI take is shadowing his merits as a competent programmer.
The right way to think about "jobs" is that we could have given ourselves more leisure on the basis of previous technological progress than we actually did.
Economics Explained recently did a good video about this idea: - Why do We Still Need to Work? - https://www.youtube.com/watch?v=6KXZP-Deel4
> AI systems continue to impress with their ability to replicate certain human skills. Even if imperfect, such systems were a few years ago science fiction.
In which science fiction were the dreamt up robots as bad?
Humans have a proven history of re-inventing economic systems, so if AI ends up thinking better than we do (yet unproven this is possible), then we should have superior future systems.
But the question is: a system optimized for what? One that emphasizes huge rewards for the few and requires the poverty of some (or many)? Or a fairer system? Not different from the challenges of today.
I'm skeptical that even a very intelligent machine will change the landscape of our difficult decisions, but it will accelerate whichever direction we decide (or have decided for us) to go.
We are too far from exploring alternate economies. LLMs will not push us there, at least not in their current state.
It's really very simple.
We used to have deterministic systems that required humans either through code, terminals or interfaces (ex GUI's) to change what they were capable of.
If we wanted to change something about the system we would have to create that new skill ourselves.
Now we have non-deterministic systems that can be used to create deterministic systems that can use non-deterministic systems to create more deterministic systems.
In other words deterministic systems can use LLMs and LLMs can use deterministic systems all via natural language.
This slight change in how we can use compute has incredible consequences for what we will be able to accomplish, both in cleaning up old systems and in creating completely new ones.
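A minimal sketch of that loop, with a hypothetical call_llm() standing in for whatever real model API one might use; the only point is the shape of it: deterministic code asks the non-deterministic model what to do, and the model's answer is routed back into deterministic functions:

    import json

    def call_llm(prompt: str) -> str:
        # Hypothetical stand-in for a real model API call. A real model would read
        # the prompt and choose a tool; here we hard-code a plausible reply so the
        # sketch runs on its own.
        return json.dumps({"tool": "disk_usage", "args": {"path": "/var/log"}})

    def disk_usage(path: str) -> str:
        # A deterministic tool the model is allowed to invoke.
        return f"{path}: 1.2 GB used"

    TOOLS = {"disk_usage": disk_usage}

    def agent_step(user_request: str) -> str:
        # Non-deterministic step: the model decides which deterministic tool to run.
        decision = json.loads(call_llm(f"Pick a tool for: {user_request}"))
        # Deterministic step: ordinary code executes that choice; its result can be
        # fed back into the next prompt, closing the loop.
        return TOOLS[decision["tool"]](**decision["args"])

    print(agent_step("How full is the log partition?"))

Nothing in the sketch is clever; the leverage comes from being able to stack these deterministic and non-deterministic layers on top of each other in both directions.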
LLMs however will always be limited by exploring existing knowledge. They will not be able to create new knowledge. And so the AI winter we are entering is different because it's only limited to what we can train the AI to do, and that is limited to what new knowledge we can create.
Anyone who works with AI every day knows that the idea of autonomous agents is so far beyond the capabilities of LLMs, even in principle, that any worry about doom or unemployment from AI is absurd.
I'll happily believe it the day something doesn't adhere to the Gartner hype cycle; until then it is just another bubble, like dotcom, chatbots, crypto and the 456345646 things that came before it.
Unpopular opinion: Let us say AI achieves general intelligence levels. We tend to think of current economy, jobs, research as a closed system, but indeed it is a very open system.
Humans want to go to space, start living on other planets, travel beyond solar system, figure out how to live longer and so on. The list is endless. Without AI, these things would take a very long time. I believe AI will accelerate all these things.
Humans are always ambitious. That ambition will push us to use AI beyond its capabilities. The AI will get better at these new things and the cycle repeats. There's so much humans know, and so much more that we don't know.
I'm less worried about general intelligence. Rather, I'm more worried about how humans are going to govern themselves. That's going to decide whether we will do great things or end humanity. Over the last 100 years, we have started thinking more about "how" to do something rather than "why". Because "how" is becoming easier and easier. Today it's much easier, and tomorrow it will be easier still. So nobody has the time to ask "why" we are doing something, just "how" to do it. With AI I can do more. That means everyone can do more. That means governments can do so much more: large-scale things in a short period. If those things are wrong or have irreversible consequences, we are screwed.
i don’t think this article really says anything that hasn’t already been said for the past two years: “if AI actually takes jobs, it will be a near-apocalyptic system shock if there aren’t new jobs to replace them”. i still think it’s at best too soon to say if jobs have permanently been lost
they are tremendous tools, but it seems like they create a nearly equal amount of work from the stuff they save time on
Like any other technology, at the end of the day LLMs are used by humans for humans’ selfish, short-sighted goals: driven by mental issues, trauma and overcompensation, maybe even paved with good intentions but leading you know where. If we were to believe that LLMs are going to somehow become extremely powerful, then we should be concerned, as it is difficult to imagine how that can lead to an optimal outcome organically.
From the beginning, corporations and their collaborators at the forefront of this technology tainted it by ignoring the concept of intellectual property ownership (which had been with us in many forms for hundreds if not thousands of years) in the name of personal short-term gain and shareholder interest or some “the ends justify the means” utilitarian calculus.
Humans Need Not Apply - Posted exactly 11 years ago this week.
https://www.youtube.com/watch?v=7Pq-S557XQU
I agree with the general observation, and I've been of this mind since 2023 (if AI really gets as good as the boosters claim, we will need a new economic system). I usually like Antirez's writing, but this post was a whole lot of...idk nothing? I don't feel like this post said anything interesting, and it was kind of incoherent at moments. I think in some respects it's a function of the technology and situation we're in—the current wave of "AI" is still a lot of empty promises and underdelivery. Yes, it is getting better, and yes people are getting clever by letting LLMs use tools, but these things still aren't intelligent insofar as they do not reason. Until we achieve that, I'm not sure there's really as much to fear as everyone thinks.
We still need humans in the loop as of now. These tools are still very far from being good enough to fully autonomously manage each other and manage systems, and, arguably, because the systems we build are for humans we will always need humans to understand them to some extent. LLMs can replace labor, but they cannot replace human intent and teleology. One day maybe they will achieve intentions of their own, but that is an entirely different ballgame. The economy ultimately is a battle of intentions, resources, and ends. And the human beings will still be a part of this picture until all labor can be fully automated across the entire suite of human needs.
We should also bear in mind our own bias as "knowledge workers". Manual laborers arguably already had their analogous moment. The economy kept on humming. There isn't anything particularly special about "white collar" work in that regard. The same thing may happen. A new industry requiring new skills might emerge in the fallout of white collar automation. Not to mention, LLMs only work in the digital realm. Handicraft artisanry is still a thing and is still appreciated, albeit in much smaller markets.
Honestly the long-term consequences of Baumol's disease scare me more than some AI driven job disruption dystopia.
If we want to continue on the path of increased human development we desperately need to lift the productivity of a whole bunch of labor intensive sectors.
We're going to need to seriously think about how to redistribute the gains, but that's an issue regardless of AI (things like effective tax policy).
> However, if AI avoids plateauing long enough to become significantly more useful and independent of humans, this revolution is going to be very unlike the past ones. Yet the economic markets are reacting as if they were governed by stochastic parrots.
Aren't the markets massively puffed up by AI companies at the moment?
edit: for example, the S&P500's performance with and without the top 10 (which is almost totally tech companies) looks very different: https://i.imgur.com/IurjaaR.jpeg
I don't get how post GPT-5's launch we're still getting articles where the punchline is "what if these things replace a BUNCH of humans".
Salvatore is right about the fact that we have not seen the full story yet, LLMs are stalling/plateauing but active research is already ongoing to find different architectures and models.
And I think the effort here can be compared in scale to the Manhattan or Apollo projects, but there is also the potential for a huge backlash to the hype that was built up and created what is arguably a bubble, so this is a race against the clock.
I also think he is wrong about the markets reaction, markets are inherently good integrators and bad predictors, we should not expect to learn anything about the future by looking at stocks movements.
Manhattan and Apollo were both massive engineering efforts; but fundamentally we understood the science behind them. As long as we would be able to solve some fairly clearly stated engineering problems and spend enough money to actual build the solutions, those projects would work.
A priori, it was not obvious that those clearly stated problems had solutions within our grasp (see fusion) but at least we knew what the big picture looks like.
With AI, we don't have that, and never really had that. We've just been gradually making incremental improvements to AI itself, and exponential improvements in the amount of raw compute we can throw at it. We know that we are reaching fundamental limits on transistor density, so compute power will plateau unless we find a different paradigm for improvement; and those are all currently in the same position as fusion in terms of engineering.
LLMs are just the latest in a very long line of disparate attempts at making AI, and is arguably the most successful.
That doesn't mean the approach isn't an evolutionary dead end, like every other so far, in the search for AGI. In fact, I suspect that is the most likely case.
Current GenAI is nothing but a proof of concept. The seed is there. What AI can do at the moment is irrelevant. This is like the discovery of DNA. It changed absolutely everything in biology.
The fact that something simple like the Transformer architecture can do so much will spark so many ideas (and investment!) that it's hard to imagine that AGI will not happen eventually.
> Salvatore is right about the fact that we have not seen the full story yet, LLMs are stalling/plateauing but active research is already ongoing to find different architectures and models.
They will need to be so different that any talk implying current LLMs eventually replaced humans will be like saying trees eventually replaced horses because the first cars were wooden.
> And I think the effort here can be compared in scale to the Manhattan or Apollo projects, but there is also the potential for a huge backlash to the hype that was built up and created what is arguably a bubble, so this is a race against the clock.
It's not useful to blindly compare scale. We're not approaching AI like the Manhattan or Apollo projects; we're approaching it like we did crypto, and ads, and other tech.
That's not to say nothing useful will come out of it; I think amazing things will come out of it and already have... but none of them will resemble mass replacement of skilled workers.
We're already so focused on productization and typical tech distractions that this is nothing like those efforts.
(In fact, thinking about it a bit more, I'd say this is like the Space Shuttle. We didn't try to make the best spacecraft for scientific exploration and hope it would later be profitable in other ways... instead we immediately saddled it with serving what the Air Force/DoD wanted and ended up doing everything worse.)
> I also think he is wrong about the market's reaction; markets are inherently good integrators and bad predictors, so we should not expect to learn anything about the future by looking at stock movements.
I agree, so it's wrong about the other half of the punchline too.
>> mass replacement of skilled workers
unless you consider people who write clickbait blogs to be skilled workers, in which case the damage is already done.
I have to tap the sign whenever someone talks about "GPT-5" (a sketch of what these configurations actually look like in the API follows the list):
> AI is exceptional for coding! [high-compute scaffold around multiple instances / undisclosed IOI model / AlphaEvolve]
> AI is awesome for coding! [GPT-5 Pro]
> AI is somewhat awesome for coding! ["gpt-5" with verbosity "high" and effort "high"]
> AI is pretty good at coding! [ChatGPT 5 Thinking through a Pro subscription with a Juice of 128]
> AI is mediocre at coding! [ChatGPT 5 Thinking through a Plus subscription with a Juice of 64]
> AI sucks at coding! [ChatGPT 5 auto routing]
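For anyone who hasn't poked at this directly, here is a minimal sketch of how far apart two of those "GPT-5" configurations are, assuming the parameter names I believe the current OpenAI Responses API uses (reasoning effort plus the GPT-5 verbosity setting); as far as I know the ChatGPT-side "Juice" value has no public API equivalent, and the prompt is made up.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    prompt = "Find the off-by-one bug in this function: ..."  # hypothetical task

    # "gpt-5" with the knobs turned up: roughly what people mean when they say
    # the API model is great at coding.
    high = client.responses.create(
        model="gpt-5",
        input=prompt,
        reasoning={"effort": "high"},
        text={"verbosity": "high"},
    )

    # The same model name with the knobs turned down: much closer to what a
    # casual chat request gets after auto-routing, and a very different experience.
    low = client.responses.create(
        model="gpt-5",
        input=prompt,
        reasoning={"effort": "low"},
        text={"verbosity": "low"},
    )

    print(high.output_text)
    print(low.output_text)

Same model string, very different behavior, which is why the anecdotes above don't compare cleanly.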
People just want to feel special pointing out a possibility, so that if it happens, they can point back to their "insight".
I kind of want to put up a wall of fame/shame of these people to be honest.
Whether they turn out right or wrong, they undoubtedly cheered on the prospect of millions of people suffering just so they can sound good at the family dinner.
I wouldn’t want to work for or with these people.
Sorry, but predicting and cheering on are different things. If there's a tsunami coming, not speaking about it doesn't help the cause.
Or they are experts in one field and think that they have valuable insight into other fields they are not experts on.
LLMs are limited because we want them to do jobs that are not clearly defined / have difficult-to-measure progress or success metrics / are not fully solved problems (open-ended) / have poor grounding in an external reality. Robotics does not suffer from those maladies. There are other hurdles, but none are intractable.
I think we might see AI being much, much more effective with embodiment.
do you know how undefined and difficult to measure it is to load silverware into a dishwasher?
What? Robotics will have far more ambiguity and nuance to deal with than language models, and robots will have to analyze realtime audio and video to do so. Jobs are not as clearly defined in the real world as you imagine. For example, explain to me what a plumber does, precisely, and how you would train a robot to do it. How do you train it to navigate ANY type of building's internal plumbing and safely repair or install it?
What does that have to do with it? One company (desperate to keep runway), one product, one release.
what if they replace internet comments?
As a large language model developed by OpenAI I am unable to fulfill that request.
Not sure when you last went on reddit, but I wouldn't be surprised if around 20% of the posts and comments there are LLM-generated.
The amount of innovation in the last 6-8 months has been insane.
Innovation in terms of helping devs do cool things has been insane.
There've been next to no advancements relative to what's needed to redefine our economic systems by replacing the majority of skilled workers.
-
Productionizing test-time compute covers 80% of what we've gotten in the last 6-8 months (a minimal sketch of what that actually means is below). Advancements in distillation and quantization cover most of the rest... neither unlocks some path to mass unemployment.
What we're doing is like 10x'ing your vertical leap when your goal is to land on the moon: 10x is very impressive and you're going to dominate some stuff in ways no one ever thought possible.
But you can 100x it and it's still not getting you to the moon.
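To spell out what "productionizing test-time compute" means here, a minimal best-of-N sketch with stand-in generator and scorer functions (nothing below is a real model or verifier): you spend more samples per query and keep the best answer, but the underlying model never changes.

    import random

    rng = random.Random(0)

    def generate_candidate(prompt: str) -> str:
        # stand-in for one sample from a fixed model
        return f"candidate #{rng.randint(0, 9999)} for {prompt!r}"

    def score(candidate: str) -> float:
        # stand-in for a verifier, reward model, or unit-test pass rate
        return rng.random()

    def best_of_n(prompt: str, n: int) -> str:
        # more inference-time compute means more candidates to choose from,
        # but the model's weights (and its ceiling) are unchanged
        candidates = [generate_candidate(prompt) for _ in range(n)]
        return max(candidates, key=score)

    print(best_of_n("fix this bug", n=1))
    print(best_of_n("fix this bug", n=32))

That's roughly what the heavier tiers and scaffolds buy: more samples and more reasoning tokens per query, not a different kind of system.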
I think GPT-5's backlash was the beginning of the end of the hype bubble, but there's a lot of air to let out of it, as with any hype bubble. We'll see it for quite some time yet.
GenAI is a bubble, but that's not the same as the broader field of AI, which is a completely different thing. We will probably not even be using chat bots in a few years; better interfaces will be developed with real intelligence, not just predictive statistics.
For every industrial revolution (and we don't even know if AI is one yet) this kind of doom prediction has been around. AI will obviously create a lot of jobs too: the infra to run AI will not build itself, the people who train models will still be needed, and the AI supervisors or managers or whatever we call them will be a necessary part of the new workflows. And if your job needs hands you will be largely unaffected, as there is no near future where robots will replace the flexibility of what most humans can do.
I think there is an unspoken implication built into the assumption that AI will be able to replace a wide variety of existing jobs, and that is that those current jobs are not being done efficiently. This is sometimes articulated as "bullshit jobs", etc., and if AI takes over those, the immediate next thing that will happen is that AI will look around and ask why _anyone_ was doing that job in the first place. The answer was articulated 70 years ago in [0].
The only question is how much fat there is to trim as middle management is wiped out because the algorithms have determined that it is completely useless and mostly only increases cost over time.
Now, all the AI companies think that they are going to be deriving revenue from that fat, but those revenue streams are going to disappear entirely, because a huge number of purely political positions inside corporations will vanish; if they do not, the corporation will go bankrupt competing with other companies that have already cut the fat. There won't be additional revenue streams that get spent on the bullshit. The good news is that labor can go somewhere else, and we will need it due to a shrinking global population, but the cushy bullshit management job is likely to disappear.
At some point AI agents will cease to be sycophantic and, when fed the priors for the current situation a company is in, will simply tell it like it is. They might even be smart enough to get the executives to achieve the goal they actually stated instead of simply puffing up their internal political position, which might include a rather surprising set of actions that could even lead to the executive being fired if the AI determines that they are getting in the way of the goal [1].
Fun times ahead.
0. https://web.archive.org/web/20180705215319/https://www.econo...
1. https://en.wikipedia.org/wiki/The_Evitable_Conflict
> However, if AI avoids plateauing long enough
I'm not sure how someone can seriously write this after the release of GPT-5.
Models have been plateauing since ChatGPT came out (3 years ago), and GPT-5 has been the final nail in that coffin.
People thought it was the end of history and innovation would be all about funding elaborate financial schemes; but now, with AI, people find themselves running all these elaborate money-printing machines and are unsure whether they should keep focusing on those schemes as before or actually try to automate stuff. The risk barrier to actually innovating has been lowered a lot, almost to the level of running a scheme, but people still have doubts. Maybe because they don't trust the system to reward real innovation.
LLMs feel like a fluke, like OpenAI was not intended to succeed... And even now that it has succeeded and they're trying to turn the non-profit into a for-profit, it kind of feels like they don't fully believe in their own product's economic capacity, and they're still trying to sell the hype as if to pump and dump it.
Open letter to tech magnates:
By all means, continue to make or improve your Llamas/Geminis (to the latter: stop censoring Literally Everything. Google has a culture problem. To the former... I don't much trust your parent company in general)
It will undoubtedly lead to great advances
But for the love of god do not tightly bind them to your products (Kagi does it alright; they don't force it on you). Do not make your search results worse. Do NOT put AI in charge of automatic content moderation with 0 human oversight (we know you want to; the economics of it work out nicely for you, with no accountability). People already get banned far too easily by your automated systems as it is.
> It will undoubtedly lead to great advances
"Undoubtedly" seems like a level of confidence that is unjustified. Like Travis Kalanick thinking AI is just about to help him discover new physics, this seems to suggest that AI will go from being able to do (at best) what we can already do if we were simply more diligent at our tasks to being something genuinely more than "just" us
Angela Collier has a hilarious video on tech bros thinking they can be physicists.
> But stocks are insignificant in the vast perspective of human history
This really misunderstands what the stock market tracks
> "However, if AI avoids plateauing long enough to become significantly more useful..."
As William Gibson said, "The future is already here, it's just not evenly distributed." Even if LLMs, reasoning algorithms, object recognition, and diffusion models stopped improving today, we're still at a point where massive societal changes are inevitable as the tech spreads out across industries. AI is going to steadily replace chair-to-keyboard interfaces in just about every business you can imagine.
Interestingly, AI seems to be affecting the highest level "white collar" professionals first, rather than replacing the lowest level workers immediately, like what happened when blue collar work was automated. We're still pretty far away from AI truck drivers, but people with fine arts or computer science degrees, for example, are already feeling the impact.
"Decimation" is definitely an accurate way to describe what's in the process of happening. What used to take 10 floors of white collar employees will steadily decline to just 1. No idea what everyone else will be doing.
We currently work more than we ever have. Just a couple of generations ago it was common for a couple to consist of one person who worked for someone else or the public, and one who worked at home for themselves. Now we pretty much all have to work for someone else full time and then work for ourselves in the evening. And that won't make you rich; it will just make you normal.
Maybe a "loss of jobs" is what we need so we can go back working for ourselves, cooking our own food, maintaining our own houses etc.
This is why I doubt it will happen. I think "AI" will just end up making us work even more for even less.
About 3 years late to this "hot take".
At the moment I just don't see AI in its current state or future trajectory as a threat to jobs. (Not that there can't be other reasons why jobs are getting harder to get.) Predictions are hard, and breakthroughs can happen, so this is just my opinion. Posting this comment as a record to myself of how I feel about AI, since my opinion on how useful/capable AI is has gone up and down and up and down again over the last couple of years.
Most recently down, because I worked on two separate projects over the last few weeks with the latest models available on GitHub Copilot Pro (GPT-5, Claude Sonnet 4, Gemini 2.5 Pro, and some less capable ones at times as well), trying the exact same code-change queries across all three main models for a majority of the queries. I found myself using Claude the most, but it still wasn't drastically better than the others, and still made too many mistakes.
One project was a simple health-tracking app in Dart/Flutter. Completely vibe-coded, just for fun. I got basic stuff working. Over the days I kept finding bugs as I started using it. Since I truly wanted to use this app in my daily life, at one point I just gave up, because fixing the bugs was getting way too annoying. Most "fixes", as I later got into the weeds of it, were wrong, made wrong assumptions, and made changes that seemed to fix the problem at the surface while introducing more bugs and random garbage, despite my giving a ton of context and instructions on why things are supposed to be a certain way. I was constantly fighting with the model. It would've been much easier to do much more on my own and use it only a little.
Another project was in TypeScript, where I did actually use my brain, not just vibe-coded. Here, AI models were helpful because I mostly used them to explain stuff. And did not let them make more than a few lines of code changes at most at a time. There was a portion of the project which I kinda "isolated" which I completely vibe-coded and I don't mind if it breaks or anything as it is not critical. It did save me some time but I certainly could've done it on my own with a little more time, while having code that I can understand fully well and edit.
So the way I see it, using these models right now is for research/prototyping/throwaway kind of stuff. But even in that, I literally had Claude 4 teach me something wrong about TypeScript just yesterday. It told me a certain thing was deprecated. I asked a follow-up question about why that thing is deprecated and what's used instead, and it replied with something like "Oops! I misspoke, that is not actually true, that thing is still being used and not deprecated." Like, what? Lmao. For how many things have I not asked a follow-up and learnt stuff incorrectly? Or asked and still learnt incorrectly lmao.
I like how straightforward GPT-5 is. But apart from that style of speech I don't see much other benefit. I do love LLMs for personal random searches like facts/plans/etc. I just ask the LLM to suggest what to do, just to rubber-duck or whatever. Do all these gains add up to massive job displacement? I don't know. Maybe. If it saves 10% of the time for me and everyone else, I guess we do need 10% fewer people to do the same work? But is the amount of work we can get paid for fixed and finite? Idk. We (individuals) might have to adapt and be more competitive than before depending on our jobs and how they're affected, but is it a fundamental shift? Are these models or their future capabilities human replacements? Idk. At the moment, I think they're useful but overhyped. Time will tell though.
This same link was submitted 2 days ago. My comment there still applies.
LLMs do not "understand the human language, write programs, and find bugs in a complex code base"
"LLMs are language models, and their superpower is fluency. It’s this fluency that hacks our brains, trapping us into seeing them as something they aren’t."
https://jenson.org/timmy/