I'm generally an AI skeptic, but it seems awfully early to make this call. Aside from the obvious frontline support, artist, junior coder, etc., a whole bunch of white-collar "pay me for advice on X" jobs (dietician, financial advisor, tax agent, etc.), where the advice follows set patterns only mildly tailored to the recipient, seem to be severely at risk.
Example: I recently used Gemini for some tax advice that would have cost hundreds of dollars to get from a licensed tax agent. And yes, the answer was supported by actual sources pointing to the tax office website, including a link to the office's well-hidden official calculator of precisely the thing I thought I would have to pay someone to figure out.
"I'm generally an AI skeptic, but it seems awfully early to make this call."
What call? Maybe some readers miss the (perhaps subtle) difference between "Generative AI is not ..." and "Generative AI is not going to ..."
The first can be based on fact, e.g., what has happened so far. The second is based on pure speculation. No one knows what will happen in the future. HN is continually being flooded with speculation, marketing, and hype.
In contrast, this article, i.e., the paper it discusses, is based on what has happened so far. There is no "call" being made. Only an examination of what has happened so far. Facts, not opinions.
> In contrast, this article, i.e., the paper it discusses, is based on what has happened so far.
What happened in 2023 and 2024, actually.
Nitpicky but it's worth noting that last year's AI capabilities are not the April 2025 AI capabilities and definitely won't be the December 2025 capabilities.
It's using deprecated/replaced technology to make a statement that is not forward-projecting. I'm struggling to see the purpose. It's like announcing that the sun is still shining at 7pm, no?
I feel like model improvement is severely overstated by the benchmarks and the last release cycle basically made no difference to my use cases. If you gave me Claude 3.5 and 3.7 I couldn't really tell the difference. OpenAI models feel like they are regressing, and LLAMA 4 regressed even on benchmarks.
And the hype was insane in 2023 already - it's useful to compare actual outcomes vs historic hype to gauge how credible the hype sellers are.
That's interesting. I think there's been some pretty significant improvements in the rate of hallucinations and accuracy of the models, especially when it comes to rule following. Perhaps the biggest improvement though is in the size of context windows which are huge compared to this time last year.
Maybe progress over the last 2-3 months is hard to see, but progress over the last 6 is very clear.
This is the real value of AI that, I think, we're just starting to get into. It's less about automating workflows that are inherently unstructured (I think that we're likely to continue wanting humans for this for some time).
It's more about automating workflows that are already procedural and/or protocolized, but where information gathering is messy and unstructured (i.e., some facets of law, health, finance, etc.).
Using your dietician example: we often know quite well what types of foods to eat or avoid based on your nutritional needs, your medical history, your preferences, etc. But gathering all of that information requires a mix of collecting medical records, talking to the patient, etc. Once that information is available, we can execute a fairly procedural plan to put together a diet that will likely work for you.
These are cases that I believe LLMs are actually very well suited, if the solution can be designed in such a way as to limit hallucinations.
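To make the "procedural once the facts are gathered" idea concrete, here's a minimal sketch. Everything in it — the foods, tags, calorie numbers, and the `plan` helper — is invented purely for illustration; the point is only that after messy intake (records, interviews) is reduced to structured facts, the planning step itself can be rule-based:

```python
# Hypothetical sketch: the intake step (messy, unstructured) produces
# structured facts; the planning step below is then purely procedural.
# All foods, tags, and thresholds are made up for illustration.

FOODS = [
    {"name": "oatmeal", "kcal": 150, "tags": {"high_fiber"}},
    {"name": "salmon", "kcal": 360, "tags": {"high_protein"}},
    {"name": "white bread", "kcal": 250, "tags": {"high_glycemic"}},
    {"name": "lentils", "kcal": 230, "tags": {"high_protein", "high_fiber"}},
]

def plan(avoid_tags, max_kcal_per_item):
    """Return foods that violate no restriction -- a stand-in for the
    deterministic planning step once information gathering is done."""
    return [
        f["name"]
        for f in FOODS
        if not (f["tags"] & avoid_tags) and f["kcal"] <= max_kcal_per_item
    ]

# A patient avoiding blood-sugar spikes, capped at 300 kcal per item:
print(plan({"high_glycemic"}, 300))  # ['oatmeal', 'lentils']
```

The interesting part for LLMs is upstream of this: turning a phone call and a stack of PDFs into that `FOODS`-plus-constraints structure.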
I recently tried looking up something about local tax law in ChatGPT. It confidently told me a completely wrong rule. There are lots of sources for this, but since some probably unknowingly spread misinformation, ChatGPT just treated it as correct. Since I always verify what ChatGPT spits out, it wasn't a big deal for me, just a reminder that it's garbage in, garbage out.
O3's web research seems to have gotten much, much better than their earlier attempts at using the web, which I didn't like. It seems to browse in a much more human way (trying multiple searches, noticing inconsistencies, following up with more refined searches, etc).
But I wonder how it would do in a case like yours where there is conflicting information and whether it picks up on variance in information it finds.
Yeah, I also very often find LLMs say something wrong just because they found it on the internet. The problem is that we know not to trust a random website, but LLMs make wrong info more believable. So in some sense the problem is not exactly the LLM, as they pick up on wrong stuff people or "people" have written, but they are really bad at figuring these errors out and particularly good at covering for them or backing them up.
I think this will be fixed by having LLMs trained not on the whole internet but on well-curated content. To me this feels like the internet in maybe 1993. You see the potential and it's useful. But a lot of work and experimentation has to be done to work out the use cases.
I think it’s weird to reject AI based on its current form.
"Hallucination" implies that the LLM holds some relationship to truth. Output from an LLM is not a hallucination, it's bullshit[0].
> Using your dietician example: we often know quite well what types of foods to eat or avoid based on your nutritional needs
No we don't. It's really complicated. That's why diets are popular and real dietitians are expensive. And I would know, I've had to use one to help me manage an eating disorder!
There is already so much bullshit in the diet space that adding AI bullshit (again, using the technical definition of bullshit here) only stands to increase the value of an interaction with a person with knowledge.
And that's without getting into what happens when brand recommendations are baked into the training data.
> "Hallucination" implies that the LLM holds some relationship to truth. Output from an LLM is not a hallucination, it's bullshit[0].
I understand your perspective, but the intention was to use a term we've all heard to reflect the thing we're all thinking about. Whether or not this is the right term to use for scenarios where the LLM emits incorrect information is not relevant to this post in particular.
> No we don't. It's really complicated. That's why diets are popular and real dietitians are expensive.
No, this is not why real dietitians are expensive. Real dietitians are expensive because they go through extensive training on a topic and are a licensed (and thus supply constrained) group. That doesn't mean they're operating without a grounding fact base.
Dietitians are not making up nutritional evidence and guidance as they go. They're operating on studies that have been done over decades of time and millions of people to understand in general what foods are linked to what outcomes. Yes, the field evolves. Yes, it requires changes over time. But to suggest we "don't know" is inconsistent with the fact that we're able to teach dietitians how to construct diets in the first place.
There are absolutely cases in which the confounding factors for a patient are unique enough such that novel human thought will be required to construct a reasonable diet plan or treatment pathway for someone. That will continue to be true in law, health, finances, etc. But there are also many, many cases where that is absolutely not the case, the presentation of the case is quite simple, and the next step actions are highly procedural.
This is not the same as saying dietitians are useless, or physicians are useless, or attorneys are useless. It is to say that, due to the supply constraints of these professions, there are always going to be fundamental limits to the amount they can produce. But there is a credible argument to be made that if we can bolster their ability to deliver the common scenarios much more effectively, we might be able to unlock some of the capacity to reach more people.
I find this way of looking at LLMs to be odd. Surely we all are aware that AI has always been probabilistic in nature. Very few people seem to go around talking about how their binary classifier is always hallucinating, but just sometimes happens to be right.
Just like every other form of ML we've come up with, LLMs are imperfect. They get things wrong. This is more of an indictment of yeeting a pure AI chat interface in front of a consumer than it is an indictment of the underlying technology itself. LLMs are incredibly good at doing some things. They are less good at other things.
There are ways to use them effectively, and there are bad ways to use them. Just like every other tool.
The problem is they are being sold as everything solutions. Never write code / google search / talk to a lawyer / talk to a human / be lonely again, all here, under one roof. If LLM marketing was staying in its lane as a creator of convincing text we'd be fine.
It can’t replace a human for support, it is not even close to replacing a junior developer. It can’t replace any advice job because it lies instead of erroring.
As an example if you want diet advice, it can lie to you very convincingly so there is no point in getting advice from it.
Main value you get from a programmer is they understand what they are doing and they can take the responsibility of what they are developing. Very junior developers are hired mostly as an investment so they become productive and stay with the company. AI might help with some of this but doesn’t really replace anyone in the process.
For support, there is massive value in talking to another human and having them trying to solve your issue. LLMs don’t feel much better than the hardcoded menu style auto support there already is.
I find it useful for some coding tasks but think LLMs were overestimated and it will blow up like NFTs
>I find it useful for some coding tasks but think LLMs were overestimated and it will blow up like NFTs
No way. NFTs did not make any headway in "the real world": their value proposition was that their cash value was speculative, like most other Blockchain technologies, and that understandably collapsed quickly and brilliantly. Right now developers are using LLMs and they have real tangible advantages. They are more successful than NFTs already.
I'm a huge AI skeptic and I believe it's difficult to measure their usefulness while we're still in a hype bubble but I am using them every day, they don't write my prod code because they're too unreliable and sloppy, but for one shot scripts <100 lines they have saved me hours, and they've entirely replaced stack overflow for me. If the hype bubble burst today I'd still be using LLMs tomorrow. Cannot say the same for NFTs
LLMs are somewhat useful compared to NFTs and other blockchain bullshit which is nearly completely useless.
It will be interesting to see what happens when the money from the investment bubble dries up and the real costs need to be paid by the users.
> As an example if you want diet advice, it can lie to you very convincingly so there is no point in getting advice from it.
How exactly is this different from getting advice from someone who acts confidently knowledgeable? Diet advice is an especially egregious example, since I can have 40 different dieticians give me 72 different diet/meal plans, each asserting with 100% certainty that theirs is the correct one.
It's bad enough the AI marketers push AI as some all knowing, correct oracle, but when the anti-ai people use that as the basis for their arguments, it's somehow more annoying.
Trust but verify is still a good rule here, no matter the source, human or otherwise.
In the case of dieticians, investment advisors, and accountants they are usually licensed professionals who face consequences for misconduct. LLMs don’t have malpractice insurance
If a junior developer lies about something important, they can be fired and you can try to find someone else who wouldn't do the same thing. At the very least you could warn the person not to lie again or they're gone. It's not clear that you can do the same thing with an LLM as they don't know they've lied.
If I ask it how to accomplish a task with the C standard library and it tells me to use a function that doesn't exist in the C standard library, that's not just "wrong" that is a fabrication. It is a lie
If you ask me to remove whitespace from a string in Python and I mistakenly tell you use ".trim()" (the Java method, a mistake I've made annoyingly too much) instead of ".strip()", am I lying to you?
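For what it's worth, that particular mix-up at least fails loudly in Python — strings have no `.trim()` at all, so the Java habit surfaces as an immediate error rather than silently wrong behavior:

```python
s = "  hello  "

# Python's method is .strip(); it removes leading and trailing whitespace.
print(s.strip())  # → "hello"

# The Java-ism .trim() does not exist on Python strings, so the mistake
# raises an AttributeError instead of quietly doing the wrong thing.
try:
    s.trim()
except AttributeError as e:
    print(e)  # 'str' object has no attribute 'trim'
```

A hallucinated C standard library function, by contrast, may only fail at compile time — or worse, shadow a real symbol.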
You are correct that there is a difference between lying and making a mistake, however
> Lying requires intent to deceive
LLMs do have an intent to deceive, built in!
They have been built to never admit they don't know an answer, so they will invent answers based on faulty premises
I agree that for a human mixing up ".trim()" and ".strip()" is an honest mistake
In the example I gave you are asking for a function that does not exist. If it invents a function, because it is designed to never say "you are wrong that doesn't exist" or "I don't know the answer" that seems to qualify to me as "intent to deceive" because it is designed to invent something rather than give you a negative sounding answer
It’s more like bullshitting, which is in between the two. Basically, like that guy who always has some story to tell. He’s not lying as such, he’s just waffling.
I wouldn't have a clue how to verify most things that get thrown around these days. How can I verify climate science? I just have to trust the scientific consensus (and I do). But some people refuse to trust that consensus, and they think that by reading some convincing sounding alternative sources they've verified that the majority view on climate science is wrong.
The same can apply for almost anything. How can I verify dietary studies? Just having the ability to read scientific studies and spot any flaws requires knowledge that only maybe 1 in 10000 people could do, if not worse than that.
Do people actually behave this way with you? If someone presents a plan confidently without explaining why, I tend to trust them less (even people like doctors, who just happen to start with a very high reputation). In my experience people are very forthcoming with things they don't know.
Someone can present a plan, explain that plan, and be completely wrong.
People are forthcoming with things they know they don't know. It's the stuff that they don't know that they don't know that get them. And also the things they think they know, but are wrong about. This may come as a shock, but people do make mistakes.
And if someone presents a plan, explains that plan, and is completely wrong repeatedly and often, in a way that makes it seem like they don’t even have any concept whatsoever of what they may have done wrong, wouldn’t you start to consider at some point that maybe this person is not a reliable source of information?
I trust cutting edge models now far more than the ones from a few years ago.
People talk a lot about false info and hallucinations, which the models do in fact produce, but the examples of this have become more and more far-fetched for SOTA models. It seems that now, in order to elicit bad information, you pretty much have to write out a carefully crafted trick question or ask about a topic so on the fringes of knowledge that it basically appears in only a handful of papers in the training set.
However, asking "I am sensitive to sugar, make me a meal plan for the week targeting 2000cal/day and high protein with minimally processed foods" I would totally trust the output to be on equal footing with a run of the mill registered dietician.
As for the junior developer thing, my company has already forgone paid software solutions in order to use software written by LLMs. We are not a tech company, just old school manufacturing.
NFTs never had any real value. It was just speculation, hoping some bigger sucker would come after you.
LLMs create real value. I save a bunch of time coding with an LLM vs. without one. Is it perfect? No, but it does not have to be to still create a lot of value.
Are some people hyping it up too much? Sure, and reality will set in, but it won't blow up. It will rather be like the internet. In the 2000s everyone thought "slap some internet on it and everything will be solved". They overestimated the (short-term) value of the internet. But the internet was still useful.
But it is replacing it. There's a rapidly-growing number of large, publicly-traded companies that replaced first-line support with LLMs. When I did my taxes, "talk to a person" was replaced with "talk to a chatbot". Airlines use them, telcos use them, social media platforms use them.
I suspect what you're missing here is that LLMs here aren't replacing some Platonic ideal of CS. Even bad customer support is very expensive. Chatbots are still a lot cheaper than hundreds of outsourced call center people following a rigid script. And frankly, they probably make fewer mistakes.
> and it will blow up like NFTs
We're probably in a valuation bubble, but it's pretty unlikely that the correct price is zero.
It doesn’t wholly replace the need for human support agents but if it can adequately handle a substantial number of tickets that’s enough to reduce headcount.
A huge percentage of problems raised in customer support are solved by otherwise accessible resources that the user hasn’t found. And AI agents are sophisticated enough to actually action on a lot of issues that require action.
The good news is that this means human agents can focus on the actually hard problems when they’re not consumed by as much menial bullshit. The bad news for human agents is that with half the workload we’ll probably hit an equilibrium with a lot fewer people in support.
I already know of at least one company that's pivoted to using a mix of AI and off-shoring for their support, as well as some other functions; that's underway, with results unclear, aside from layoffs that took place. There was also a brouhaha a year or two ago when a mental health advocacy group tried using AI to replace their support team... did not go as planned when it suggested self-harm to some users.
It seems like an obvious thing on the surface, but I've already noticed that when asked questions on LLM usage (eg building RAG pipelines and whatnot), ChatGPT will exclusively refer you to OpenAI products.
I just asked O3 for a software stack for deploying AI in a local application and it recommended llama over OpenAI API.
It has always been easy to imagine how advertising could destroy the integrity of LLM's. I can guarantee that there will be companies unable to resist the temporary cash flows from it. Those models will destroy their reputation in no time.
I'm an AI pessimist, yet I don't see this happening (at least not without some major advancements in how LLMs work).
One major problem is the payment mechanism. The nature of LLMs means you just can't really know or force it to spit out ad garbage in a predictable manner. That'll make it really tricky for an advertiser to want to invest in your LLM advertising (beyond being able to sell the fact that they are an AI ad service).
Another is going to be regulations. How can you be sure to properly highlight "sponsored" content in the middle of an AI hallucination? These LLM companies run a very real risk of running afoul of FTC rules.
It matters a lot how much of the market they capture before then though. Oracle and Google are two companies that have spent years torching their reputation but they are still ubiquitous and wildly profitable.
I use it everyday to an extent that I’ve come to depend on it.
For copywriting, analyzing contracts, exploring my business domain, etc etc. Each of those tasks would have required me to consult with an expert a few years ago. Not anymore.
This is a great point. I was just using it to understand various DMV procedures. It is invaluable for navigating bureaucracy, so if your job is to ingest and regurgitate a bunch of documents and procedures, you may be highly at risk here.
That is a great use for it too, rather than replacing artists we have personal advisors who can navigate almost any level of complex bureaucracy instantaneously. My girlfriend hates AI, like rails against it at any opportunity, but after spending a few hours on the DMV website I sat down and fed her questions into Claude and had answers in a few seconds. Instant convert.
I actually paid for tax advice from one of those big companies (it was recommended - last time I will take that person's recommendations!). I was very disappointed in the service. It felt like the person I was speaking to on the phone would have been better off just echoing the request into AI. So I did just that while I waited on the line. I found the answer and the tax expert "confirmed" it.
As for correctness, they mentioned the LLM citing links that the person can verify. So there is some protection at that level.
But, also, the threshold of things we manage ourselves versus when we look to others is constantly moving as technology advances and things change. We're always making risk tradeoff decisions measuring the probability we get sued or some harm comes to us versus trusting that we can handle some tasks ourselves. For example, most people do not have attorneys review their lease agreements or job offers, unless they have a specific circumstance that warrants they do so.
The line will move, as technology gives people the tools to become better at handling the more mundane things themselves.
But if you don't know anything about programming, a link to a library etc. is not so useful. Same if you don't know tax law and it cites the tax code and how it should be understood (the code is correct but the interpretation is not).
I think in many cases, chatbots may make information accessible to people who otherwise wouldn't have it, like in the OP's case. But I'm more sceptical that it's replacing experts in specialized subjects who had previously been making a living at them. They would be serving different markets.
> And yes, the answer was supported by actual sources pointing to the tax office website, including a link to the office's well-hidden official calculator of precisely the thing I thought I would have to pay someone to figure out.
Sounds like reddit could also do a good job at this, though nobody said "reddit will replace your jobs". Maybe because not as many people actively use reddit as they use generative AI now, but I cannot imagine any other reason than that.
Similarly, while not perfect, I use AI to help redesign my landscaping by uploading a picture of my yard and having it come up with different options.
Also took a picture of my tire while at the garage and asked it if I really needed new tires or not.
Took a picture of my sprinkler box and had it figure out what was going on.
Potentially all situations where I would’ve paid (or paid more than I already was) a local laborer for that advice. Or at a minimum spent much more time googling for the info.
For the tire you can also use a penny. If you stick the penny in the tread with Lincoln’s head down and his hair isn’t covered, then you need new tires. No AI. ;)
So in the coming few years, the answer to the question of whether or not to change your tires will come with a suggestion of shops in your area and a recommendation to change them. Do you think you would trust the outcome?
It's a race to the bottom for pricing. They can't do shit. Even if the American companies colluded to stop competing and raise prices, Chinese providers will undermine that.
There is no moat. Most of these AI APIs and products are interchangeable.
I find it easily hallucinates this stuff. Its understanding of a picture is decidedly worse than its understanding of words. Be careful here about asking if it needs a tire change; it is likely giving you an answer that only looks real.
There's a reason that people have to be told to not just believe everything they read on the Internet. And there's a reason some people still do that anyway.
I’m a bit of a skeptic too and kind of agree on this. Also, the human employee displacement will be slow. It will start by not eliminating existing jobs but just eliminating the need for additional headcount, so it caps the growth of these labor markets. As it does that, the folks in the roles leveraging AI the most will start slowly stealing share of demand as they find more efficient and cheaper ways to perform the work. Meanwhile, core demand is shrinking as self service by customers is increasingly enabled. Then at some step pattern, perhaps the next global business cycle down turn, the headcount starts trending downward. This will repeat a handful of times, probably taking decades to be measured in aggregate by this type of study.
Yeah, in 2023 I would expect no effect. In 2024, I think generally not; it wasn’t good or deployed enough. I think 2025 might show the first signs, but I still think there is a lot of plumbing and working with these things. 2026, though, I expect to show an effect.
Depends on how good the wrench is, if I can walk over to the wrench, kick it, say change my spark plugs now you fuck, and it does so instantly and for free and doesn't complain....
I don't trust what anyone says in this space because there is so much money to be made (by a fraction of people) if AI lives up to its promise, and money to be made to those who claim that AI is "bullshit".
The only thing I can remotely trust is my own experience. Recently, I decided to have some business cards made, which I haven't done in probably 15 years. A few years ago, I would have either hired someone on Fiverr to design my business card or pay for a premade template. Instead, I told Sora to design me a business card, and it gave me a good design the first time; it even immediately updated it with my Instagram link when I asked it to.
I'm sorry, but I fail to see how AI, as we now know it, doesn't take the wind out of the sails of certain kinds of jobs.
But didn't we have business card template programs, and even free suggested business card designs from the companies that sell business cards, almost immediately after they opened for business on the internet?
That's missing the point, and is kind of like saying why bother paying someone to build you a house when there are DIY home building kits. (or why even buy a home when you can live in a van rent-free)
The point is that I would have paid for another human being's time. Why? Because I am not a young man anymore, and have little desire to do everything myself at this point. But now, I don't have to pay for someone's time, and that surplus time doesn't necessarily transfer to something equivalent like magic.
I am not talking about whether I have to pay more or less for anything. My problem is not paying. I want to pay so that I don't have to make something myself or waste time fiddling with a free template.
What I am proposing is that, in the current day, a human being is less likely to be at the other end of the transaction when I want to spend money to avoid sacrificing my time.
Sure, one can say that whoever is working for one of these AI companies benefits, but they would be outliers, and AI is effectively homogenizing labor units in that case. Someone with creative talent isn't going to feasibly spin up a competitive AI business the way they could have started their own business selling their services directly.
Your tax example isn't far off from what's already possible with Google.
The legal profession specifically saw the rise of computers, digitization of cases and records, and powerful search... it's never been easier to "self help" - yet people still hire lawyers.
I thought the value of using a licensed tax agent is that if they give you advice that ends up being bad, they have an ethical/professional obligation to clean up their mess.
The kind of person who wants to pay nothing for advice wasn’t going to hire a lawyer or an accountant anyway.
This fact is so simple and yet here we are having arguments about it. To me people are conflating an economic assessment - whose jobs are going to be impacted and how much - with an aspirational one - which of your acquaintances personally could be replaced by an AI, because that would satisfy a beef.
Google (non-Gemini) has always been a great source for tax advice, at least here in Canada because, if nothing else, the government's website appears to leave all its pages available for indexing (even if it's impossible to navigate on its own).
Here’s why I don’t think it matters , because the machine is paying for everyone’s productivity boost, even your accountants. So maybe this tide will rise all boats. Time will tell.
Your accountant also is probably saving hundreds of dollars in other areas using AI assistance.
Personally I still think you should cross check with a professional.
Most people’s (USA) taxes are not complex, and just require basic arithmetic to complete. Even topics like stock sales, IRA rollovers, HSAs, and rental income (which the vast majority of taxpayers don’t have) are straightforward if you just read the instructions on the forms and follow them. In 30 years of paying taxes, I’ve only had a tax professional do it once: as an experiment after I already did them myself to see if there was any difference in the output. I paid a tax professional $400 and the forms he handed me back were identical to the ones I filled out myself.
I'm one of those weird kids who liked doing those puzzles where you had to walk through a list of tricky instructions and end up with the right answers, so I'm pretty good at that sort of thing. I also have fairly simple finances: a regular W-2 job and a little side income that doesn't have taxes withdrawn. But last year the IRS sent me a $450 check and a note that said I'd made a mistake on my taxes and paid too much. Sadly, they didn't tell me what the mistake was, so I couldn't be sure to correct it this year.
Technically, all you have to do is follow the written instructions. But there are a surprising number of maybes in those instructions. You hit a checkbox that asks whether you qualify for such-and-such deduction, and find yourself downloading yet another document full of conditions for qualification, which aren't always as clear-cut as you'd like. You can end up reading page after page to figure out whether you should check a single box, and that single box may require another series of forms.
My small side income takes me from a one-page return to several pages, and next year I'm probably going to have to pay estimated taxes in advance because that non-taxed income leaves me owing at the end of the year more than some acceptable threshold that could result in fines. All because I make an extra 10% doing some evening freelancing.
Most people's taxes shouldn't be complex, but in practice they're more complex than they should be.
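The "acceptable threshold" the commenter above runs into is the commonly cited IRS safe-harbor test for underpayment penalties. A rough sketch, heavily simplified (it ignores the 110% rule for higher incomes, quarterly payment timing, and state rules — this is an illustration, not tax advice):

```python
def needs_estimated_payments(tax_this_year, withheld, tax_last_year):
    """Rough sketch of the commonly cited IRS 'safe harbor' test.
    Simplified for illustration: omits the 110% prior-year rule for
    higher incomes, quarterly timing, and state rules. Not tax advice."""
    balance_due = tax_this_year - withheld
    if balance_due < 1000:
        return False  # small balances aren't penalized
    if withheld >= 0.9 * tax_this_year:
        return False  # withholding covered 90% of this year's liability
    if withheld >= tax_last_year:
        return False  # withholding covered 100% of last year's liability
    return True

# Freelance income with no withholding pushes the balance due past
# the threshold, so quarterly estimated payments are needed:
print(needs_estimated_payments(12000, 10000, 11500))  # True
```

This is exactly the kind of branching logic that turns "just follow the instructions" into several extra pages of forms.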
I don't think it makes you weird, and taxes really aren't that much of a puzzle to put together, outside of the many deduction-related edge cases (which you can skip if you just take the standard deduction). My federal and state returns last year added up to 36 pages, not counting the attachments listing investment sales. Still, they're pretty straightforward. I now at least use online software to do them, but that's only to save time filling out forms, not for the software's "expertise." I have no doubt I could do them by hand if I wanted to give myself more writing to do.
If I can do this, most people can do a simple 2-page 1040EZ.
Read Philip Tetlock's research about so-called "experts" and their inability to make good forecasts.
Here's my own take:
- It is far too early to tell.
- The roll-out of ChatGPT caused a mind-set revolution. People now "get" what is already possible, and it encourages conceiving and pursuing new use cases based on what people have seen.
- I would certainly not recommend that anyone train to become a translator; even before LLMs, people were paid pennies per word or line translated, and rates plummeted further due to tools that cache translations from previous versions of documents (SDL TRADOS etc.). The same decline is not to be expected for interpreters.
- Graphic designers who live off logo designs and similar work may see fewer requests.
- Text editors (people that edit/proofread prose, not computer programs) will be replaced by LLMs.
- LLMs are a basic technology that will now be embedded into various products, from email clients and word processors to workflow tools and chat clients. This will take 2-3 years, and after that it may reduce the number of people needed in an office with a secretarial/admin/"analyst" type background.
- Industry is already working on the next-gen version of smarter tools for medics and lawyers. This is more of a 3-5 year development, though some early adopters started 2-3 years ago. Once this is rolled out, there will be less demand for assistant-type jobs such as paralegals.
> Text editors (people that edit/proofread prose, not computer programs) will be replaced by LLMs.
This is such a broad category that I think it's inaccurate to say that all editors will be automated, regardless of your outlook on LLMs in general. Editing and proofreading are pretty distinct roles; the latter is already easily automated, but the former can take on a number of roles more akin to a second writer who steers the first writer in the correct direction. Developmental editors take an active role in helping creatives flesh out a work of fiction, technical editors perform fact-checking and do rewrites for clarity, etc.
My dentist already uses something called OverJet(?) that reads X-rays for issues. They seem to trust it, and it agreed with what they suspected on the X-rays. Personally, I've been misdiagnosed through X-rays by a medical doctor, so even being an LLM skeptic, I'm slightly favorable to AI in medicine.
But I already trust my dentist. A new dentist deferring to AI is scary, and obviously will happen.
I had a misread X-ray once, and I can see how a machine could be better at spotting patterns than a tired technician, so I'm favorable too. I think I'd like a human to at least take a glance at it, though.
The mistake on mine was caught when a radiologist checked over the work of the weekend X-ray technician who missed a hairline crack. A second look is always good, and having one look be machine and the other human might be the best combo.
It is possible for all of the following to be true:
1. This study is accurate
2. We are early in a major technological shift
3. Companies have allocated massive amounts of capital to this shift that may not represent a good investment
4. Assuming that the above three will remain true going forward is a bad idea
The .com boom and bust is an apt reference point. The technological shift WAS real, and the value to be delivered ultimately WAS delivered…but not in 1999/2000.
It may be we see a massive crash in valuations but AI still ends up the dominant driver of software value over the next 5-10 years.
That's a repeating pattern with technologies. Most of the early investments don't pay off and the transformation does happen but also quite a bit later than people predicted. This was true of the steam engine, the telegraph, electricity, and the railroad. It actually tends to be the later stage investors who reap most of the reward because by then the lessons have been learned and solutions developed.
My primary worry since the start has been not that it would "replace workers", but that it can destroy the value of entire sectors. Think of resume-sending. Once both sides are automated, the practice is actually superfluous. The concept of "posting" and "applying" to jobs has to go. So any infrastructure supporting it has to go. At no point did it successfully "do a job", but the injury to the signal-to-noise ratio wipes out the economic value of a system.
This is what happened to Google Search. It, like cable news, does kinda plod along because some dwindling fraction of the audience still doesn't "get it", but decline is decline.
> it can destroy value of entire sectors. Think of resume-sending. Once both sides are automated, the practice is actually superfluous
"Like all ‘magic’ in Tolkien, [spiritual] power is an expression of the primacy of the Unseen over the Seen and in a sense as a result such spiritual power does not effect or perform but rather reveals: the true, Unseen nature of the world is revealed by the exertion of a supernatural being and that revelation reshapes physical reality (the Seen) which is necessarily less real and less fundamental than the Unseen" [1].
The writing and receiving of resumes has been superfluous for decades. Generative AI is just revealing that truth.
Interesting: At first I was objecting in my mind ("Clearly, the magic - LLMs - can create effect instead of only revealing it.") but upon further reflecting on this, maybe you're right:
First, LLMs are a distillation of our cultural knowledge. As such they can only reveal our knowledge to us.
Second, they are limited even more by the user's knowledge. I found that you can barely escape your "zone of proximal development" when interacting with an LLM.
(There's even something to be said about prompt engineering in the context of what the article is talking about: It is 'dark magic' and 'craft-magic' - some of the full potential power of the LLM is made available to the user by binding some selected fraction of that power locally through a conjuration of sorts. And that fraction is a product of the craftsmanship of the person who produced the prompt).
My view has been something of a middle ground. It's not exactly that it reveals relevant domains of activity to be merely performative; it's a kind of "accelerationism of the almost performative". It pushes these almost-performative systems into a death spiral of pure uselessness.
In this sense, I have rarely seen AI have negative impacts. Insofar as an LLM can generate a dozen lines of code, it forces developers to engage in less "performative copy-paste of stackoverflow/code-docs/examples/etc." and engage the mind in what those lines should be. Even if that engagement of the mind happens through a prompt.
Yeah man, I'm not so sure about that. My father made good money writing resumes in his college years studying for his MFA. Same for my mother. Neither of them were under the illusion that writing/receiving resumes was important or needed. Nor were the workers or managers. The only people who were confused about it were capitalists who needed some way to avoid losing their sanity under the weight of how unnecessary they were in the scheme of things.
Resume-sending is a great example: if everyone's blasting out AI-generated applications and companies are using AI to filter them, the whole "application" process collapses into meaningless busywork
No, the whole process is revealed to be meaningless busywork. But that step has been taken for a long time, as soon as automated systems and barely qualified hacks were employed to filter applications. I mean, they're trying to solve a hard and real problem, but those solutions are just bad at it.
The technical information on the cv/resume is, in my opinion, at most half of the process. And that's assuming that the person is honest, and already has the cv-only knowledge of exactly how much to overstate and brag about their ability and to get through screens.
Presenting soft skills is entirely random, anyway, so the only marker you can have on a cv is "the person is able to write whatever we deem well-written [$LANGUAGE] for our profession and knows exactly which meaningless phrases to include that we want to see".
So I guess I was a bit strong on the low information content, but you better have a very, very strong resume if you don't know the unspoken rules of phrasing, formatting and bragging that are required to get through to an actual interview. For those of us stuck in the masses, this means we get better results by adding information that we basically only get by already being part of the in-group, not by any technical or even interpersonal expertise.
Edit: If I constrain my argument to CVs only, I think my statement holds: They test an ability to send in acceptably written text, and apart from that, literally only in-group markers.
For some applications it feels like half the signal of whether you're qualified is whether the CV is set in Computer Modern, ie was produced via LaTeX.
I'm not sure this is a great example... yes, the infrastructure of posting and applying to jobs has to go, but the cost of recruitment in this world would actually be much higher... you likely need more people and more resources to recruit a single employee.
In other words, there is a lot more spam in the world. Efficiencies in hiring that implicitly existed until today may no longer exist because anyone and their mother can generate a professional-looking cover letter or personal web page or w/e.
I'm not sure that is actually a bad thing. Being a competent employee and writing a professional-looking resume are two almost entirely distinct skill sets held together only by "professional-looking" being a rather costly marker of being in the in-group for your profession.
This is completely untrue. Google Search still works, wonderfully. It works even better than other attempts at search by the same Google. For example, there are many videos that you will NEVER find on Youtube search that come up as the first results on Google Search. Same for maps: it's much easier to find businesses on Google Search than on maps. And it's even more true for non-google websites; searching Stack Overflow questions on SO itself is an exercise in frustration. Etc.
Google Search is distinct from Google's expansive ad network. Google search is now garbage, but their ads are everywhere and more profitable than ever.
On Google's earnings call - within the last couple of weeks - they explicitly stated that their stronger-than-expected growth in the quarter was due to a large unexpected increase in search revenues[0]. That's a distinct line-item from their ads business.
>Google’s core search and advertising business grew almost 10 per cent to $50.7bn in the quarter, surpassing estimates for between 8 per cent and 9 per cent.[0]
The "Google's search is garbage" paradigm is starting to get outdated, and users are returning to their search product. Their results, particularly the Gemini overview box, are (usually) useful at the moment. Their key differentiator over generative chatbots is that they have reliable & sourced results instantly in their overview. Just concise information about the thing you searched for, instantly, with links to sources.
> The "Google's search is garbage" paradigm is starting to get outdated
Quite the opposite. It's never been more true. I'm not saying using LLMs for search is better, but as it stands right now, SEO spammers have beat Google, since whatever you search for, the majority of results are AI slop.
Their increased revenue probably comes down to the fact that they no longer show any search results in the first screenful at all for mobile and they've worked hard to make ads indistinguishable from real results at a quick glance for the average user. And it's not like there exists a better alternative. Search in general sucks due to SEO.
Can you give an example of an everyday person search that generates a majority of AI slop?
If anything my frustration with google search comes from it being much harder to find niche technical information, because it seems google has turned the knobs hard towards "Treat search queries like they are coming from the average user, so show them what they are probably looking for over what they are actually looking for."
> Quite the opposite. It's never been more true. I'm not saying using LLMs for search is better, but as it stands right now, SEO spammers have beat Google, since whatever you search for, the majority of results are AI slop.
It's actually sadder than that. Google appear to have realised that they make more money if they serve up ad infested scrapes of Stack Overflow rather than the original site. (And they're right, at least in the short term).
Most Google ad revenue comes from Google search. It's a misconception that Google derives most of its profits from third-party ads; that is just a minor part of Google's revenue.
You are talking past each other. They say "Google search sucks now" and you retort with "But people still use it." Both things can be true at the same time.
You misunderstand. Making organic search results shittier will drive up ad revenue as people click on sponsored links in the search results page instead.
Not a sustainable strategy in the long term though.
We're in the phase of yanking hard on the enshittification handle. Of course that increases profits whilst sufficient users can't or won't move, but it devalues the product for users. It's in decline insomuch as it's got notably worse.
GenAI is like plastic surgery for people who want to look better - looks good only if you can do it in a way it doesn't show it's plastic surgery.
Resume filtering by AI can work well on the first line (if implemented well). However, once we get to the real interview rounds and I see the CV is full of AI slop, it immediately suggests the candidate will have a loose attitude toward checking the work generated by LLMs. This is a problem already.
Probably the first significant hit are going to be drivers, delivery men, truckers etc. a demographic of 5 million jobs in US and double that in EU, with ripple effects costing other millions of jobs in industries such as roadside diners and hotels.
The general tone of this study seems to be "It's 1995, and this thing called the Internet has not made TV obsolete"; same for the Acemoglu piece linked elsewhere in the thread. Well, no, it doesn't work like that: it first comes for your Blockbuster, your local shops and newspaper and so on, and transforms those middle class jobs vulnerable to automation into minimum wages in some Amazon warehouse. Similarly, AI won't come for lawyers and programmers first, even if some fear it.
The overarching theme is that the benefits of automation flow to those who have the bleeding edge technological capital. Historically, labor has managed to close the gap, especially through public education; it remains to be seen if this process can continue, since eventually we're bound to hit the "hardware" limits of our wetware, whereas automation continues to accelerate.
So at some point, if the economic paradigm is not changed, human capital loses and the owners of the technological capital transition into feudal lords.
I think that drivers are probably pretty late in the cycle. Many of the environments they operate in are somewhat complicated, even if you do a lot to make automation possible. Take garbage collection: say you move to containers that can simply be lifted by crane or forks. Navigating to the places where those containers sit might still require a lot of individual training.
Similar thing goes for delivery. Moving a single pallet into a store, or replacing carpets, or whatever. Lots of complexity if you do not offload it to the receiver.
The more regular the environment, the easier it is to automate. Shelving in a store might, in my mind, be simpler than all the environments where vehicles need to operate.
And I think we know first to go. Average or below average "creative" professionals. Copywriter, artists and so on.
LLMs are the least deterministic means you could possibly ever have for automation.
What you are truly seeking is high level specifications for automation systems, which is a flawed concept to the degree that the particulars of a system may require knowledgeable decisions made on a lower level.
However, CAD/CAM, and infrastructure as code are true amplifiers of human power.
LLMs destroy the notion of direct coupling, or of having any layered specifications or actual levels involved at all: you prompt a machine trained to ascertain the important datapoints of a model on its own, when the correct model is built up with human specifications and intention at every level.
Wrongful roads lead to erratic destinations when it turns out that you actually have some intentions you wish to implement IRL.
If you want to get to a destination you use google maps.
If you want to reach the actual destination because conditions changed (there is a wreck in front of you) you need a system to identify changes that occur in a chaotic world and can pick from an undefined/unbounded list of actions.
Yeah no, I'm seeing more and more shitty ai generated ads, shop logos, interior design & graphics for instance in barber shops, fast food places etc.
The sandwich shop next to my work has a music playlist which is 100% ai generated repetitive slop.
Do you think they'll be paying graphic designers, musicians etc. from now on, when something certainly shittier than what a good artist does, but also much better than what a poor one is able to achieve, can be used in five minutes for free?
> Do you think they'll be paying graphic designers, musicians etc. from now on
People generating these things weren't ever going to be customers of those skillsets. Your examples are small business owners basically fucking around because they can, because it's free.
Most barber shops just play the radio, or "spring" for satellite radio, for example. AI generated music might actively lose them customers.
Given that the world is fast deglobalizing there will be a flood of factory work being reshored in the next 10 years.
There's also going to be a shrinkage in the workforce caused by demographics (not enough kids to replace existing workers).
At the same time education costs have been artificially skyrocketed.
Personally the only scenario I see mass unemployment happening is under a "Russia-in-the-90s" style collapse caused by an industrial rugpull (supply chains being cut off way before we are capable of domestically substituting them) and/or the continuation of policies designed to make wealth inequality even worse.
The world is deglobalizing. EU has been cutting off from Russia since the war started, and forcing medical industries to reshore since covid. At the same time it has begun drive to remilitarize itself. This means more heavy industry and all of it local.
There is brewing conflict across continents. India and Pakistan, Red sea region, South China sea. The list goes on and on. It's time to accept it. The world has moved on.
navel gazing will be shown to be a reactionary empty step, as all current global issues require more global cooperation to solve, not less.
the individual phenomena you describe are indeed detritus of this failed reaction to an increasing awareness of all humans of our common conditions under disparate nation states.
nationalism is broken by the realization that everyone everywhere is paying roughly 1/4 to 1/3 of their income in taxes, however what you receive for that taxation varies.
your nation state should have to compete with other nation states to retain you.
the nativist movement is wrongful in the usa for the reason that none of the folks crying about foreigners is actually native american,
but it's globally in error for not presenting the truth: humans are all your relatives, and they are assets, not liabilities: attracting immigration is a good thing, but hey feel free to recycle tired murdoch media talking points that have made us nothing but trouble for 40 years.
> Global connectedness is holding steady at a record high level based on the latest data available in early 2025, highlighting the resilience of international flows in the face of geopolitical tensions and uncertainty.
We have had thousands of years of globalising. The trend has always been towards a more connected world. I strongly suspect the current Trump movement (and to an extent brexit depending on which brexit version you chose to listen to) will be blips in that continued trend. That is because it doesn't make sense for there to be 200 countries all experts in microchip manufacturing and banana growing.
It happens in cycles. Globalization has followed deglobalization before and vice versa. It's never been one straight line upward.
>That is because it doesn't make sense for there to be 200 countries all experts in microchip manufacturing and banana growing.
It'll break down into blocs, not 200 individual countries.
Ask Estonia why they buy overpriced LNG from America and Qatar rather than cheap gas from their next door neighbor.
If you think the inability to source high end microchips from anywhere apart from Taiwan is going to prevent a future conflict (the Thomas Friedman(tm) golden arches theory) then I'm afraid I've got bad news.
Much of the globalized system is dependent upon US institutions which currently don't have a substitute.
BRICs have been trying to substitute for some of them and have made some nonzero progress, but they're still far, far away from stuff like a reserve currency.
Yeah you need a global navy that can assure the safe passage of thousands of ships daily. Now, how do you ensure that said navy will protect your interests? Nothing is free.
What's the alternative here? Apart from the well-known but not-so-useful advice to have a ton of friends who can hire you, or to be so famous as to not need an introduction.
When a sector collapses and become irrelevant, all its workers no longer need to be employed. Some will no longer have any useful qualifications and won't be able to find another job. They will have to go back to training and find a different activity.
It's fine if it's an isolated event. Much worse when the event is repeated in many sectors almost simultaneously.
Why? When we've seen a sector collapse, the new jobs that rush in to fill the void are new, never seen before, and thus don't have training. You just jump in and figure things out along the way like everyone else.
The problem, though, is that people usually seek out jobs that they like. When that collapses they are left reeling and aren't apt to want to embrace something new. That mental hurdle is hard to overcome.
What if no jobs, or fewer jobs than before, rush in to fill the void this time? You only need so many prompt engineers when each one can replace hundreds of traditional workers.
As others in this thread have pointed out, this is basically what happened in the relatively short period of 1995 to 2015 with the rise of global wireless internet telecommunications & software platforms.
Many, many industries and jobs transformed or were relegated to much smaller niches.
LLMs are already grounding their results in Google searches with citations. They have been doing that for a year already. It's optional with all the big models from OpenAI, Google, and xAI.
People talk about LLM hallucinations as if they're a new problem, but content mill blog posts existed 15 years ago, and they read like LLM bullshit back then, and they still exist. Clicking through to Google search results typically results in lower-quality information than just asking Gemini 2.5 pro. (which can give you the same links formatted in a more legible fashion if you need to verify.)
What people call "AI slop" existed before AI and AI where I control the prompt is getting to be better than what you will find on those sorts of websites.
I had similar thoughts, but then remembered companies still burn billions on Google Ads, sure that humans...and not bots...click them, and thinking that in 2025 most people browse without ad-blockers.
I humbly disagree. I've seen team members and sometimes entire teams being laid off because of AI. It's also not just layoffs, the hiring processes and demand have been affected as well.
As an example, many companies have recently shifted their support to "AI first" models. As a result, even if the team or certain team members haven't been fired, the general trend of hiring for support is pretty much down (anecdotal).
I agree that some automation is better for the humans to do their jobs better, but this isn't one of those. When you're looking for support, something has clearly gone wrong. Speaking or typing to an AI which responds with random unrelated articles or "sorry I didn't quite get that" is just evading responsibility in the name of "progress", "development", "modernization", "futuristic", "technology", <insert term of choice>, etc.
How do you know that these layoffs are the result of AI, rather than AI being a convenient place to lay the blame? I've seen a number of companies go "AI first" and stop hiring or have layoffs (Salesforce comes to mind) but I suspect they would have been in a slump without AI entirely.
> How do you know that these layoffs are the result of AI, rather than AI being a convenient place to lay the blame?
Both of those can be true, because companies are placing bets that AI will replace a lot of human work (by layoffs and reduced hiring), while also using it in the short term as a reason to cut short term costs.
The study looks at 11 occupations in Denmark in 2023-24.
Maybe instead look at the US in 2025. EU labor regulations make it much harder to fire employees. And 2023 was mainly a hype year for GenAI. Actual Enterprise adoption (not free vendor pilots) started taking off in the latter half of 2024.
That said, a lot of CEOs seem to have taken the "lay off all the employees first, then figure out how to have AI (or low cost offshore labor) do the work second" approach.
My biggest concern about AI is that it will make us better at things that we're already doing. Things that we would've stopped doing if we hadn't had such a slow introduction to their consequences, consequences that we're now accustomed to--but not adapted to. Frog in slowly warming water stuff like the troubling relationship between advertising and elections, or the lack of consent in our monetary systems.
I'm worried the shock will not be abrupt enough to encourage a proper rethink.
Have any of these economists ever tried to scrape by as an entry-level graphic designer/illustrator?
Apparently not, since the sort of specific work which one used to find for this has all but vanished --- every AI-generated image one sees represents an instance where someone who might have contracted for an image did not (ditto for stock images, but that's a different conversation).
> every AI-generated image one sees represents an instance where someone who might have contracted for an image did not
This is not at all true. Some percentage of AI generated images might have become a contract, but that percentage is vanishingly small.
Most AI generated images you see out there are just shared casually between friends. Another sizable chunk are useless filler in a casual blog post and the author would otherwise have gone without, used public domain images, or illegally copied an image.
A very very small percentage of them are used in a specific subset of SEO posts whose authors actually might have cared enough to get a professional illustrator a few years ago but don't care enough to avoid AI artifacts today. That sliver probably represents most of the work that used to exist for a freelance illustrator, but it's a vanishingly small percentage of AI generated images.
There is more to entry-level illustrators than SEO posts. In my daily life I've witnessed a bakery, an aspiring writer of children's books, and two University departments go for self-made AI pictures instead of hiring an illustrator. Those jobs would have definitely gone to a local illustrator.
> That sliver probably represents most of the work that used to exist for a freelance illustrator, but it's a vanishingly small percentage of AI generated images.
I prefer to get my illegally copied images from only the most humanely trained LLM instead of illegally copying them myself like some neanderthal or, heaven forbid, asking a human to make something. Such a thought is revolting; humans breathe so loud and sweat so much and are so icky. Hold on - my wife just texted me. "Hey chat gipity, what is my wife asking about now?" /s
I miss the old internet, when every article didn't have a goofy image at the top just for "optimization." With the exception of photography in reporting, it's all a waste of time and bandwidth.
Most of it wasn't bespoke assets created by humans but stock art picked by, if lucky, a professional photo editor, but more often by the author themselves.
It looks like the writing is on the wall for other menial and low-value creative jobs too - so basic music and videos - I fully expect that 90+% of video adverts will be entirely AI generated within the next year or two. See Google Veo - they have the tech already and they have YouTube already and they have the ad network already ...
Instead of uploading your video ad you already created, you'll just enter a description or two and the AI will auto-generate the video ads in thousands of iterations to target every demographic.
Google is going to run away with this with their ecosystem - OpenAI etc al can't compete with this sort of thing.
People will develop an eye for how AI-generated content looks, and that will make human creativity stand out even more. I'm expecting more creativity and less cookie-cutter content; I think AI-generated content is actually the end of it.
>People will develop an eye for how AI-generated looks
People will think they have an eye for AI-generated content, and miss all the AI that doesn't register. If anything it would benefit the whole industry to keep some stuff looking "AI" so people build a false model of what "AI" looks like.
This is like the ChatGPT image gen of last year, which purposely put a distinct style on generated images (that shiny plasticy look). Then everyone had an "eye for AI" after seeing all those. But in the meantime, purpose made image generators without the injected prompts were creating indistinguishable images.
It is almost certain that every single person here has laid eyes on an image already, probably in an ad, that didn't set off any triggers.
People already know what the ads are and what is content, and yet advertisers keep paying for ads on videos, so they must be working.
It feels to me that the SOTA video models today are pretty damn good already, let alone in another 12 months when SOTA will no doubt have moved on significantly.
Given that the goal of generative AI is to generate content that is virtually indistinguishable from expert creative people, I think it's one of these scenarios:
1. If the goal is achieved, which is highly unlikely, then we get very very close to AGI and all bets are off.
2. If the goal is not achieved and we stay in this uncanny valley territory (not at the bottom of it but not being able to climb out either), then eventually in a few years' time we should see a return to many fragmented almost indie-like platforms offering bespoke human-made content. The only way to hope to achieve the acceptable quality will be to favor it instead of scale as the content will have to be somehow verified by actual human beings.
> If the goal is achieved, which is highly unlikely, then we get very very close to AGI and all bets are off.
Question on two fronts:
1. Why do you think, considering the current rate of progress, that it is very unlikely that LLM output becomes indistinguishable from that of expert creatives? Especially considering a lot of the tells people claim to see are easily alleviated by prompting.
2. Why do you think a model whose output reaches that goal would rise in any way to what we’d consider AGI?
Personally, I feel the opposite. The output is likely to reach that level in the coming years, yet AGI is still far away from being reached once that has happened.
I don't know. Even with these tools, I don't want to be doing this work.
I'd still hire an entry-level graphic designer. I would just expect them to use these tools and 2x-5x their output. That's the only change I'm sensing.
Probably not; economists generally stay in school straight through to becoming professors, or they go into finance right after school.
That said, I don't think entry-level illustration jobs can stick around if software can do the job better. Just like we don't have a lot of human calculators anymore, technological replacement is bound to occur in society, AI or not.
> The economists found for example that "AI chatbots have created new job tasks for 8.4 percent of workers, including some who do not use the tools themselves."
For me, the most interesting takeaway. It's easy to think about a task, break it down into parts, some of which can be automated, and count the savings. But it's more difficult to take into account any secondary consequences from the automation. Sometimes you save nothing because the bottleneck was already something else. Sometimes I guess you end up causing more work down the line by saving a bit of time at an earlier stage.
This can make automation a bit of a tragedy of the commons situation: It would be better for everyone collectively to not automate certain things, but it's better for some individually, so it happens.
> you end up causing more work down the line by saving a bit of time at an earlier stage
in this case, the total cost would've gone up, and thus, eventually the stakeholder (aka, the person who pays) is going to not want to pay when the "old" way was cheaper/faster/better.
> It would be better for everyone collectively to not automate certain things, but it's better for some individually, so it happens.
not really, as long as the precondition i mentioned above (the total cost dropping) is true.
That's probably true as long as the workers generally cooperate.
But there's also adversarial situations. Hiring would be one example: Companies use automated CV triaging tools that make it harder to get through to a human, and candidates auto generate CVs and cover letters and even auto apply to increase their chance to get to a human. Everybody would probably be better off if neither side attempted to automate. Yet for the individuals involved, it saves them time, so they do it.
> AI chatbots have had no significant impact on earnings or recorded hours in any occupation
But generative AI is not just AI chatbots. There are models that generate sounds/music, models that generate images, etc.
Another thing is, the research only looked at Denmark, a nation with a fairly healthy attitude towards work-life balance, not a nation that celebrates people who work their asses off.
And the research also doesn't cover the effect of AI-generated products: if music or a painting can be created by an AI within a minute, from a prompt typed in by a 5-year-old, then the perceived value of "art work" will decrease, and you won't pay the same price when buying from a human artist.
> von Ahn’s email follows a similar memo Shopify CEO Tobi Lütke sent to employees and recently shared online. In that memo, Lütke said that before teams asked for more headcount or resources, they needed to show “why they cannot get what they want done using AI.”
The headline is a bit baity (in that the article is describing no job losses because there hasn't been any economic benefit to LLM/GenAI to justify it), but what if we re-ran the study in a country _without_ exceptionally strong unionisation participation? Would we see the same results?
> We examine the labor market effects of AI chatbots using two large-scale adoption surveys (late 2023 and 2024) covering 11 exposed occupations (25,000 workers, 7,000 workplaces), linked to matched employer-employee data in Denmark.
It sounds like they didn't ask those who got laid off.
I think the methods here are highly questionable, and appear to be based on self-reports from a small number of employees in Denmark a year ago.
The overall rate of participation in the labor work force is falling. I expect this trend to continue as AI makes the economy more and more dynamic and sets a higher and higher bar for participation.
Overall GDP is rising while the labor participation rate is falling. This clearly points to more productivity with fewer people participating. At this point one of the main factors is clearly technological advancement, and within that, I believe if you surveyed CEOs and asked what technological change has allowed them to get more done with fewer people, the resounding consensus would be AI.
Based on the speed most companies operate at - no surprises here. The internet also didn't have most of its impact in its first decade. And as is fairly well understood, most of the current generation of AI models are a bit dicey in practice. There isn't much question that in this early phase AI is likely to create new jobs and opportunities. The real question is what happens when AI is reliably intellectually superior to humans in all domains and that has been proven to everyone's satisfaction, which is still some uncertain time away.
It is like expecting cars to replace horses before anyone starts investing in the road network and getting international petroleum supply chains set up - large capital investment is an understatement when talking about how long it takes to bring in transformative tech and bed it in optimally. Nonetheless, time passed and workhorses are rare beasts.
My absolutely unqualified opinion is that blockchain will survive but won't find many uses apart from those it already has, while the metaverse - or VR usage and content - will have explosive growth at some point, especially when mixed with AI-generated and -rendered worlds, which will be lifelike and almost infinitely flexible. Which, btw, is also a great way to spend your time when your job has been replaced by another AI and you have little money for anything else.
If they end up going somewhere? Absolutely, we haven't seen anything out of the crypto universe yet compared to what'll start to happen when the tech is a century old and well understood by the bankers.
The thing about AI is that it doesn't work, you can't build on top of it, and it won't get better.
It doesn't work: even for the tiny slice of human work that is so well defined and easily assessed that it is sent out to freelancers on sites like Fiverr, AI mostly can't do it. We've had years to try this now, the lack of any compelling AI work is proof that it can't be done with current technology.
You can't build on top of it: unlike foundational technologies like the internet, AI can only be used to build one product, a chatbot. The output of an AI is natural language and it's not reliable. How are you going to meaningfully process that output? The only computer system that can process natural language is an AI, so all you can do is feed one AI into another. And how do you assess accuracy? Again, your only tool is an AI, so your only option is to ask AI 2 if AI 1 is hallucinating, and AI 2 will happily hallucinate its own answer. It's like The Cat in the Hat Comes Back, Cat E trying to clean up the mess Cat D made trying to clean up the mess Cat C made and so on.
And it won't get any better. LLMs can't meaningfully assess their training data, they are statistical constructions. We've already squeezed about all we can from the training corpora we have, more GPUs and parameters won't make a meaningful difference. We've succeeded at creating a near-perfect statistical model of wikipedia and reddit and so on, it's just not very useful even if it is endlessly amusing for some people.
Seen a whole lot of gen AI deflecting customer questions which would have been previously tickets. That is a reduced ticket volume that would have been taken by a junior support engineer.
We are a couple of years away from the death of the level 1 support engineer. I can't even imagine what's going to happen to the level 0 IT support.
> We are a couple of years away from the death of the level 1 support engineer.
And this trend isn't new; a lot of investments into e.g. customer support is to need less support staff, for example through better self-service websites, chatbots / conversational interfaces / phone menus (these go back decades), or to reduce expenses by outsourcing call center work to low-wage countries. AI is another iteration, but gut feeling says they will need a lot of training/priming/coaching to not end up doing something other than their intended task (like Meta's AIs ending up having erotic chats with minors).
One of my projects was to replace the "contact" page of a power company with a wizard - basically, get the customers to check for known outages first, then check their own fuse boxes etc, before calling customer support.
I have had AI support agents deflect my questions, but not resolve them. It is more companies ending customer support under the guise of automation than AI obsoleting the support workers.
Perhaps briefly. Companies tried this with offshoring support. Some really took a hit and had to bring it back. Some didn't though, so it's not all or nothing in the medium term. In the short term, most of the execs will buy into the hype and try it. I suspect the lower quality companies will use it, but the companies whose value is in their reputation for quality will continue to use people.
All the jobs (11) they looked at are of at least medium complexity and involve delegating tasks. These are the people handing out time-consuming, low-level jobs to cheap labour (assistants, etc.). They can save time and money by doing that work directly with AI assistants instead of waiting for an assistant to be available.
I am 100% convinced that AI will destroy, and already has destroyed, lots of jobs. We will likely encounter world-order-disrupting changes in the coming decades as computers get another 1,000 times faster and more powerful over the next 10 years.
The jobs described might get lost (obsolete or replaced) as well in the longer term if AI gets better than the people doing them. For example, just now another article was mentioned on HN: "Gen Z grads say their college degrees were a waste of time and money as AI infiltrates the workplace" - which would make teachers obsolete.
I have survived until today using the shibboleth "let me speak to a human" [1] The day this doesn't work any more, is the day I stop paying for that service. We should make a list of companies that still have actual customer service.
1: https://xkcd.com/806/ - from an era when the worst that could happen was having to speak with incompetent, but still human, tech support.
> Many of these occupations have been described as being vulnerable to AI: accountants, customer support specialists, financial advisors, HR professionals, IT support specialists, journalists, legal professionals, marketing professionals, office clerks, software developers, and teachers.
Chatbots probably won't be the final interface. But machine learning in general is a full on revolutionary tech (much clearer now than ten years ago) that hasn't been explored fully and will eventually be quite disruptive on the scale of computers on the economy probably. Though it likely won't take the form it's taking today (chatbots etc).
Translators? Graphic artists? The omission of the most obviously impacted professions immediately identifies this as a cooked study, along with talking about LLMs as "chatbots". I wonder who paid for it.
are graphic artists actually getting replaced by AI? If so that would surprise me for as impressive as AI image generation is, very little of what it does seems like it would replace a graphic artists.
The report looks at "at the labor market impact of AI chatbots on 11 occupations, covering 25,000 workers and 7,000 workplaces in Denmark in 2023 and 2024."
As with all other technologies, the jobs it removes are not normally in the country that introduces it - but that doesn't mean they never happen elsewhere.
For example, the automated looms the Luddites were protesting about didn't result in significant job losses in the UK. But how much clothing manufacturing has been curtailed in Africa because of them, and the similar innovations since, which have led to cheap mass-produced clothes that make it uneconomic to produce there?
As suggested by this report, Denmark and the West will probably be largely unaffected, with the losses made good elsewhere.
However, places like India, Vietnam with large industries based on call centres and outsourced development servicing the West are likely to be more vulnerable.
The survey questions they asked are bad questions if you're attempting to predict the future state of the labor market. But they didn't ask that; they asked existing employees how LLMs have changed their workplace.
This is the wrong question.
The question should be to hiring managers: Do you expect LLM based tools to increase or decrease your projected hiring of full time employees?
LLM workflows are already *displacing* entry-level labor because people are reaching for copilot/windsurf/CGPT instead of hiring a contract developer, researcher, BD person. I’m watching this happen across management in US startups.
It’s displacing job growth in entry level positions across primary writing copy, admin tasks or research.
You’re not going to find it in statistics immediately because it’s not a 1:1 replacement.
Much like the post-1971 labor-productivity decoupling that everyone scratched their heads about (answer: labor was outsourced and capital kept all the value gains), we will see another divergence in that labor-productivity graph, driven by displacement, not replacement.
AI scaring students away from the software field, and simultaneously making it hard for new developers to learn (because it's too tempting to click a button rather than struggle for 30 minutes), could be balancing out some job losses as well.
AI can't replace jobs or hurt wages. AI doesn't make these decisions & wages have been suppressed for a very long time, well before general AI adoption. Managers make these decisions. Don't blame AI if you get laid off or if your wages aren't even keeping up with inflation, let alone your productivity. Blame your manager.
Be wary of people trying to deflect blame away from the managerial class for these issues.
I have a 185 year old treatise on wood engraving. At the time, to reproduce any image required that it be engraved in wood or metal for the printer; the best wood engravers were not mere reproducers, as they used some artistry when reducing the image to black and white, to keep the impression from continuous tones. (And some, of course, were also original artists in their own right). The wood engraving profession was destroyed by the invention of photo-etching (there was a weird interval before the invention of photo etching, in which cameras existed but photos had to be engraved manually anyway for printing).
Maybe all the wood engravers found employment; although I doubt it. But at this speed, there will be a lot of people who won't be able to retrain during employment and will either have to use up their savings while doing so, or have to take lower paid jobs.
This is a completely meaningless study with no correlation at all to reality in the US in right now. The hockey stick started around 2/25. We are in a completely different world now for devs.
When new technology seemingly replaces human effort, it often doesn't directly replace humans (e.g. businesses don't rush to immediately swap them out for the technology). More often than not, these systems are put in place to help scale a business. We've seen this time and time again, and AI seems to be no different.
Anecdotal situation - I use ChatGPT daily to rewrite sentences in the client reports I write. I would have traditionally had a marketing person review these and rewrite them, but now AI does it.
FYI: The actual study may not quite say what this article is suggesting. Unless I'm missing something, the study seems to focus on employee use of chat-based assistants, not on company-wide use of AI workflow solutions. The answers come from interviewing the employees themselves. There is an analysis of impacts on the labor market, but that is likely flawed if the companies are segmented based on employee use of chat assistants versus company-wide deployment of AI technology.
In other words, this more likely answers the question "If customer support agents all use ChatGPT or some in-house equivalent, does the company need fewer customer support agents?" than it answers the question "If we deploy an AI agent for customers to interact with, can it reduce the volume of inquiries that make it to our customer service team and, thus, require fewer agents?"
In other terms: There will be a lot more work. So even if robots do 80% of it, if we do 10x more - the amount of work we need humans to do will double.
We will write more software, build more houses, build more cars, planes and everything down the supply chain to make these things.
When you look at planet earth, it is basically empty. While rent in big cities is high. But nobody needs to sleep in a big city. We just do so because getting in and out of it is cumbersome and building houses outside the city is expensive.
When robots build those houses and drive us into town in the morning (while we work in the car), that will change. I have done a few calculations of how much more mobility we could achieve with the existing road infrastructure if we used electric autonomous buses, and it is staggering.
Another way to look at it: Currently, most matter of planet earth has not been transformed to infrastructure used by humans. As work becomes cheaper, more and more of it will. There is almost infinitely much to do.
For my part, I would like for there still to be wild and quiet places to go to when I need time away from my fellow man, and I don't envision a world paved over for modern infrastructure as desirable, but rather the stuff of nightmares such as the movie _Silent Running_ envisioned.
That said, the fact that I can't find an open-source LLM front-end which will accept a folder full of images, run a prompt on each sequentially, and then return the results in aggregate is incredibly frustrating.
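In the meantime, the loop itself is easy to script by hand; here's a minimal Python sketch, where `describe_image` is a hypothetical stand-in for whatever vision-model call your provider exposes (not a real API):

```python
import pathlib

def describe_image(path: pathlib.Path) -> str:
    """Placeholder for a vision-model call (local model or API).
    Swap in your provider's client here; this stub just echoes the filename."""
    return f"description of {path.name}"

def run_prompt_over_folder(folder: str, exts=(".png", ".jpg", ".jpeg")) -> dict:
    """Run the model on every image in `folder` and return the results in aggregate."""
    results = {}
    for path in sorted(pathlib.Path(folder).iterdir()):
        if path.suffix.lower() in exts:
            results[path.name] = describe_image(path)
    return results
```

Not a substitute for a proper front-end, but it covers the sequential-run-and-aggregate case until one exists.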
I agree! People will become more productive, meaning fewer people can do more work. That said, I hope this does not result in the production of ever more things at the cost of nature!
I think we are at a crossroads as to what this will result in, however. In one case, the benefits will accrue at the top, with corporations earning greater profits while employing fewer people, leaving a large part of the population without jobs.
In the second case, we manage to capture these benefits, and confer them not just on the corporations but also the public good. People could work less, leaving more time for community enhancing activities. There are also many areas where society is currently underserved which could benefit from freed up workforce, such as schooling, elderly care, house building and maintenance etc etc.
I hope we can work toward the latter rather than the former.
...and saves humongous amounts of time in the process. Documentation is rarely a good read (however sad; I like good docs), and we should waste less engineering time reading it.
the earth is not the property of humans, nor is any of it empty until you show zero ecosystem or wildlife or plants there.
for sure, we are doing our best to eradicate the conditions that make earth habitable, however i suggest that the first needed change is for computer screen humans to realize that other life forms exist. this requires stepping outside and questioning human hubris, so it might be a big leap, but i am fairly confident that you will discover that absolutely none of our planet is empty.
Would you be OK if, instead of 97% of earth being empty, 94% were empty and your rent were cut in half? Another plus point of the future: an electric autonomous bus at your disposal every 5 minutes, bringing you to whatever nice lonely place you wish.
I've got no idea what you're going on about, but 97% of the Earth isn't empty in any useful sense. For starters, almost 70% is ocean. There are also large parts which are otherwise uninhabitable, and large parts which have agricultural use. Buses don't go to uninhabited places, since that costs too much. Every five minutes is a frequency no form of public transport can afford.
The nature of technological progress is that it makes formerly uninhabitable areas inhabitable.
The cost of buses is mostly the driver. Which will go away. The rest is mostly building and maintaining them. Which will be done by robots. The rest is energy. The sun sends more energy to earth in an hour than humans use in a year.
Planet earth is still resource constrained. This is easy to forget when skills availability is more frequently the bottleneck and you live in a society that for the time being has fairly easy access to raw materials.
Extrapolating from my current experience with AI-assisted work: AI just makes work more meaningful. My output has increased 10x, allowing me to focus on ideas and impact rather than repetitive tasks. Now apply that to entire industries and whole divisions of labor: manual data entry, customer support triage, etc. Will people be out of those jobs? Most certainly. But it gives all of us a chance to level up—to focus on more meaningful labor.
As a father, my forward-thinking vision for my kids is that creativity will rule the day. The most successful will be those with the best ideas and most inspiring vision.
>The most successful will be those with the best ideas and most inspiring vision.
This has never been the truth of the world, and I doubt AI will make it come to fruition. The most successful people are by and large those with powerful connections, and/or access to capital. There are millions of smart, inspired people alive right now who will never rise above the middle class. Meanwhile kids born in select zip codes will continue to skate by unburdened by the same economic turmoil most people face.
First off, is there any? That's making an assumption, one which can just as easily be attributed to human-written code. Nobody writes debt-free code, that's why you have many checks and reviews before things go to production - ideally.
Second, in theory, future generations of AI tools will be able to review previous generations and improve upon the code. If it needs to, anyway.
But yeah, tech debt isn't unique to AIs, and I haven't seen anything conclusive that AIs generate more tech debt than regular people - but please share if you've got sources of the opposite.
(disclaimer: I'm very skeptical about using AI to generate code myself, but I will admit to using it for boring tasks like unit test outlines)
If it actually works like that, it'll be just like all labor-saving innovations, going back to the loom and printing press and the like; people will lose their job, but it'll be local / individual tragedies, the large scale economic impact will likely be positive.
It'd still suck to lose your job / vocation though, and some of those won't be able to find a new job.
Honestly, much of work under capitalism is meaningless (see: The Office). The optimistic take is that many of those same paper-pushing roles could evolve into far more meaningful work—with the right training and opportunity (also AI).
When the car was invented, entire industries tied to horses collapsed. But those that evolved, leveled up: Blacksmiths became auto mechanics and metalworkers, etc.
As a creatively minded person with entrepreneurial instincts, I’ll admit: my predictions are a bit self-serving. But I believe it anyway—the future of work is entrepreneurial. It’s creative.
>the future of work is entrepreneurial. It’s creative.
How is this the conclusion you've come to when the sectors impacted most heavily by AI thus far have been graphic design, videography, photography, and creative writing?
> The optimistic take is that many of those same paper-pushing roles could evolve into far more meaningful work—with the right training and opportunity (also AI).
There already isn't enough meaningful work for everyone. We see people with the "right training" failing to find a job. AI is already making things worse by eliminating meaningful jobs — art, writing, music production are no longer viable career paths.
In contrast to statements like the following from the dweebs sucking harry potter's farts out of the less-wrong bubble
>Coding AIs increasingly look like autonomous agents rather than mere assistants: taking instructions via Slack or Teams and making substantial code changes on their own, sometimes saving hours or even days
This is shameless "AI is not bad, we swear" propaganda. Study looked at 11 occupations, 25k workers, in Denmark, in 2023-2024. How this says anything of consequence for the world at large (or even just the US) with developments moving as fast as they are, in such an unstable economic environment, is beyond me. What I do know is that I have plenty of first-hand anecdotal evidence to the contrary.
It's seemed to me that all the productivity gains will be burned up by making our jobs more and more BS, not by reducing hours worked, just like with previous technology. I expect more meetings, not less work.
This is just objectively false. My friend is a freelance copywriter and lives in the freelance world. It is 100% replacing writing jobs, editing jobs, and design jobs.
Since when? If they're writing online content, then that was wiped out somewhat recently by Google changing their search algorithm and killing a huge number of content-based sites.
To be fair, those jobs were already pretty precarious.
Ever since the explosion in popularity of the internet in the 2000's, anything journalism related has been in terminal decline. The arrival of the smartphones accelerated this process.
November 30th, 2022 is when ChatGPT burst into the world stage and upended what people thought AI was capable of doing. It’s been less than three years since then. The technology is still imperfect but improving at an exponential rate.
I know it’s replaced marketing content writers in startups. I know it has augmented development in startups and reduced hiring needs.
The effects as it gains capability will be mass unemployment.
n=small, but I've had multiple friends who did freelance technical writing and copyediting work tell me that the market died when genAI became easily available. Repeat clients are no longer interested in their work, and the new work postings aren't even worth the cost, even if you just handed back unmodified genAI output instantly.
So I find this result improbable, at best, given that I personally know several people who had to scramble to find new ways of earning money when their opportunities dried up with very little warning.
> "My general conclusion is that any story that you want to tell about these tools being very transformative, needs to contend with the fact that at least two years after [the introduction of AI chatbots], they've not made a difference for economic outcomes."
I'm someone who tries to avoid AI tools. But this paper is literally basing its whole assessment on two things: wages and hours. That makes the quoted assertion disingenuous.
Let's assume that I work 8 hours per day. If I am able to automate 1 hour of my day with AI, does that mean I get to go home an hour early? No. Does that mean I get an extra hour of pay? No.
So the assertion that there has been no economic impact assumes that the AI is a separate agent that would normally be paid in wages for time. That is not the case.
The AI is an augmentation of an existing human agent. It has the potential to increase the efficiency of a human agent by n%. So we need to measure the impact it has on effectiveness and efficiency. It will never show up in wages or hours; it will just increase the productivity for a given wage or number of hours.
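To make that concrete, a back-of-the-envelope sketch (my illustrative numbers, not from the study): automating 1 hour of an 8-hour day leaves wages and recorded hours - the only outcomes the paper measures - completely unchanged, while output per wage-hour rises about 14%.

```python
HOURS_PER_DAY = 8      # recorded hours: unchanged by the AI
AUTOMATED_HOURS = 1    # work the AI now does for free

# The old day's output now takes only 7 hours of human time,
# so the freed hour produces extra output at the same wage.
manual_hours = HOURS_PER_DAY - AUTOMATED_HOURS
output_gain = HOURS_PER_DAY / manual_hours - 1   # 1/7 ~= 14.3% more output

# The paper's two measured outcomes register nothing:
wages_change, hours_change = 0, 0
```

Measuring only wages and hours, this worker looks untouched by AI despite a double-digit productivity gain.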
AI makes people more productive so that incentivizes me to hire more people, not less. In many cases anyhow.
If each of my developers is 30% more productive, that means we can ship 30% more functionality, which means more budget to hire more developers. If you think you'll just pocket that surplus, you have another thing coming.
Imagine if a tool made content writers 10x as productive. You might hire more, not less, because they are now better value! You might eventually realise you spent too much, but this will come later.
AFAIK no company starts a shiny new initiative by firing; they start by hiring, then cut back once they have their systems in place or hit a ceiling. Even Amazon runs projects fat, then makes them lean.
There's also pent up demand.
You never expect a new labour-saving device to cost jobs while the project managers are in the empire-building phase.
Companies have been wanting to lay people off. Using AI as an excuse is a convenient way to turn a negative into a positive.
Truth is, companies that don’t need layoffs are pushing employees to use AI to supercharge their output.
You don’t grow a business by just cutting costs, you need to increase revenue. And increasing revenue means more work, which means it’s better for existing employees to put out more with AI.
It's not replacing jobs, but it's definitely the scarecrow invoked in layoff decisions across the tech industry. I suspect whatever metrics they use are simply too slow to measure the actual impact this is having in the job market.
I think if we go into a sharp recession companies will use this as an excuse to replace workers with other workers that effectively use AI cutting down on overhead. It just seems obvious this will happen. I don't think it's the doom and gloom scenario, but many CEOs, etc are chomping at the bit.
'"The adoption of these chatbots has been remarkably fast," Humlum told The Register. "Most workers in the exposed occupations have now adopted these chatbots. Employers are also shifting gears and actively encouraging it. But then when we look at the economic outcomes, it really has not moved the needle."'
So, as of yet, according to these researchers, the main effect is that of a data pump: certain corporations get deep insight into people's and other corporations' inner lives.
I was discussing this with a colleague over the past months. My view on why all these AI tools are being shoved down our throats (just look at Google's Gemini push into all enterprise tools; it's like Google+ for B2B) before there are clear-cut use cases you can point to and say "yes, this would have been much harder to do without an LLM" is this: training data is the most valuable asset, and all these tools are just data-collection machines with some bonus features that make them look somewhat useful.
I'm not saying that I think LLMs are useless, far from it, I use them when I think it's a good fit for the research I'm doing, the code I need to generate, etc., but the way it's being pushed from a marketing perspective tells me that companies making these tools need people to use them to create a data moat.
Extremely annoying to be getting these pop-ups to "use our incredible Intelligence™" at every turn, it's grating on me so much that I've actively started to use them less, and try to disable every new "Intelligence™" feature that shows up in a tool I use.
It seems like very simple cause and effect from an economic standpoint. Hype about AI is very high, so investors ask boards what they're doing about AI, because they think AI will disrupt investments that don't use it.
The boards in turn instruct the CEOs to "adopt AI", so all the normal processes for deciding what/if/when to build get short-circuited, and you get AI features that no one asked for, or mandates for employees to adopt AI with very shallow KPIs to claim success.
The hype really distorts both sides of the conversation. You get the boosters for which any use of AI is a win, no matter how inconsequential the results, and then you get things like the original article which indicate it hasn't caused job losses yet as a sign that it hasn't changed anything. And while it might disprove the hype (especially the "AI is going to replace all mental labour in $SHORT_TIMEFRAME" hype), it really doesn't indicate that it won't replace anything.
Like, when has a technology making the customer support experience worse for users or employees ever stopped its rollout if there's cost savings to be had?
I think this is why AI is so complicated for me. I've used it, and I can see some gains. But it's on the order of when IDE autocomplete went from substring matches of single methods to autocompleting chains of method calls based on types. The agent stuff fails on anything but the most bite-size work when I've tried it.
Clearly some people see it as something more transformative than that. There have been other times when people saw something as transformative and it was so clearly nothing of value (NFTs, for example) that it was easy to ignore the hype train. The reason AI is challenging for me is that it's clearly not nothing, but it's also so far away from the vision others have that it's not clear how realistic that vision is.
LLMs have mesmerized us because they are able to communicate meaning to us.
Fundamentally, we (the recipients of LLM output) generate the meaning from the words given, i.e., LLMs are great when the recipient of their output is a human.
But when the recipient is a machine, the model breaks down, because machine-to-machine interaction requires determinism. This is the weakness I see, regardless of all the hype about LLM agents: fundamentally, LLMs are not deterministic machines.
LLMs lack a fundamental human capability: deterministic symbolization, the ability to create NEW symbols with associated rules that can deterministically model the worlds we interact with. They have a long way to go on this.
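When LLM output does have to feed another program, a common mitigation is to pin the interaction to a machine-readable contract and validate every response against it. A minimal sketch (Python; `call_llm` is a hypothetical stand-in for a real model API, and the schema and values are made up for illustration):

```python
import json

# Hypothetical stand-in for a real model API call; in practice this
# would hit an LLM endpoint, and the output would vary run to run.
def call_llm(prompt: str) -> str:
    return '{"action": "refund", "amount": 42.0}'

REQUIRED_KEYS = {"action", "amount"}

def call_with_validation(prompt: str, retries: int = 3) -> dict:
    """Ask the model for JSON and reject anything that doesn't match
    the expected shape, retrying a few times before giving up."""
    for _ in range(retries):
        raw = call_llm(prompt)
        try:
            parsed = json.loads(raw)
        except json.JSONDecodeError:
            continue  # not even valid JSON; ask again
        if isinstance(parsed, dict) and REQUIRED_KEYS <= parsed.keys():
            return parsed  # conforms to the contract; safe to hand off
    raise ValueError("model never produced schema-conforming output")
```

This doesn't make the model deterministic; it just fences the downstream machine off from the nondeterminism, which is why agent pipelines lean so heavily on validation and retries.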
Bingo. Especially with the 'coding assistants', these companies are getting great insight into how software features are described and built, and how software is architected across the board.
It's very telling that we see "we won't use your data for training" sometimes and opt-outs but never "we won't collect your data". 'Training' being at best ill defined.
I see AI not replacing all workers but reducing head count.
On a software team, I could see a team of 8 reduced to a team of 4 with AI.
Especially in smaller, leaner companies.
You already see attorneys using it to write briefs, often to hilarious effect. These are clearly the precursor, though, to a much reduced need for junior/associate-level attorneys at firms.
It shouldn't. It's propaganda spread by VCs and AI 'thought leaders' who are finally seeing a glimmer of their fantastical imagination coming to life (it isn't).
Keep in mind this kind of drivel is produced by economists and the tail end of CS, who are desperately trying to stay relevant in the emerging workplace.
The wise will displace economists and consultants with LLMs, but the trend followers will hire them to prognosticate about the future impact, such that the net effect could be zero.
Right now AI's impact is the equivalent of giving the ancient Egyptians a couple of computer chips. People will eventually figure out what they are, but until then it will only be used as combs, paperweights, pendants etc.
I would say the use cases are only coming into view.
I'm generally an AI skeptic, but it seems awfully early to make this call. Aside from the obvious frontline support, artist, junior coder etc, a whole bunch of white collar "pay me for advice on X" jobs (dietician, financial advice, tax agent, etc), where the advice follows set patterns only mildly tailored for the recipient, seem to be severely at risk.
Example: I recently used Gemini for some tax advice that would have cost hundreds of dollars to get from a licensed tax agent. And yes, the answer was supported by actual sources pointing to the tax office website, including a link to the office's well-hidden official calculator of precisely the thing I thought I would have to pay someone to figure out.
"I'm generally an AI skeptic, but it seems awfully early to make this call."
What call? Maybe some readers miss the (perhaps subtle) difference between "Generative AI is not ..." and "Generative AI is not going to ..."
The first can be based on fact, e.g., what has happened so far. The second is based on pure speculation. No one knows what will happen in the future. HN is continually being flooded with speculation, marketing, hype.
In contrast, this article, i.e., the paper it discusses, is based on what has happened so far. There is no "call" being made. Only an examination of what has happened so far. Facts, not opinions.
> In contrast, this article, i.e., the paper it discusses, is based on what has happened so far.
What happened in 2023 and 2024 actually
Nitpicky but it's worth noting that last year's AI capabilities are not the April 2025 AI capabilities and definitely won't be the December 2025 capabilities.
It's using deprecated/replaced technology to make a statement that is not forward-projecting. I'm struggling to see the purpose. It's like announcing that the sun is still shining at 7pm, no?
I feel like model improvement is severely overstated by the benchmarks and the last release cycle basically made no difference to my use cases. If you gave me Claude 3.5 and 3.7 I couldn't really tell the difference. OpenAI models feel like they are regressing, and LLAMA 4 regressed even on benchmarks.
And the hype was insane in 2023 already - it's useful to compare actual outcomes vs historic hype to gauge how credible the hype sellers are.
That's interesting. I think there's been some pretty significant improvements in the rate of hallucinations and accuracy of the models, especially when it comes to rule following. Perhaps the biggest improvement though is in the size of context windows which are huge compared to this time last year.
Maybe progress over the last 2-3 months is hard to see, but progress over the last 6 is very clear.
This is the real value of AI that, I think, we're just starting to get into. It's less about automating workflows that are inherently unstructured (I think that we're likely to continue wanting humans for this for some time).
It's more about automating workflows that are already procedural and/or protocolized, but where information gathering is messy and unstructured (i.e., some facets of law, health, finance, etc.).
Using your dietician example: we often know quite well what types of foods to eat or avoid based on your nutritional needs, your medical history, your preferences, etc. But gathering all of that information requires a mix of collecting medical records, talking to the patient, etc. Once that information is available, we can execute a fairly procedural plan to put together a diet that will likely work for you.
These are cases to which I believe LLMs are actually very well suited, if the solution can be designed in such a way as to limit hallucinations.
I recently tried looking up something about local tax law in ChatGPT. It confidently told me a completely wrong rule. There are lots of sources for this, but since some probably unknowingly spread misinformation, ChatGPT just treated it as correct. Since I always verify what ChatGPT spits out, it wasn't a big deal for me, just a reminder that it's garbage in, garbage out.
Out of curiosity, did you try this in o3?
O3's web research seems to have gotten much, much better than their earlier attempts at using the web, which I didn't like. It seems to browse in a much more human way (trying multiple searches, noticing inconsistencies, following up with more refined searches, etc).
But I wonder how it would do in a case like yours where there is conflicting information and whether it picks up on variance in information it finds.
Yeah, I also find LLMs very often say something wrong just because they found it on the internet. The problem is that we know not to trust a random website, but LLMs make wrong info more believable. So the problem in some sense is not exactly the LLM, as they pick up on wrong stuff people or "people" have written, but they are really bad at figuring these errors out and particularly good at covering them up or backing them up.
I think this will be fixed by having LLMs trained not on the whole internet but on well-curated content. To me this feels like the internet in maybe 1993. You see the potential and it's useful. But a lot of work and experimentation has to be done to work out the use cases.
I think it’s weird to reject AI based on its current form.
ChatGPT isn't any good these days. Try switching to Claude or Gemini 2.5 Pro.
ChatGPT is still good. Try o3.
"Hallucination" implies that the LLM holds some relationship to truth. Output from an LLM is not a hallucination, it's bullshit[0].
> Using your dietician example: we often know quite well what types of foods to eat or avoid based on your nutritional needs
No we don't. It's really complicated. That's why diets are popular and real dietitians are expensive. And I would know; I've had to use one to help me manage an eating disorder!
There is already so much bullshit in the diet space that adding AI bullshit (again, using the technical definition of bullshit here) only stands to increase the value of an interaction with a person with knowledge.
And that's without getting into what happens when brand recommendations are baked into the training data.
0 https://link.springer.com/article/10.1007/s10676-024-09775-5
> "Hallucination" implies that the LLM holds some relationship to truth. Output from an LLM is not a hallucination, it's bullshit[0].
I understand your perspective, but the intention was to use a term we've all heard to reflect the thing we're all thinking about. Whether or not this is the right term to use for scenarios where the LLM emits incorrect information is not relevant to this post in particular.
> No we don't. It's really complicated. That's why diets are popular and real dietitians are expensive.
No, this is not why real dietitians are expensive. Real dietitians are expensive because they go through extensive training on a topic and are a licensed (and thus supply constrained) group. That doesn't mean they're operating without a grounding fact base.
Dietitians are not making up nutritional evidence and guidance as they go. They're operating on studies that have been done over decades of time and millions of people to understand in general what foods are linked to what outcomes. Yes, the field evolves. Yes, it requires changes over time. But to suggest we "don't know" is inconsistent with the fact that we're able to teach dietitians how to construct diets in the first place.
There are absolutely cases in which the confounding factors for a patient are unique enough such that novel human thought will be required to construct a reasonable diet plan or treatment pathway for someone. That will continue to be true in law, health, finances, etc. But there are also many, many cases where that is absolutely not the case, the presentation of the case is quite simple, and the next step actions are highly procedural.
This is not the same as saying dietitians are useless, or physicians are useless, or attorneys are useless. It is to say that, due to the supply constraints of these professions, there are always going to be fundamental limits to the amount they can produce. But there is a credible argument to be made that if we can bolster their ability to deliver the common scenarios much more effectively, we might be able to unlock some of the capacity to reach more people.
Exactly! All LLMs do is “hallucinate”. Sometimes the output happens to be right, same as a broken clock.
I find this way of looking at LLMs to be odd. Surely we all are aware that AI has always been probabilistic in nature. Very few people seem to go around talking about how their binary classifier is always hallucinating, but just sometimes happens to be right.
Just like every other form of ML we've come up with, LLMs are imperfect. They get things wrong. This is more of an indictment of yeeting a pure AI chat interface in front of a consumer than it is an indictment of the underlying technology itself. LLMs are incredibly good at doing some things. They are less good at other things.
There are ways to use them effectively, and there are bad ways to use them. Just like every other tool.
The problem is they are being sold as everything solutions. Never write code / google search / talk to a lawyer / talk to a human / be lonely again, all here, under one roof. If LLM marketing was staying in its lane as a creator of convincing text we'd be fine.
I think a lot of problems will be solved by explicitly training on high quality content and probably injecting some expert knowledge in addition
You imply that, like a stopped clock, LLMs are only right occasionally and randomly. Which is just nonsense.
Same is true of humans fwiw.
It can’t replace a human for support, and it is not even close to replacing a junior developer. It can’t replace any advice job, because it lies instead of erroring.
As an example if you want diet advice, it can lie to you very convincingly so there is no point in getting advice from it.
The main value you get from a programmer is that they understand what they are doing and can take responsibility for what they are developing. Very junior developers are hired mostly as an investment so they become productive and stay with the company. AI might help with some of this but doesn't really replace anyone in the process.
For support, there is massive value in talking to another human and having them try to solve your issue. LLMs don't feel much better than the hardcoded menu-style auto-support that already exists.
I find it useful for some coding tasks but think LLMs were overestimated and it will blow up like NFTs
I agree with most of your points but this one
>I find it useful for some coding tasks but think LLMs were overestimated and it will blow up like NFTs
No way. NFTs did not make any headway in "the real world": their value proposition was that their cash value was speculative, like most other Blockchain technologies, and that understandably collapsed quickly and brilliantly. Right now developers are using LLMs and they have real tangible advantages. They are more successful than NFTs already.
I'm a huge AI skeptic and I believe it's difficult to measure their usefulness while we're still in a hype bubble but I am using them every day, they don't write my prod code because they're too unreliable and sloppy, but for one shot scripts <100 lines they have saved me hours, and they've entirely replaced stack overflow for me. If the hype bubble burst today I'd still be using LLMs tomorrow. Cannot say the same for NFTs
LLMs are somewhat useful compared to NFTs and other blockchain bullshit which is nearly completely useless. It will be interesting what happens when the money from the investment bubble dries out and the real costs need to be paid by the users.
> As an example if you want diet advice, it can lie to you very convincingly so there is no point in getting advice from it.
How exactly is this different from getting advice from someone who acts confidently knowledgeable? Diet advice is an especially egregious example, since I can have 40 different dieticians give me 72 different diet/meal plans with them saying 100% certainty that this is the correct one.
It's bad enough the AI marketers push AI as some all knowing, correct oracle, but when the anti-ai people use that as the basis for their arguments, it's somehow more annoying.
Trust but verify is still a good rule here, no matter the source, human or otherwise.
In the case of dieticians, investment advisors, and accountants they are usually licensed professionals who face consequences for misconduct. LLMs don’t have malpractice insurance
If a junior developer lies about something important, they can be fired and you can try to find someone else who wouldn't do the same thing. At the very least you could warn the person not to lie again or they're gone. It's not clear that you can do the same thing with an LLM as they don't know they've lied.
You're falling into the mistake of "correct" or "lied" though. Being wrong isn't lying.
Inventing answers is lying
If I ask it how to accomplish a task with the C standard library and it tells me to use a function that doesn't exist in the C standard library, that's not just "wrong"; that is a fabrication. It is a lie.
Lying requires intent to deceive.
If you ask me to remove whitespace from a string in Python and I mistakenly tell you use ".trim()" (the Java method, a mistake I've made annoyingly too much) instead of ".strip()", am I lying to you?
It's not a lie. It's just wrong.
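For what it's worth, that mix-up is easy to check: Python's `str` simply has no `.trim()`, so the honest mistake fails loudly rather than deceiving anyone:

```python
s = "  hello  "

# The Python method: strips leading/trailing whitespace.
print(s.strip())  # prints "hello"

# The Java name doesn't exist on Python strings; calling it
# raises AttributeError instead of silently doing something else.
try:
    s.trim()
except AttributeError:
    print("str has no method 'trim'")
```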
You are correct that there is a difference between lying and making a mistake, however
> Lying requires intent to deceive
LLMs do have an intent to deceive, built in!
They have been built to never admit they don't know an answer, so they will invent answers based on faulty premises
I agree that for a human mixing up ".trim()" and ".strip()" is an honest mistake
In the example I gave you are asking for a function that does not exist. If it invents a function, because it is designed to never say "you are wrong that doesn't exist" or "I don't know the answer" that seems to qualify to me as "intent to deceive" because it is designed to invent something rather than give you a negative sounding answer
An LLM is not "just wrong" either. It's just bullshit.
The bullshitter doesn't care about if what they say is true or false or right or wrong. They just put out more bullshit.
It’s more like bullshitting, which is in between the two. Basically, like that guy who always has some story to tell. He’s not lying as such, he’s just waffling.
Which is interesting, because AI doesn't have intent and is therefore incapable of lying.
> since I can have 40 different dieticians give me 72 different diet/meal plans with them saying 100% certainty that this is the correct one
Because, as Brad Pilon of intermittent fasting fashion repeatedly stresses, "All diets work."*
* Once there is an energy deficit.
I would not say all of them but in general I agree, there is not one correct one but many correct ones.
> Trust but verify is still a good rule here
I wouldn't have a clue how to verify most things that get thrown around these days. How can I verify climate science? I just have to trust the scientific consensus (and I do). But some people refuse to trust that consensus, and they think that by reading some convincing sounding alternative sources they've verified that the majority view on climate science is wrong.
The same can apply for almost anything. How can I verify dietary studies? Just reading scientific studies and spotting their flaws requires knowledge that maybe only 1 in 10,000 people have, if not fewer.
Do people actually behave this way with you? If someone presents a plan confidently without explaining why, I tend to trust them less (even people like doctors, who just happen to start with a very high reputation). In my experience people are very forthcoming with things they don't know.
Someone can present a plan, explain that plan, and be completely wrong.
People are forthcoming with things they know they don't know. It's the stuff that they don't know that they don't know that get them. And also the things they think they know, but are wrong about. This may come as a shock, but people do make mistakes.
And if someone presents a plan, explains that plan, and is completely wrong repeatedly and often, in a way that makes it seem like they don’t even have any concept whatsoever of what they may have done wrong, wouldn’t you start to consider at some point that maybe this person is not a reliable source of information?
I trust cutting edge models now far more than the ones from a few years ago.
People talk a lot of about false info and hallucinations, which the models do in fact do, but the examples of this have become more and more far flung for SOTA models. It seems that now in order to elicit bad information, you pretty much have to write out a carefully crafted trick question or ask about a topic so on the fringes of knowledge that it basically is only a handful of papers in the training set.
However, asking "I am sensitive to sugar, make me a meal plan for the week targeting 2000cal/day and high protein with minimally processed foods" I would totally trust the output to be on equal footing with a run of the mill registered dietician.
As for the junior developer thing, my company has already forgone paid software solutions in order to use software written by LLMs. We are not a tech company, just old school manufacturing.
NFTs never had any real value. It was just speculation, hoping some bigger sucker would come along after you.
LLMs create real value. I save a bunch of time coding with an LLM vs. without one. Is it perfect? No, but it does not have to be to still create a lot of value.
Are some people hyping it up too much? Sure, and reality will set in, but it won't blow up. It will rather be like the internet. In the 2000s everyone thought "slap some internet on it and everything will be solved". They overestimated the (short-term) value of the internet. But the internet was still useful.
> It can’t replace a human for support
But it is replacing it. There's a rapidly-growing number of large, publicly-traded companies that replaced first-line support with LLMs. When I did my taxes, "talk to a person" was replaced with "talk to a chatbot". Airlines use them, telcos use them, social media platforms use them.
I suspect what you're missing here is that LLMs here aren't replacing some Platonic ideal of CS. Even bad customer support is very expensive. Chatbots are still a lot cheaper than hundreds of outsourced call center people following a rigid script. And frankly, they probably make fewer mistakes.
> and it will blow up like NFTs
We're probably in a valuation bubble, but it's pretty unlikely that the correct price is zero.
> It can’t replace a human for support
It doesn’t wholly replace the need for human support agents but if it can adequately handle a substantial number of tickets that’s enough to reduce headcount.
A huge percentage of problems raised in customer support are solved by otherwise accessible resources that the user hasn’t found. And AI agents are sophisticated enough to actually action on a lot of issues that require action.
The good news is that this means human agents can focus on the actually hard problems when they’re not consumed by as much menial bullshit. The bad news for human agents is that with half the workload we’ll probably hit an equilibrium with a lot fewer people in support.
I already know of at least one company that's pivoted to using a mix of AI and off-shoring for their support, as well as some other functions; that's underway, with results unclear, aside from the layoffs that took place. There was also a brouhaha a year or two ago when a mental health advocacy group tried using AI to replace their support team... it did not go as planned when the bot suggested self-harm to some users.
LLM is already very useful for a lot of tasks. NFT and most other crypto has never been useful for anything other than speculation.
I tend to use ai for the same things I’d have used Google for in 2005.
Google is pretty much useless now that it has changed into an ad platform, and I suspect AI will go the same way soon enough.
It seems like an obvious thing on the surface, but I've already noticed that when asked questions on LLM usage (eg building RAG pipelines and whatnot), ChatGPT will exclusively refer you to OpenAI products.
I just asked O3 for a software stack for deploying AI in a local application and it recommended llama over OpenAI API.
It has always been easy to imagine how advertising could destroy the integrity of LLMs. I can guarantee that there will be companies unable to resist the temporary cash flows from it. Those models will destroy their reputation in no time.
I'm an AI pessimist, yet I don't see this happening (at least not without some major advancements in how LLMs work).
One major problem is the payment mechanism. The nature of LLMs means you just can't really know or force them to spit out ad garbage in a predictable manner. That'll make it really tricky for an advertiser to want to invest in your LLM advertising (beyond being able to sell the fact that they are an AI ad service).
Another is going to be regulations. How can you be sure to properly highlight "sponsored" content in the middle of an AI hallucination? These LLM companies run a very real risk of running afoul of FTC rules.
It matters a lot how much of the market they capture before then though. Oracle and Google are two companies that have spent years torching their reputation but they are still ubiquitous and wildly profitable.
It's happening, though not with ads yet
https://www.washingtonpost.com/technology/2025/04/17/llm-poi...
My bet is that free versions of models will become sponsor aligned.
I use it everyday to an extent that I’ve come to depend on it.
For copywriting, analyzing contracts, exploring my business domain, etc etc. Each of those tasks would have required me to consult with an expert a few years ago. Not anymore.
This is a great point, I was just using it understand various DMV procedures. It is invaluable for navigating bureaucracy so if your job is to ingest and regurgitate a bunch of documents and procedures you may be highly at risk here.
That is a great use for it too, rather than replacing artists we have personal advisors who can navigate almost any level of complex bureaucracy instantaneously. My girlfriend hates AI, like rails against it at any opportunity, but after spending a few hours on the DMV website I sat down and fed her questions into Claude and had answers in a few seconds. Instant convert.
I actually paid for tax advice from one of those big companies (it was recommended; last time I will take that person's recommendations!). I was very disappointed in the service. It felt like the person I was speaking to on the phone would have been better off just echoing the request into AI. So I did just that as I waited on the line. I found the answer, and the tax expert "confirmed" it.
According to the article the Tax expert still has a job though.
How do you know it was correct without being a tax expert? And consulting a tax expert would give you legal recourse if it was wrong.
As for correctness, they mentioned the LLM citing links that the person can verify. So there is some protection at that level.
But, also, the threshold of things we manage ourselves versus when we look to others is constantly moving as technology advances and things change. We're always making risk tradeoff decisions measuring the probability we get sued or some harm comes to us versus trusting that we can handle some tasks ourselves. For example, most people do not have attorneys review their lease agreements or job offers, unless they have a specific circumstance that warrants they do so.
The line will move, as technology gives people the tools to become better at handling the more mundane things themselves.
But if you don't know anything about programming, a link to a library etc. is not so useful. Same if you don't know about tax law and it cites the tax code and how it should be understood (the code is correct but the interpretation is not).
If it’s also returning links, wouldn’t it be faster and more authoritative to just go read the official links and skip the LLM slop entirely?
No. The LLM in the story found the necessary links. In this case the LLM was a better search engine.
Sure. But often you don’t know how to find the information or what are the right technical terms for your problem.
In a more general sense sometimes, but not always, it is easier to verify something than to come up with it at the first place.
I think in many cases, chatbots may make information accessible to people who otherwise wouldn't have it, like in the OP's case. But I'm more sceptical that it's replacing experts in specialized subjects who had previously been making a living at them. They would be serving different markets.
> And yes, the answer was supported by actual sources pointing to the tax office website, including a link to the office's well-hidden official calculator of precisely the thing I thought I would have to pay someone to figure out.
Sounds like reddit could also do a good job at this, though nobody said "reddit will replace your jobs". Maybe because not as many people actively use reddit as they use generative AI now, but I cannot imagine any other reason than that.
Similarly, while not perfect, I use AI to help redesign my landscaping by uploading a picture of my yard and having it come up with different options.
Also took a picture of my tire while at the garage and asked it if I really needed new tires or not.
Took a picture of my sprinkler box and had it figure out what was going on.
Potentially all situations where I would’ve paid (or paid more than I already was) a local laborer for that advice. Or at a minimum spent much more time googling for the info.
For the tire you can also use a penny. If you stick the penny in the tread with Lincoln's head down and his hair isn't covered, then you need new tires. No AI. ;)
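For reference, the penny test works because Lincoln's head sits about 2/32 of an inch from the coin's edge, which matches the common legal minimum tread depth in the US. As a toy sketch of the rule:

```python
def needs_new_tires(tread_depth_inches: float) -> bool:
    # Seeing all of Lincoln's hair means the tread is at or below
    # roughly 2/32", the usual replace-it threshold.
    return tread_depth_inches <= 2 / 32

# New tires typically start around 10/32" of tread.
```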
So in the coming few years, when you ask whether or not to change your tires, suggestions for shops in your area will come with a recommendation to change them. Do you think you would trust the outcome?
I am hoping that there will always be premium paid options for LLMs, and thus the onus would be on the user whether or not they want biased answers.
These will likely be cell-phone-plan level expensive, but the value prop would still be excellent.
Why do you think that's not a problem today when you ask a car mechanic?
Once they have you hooked they’ll start jacking up the prices.
It's a race to the bottom for pricing. They can't do shit. Even if the American companies colluded to stop competing and raise prices, Chinese providers will undermine that.
There is no moat. Most of these AI APIs and products are interchangeable.
> Also took a picture of my tire while at the garage and asked it if I really needed new tires or not.
You can use a penny and your eyeballs to assess this, and all it costs is $0.01
I find it easily hallucinates this stuff. Its understanding of a picture is decidedly worse than its understanding of words. Be careful here about asking if it needs a tire change; it is likely giving you an answer that only looks real.
I agree with you, my post was about not using AI to check tread depth and relying on a penny and your own eyesight instead, illustrated here: https://www.bridgestonetire.com/learn/maintenance/how-to-che...
It's also something so trivial to determine yourself.
It blows my mind the degree that people are offloading any critical thinking to AI
There's a reason that people have to be told to not just believe everything they read on the Internet. And there's a reason some people still do that anyway.
I’m a bit of a skeptic too and kind of agree on this. Also, the human employee displacement will be slow. It will start by not eliminating existing jobs but just eliminating the need for additional headcount, so it caps the growth of these labor markets. As it does that, the folks in the roles leveraging AI the most will start slowly stealing share of demand as they find more efficient and cheaper ways to perform the work. Meanwhile, core demand is shrinking as self service by customers is increasingly enabled. Then at some step pattern, perhaps the next global business cycle down turn, the headcount starts trending downward. This will repeat a handful of times, probably taking decades to be measured in aggregate by this type of study.
Yeah, for 2023 I would expect no effect. 2024, I think generally not; it wasn't good or deployed enough. I think 2025 might show the first signs, but I still think there is a lot of plumbing and working with these things. 2026, though, I expect to show an effect.
2024 was already madness for translators and graphic artists, according to my personal anecdata.
> I recently used Gemini for some tax advice that would have cost hundreds of dollars to get from a licensed tax agent.
That’s like buying a wrench and changing your own spark plugs. Wrenches are not putting mechanics out of business.
Depends on how good the wrench is, if I can walk over to the wrench, kick it, say change my spark plugs now you fuck, and it does so instantly and for free and doesn't complain....
I don't trust what anyone says in this space because there is so much money to be made (by a fraction of people) if AI lives up to its promise, and money to be made to those who claim that AI is "bullshit".
The only thing I can remotely trust is my own experience. Recently, I decided to have some business cards made, which I haven't done in probably 15 years. A few years ago, I would have either hired someone on Fiverr to design my business card or pay for a premade template. Instead, I told Sora to design me a business card, and it gave me a good design the first time; it even immediately updated it with my Instagram link when I asked it to.
I'm sorry, but I fail to see how AI, as we now know it, doesn't take the wind out of the sails of certain kinds of jobs.
But didn't we have business card template programs, and even free suggested business card designs from the companies that sell business cards, almost immediately after they opened for business on the internet?
That's missing the point, and is kind of like saying why bother paying someone to build you a house when there are DIY home building kits. (or why even buy a home when you can live in a van rent-free)
The point is that I would have paid for another human being's time. Why? Because I am not a young man anymore, and have little desire to do everything myself at this point. But now, I don't have to pay for someone's time, and that surplus time doesn't necessarily transfer to something equivalent like magic.
You do pay for it though. Compute isn't free.
Could I really have been more clear?
I am not talking about whether I have to pay more or less for anything. My problem is not paying. I want to pay so that I don't have to make something myself or waste time fiddling with a free template.
What I am proposing is that, in the current day, a human being is less likely to be at the other end of the transaction when I want to spend money to avoid sacrificing my time.
Sure, one can say that whoever is working for one of these AI companies benefits, but they would be outliers, and AI is effectively homogenizing labor units in that case. Someone with creative talent isn't going to feasibly spin up a competitive AI business the way they could have started their own business selling their services directly.
Your tax example isn't far off from what's already possible with Google.
The legal profession specifically saw the rise of computers, digitization of cases and records, and powerful search... it's never been easier to "self help" - yet people still hire lawyers.
Ironically, your example is what you used to get from a Google search back when Google wasn't aggressively monetized and enshittified.
I thought the value of using a licensed tax agent is that if they give you advice that ends up being bad, they have an ethical/professional obligation to clean up their mess.
The kind of person who wants to pay nothing for advice wasn’t going to hire a lawyer or an accountant anyway.
This fact is so simple and yet here we are having arguments about it. To me people are conflating an economic assessment - whose jobs are going to be impacted and how much - with an aspirational one - which of your acquaintances personally could be replaced by an AI, because that would satisfy a beef.
Google (non-Gemini) has always been a great source for tax advice, at least here in Canada because, if nothing else, the government's website appears to leave all its pages available for indexing (even if it's impossible to navigate on its own).
Here’s why I don’t think it matters: the machine is paying for everyone’s productivity boost, even your accountant’s. So maybe this tide will lift all boats. Time will tell.
Your accountant also is probably saving hundreds of dollars in other areas using AI assistance.
Personally I still think you should cross check with a professional.
Not consulting a real tax advisor is probably going to cost you much more.
I wouldn't be saving on tax advisors. Moreover, I would hire two different tax advisors, so I could cross check them.
Most people’s (USA) taxes are not complex, and just require basic arithmetic to complete. Even topics like stock sales, IRA rollovers, HSAs, and rental income (which the vast majority of taxpayers don’t have) are straightforward if you just read the instructions on the forms and follow them. In 30 years of paying taxes, I’ve only had a tax professional do it once: as an experiment after I already did them myself to see if there was any difference in the output. I paid a tax professional $400 and the forms he handed me back were identical to the ones I filled out myself.
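The "basic arithmetic" in question is mostly a progressive bracket calculation. A minimal sketch, with the caveat that the brackets and rates below are hypothetical illustrative figures, not real IRS numbers:

```python
# Toy progressive tax calculation. Brackets and rates are hypothetical,
# for illustration only -- not real IRS figures.
BRACKETS = [(0, 0.10), (11_000, 0.12), (44_725, 0.22)]  # (lower bound, rate)

def tax_owed(income: float) -> float:
    owed = 0.0
    for i, (lower, rate) in enumerate(BRACKETS):
        # Each bracket's rate applies only to income falling inside it.
        upper = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if income > lower:
            owed += (min(income, upper) - lower) * rate
    return round(owed, 2)

print(tax_owed(50_000))  # 6307.5
```

Each slice of income is taxed at its own bracket's rate, which is the main thing the form instructions walk you through; the rest is bookkeeping.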
I'm one of those weird kids who liked doing those puzzles where you had to walk through a list of tricky instructions and end up with the right answers, so I'm pretty good at that sort of thing. I also have fairly simple finances: a regular W-2 job and a little side income that doesn't have taxes withdrawn. But last year the IRS sent me a $450 check and a note that said I'd made a mistake on my taxes and paid too much. Sadly, they didn't tell me what the mistake was, so I couldn't be sure to correct it this year.
Technically, all you have to do is follow the written instructions. But there are a surprising number of maybes in those instructions. You hit a checkbox that asks whether you qualify for such-and-such deduction, and find yourself downloading yet another document full of conditions for qualification, which aren't always as clear-cut as you'd like. You can end up reading page after page to figure out whether you should check a single box, and that single box may require another series of forms.
My small side income takes me from a one-page return to several pages, and next year I'm probably going to have to pay estimated taxes in advance because that non-taxed income leaves me owing at the end of the year more than some acceptable threshold that could result in fines. All because I make an extra 10% doing some evening freelancing.
Most people's taxes shouldn't be complex, but in practice they're more complex than they should be.
I don't think it makes you weird, and taxes really aren't that much of a puzzle to put together, outside of the many deduction-related edge cases (which you can skip if you just take the standard deduction). My federal and state returns last year added up to 36 pages, not counting the attachments listing investment sales. Still, they're pretty straightforward. I now at least use online software to do them, but that's only to save time filling out forms, not for the software's "expertise." I have no doubt I could do them by hand if I wanted to give myself more writing to do.
If I can do this, most people can do a simple 2-page 1040EZ.
"AI is all hype and is going to destroy the labor market"
Read Paul Tetlock's research about so-called "experts" and their inability to make good forecasts.
Here's my own take:
- It is far too early to tell.
- The roll-out of ChatGPT caused a mind-set revolution. People now "get" what is already possible, and it encourages conceiving and pursuing new use cases based on what people have seen.
- I would not recommend that anyone train to become a translator, for sure; even before LLMs, people were paid penny amounts per word or line translated, and rates plummeted further due to tools that cache translations from previous versions of documents (SDL TRADOS etc.). The same decline is not to be expected for interpreters.
- Graphic designers who live off logo designs and similar work may see fewer requests.
- Text editors (people that edit/proofread prose, not computer programs) will be replaced by LLMs.
- LLMs are a basic technology that will now be embedded into various products, from email clients and word processors to workflow tools and chat clients. This will take 2-3 years, and it may reduce the number of people needed in an office with a secretarial/admin/"analyst" type background after that.
- Industry is already working on the next-gen version of smarter tools for medics and lawyers. This is more of a 3-5 year development, though some early adopters started 2-3 years ago. Once this is rolled out, there will be less demand for assistant-type jobs such as paralegals.
> Text editors (people that edit/proofread prose, not computer programs) will be replaced by LLMs.
This is such a broad category that I think it's inaccurate to say that all editors will be automated, regardless of your outlook on LLMs in general. Editing and proofreading are pretty distinct roles; the latter is already easily automated, but the former can take on a number of roles more akin to a second writer who steers the first writer in the correct direction. Developmental editors take an active role in helping creatives flesh out a work of fiction, technical editors perform fact-checking and do rewrites for clarity, etc.
My dentist already uses something called OverJet(?) that reads X-rays for issues. They seem to trust it, and it agreed with what they suspected on the X-rays. Personally, I’ve been misdiagnosed through X-rays by a medical doctor, so even being an LLM skeptic, I'm slightly favorable to AI in medicine.
But I already trust my dentist. A new dentist deferring to AI is scary, and obviously will happen.
I had a misread X-ray once, and I can see how a machine could be better at spotting patterns than a tired technician, so I'm favorable too. I think I'd like a human to at least take a glance at it, though.
The mistake on mine was caught when a radiologist checked over the work of the weekend X-ray technician who missed a hairline crack. A second look is always good, and having one look be machine and the other human might be the best combo.
> Read Paul Tetlock's research about so-called "experts" and their inability to make good forecasts
Do you mean Philip Tetlock? He wrote Superforecasting, which might be what you're referring to?
Name a better duo: software engineering hype cycles and anti-intellectualism
We were the stochastic parrots all along.
Video VFX artists are already suffering from lower demand.
It is possible for all of the following to be true:

1. This study is accurate.
2. We are early in a major technological shift.
3. Companies have allocated massive amounts of capital to this shift that may not represent a good investment.
4. Assuming that the above three will remain true going forward is a bad idea.
The .com boom and bust is an apt reference point. The technological shift WAS real, and the value to be delivered ultimately WAS delivered…but not in 1999/2000.
It may be we see a massive crash in valuations but AI still ends up the dominant driver of software value over the next 5-10 years.
That's a repeating pattern with technologies. Most of the early investments don't pay off and the transformation does happen but also quite a bit later than people predicted. This was true of the steam engine, the telegraph, electricity, and the railroad. It actually tends to be the later stage investors who reap most of the reward because by then the lessons have been learned and solutions developed.
My primary worry since the start has been not that it would "replace workers", but that it can destroy the value of entire sectors. Think of resume-sending. Once both sides are automated, the practice is actually superfluous. The concept of "posting" and "applying" to jobs has to go. So any infrastructure supporting it has to go. At no point did it successfully "do a job", but the injury to the signal-to-noise ratio wipes out the economic value of a system.
This is what happened to Google Search. It, like cable news, does kinda plod along because some dwindling fraction of the audience still doesn't "get it", but decline is decline.
> it can destroy value of entire sectors. Think of resume-sending. Once both sides are automated, the practice is actually superfluous
"Like all ‘magic’ in Tolkien, [spiritual] power is an expression of the primacy of the Unseen over the Seen and in a sense as a result such spiritual power does not effect or perform but rather reveals: the true, Unseen nature of the world is revealed by the exertion of a supernatural being and that revelation reshapes physical reality (the Seen) which is necessarily less real and less fundamental than the Unseen" [1].
The writing and receiving of resumes has been superfluous for decades. Generative AI is just revealing that truth.
[1] https://acoup.blog/2025/04/25/collections-how-gandalf-proved...
Interesting: At first I was objecting in my mind ("Clearly, the magic - LLMs - can create effect instead of only revealing it.") but upon further reflecting on this, maybe you're right:
First, LLMs are a distillation of our cultural knowledge. As such they can only reveal our knowledge to us.
Second, they are limited even more by the user's knowledge. I found that you can barely escape your "zone of proximal development" when interacting with an LLM.
(There's even something to be said about prompt engineering in the context of what the article is talking about: It is 'dark magic' and 'craft-magic' - some of the full potential power of the LLM is made available to the user by binding some selected fraction of that power locally through a conjuration of sorts. And that fraction is a product of the craftsmanship of the person who produced the prompt).
My view has been something of a middle ground. It's not exactly that it reveals relevant domains of activity are merely performative, but its a kind of "accelerationism of the almost performative". So it pushes these almost-performative systems into a death spiral of pure uselessness.
In this sense, I have rarely seen AI have negative impacts. Insofar as an LLM can generate a dozen lines of code, it forces developers to engage in less "performative copy-paste of stackoverflow/code-docs/examples/etc." and to engage the mind in what those lines should be, even if that engagement of the mind is through a prompt.
Yeah man, I'm not so sure about that. My father made good money writing resumes in his college years studying for his MFA. Same for my mother. Neither of them were under the illusion that writing/receiving resumes was important or needed. Nor were the workers or managers. The only people who were confused about it were capitalists who needed some way to avoid losing their sanity under the weight of how unnecessary they were in the scheme of things.
Resume-sending is a great example: if everyone's blasting out AI-generated applications and companies are using AI to filter them, the whole "application" process collapses into meaningless busywork
No, the whole process is revealed to be meaningless busywork. But that step has been taken for a long time, as soon as automated systems and barely qualified hacks were employed to filter applications. I mean, they're trying to solve a hard and real problem, but those solutions are just bad at it.
Doesn't this assume that a resume has no actual relation to reality?
The technical information on the cv/resume is, in my opinion, at most half of the process. And that's assuming that the person is honest, and already has the cv-only knowledge of exactly how much to overstate and brag about their ability in order to get through screens.
Presenting soft skills is entirely random, anyway, so the only marker you can have on a cv is "the person is able to write whatever we deem well-written [$LANGUAGE] for our profession and knows exactly which meaningless phrases to include that we want to see".
So I guess I was a bit strong on the low information content, but you better have a very, very strong resume if you don't know the unspoken rules of phrasing, formatting and bragging that are required to get through to an actual interview. For those of us stuck in the masses, this means we get better results by adding information that we basically only get by already being part of the in-group, not by any technical or even interpersonal expertise.
Edit: If I constrain my argument to CVs only, I think my statement holds: They test an ability to send in acceptably written text, and apart from that, literally only in-group markers.
For some applications it feels like half the signal of whether you're qualified is whether the CV is set in Computer Modern, ie was produced via LaTeX.
input -> ai expand -> ai compress -> input'
Where input' is a distorted version of input. This is the new reality.
We should start to be less impressed by volume of text and instead focus on density of information.
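The `input -> expand -> compress -> input'` distortion can be shown with a toy sketch. The `expand` and `compress` functions below are crude stand-ins for AI rewriting, invented purely for illustration; the point is only that the round trip is lossy:

```python
def expand(text: str) -> str:
    # Stand-in for AI "expansion": wrap each point in filler prose.
    return ". ".join(f"It is worth noting that {s.strip()}"
                     for s in text.split(".") if s.strip())

def compress(text: str) -> str:
    # Stand-in for AI "compression": keep only the first few words of
    # each sentence, which may now be mostly filler.
    return ". ".join(" ".join(s.split()[:6])
                     for s in text.split(".") if s.strip())

original = "Ship the release on Friday. Tests must pass first"
round_trip = compress(expand(original))
print(round_trip)            # filler survives, the actual instruction is cut
print(round_trip != original)  # True: input' is a distorted copy of input
```

Run the cycle a few more times and the surviving information only shrinks, which is the point about rewarding density over volume.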
> the whole "application" process collapses into meaningless busywork
Always was.
I'm not sure this is a great example... yes, the infrastructure of posting and applying to jobs has to go, but the cost of recruitment in this world would actually be much higher... you likely need more people and more resources to recruit a single employee.
In other words, there is a lot more spam in the world. Efficiencies in hiring that implicitly existed until today may no longer exist because anyone and their mother can generate a professional-looking cover letter or personal web page or w/e.
I'm not sure that is actually a bad thing. Being a competent employee and writing a professional-looking resume are two almost entirely distinct skill sets held together only by "professional-looking" being a rather costly marker of being in the in-group for your profession.
> This is what happened to Google Search
This is completely untrue. Google Search still works, wonderfully. It works even better than other attempts at search by the same Google. For example, there are many videos that you will NEVER find on YouTube search that come up as the first results on Google Search. Same for Maps: it's much easier to find businesses on Google Search than on Maps. And it's even more true for non-Google websites; searching Stack Overflow questions on SO itself is an exercise in frustration. Etc.
Are you suggesting Google Search is in decline? The latest Google earnings call suggests it’s still growing.
Google Search is distinct from Google's expansive ad network. Google Search is now garbage, but their ads are everywhere and more profitable than ever.
On Google's earnings call - within the last couple of weeks - they explicitly stated that their stronger-than-expected growth in the quarter was due to a large unexpected increase in search revenues[0]. That's a distinct line-item from their ads business.
>Google’s core search and advertising business grew almost 10 per cent to $50.7bn in the quarter, surpassing estimates for between 8 per cent and 9 per cent.[0]
The "Google's search is garbage" paradigm is starting to get outdated, and users are returning to their search product. Their results, particularly the Gemini overview box, are (usually) useful at the moment. Their key differentiator over generative chatbots is that they have reliable & sourced results instantly in their overview. Just concise information about the thing you searched for, instantly, with links to sources.
[0] https://www.ft.com/content/168e9ba3-e2ff-4c63-97a3-8d7c78802...
This is anecdotal but here's a random thing I searched for yesterday https://i.imgur.com/XBr0D17.jpeg
> The "Google's search is garbage" paradigm is starting to get outdated
Quite the opposite. It's never been more true. I'm not saying using LLMs for search is better, but as it stands right now, SEO spammers have beat Google, since whatever you search for, the majority of results are AI slop.
Their increased revenue probably comes down to the fact that they no longer show any search results in the first screenful at all for mobile and they've worked hard to make ads indistinguishable from real results at a quick glance for the average user. And it's not like there exists a better alternative. Search in general sucks due to SEO.
Can you give an example of an everyday person search that generates a majority of AI slop?
If anything my frustration with google search comes from it being much harder to find niche technical information, because it seems google has turned the knobs hard towards "Treat search queries like they are coming from the average user, so show them what they are probably looking for over what they are actually looking for."
Basically any product comparison or review for example.
> Quite the opposite. It's never been more true. I'm not saying using LLMs for search is better, but as it stands right now, SEO spammers have beat Google, since whatever you search for, the majority of results is AI slop.
It's actually sadder than that. Google appear to have realised that they make more money if they serve up ad infested scrapes of Stack Overflow rather than the original site. (And they're right, at least in the short term).
Most Google ad revenue comes from Google Search; it's a misconception that Google derives most of its profits from third-party ads. Those are just a minor part of Google's revenue.
You are talking past each other. They say "Google search sucks now" and you retort with "But people still use it." Both things can be true at the same time.
You misunderstand. Making organic search results shittier will drive up ad revenue as people click on sponsored links in the search results page instead.
Not a sustainable strategy in the long term though.
I've all but given up on google search and have Gemini find me the links instead.
Not because the LLM is better, but because the search is close to unusable.
We're in the phase of yanking hard on the enshittification handle. Of course that increases profits whilst sufficient users can't or won't move, but it devalues the product for users. It's in decline insomuch as it's got notably worse.
The line goes up, democracy is fine, the future will be good. Disregard reality
GenAI is like plastic surgery for people who want to look better - looks good only if you can do it in a way it doesn't show it's plastic surgery.
Resume filtering by AI can work well on the first line (if implemented well). However, once we get to the real interview rounds and I see the CV is full of AI slop, it immediately suggests the candidate will have a loose attitude to checking the work generated by LLMs. This is a problem already.
> looks good only if you can do it in a way it doesn't show it's plastic surgery.
I think the plastic surgery users disagree here: it seems like visible plastic surgery has become a look, a status symbol.
Probably the first significant hit is going to be drivers, delivery men, truckers etc., a demographic of 5 million jobs in the US and double that in the EU, with ripple effects costing millions of other jobs in industries such as roadside diners and hotels.
The general tone of this study seems to be "It's 1995, and this thing called the Internet has not made TV obsolete"; same for the Acemoglu piece linked elsewhere in the thread. Well, no, it doesn't work like that. It first comes for your Blockbuster, your local shops and newspaper and so on, and transforms those middle class jobs vulnerable to automation into minimum wages in some Amazon warehouse. Similarly, AI won't come for lawyers and programmers first, even if some fear it.
The overarching theme is that the benefits of automation flow to those who have the bleeding-edge technological capital. Historically, labor has managed to close the gap, especially through public education; it remains to be seen if this process can continue, since eventually we're bound to hit the "hardware" limits of our wetware, whereas automation continues to accelerate.
So at some point, if the economic paradigm is not changed, human capital loses and the owners of the technological capital transition into feudal lords.
I think that drivers are probably pretty late in the cycle. Many environments they operate in are somewhat complicated, even if you do a lot to make automation possible. Take garbage: move to containers that can simply be lifted either by crane or forks, and you still have sites where navigating to those containers needs a lot of individual training.
The same goes for delivery: moving a single pallet to a store, replacing carpets, or whatever. Lots of complexity if you do not offload it to the receiver.
The more regular the environment, the easier it is to automate. Shelving in a store is, to my mind, simpler than all the environments vehicles need to operate in.
And I think we know first to go. Average or below average "creative" professionals. Copywriter, artists and so on.
LLMs are the least deterministic means you could possibly ever have for automation.
What you are truly seeking is high level specifications for automation systems, which is a flawed concept to the degree that the particulars of a system may require knowledgeable decisions made on a lower level.
However, CAD/CAM, and infrastructure as code are true amplifiers of human power.
LLMs destroy the notion of direct coupling or having any layered specifications or actual levels involved at all, you try to prompt a machine trained in trying to ascertain important datapoints for a given model itself, when the correct model is built up with human specifications and intention at every level.
Wrongful roads lead to erratic destinations, when it turns out that you actually have some intentions you wish to implement IRL
If you want to get to a destination you use google maps.
If you want to reach the actual destination because conditions changed (there is a wreck in front of you) you need a system to identify changes that occur in a chaotic world and can pick from an undefined/unbounded list of actions.
Generative AI has failed to automate anything at all so far.
(Racist memes and furry pornography doesn't count.)
Yeah no, I'm seeing more and more shitty ai generated ads, shop logos, interior design & graphics for instance in barber shops, fast food places etc.
The sandwich shop next to my work has a music playlist which is 100% ai generated repetitive slop.
Do you think they'll be paying graphic designers, musicians etc. from now on, when something certainly shittier than what a good artist does, but also much better than what a poor one can achieve, can be produced in five minutes for free?
That's not automation, that's replacing a product with a cheaper and shittier version.
> Do you think they'll be paying graphic designers, musicians etc. for now on
People generating these things weren't ever going to be customers of those skillsets. Your examples are small business owners basically fucking around because they can, because it's free.
Most barber shops just play the radio, or "spring" for satellite radio, for example. AI generated music might actively lose them customers.
Given that the world is fast deglobalizing there will be a flood of factory work being reshored in the next 10 years.
There's also going to be a shrinkage in the workforce caused by demographics (not enough kids to replace existing workers).
At the same time education costs have been artificially skyrocketed.
Personally the only scenario I see mass unemployment happening is under a "Russia-in-the-90s" style collapse caused by an industrial rugpull (supply chains being cut off way before we are capable of domestically substituting them) and/or the continuation of policies designed to make wealth inequality even worse.
The world is not deglobalizing, US is.
The world is deglobalizing. The EU has been cutting itself off from Russia since the war started, and forcing medical industries to reshore since COVID. At the same time it has begun a drive to remilitarize itself. This means more heavy industry, all of it local.
There is brewing conflict across continents. India and Pakistan, Red sea region, South China sea. The list goes on and on. It's time to accept it. The world has moved on.
Navel gazing will be shown to be a reactionary empty step, as all current global issues require more global cooperation to solve, not less.
The individual phenomena you describe are indeed detritus of this failed reaction to an increasing awareness among all humans of our common conditions under disparate nation states.
Nationalism is broken by the realization that everyone everywhere is paying roughly 1/4 to 1/3 of their income in taxes; what you receive for that taxation varies. Your nation state should have to compete with other nation states to retain you.
The nativist movement is wrongful in the USA for the reason that none of the folks crying about foreigners is actually Native American,
but it's globally in error for not presenting the truth: humans are all your relatives, and they are assets, not liabilities. Attracting immigration is a good thing, but hey, feel free to recycle tired Murdoch media talking points that have made us nothing but trouble for 40 years.
> Global connectedness is holding steady at a record high level based on the latest data available in early 2025, highlighting the resilience of international flows in the face of geopolitical tensions and uncertainty.
https://www.dhl.com/global-en/microsites/core/global-connect...
Source for counter argument?
Source for counter argument is in the page that you just linked here. You have cherry picked one sentence.
"Nothing to see here, folks! Keep shipping your stuff internationally!"
> The world is deglobalizing.
We have had thousands of years of globalising. The trend has always been towards a more connected world. I strongly suspect the current Trump movement (and to an extent brexit depending on which brexit version you chose to listen to) will be blips in that continued trend. That is because it doesn't make sense for there to be 200 countries all experts in microchip manufacturing and banana growing.
But doesn't make sense to be dependent on your enemies either.
>We have had thousands of years of globalising.
It happens in cycles. Globalization has followed deglobalization before and vice versa. It's never been one straight line upward.
>That is because it doesn't make sense for there to be 200 countries all experts in microchip manufacturing and banana growing.
It'll break down into blocs, not 200 individual countries.
Ask Estonia why they buy overpriced LNG from America and Qatar rather than cheap gas from their next door neighbor.
If you think the inability to source high-end microchips from anywhere apart from Taiwan is going to prevent a future conflict (the Thomas Friedman(tm) golden arches theory) then I'm afraid I've got bad news.
Much of the globalized system is dependent upon US institutions which currently don't have a substitute.
BRICS have been trying to substitute for some of them and have made some nonzero progress, but they're still far, far away from something like a reserve currency.
Yeah you need a global navy that can assure the safe passage of thousands of ships daily. Now, how do you ensure that said navy will protect your interests? Nothing is free.
What's the alternative here? Apart from the well-known but not so useful advice to have a ton of friends who can hire you, or to be so famous as to not need an introduction.
Making dumb processes dumber to the point of failure is actually a feature.
Funny you call it value I call it inefficiency.
Why is this a worry? Sounds wonderful
I'm a bit worried about the social impacts.
When a sector collapses and become irrelevant, all its workers no longer need to be employed. Some will no longer have any useful qualifications and won't be able to find another job. They will have to go back to training and find a different activity.
It's fine if it's an isolated event. Much worse when the event is repeated in many sectors almost simultaneously.
> They will have to go back to training
Why? When we've seen a sector collapse, the new jobs that rush in to fill the void are new, never seen before, and thus don't have training. You just jump in and figure things out along the way like everyone else.
The problem, though, is that people usually seek out jobs that they like. When that collapses they are left reeling and aren't apt to want to embrace something new. That mental hurdle is hard to overcome.
What if no jobs, or fewer jobs than before, rush in to fill the void this time? You only need so many prompt engineers when each one can replace hundreds of traditional workers.
> What if no jobs, or fewer jobs than before, rush in to fill the void this time?
That means either:
1. The capitalists failed to redeploy capital after the collapse.
2. We entered into some kind of post-capitalism future.
To explore further, which one are you imagining?
As others in this thread have pointed out, this is basically what happened in the relatively short period of 1995 to 2015 with the rise of global wireless internet telecommunications & software platforms.
Many, many industries and jobs transformed or were relegated to much smaller niches.
Overall it was great.
Until we solve the hallucination problem google search still has a place of power as something that doesn’t hallucinate.
And even if we solve this problem of hallucination, the ai agents still need a platform to do search.
If I was Google I’d simply cut off public api access to the search engine.
>google search still has a place of power as something that doesn’t hallucinate.
Google search is fraught with its own list of problems and crappy results. Acting like it's infallible is certainly an interesting position.
>If I was Google I’d simply cut off public api access to the search engine.
The convicted monopolist Google? Yea, that will go very well for them.
LLMs are already grounding their results in Google searches with citations. They have been doing that for a year already. Optional with all the big models from OpenAI, Google, xAI
People talk about LLM hallucinations as if they're a new problem, but content mill blog posts existed 15 years ago, and they read like LLM bullshit back then, and they still exist. Clicking through to Google search results typically results in lower-quality information than just asking Gemini 2.5 pro. (which can give you the same links formatted in a more legible fashion if you need to verify.)
What people call "AI slop" existed before AI and AI where I control the prompt is getting to be better than what you will find on those sorts of websites.
I had similar thoughts, but then remembered companies still burn billions on Google Ads, sure that humans...and not bots...click them, and thinking that in 2025 most people browse without ad-blockers.
Most people do browse without ad blockers, otherwise the entire DR ads industry would have collapsed years ago.
Note also that ad blockers are much less prevalent on mobile.
People will pay for what works. I consult for a number of ecommerce companies and I assure you they get a return on their spend.
I humbly disagree. I've seen team members and sometimes entire teams being laid off because of AI. It's also not just layoffs, the hiring processes and demand have been affected as well.
As an example, many companies have recently shifted their support to "AI first" models. As a result, even if the team or certain team members haven't been fired, the general trend of hiring for support is pretty much down (anecdotal).
I agree that some automation helps humans do their jobs better, but this isn't one of those cases. When you're looking for support, something has clearly gone wrong. Speaking or typing to an AI which responds with random unrelated articles or "sorry I didn't quite get that" is just evading responsibility in the name of "progress", "development", "modernization", "futuristic", "technology", <insert term of choice>, etc.
How do you know that these layoffs are the result of AI, rather than AI being a convenient place to lay the blame? I've seen a number of companies go "AI first" and stop hiring or have layoffs (Salesforce comes to mind) but I suspect they would have been in a slump without AI entirely.
> How do you know that these layoffs are the result of AI, rather than AI being a convenient place to lay the blame?
Both of those can be true, because companies are placing bets that AI will replace a lot of human work (by layoffs and reduced hiring), while also using it in the short term as a reason to cut short term costs.
AI is not hurting jobs in Denmark they said.
Software development jobs there have bigger threat: outsourcing to cheaper locations.
As well for teachers: it is hard to replace a person supervising kids with a chatbot.
Has any serious person ever suggested replacing teachers with chatbots? Seems like a non sequitur.
> I humbly disagree
Both your experience and what the article (research) says can be valid at the same time. That’s how statistics works.
The study looks at 11 occupations in Denmark in 2023-24.
Maybe instead look at the US in 2025. EU labor regulations make it much harder to fire employees. And 2023 was mainly a hype year for GenAI. Actual Enterprise adoption (not free vendor pilots) started taking off in the latter half of 2024.
That said, a lot of CEOs seem to have taken the "lay off all the employees first, then figure out how to have AI (or low cost offshore labor) do the work second" approach.
2025 US has some really big complicating factors that'd make assessing the job market impact really hard to gauge.
For example, the mass layoffs of federal employees.
>"lay off all the employees first, then figure out how to have AI (or low cost offshore labor) do the work second"
Case in point: Klarna.
2024: "Klarna is All in on AI, Plans to Slash Workforce in Half" https://www.cxtoday.com/crm/klarna-is-all-in-on-ai-plans-to-...
2025: "Klarna CEO “Tremendously Embarrassed” by Salesforce Fallout and Doubts AI Can Replace It" https://www.salesforceben.com/klarna-ceo-tremendously-embarr...
Surprisingly, Denmark is one of the easiest countries in which to fire someone.
My biggest concern about AI is that it will make us better at things that we're already doing. Things that we would've stopped doing if we hadn't had such a slow introduction to their consequences, consequences that we're now accustomed to--but not adapted to. Frog in slowly warming water stuff like the troubling relationship between advertising and elections, or the lack of consent in our monetary systems.
I'm worried the shock will not be abrupt enough to encourage a proper rethink.
Oh, it's because it's not as useful and productive as the hype is trying to convince us of.
Have any of these economists ever tried to scrape by as an entry-level graphic designer/illustrator?
Apparently not, since the sort of specific work which one used to find for this has all but vanished --- every AI-generated image one sees represents an instance where someone who might have contracted for an image did not (ditto for stock images, but that's a different conversation).
> every AI-generated image one sees represents an instance where someone who might have contracted for an image did not
This is not at all true. Some percentage of AI generated images might have become a contract, but that percentage is vanishingly small.
Most AI generated images you see out there are just shared casually between friends. Another sizable chunk are useless filler in a casual blog post and the author would otherwise have gone without, used public domain images, or illegally copied an image.
A very very small percentage of them are used in a specific subset of SEO posts whose authors actually might have cared enough to get a professional illustrator a few years ago but don't care enough to avoid AI artifacts today. That sliver probably represents most of the work that used to exist for a freelance illustrator, but it's a vanishingly small percentage of AI generated images.
There is more to entry-level illustrators than SEO posts. In my daily life I've witnessed a bakery, an aspiring writer of children's books, and two University departments go for self-made AI pictures instead of hiring an illustrator. Those jobs would have definitely gone to a local illustrator.
> That sliver probably represents most of the work that used to exist for a freelance illustrator, but it's a vanishingly small percentage of AI generated images.
I prefer to get my illegally copied images from only the most humanely trained LLM instead of illegally copying them myself like some neanderthal or, heaven forbid, asking a human to make something. Such a thought is revolting; humans breathe so loud and sweat so much and are so icky. Hold on - my wife just texted me. "Hey chat gipity, what is my wife asking about now?" /s
I miss the old internet, when every article didn't have a goofy image at the top just for "optimization." With the exception of photography in reporting, it's all a waste of time and bandwidth.
Most of it wasn't bespoke assets created by humans but stock art picked, if you were lucky, by a professional photo editor, and more often by the author themselves.
Yeah, I saw an investment app that was filled with obviously AI-generated images. One of the more recommended choices in my country.
It feels very short-sighted from the company side because I nope'd right out of there. They didn't make me feel any trust for the company at all.
It looks like the writing is on the wall for other menial and low-value creative jobs too - basic music and videos - I fully expect that 90+% of video adverts will be entirely AI-generated within the next year or two. See Google Veo - they have the tech already, they have YouTube already, and they have the ad network already ...
Instead of uploading your video ad you already created, you'll just enter a description or two and the AI will auto-generate the video ads in thousands of iterations to target every demographic.
Google is going to run away with this thanks to their ecosystem - OpenAI et al. can't compete with this sort of thing.
People will develop an eye for how AI-generated content looks, and that will make human creativity stand out even more. I'm expecting more creativity and less cookie-cutter content; I think AI-generated content is actually the end of the cookie-cutter stuff.
>People will develop an eye for how AI-generated looks
People will think they have an eye for AI-generated content, and miss all the AI that doesn't register. If anything it would benefit the whole industry to keep some stuff looking "AI" so people build a false model of what "AI" looks like.
This is like the ChatGPT image gen of last year, which purposely put a distinct style on generated images (that shiny plasticy look). Then everyone had an "eye for AI" after seeing all those. But in the meantime, purpose made image generators without the injected prompts were creating indistinguishable images.
It is almost certain that every single person here has laid eyes on an image already, probably in an ad, that didn't set off any triggers.
People already know what the ads are and what is content, but yet the advertisers keep on paying for ads on videos so they must be working.
It feels to me that the SOTA video models today are pretty damn good already, let alone in another 12 months when SOTA will no doubt have moved on significantly.
Given that the goal of generative AI is to generate content that is virtually indistinguishable from expert creative people, I think it's one of these scenarios:
1. If the goal is achieved, which is highly unlikely, then we get very very close to AGI and all bets are off.
2. If the goal is not achieved and we stay in this uncanny valley territory (not at the bottom of it but not being able to climb out either), then eventually in a few years' time we should see a return to many fragmented almost indie-like platforms offering bespoke human-made content. The only way to hope to achieve the acceptable quality will be to favor it instead of scale as the content will have to be somehow verified by actual human beings.
> If the goal is achieved, which is highly unlikely, then we get very very close to AGI and all bets are off.
Question on two fronts:
1. Why do you think, considering the current rate of progress, it is very unlikely that LLM output becomes indistinguishable from that of expert creatives? Especially considering a lot of the tells people claim to see are easily alleviated by prompting.
2. Why do you think a model whose output reaches that goal would rise in any way to what we’d consider AGI?
Personally, I feel the opposite. The output is likely to reach that level in the coming years, yet AGI is still far away from being reached once that has happened.
This eye will be a driving force for improving AI until it reaches parity with real, non-generated pictures.
> fully expect that 90+% of video adverts will be entirely AI generated within the next year or two
And on the other end we'll have "AI" ad blockers, hopefully. They can watch each other.
I don't know. Even with these tools, I don't want to be doing this work.
I'd still hire an entry level graphic designer. I would just expect them to use these tools and 2x-5x their output. That's the only change I'm sensing.
Also pay them less, because they don't need to be as skilled anymore since AI is covering it.
Probably not, economists generally stay in school straight to becoming professors or they’ll go into finance right after school.
That said, I don't think entry-level illustration jobs can stick around if software can do the job better. Just like we don't have a lot of human calculators anymore, technological replacement is bound to occur in society, AI or not.
AI is different. It impacts everything directly. It's like the computer revolution, but on boost - like trains taking over from horses, but for every job out there.
Well at least that's the potential.
> Have any of these economists ever tried to scrape by as an entry-level graphic designer/illustrator?
"Equip yourself with skills that other people are willing to pay for." –Thomas Sowell
The general thought works well until it doesn't.
> The economists found for example that "AI chatbots have created new job tasks for 8.4 percent of workers, including some who do not use the tools themselves."
For me, the most interesting takeaway. It's easy to think about a task, break it down into parts, some of which can be automated, and count the savings. But it's more difficult to take into account any secondary consequences from the automation. Sometimes you save nothing because the bottleneck was already something else. Sometimes I guess you end up causing more work down the line by saving a bit of time at an earlier stage.
This can make automation a bit of a tragedy of the commons situation: It would be better for everyone collectively to not automate certain things, but it's better for some individually, so it happens.
> you end up causing more work down the line by saving a bit of time at an earlier stage
in this case, the total cost would've gone up, and thus, eventually the stakeholder (aka, the person who pays) is going to not want to pay when the "old" way was cheaper/faster/better.
> It would be better for everyone collectively to not automate certain things, but it's better for some individually, so it happens.
not really, as long as the precondition I mentioned above (the total cost dropping) is true.
That's probably true as long as the workers generally cooperate.
But there's also adversarial situations. Hiring would be one example: Companies use automated CV triaging tools that make it harder to get through to a human, and candidates auto generate CVs and cover letters and even auto apply to increase their chance to get to a human. Everybody would probably be better off if neither side attempted to automate. Yet for the individuals involved, it saves them time, so they do it.
Right, so it's like advertising when the market is already saturated (see coca cola vs pepsi advertising).
Short-term gains for individuals can gradually hollow out systems that, ironically, worked better when they were a little messy and human
There are a few problems with this research. First:
> AI chatbots have had no significant impact on earnings or recorded hours in any occupation
But Generative AI is not just AI chatbots. There are models that generate sounds/music, ones that generate images, etc.
Another thing is, the research only looked at Denmark, a nation with a fairly healthy attitude towards work-life balance, not one that takes pride in people working themselves to the bone.
And the research also doesn't cover the effect of AI-generated products: if music or a painting can be created by an AI within a minute, based on a prompt typed in by a 5-year-old, then the expected value of "art work" decreases, and you won't pay the same price when buying from a human artist.
That last point is especially important.
Also in the news today:
> Duolingo will replace contract workers with AI. The company is going to be ‘AI-first,’ says its CEO.
https://www.theverge.com/news/657594/duolingo-ai-first-repla...
-
And within that article:
> von Ahn’s email follows a similar memo Shopify CEO Tobi Lütke sent to employees and recently shared online. In that memo, Lütke said that before teams asked for more headcount or resources, they needed to show “why they cannot get what they want done using AI.”
The headline is a bit baity (in that the article is describing no job losses because there hasn't been any economic benefit to LLM/GenAI to justify it), but what if we re-ran the study in a country _without_ exceptionally strong unionisation participation? Would we see the same results?
> We examine the labor market effects of AI chatbots using two large-scale adoption surveys (late 2023 and 2024) covering 11 exposed occupations (25,000 workers, 7,000 workplaces), linked to matched employer-employee data in Denmark.
It sounds like they didn't ask those who got laid off.
Yeah this is like counting horses a few years after the automobile was invented.
I think the methods here are highly questionable; they appear to be based on self-reports from a small number of employees in Denmark a year ago.
The overall rate of labor force participation is falling. I expect this trend to continue as AI makes the economy more and more dynamic and sets a higher and higher bar for participation.
Overall GDP is rising while the labor participation rate is falling. This clearly points to more productivity with fewer people participating. At this point one of the main factors is clearly technological advancement, and within that, I believe if you were to survey CEOs and ask what technological change has allowed them to get more done with fewer people, the resounding consensus would definitely be AI.
Based on the speed most companies operate at - no surprises here. The internet also didn't have most of its impact in its first decade. And as is fairly well understood, most of the current generation of AI models are a bit dicey in practice. There isn't much question that in this early phase AI is likely to create new jobs and opportunities. The real question is what happens when AI is reliably intellectually superior to humans in all domains and that has been proven to everyone's satisfaction, which is still some uncertain time away.
It is like expecting cars to replace horses before anyone starts investing in the road network and getting international petroleum supply chains set up - large capital investment is an understatement when talking about how long it takes to bring in transformative tech and bed it in optimally. Nonetheless, time passed and workhorses are rare beasts.
Does the same follow for The Metaverse, or for Blockchain?
My absolutely unqualified opinion is that blockchain will survive but won't find many uses apart from those it already has, while the metaverse - or VR usage and content - will have explosive growth at some point, especially when mixed with AI-generated and -rendered worlds, which will be lifelike and almost infinitely flexible. Which, btw, is also a great way to spend your time when your job has been replaced by another AI and you have little money for anything else.
If they end up going somewhere? Absolutely, we haven't seen anything out of the crypto universe yet compared to what'll start to happen when the tech is a century old and well understood by the bankers.
The thing about AI is that it doesn't work, you can't build on top of it, and it won't get better.
It doesn't work: even for the tiny slice of human work that is so well defined and easily assessed that it is sent out to freelancers on sites like Fiverr, AI mostly can't do it. We've had years to try this now, the lack of any compelling AI work is proof that it can't be done with current technology.
You can't build on top of it: unlike foundational technologies like the internet, AI can only be used to build one product, a chatbot. The output of an AI is natural language and it's not reliable. How are you going to meaningfully process that output? The only computer system that can process natural language is an AI, so all you can do is feed one AI into another. And how do you assess accuracy? Again, your only tool is an AI, so your only option is to ask AI 2 if AI 1 is hallucinating, and AI 2 will happily hallucinate its own answer. It's like The Cat in the Hat Comes Back, Cat E trying to clean up the mess Cat D made trying to clean up the mess Cat C made and so on.
And it won't get any better. LLMs can't meaningfully assess their training data, they are statistical constructions. We've already squeezed about all we can from the training corpora we have, more GPUs and parameters won't make a meaningful difference. We've succeeded at creating a near-perfect statistical model of wikipedia and reddit and so on, it's just not very useful even if it is endlessly amusing for some people.
Not replacing jobs yet.
I've seen a whole lot of gen AI deflecting customer questions that would previously have been tickets. That's reduced ticket volume that would have been handled by a junior support engineer.
We are a couple of years away from the death of the level 1 support engineer. I can't even imagine what's going to happen to the level 0 IT support.
> We are a couple of years away from the death of the level 1 support engineer.
And this trend isn't new; a lot of investments into e.g. customer support is to need less support staff, for example through better self-service websites, chatbots / conversational interfaces / phone menus (these go back decades), or to reduce expenses by outsourcing call center work to low-wage countries. AI is another iteration, but gut feeling says they will need a lot of training/priming/coaching to not end up doing something other than their intended task (like Meta's AIs ending up having erotic chats with minors).
One of my projects was to replace the "contact" page of a power company with a wizard - basically, get the customers to check for known outages first, then check their own fuse boxes etc, before calling customer support.
Yeah, exactly. It's not about a sudden "mass firing" event - it's more like a slow erosion of entry-level roles
Those types of jobs are mostly in India & Philippines, not the US or Denmark, so let them deal with it.
I have had AI support agents deflect my questions, but not resolve them. It is more companies ending customer support under the guise of automation than AI obsoleting the support workers.
Perhaps briefly. Companies tried this with offshoring support. Some really took a hit and had to bring it back. Some didn't though, so it's not all or nothing in the medium term. In the short term, most of the execs will buy into the hype and try it. I suspect the lower quality companies will use it, but the companies whose value is in their reputation for quality will continue to use people.
I mean, if it really works in the end, we just redefine levels humans need to deal with. There are lots of problems with AI, but I can't see one here.
All the jobs (11) they looked at are at least medium-complexity and involve delegating tasks. These are the people handing out time-consuming, low-level work to cheap labour (assistants etc.). They can save time and money by doing it directly with AI assistants instead of waiting for an assistant to be available.
I am 100% convinced that AI will destroy, and already has destroyed, lots of jobs. We will likely encounter world-order-disrupting changes in the coming decades as computers get another 1000 times faster and more powerful over the next 10 years.
The jobs described might be lost (made obsolete or replaced) in the longer term as well, if AI gets better than the people doing them. For example, just now another article was mentioned on HN: "Gen Z grads say their college degrees were a waste of time and money as AI infiltrates the workplace", which would make teachers obsolete.
That person apparently didn't talk to copy writers, photographers, content creators and authors.
Or customer service. My last few online store issues have been fully chatbot when they used to be half chatbot for intake and half person.
Same, after a little back and forth it became obvious I was not talking to a real person.
I like to get the chatbot to promise me massive discounts just to get whoever is reading the logs to sweat a little.
I have survived until today using the shibboleth "let me speak to a human" [1] The day this doesn't work any more, is the day I stop paying for that service. We should make a list of companies that still have actual customer service.
1: https://xkcd.com/806/ - from an era when the worst that could happen was having to speak with incompetent, but still human, tech support.
It no longer works for virgin media (UK cable monopoly).
I got myself into a loop where no matter what I did, there was no human in the loop.
Even the "threaten to cancel" trick didn't work, still just chatbots / automated services.
Thankfully more and more of the UK is getting FTTH. Sadly for me I accidentally misunderstood the coverage checker when I last moved house.
> is the day I stop paying for that service.
You're acting like it's not the companies that are monopolies that implement these systems first.
> Many of these occupations have been described as being vulnerable to AI: accountants, customer support specialists, financial advisors, HR professionals, IT support specialists, journalists, legal professionals, marketing professionals, office clerks, software developers, and teachers.
Chatbots probably won't be the final interface. But machine learning in general is a full-on revolutionary tech (much clearer now than ten years ago) that hasn't been explored fully and will eventually be disruptive on the scale of computers' impact on the economy. Though it likely won't take the form it's taking today (chatbots etc).
The results are basically what Acemoglu and others have also been saying; e.g.,
https://economics.mit.edu/news/daron-acemoglu-what-do-we-kno...
Translators? Graphic artists? The omission of the most obviously impacted professions immediately identifies this as a cooked study, along with talking about LLMs as "chatbots". I wonder who paid for it.
Are graphic artists actually getting replaced by AI? If so, that would surprise me: for as impressive as AI image generation is, very little of what it does seems like it would replace a graphic artist.
No opinion on the topic but "say economists" doesn't inspire trust
Thank you
The report looks at "at the labor market impact of AI chatbots on 11 occupations, covering 25,000 workers and 7,000 workplaces in Denmark in 2023 and 2024."
As with other technologies, the jobs it removes are not normally in the country that introduces it, but that doesn't mean they don't disappear elsewhere.
For example, the automated looms that the Luddites were protesting didn't result in significant job losses in the UK. But how much clothing manufacturing has been curtailed in Africa because of them, and similar innovations since, which have led to cheap mass-produced clothes making it uneconomic to produce there?
As suggested by this report, Denmark and the West will probably make out fine and be largely unaffected.
However, places like India, Vietnam with large industries based on call centres and outsourced development servicing the West are likely to be more vulnerable.
The survey questions they asked are bad questions if you're attempting to answer questions about the future labor market. However, they didn't ask that; they asked existing employees how LLMs have changed their workplace.
This is the wrong question.
The question should be to hiring managers: Do you expect LLM based tools to increase or decrease your projected hiring of full time employees?
LLM workflows are already *displacing* entry-level labor because people are reaching for copilot/windsurf/CGPT instead of hiring a contract developer, researcher, BD person. I’m watching this happen across management in US startups.
It’s displacing job growth in entry level positions across primary writing copy, admin tasks or research.
You’re not going to find it in statistics immediately because it’s not a 1:1 replacement.
Much like the 1971 labor-productivity separation that everyone scratched their head about (answer: labor was outsourced and capital kept all value gains), we will see another asymptote to that labor productivity graph based on displacement not replacement.
AI scaring students away from the software field, and simultaneously making it hard for new developers to learn (because it's too tempting to click a button rather than struggle for 30 minutes), could be balancing out some job losses as well.
AI can't replace jobs or hurt wages. AI doesn't make these decisions & wages have been suppressed for a very long time, well before general AI adoption. Managers make these decisions. Don't blame AI if you get laid off or if your wages aren't even keeping up with inflation, let alone your productivity. Blame your manager.
Be wary of people trying to deflect blame away from the managerial class for these issues.
A bold assumption, that it will continue not to.
I have a 185 year old treatise on wood engraving. At the time, to reproduce any image required that it be engraved in wood or metal for the printer; the best wood engravers were not mere reproducers, as they used some artistry when reducing the image to black and white, to keep the impression from continuous tones. (And some, of course, were also original artists in their own right). The wood engraving profession was destroyed by the invention of photo-etching (there was a weird interval before the invention of photo etching, in which cameras existed but photos had to be engraved manually anyway for printing).
Maybe all the wood engravers found employment; although I doubt it. But at this speed, there will be a lot of people who won't be able to retrain during employment and will either have to use up their savings while doing so, or have to take lower paid jobs.
This is a completely meaningless study with no correlation at all to reality in the US in right now. The hockey stick started around 2/25. We are in a completely different world now for devs.
When new technology seemingly replaces human effort, it often doesn't directly replace humans (e.g. businesses don't rush to immediately replace them with the technology). More often than not, these systems are put in place to help scale a business. We've seen this time and time again, and AI seems to be no different.
Anecdotal situation - I use ChatGPT daily to rewrite sentences in the client reports I write. I would have traditionally had a marketing person review these and rewrite them, but now AI does it.
FYI: The actual study may not quite say what this article is suggesting. Unless I'm missing something, the study seems to focus on employee use of chat-based assistants, not on company-wide use of AI workflow solutions. The answers come from interviewing the employees themselves. There is an analysis of impacts on the labor market, but that is likely flawed if the companies are segmented based on employee use of chat assistants versus company-wide deployment of AI technology.
In other words, this more likely answers the question "If customer support agents all use ChatGPT or some in-house equivalent, does the company need fewer customer support agents?" than it answers the question "If we deploy an AI agent for customers to interact with, can it reduce the volume of inquiries that make it to our customer service team and, thus, require fewer agents?"
One thing nobody seems to discuss is:
In the future, we will do a lot more.
In other terms: There will be a lot more work. So even if robots do 80% of it, if we do 10x more - the amount of work we need humans to do will double.
We will write more software, build more houses, build more cars, planes and everything down the supply chain to make these things.
When you look at planet earth, it is basically empty. While rent in big cities is high. But nobody needs to sleep in a big city. We just do so because getting in and out of it is cumbersome and building houses outside the city is expensive.
When robots build those houses and drive us into town in the morning (while we work in the car), that will change. I have done a few calculations on how much more mobility we could achieve with the existing road infrastructure if we used electric autonomous buses, and the result is staggering.
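A back-of-envelope version of that kind of calculation might look like the sketch below. Every number here (headways, occupancies, bus frequency) is an illustrative assumption of mine, not a figure from the comment:

```python
# Rough passenger throughput of one road lane: private cars vs. frequent buses.
# All figures are illustrative assumptions, not measured data.

CAR_FLOW_PER_HOUR = 1800   # assumes ~2-second headway between cars at speed
CAR_OCCUPANCY = 1.5        # assumed average persons per car

BUS_FLOW_PER_HOUR = 300    # assumes one bus every 12 seconds on a dedicated lane
BUS_OCCUPANCY = 40         # assumed average persons per bus

def lane_throughput(vehicles_per_hour: float, occupancy: float) -> float:
    """Persons moved per lane per hour."""
    return vehicles_per_hour * occupancy

cars = lane_throughput(CAR_FLOW_PER_HOUR, CAR_OCCUPANCY)   # 2700 persons/h
buses = lane_throughput(BUS_FLOW_PER_HOUR, BUS_OCCUPANCY)  # 12000 persons/h
print(f"buses move {buses / cars:.1f}x more people per lane")  # 4.4x
```

Under these assumptions, the same lane moves roughly four times as many people — which is the kind of "staggering" gap the comment alludes to.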
Another way to look at it: Currently, most matter of planet earth has not been transformed to infrastructure used by humans. As work becomes cheaper, more and more of it will. There is almost infinitely much to do.
For my part, I would like for there still to be wild and quiet places to go to when I need time away from my fellow man, and I don't envision a world paved over for modern infrastructure as desirable, but rather the stuff of nightmares such as the movie _Silent Running_ envisioned.
That said, the fact that I can't find an opensource LLM front-end which will accept a folder full of images to run a prompt on sequentially, then return the results in aggregate is incredibly frustrating.
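For what it's worth, the batch-over-a-folder part is simple enough to sketch; the `ask` callback below is a placeholder for whatever vision-LLM call you have available (OpenAI, llama.cpp, Ollama, etc.), since the comment doesn't name a specific backend:

```python
from pathlib import Path
from typing import Callable

IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".webp"}

def run_prompt_over_folder(folder: str,
                           ask: Callable[[Path], str]) -> dict[str, str]:
    """Apply `ask` (any vision-LLM call taking an image path and
    returning text) to each image in `folder`, in sorted order,
    and return the aggregated {filename: answer} results."""
    results: dict[str, str] = {}
    for path in sorted(Path(folder).iterdir()):
        if path.suffix.lower() in IMAGE_EXTS:
            results[path.name] = ask(path)
    return results
```

Wiring `ask` to an actual model (reading the file, base64-encoding it, posting it to an API) is the backend-specific part; the aggregation itself is the easy bit, which is why its absence from front-ends is frustrating.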
I agree! People will become more productive, meaning fewer people can do more work. That said, I hope this does not result in the production of evermore things at the cost of nature!
I think we are at a crossroads as to what this will result in, however. In one case, the benefits will accrue at the top, with corporations earning greater profits while employing less people, leaving a large part of the population without jobs.
In the second case, we manage to capture these benefits, and confer them not just on the corporations but also the public good. People could work less, leaving more time for community enhancing activities. There are also many areas where society is currently underserved which could benefit from freed up workforce, such as schooling, elderly care, house building and maintenance etc etc.
I hope we can work toward the latter rather than the former.
> That said, I hope this does not result in the production of evermore things at the cost of nature!
It will for sure! Even today the impact is colossal.
As an example, people used to read technical documentation; now they ask LLMs, which replaces serving a simple static file with 50k matrix multiplications.
...and saves humongous amounts of time in the process. Documentation is rarely a good read (sadly, because I like good docs), and we should waste less engineering time reading it.
the earth is not the property of humans, nor is any of it empty until you show zero ecosystem or wildlife or plants there.
for sure, we are doing our best to eradicate the conditions that make earth habitable, however i suggest that the first needed change is for computer screen humans to realize that other life forms exist. this requires stepping outside and questioning human hubris, so it might be a big leap, but i am fairly confident that you will discover that absolutely none of our planet is empty.
Yes, let's extract even more resources from the Earth when we're already staring down the barrel of long-term environmental issues.
I like that some places are empty.
Would you be OK if, instead of 97% of the earth being empty, 94% were empty and your rent were cut in half? Another plus of the future: an electric autonomous bus is at your disposal every 5 minutes, bringing you to whatever nice lonely place you wish.
Rents, or any living costs going down? But everything is based on "stocks only go up".
I've got no idea what you're going on about, but 97% of the Earth isn't empty in any useful sense. For starters, almost 70% is ocean. There are also large parts which are otherwise uninhabitable, and large parts which have agricultural use. Buses don't go to uninhabited places, since that costs too much. Every five minutes is a frequency that no form of public transport can afford.
The nature of technological progress is that it makes formerly uninhabitable areas inhabitable.
Costs of buses are mostly the driver. Which will go away. The rest is mostly building and maintaining them. Which will be done by robots. The rest is energy. The sun sends more energy to earth in an hour than humans use in a year.
How will this 3% be selected?
Which of the few remaining wild creatures will be displaced?
https://www.worldwildlife.org/press-releases/catastrophic-73...
Planet earth is still resource constrained. This is easy to forget when skills availability is more frequently the bottleneck and you live in a society that for the time being has fairly easy access to raw materials.
Extrapolating from my current experience with AI-assisted work: AI just makes work more meaningful. My output has increased 10x, allowing me to focus on ideas and impact rather than repetitive tasks. Now apply that to entire industries and whole divisions of labor: manual data entry, customer support triage, etc. Will people be out of those jobs? Most certainly. But it gives all of us a chance to level up—to focus on more meaningful labor.
As a father, my forward-thinking vision for my kids is that creativity will rule the day. The most successful will be those with the best ideas and most inspiring vision.
>The most successful will be those with the best ideas and most inspiring vision.
This has never been the truth of the world, and I doubt AI will make it come to fruition. The most successful people are by and large those with powerful connections, and/or access to capital. There are millions of smart, inspired people alive right now who will never rise above the middle class. Meanwhile kids born in select zip codes will continue to skate by unburdened by the same economic turmoil most people face.
What about technical debts related to the generated code?
First off, is there any? That's making an assumption, one which can just as easily be applied to human-written code. Nobody writes debt-free code; that's why you have many checks and reviews before things go to production — ideally.
Second, in theory, future generations of AI tools will be able to review previous generations and improve upon the code. If it needs to, anyway.
But yeah, tech debt isn't unique to AIs, and I haven't seen anything conclusive that AIs generate more tech debt than regular people - but please share if you've got sources of the opposite.
(disclaimer: I'm very skeptical about using AI to generate code myself, but I will admit to using it for boring tasks like unit test outlines)
Presumably as a father they are thinking about ways for their children to be employed.
If it actually works like that, it'll be just like all labor-saving innovations, going back to the loom and printing press and the like; people will lose their jobs, but those will be local/individual tragedies, and the large-scale economic impact will likely be positive.
It'd still suck to lose your job / vocation though, and some of those won't be able to find a new job.
Honestly, much of the work under capitalism is meaningless (see: The Office). The optimistic take is that many of those same paper-pushing roles could evolve into far more meaningful work—with the right training and opportunity (also AI).
When the car was invented, entire industries tied to horses collapsed. But those that evolved, leveled up: Blacksmiths became auto mechanics and metalworkers, etc.
As a creatively minded person with entrepreneurial instincts, I’ll admit: my predictions are a bit self-serving. But I believe it anyway—the future of work is entrepreneurial. It’s creative.
>the future of work is entrepreneurial. It’s creative.
How is this the conclusion you've come to when the sectors impacted most heavily by AI thus far have been graphic design, videography, photography, and creative writing?
> The optimistic take is that many of those same paper-pushing roles could evolve into far more meaningful work—with the right training and opportunity (also AI).
There already isn't enough meaningful work for everyone. We see people with the "right training" failing to find a job. AI is already making things worse by eliminating meaningful jobs — art, writing, music production are no longer viable career paths.
In contrast to statements like the following from the dweebs sucking Harry Potter's farts out of the LessWrong bubble:
>Coding AIs increasingly look like autonomous agents rather than mere assistants: taking instructions via Slack or Teams and making substantial code changes on their own, sometimes saving hours or even days
https://ai-2027.com/
This is shameless "AI is not bad, we swear" propaganda. Study looked at 11 occupations, 25k workers, in Denmark, in 2023-2024. How this says anything of consequence for the world at large (or even just the US) with developments moving as fast as they are, in such an unstable economic environment, is beyond me. What I do know is that I have plenty of first-hand anecdotal evidence to the contrary.
At what point would anyone trust an AI to do a job versus just giving advice? Even when you have it "write" code, it's really just giving advice.
Even customer service bots are just nicer front ends for knowledge bases.
Link to abstract and the underlying paper "Large Language Models, Small Labor Market Effects": https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5219933
It's seemed to me that all the productivity gains will be burned up by making our jobs more and more BS, not by reducing hours worked, just like with previous technology. I expect more meetings, not less work.
This is just objectively false. My friend is a freelance copywriter and lives in the freelance world. It is 100% replacing writing jobs, editing jobs, and design jobs.
Since when? If they're writing online content, then that was wiped out somewhat recently by Google changing their search algorithm and killing a huge number of content-based sites.
To be fair, those jobs were already pretty precarious.
Ever since the explosion in popularity of the internet in the 2000s, anything journalism-related has been in terminal decline. The arrival of smartphones accelerated this process.
November 30th, 2022 is when ChatGPT burst into the world stage and upended what people thought AI was capable of doing. It’s been less than three years since then. The technology is still imperfect but improving at an exponential rate.
I know it’s replaced marketing content writers in startups. I know it has augmented development in startups and reduced hiring needs.
The effects as it gains capability will be mass unemployment.
n=small, but I've had multiple friends who did freelance technical writing and copyediting work tell me that the market died when genAI became easily available. Repeat clients are no longer interested in their work, and the new work postings pay so little that they wouldn't be worth taking even if you just handed back unmodified genAI output instantly.
So I find this result improbable, at best, given that I personally know several people who had to scramble to find new ways of earning money when their opportunities dried up with very little warning.
I think it's time for OpenAI to release an AI economist.
Because it doesn't do anything useful.
> "My general conclusion is that any story that you want to tell about these tools being very transformative, needs to contend with the fact that at least two years after [the introduction of AI chatbots], they've not made a difference for economic outcomes."
I'm someone who tries to avoid AI tools. But this paper is literally basing its whole assessment on two things: wages and hours. This is a disingenuous assertion.
Let's assume that I work 8 hours per day. If I am able to automate 1 hour of my day with AI, does that mean I get to go home 1 hour early? No. Does that mean I get an extra hour of pay? No.
So the assertion that there has been no economic impact assumes that the AI is a separate agent that would normally be paid in wages for time. That is not the case.
The AI is an augmentation for an existing human agent. It has the potential to increase the efficiency of a human agent by n%. So we need to be measuring the impact it has on effectiveness and efficiency. It will never offset wages or hours; it will just increase the productivity for a given wage or number of hours.
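The arithmetic behind that point is worth making explicit. Taking the 1-automated-hour-out-of-8 scenario above (the numbers are the commenter's hypothetical, not data):

```python
# One automated hour out of an 8-hour day: wages and hours -- the paper's
# outcome measures -- don't move, but output per day does.

HOURS_PER_DAY = 8
HOURS_AUTOMATED = 1  # the commenter's hypothetical

# The outcomes the study measures are unchanged:
hours_worked_before = hours_worked_after = HOURS_PER_DAY

# But the same 8 worked hours now produce what used to take 9:
output_before = HOURS_PER_DAY                   # 8 "task-hours" of output
output_after = HOURS_PER_DAY + HOURS_AUTOMATED  # the freed hour is redeployed
productivity_gain = output_after / output_before - 1
print(f"{productivity_gain:.1%}")  # 12.5% more output, 0% change in wages/hours
```

A 12.5% productivity gain with a 0% change in wages and hours is exactly the kind of effect a wages-and-hours study would register as "no economic impact."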
It's also doing no meaningful quantity of "work".
These monkeys should look into the recent history of the music industry.
AI makes people more productive so that incentivizes me to hire more people, not less. In many cases anyhow.
If each of my developers is 30% more productive, that means we can ship 30% more functionality, which means more budget to hire more developers. If you think you'll just pocket that surplus, you have another thing coming.
Tools can either increase or decrease employment.
Imagine if a tool made content writers 10x as productive. You might hire more, not fewer, because they are now better value! You might eventually realise you spent too much, but that realisation will come later.
AFAIK, no company I know of starts a shiny new initiative by firing: they start by hiring, then cut back once they have their systems in place or hit a ceiling. Even Amazon runs projects fat and then makes them lean.
There's also pent up demand.
You never expect a new labour-saving device to cost jobs while the project managers are in the empire-building phase.
Companies have been wanting to lay people off. Using AI as an excuse is a convenient way to turn a negative into a positive.
Truth is, companies that don’t need layoffs are pushing employees to use AI to supercharge their output.
You don’t grow a business by just cutting costs, you need to increase revenue. And increasing revenue means more work, which means it’s better for existing employees to put out more with AI.
Economists are PR people. Of course they would say that.
It's not replacing jobs, but it's definitely the scarecrow invoked in layoff decisions across the tech industry. I suspect whatever metrics they use are simply too slow to measure the actual impact this is having in the job market.
I think if we go into a sharp recession companies will use this as an excuse to replace workers with other workers that effectively use AI cutting down on overhead. It just seems obvious this will happen. I don't think it's the doom and gloom scenario, but many CEOs, etc are chomping at the bit.
Also economists, during every bubble ever:
I spend much more time coding now that I can code 5x faster
Demand for software has high elasticity
you’re not going to see the firing but you’re also not going to see the hiring
watch out for headcount lacking in segments of the market
'"The adoption of these chatbots has been remarkably fast," Humlum told The Register. "Most workers in the exposed occupations have now adopted these chatbots. Employers are also shifting gears and actively encouraging it. But then when we look at the economic outcomes, it really has not moved the needle."'
So, as of yet, according to these researchers, the main effect is that of a data pump: certain corporations get a deep insight into people's and other corporations' inner lives.
I've been discussing with a colleague over the past months my view on how and why all these AI tools are being shoved down our throats (just look at Google's Gemini push into all enterprise tools; it's like Google+ for B2B) before there are clear-cut use cases you can point to and say "yes, this would have been much harder to do without an LLM." It's because training data is the most valuable asset: all these tools are just data collection machines with some bonus features that make them look somewhat useful.
I'm not saying that I think LLMs are useless, far from it, I use them when I think it's a good fit for the research I'm doing, the code I need to generate, etc., but the way it's being pushed from a marketing perspective tells me that companies making these tools need people to use them to create a data moat.
Extremely annoying to be getting these pop-ups to "use our incredible Intelligence™" at every turn, it's grating on me so much that I've actively started to use them less, and try to disable every new "Intelligence™" feature that shows up in a tool I use.
It seems like very simple cause and effect from an economic standpoint. Hype about AI is very high, so investors ask boards what they're doing about AI, because they think AI will disrupt the investments that don't use it.
The boards in turn instruct the CEOs to "adopt AI", so all the normal processes for deciding what/if/when to do things get short-circuited, and you get AI features that no one asked for, or mandates for employees to adopt AI with very shallow KPIs to claim success.
The hype really distorts both sides of the conversation. You get the boosters for which any use of AI is a win, no matter how inconsequential the results, and then you get things like the original article which indicate it hasn't caused job losses yet as a sign that it hasn't changed anything. And while it might disprove the hype (especially the "AI is going to replace all mental labour in $SHORT_TIMEFRAME" hype), it really doesn't indicate that it won't replace anything.
Like, when has a technology making the customer support experience worse for users or employees ever stopped its rollout if there's cost savings to be had?
I think this is why AI is so complicated for me. I've used it, and I can see some gains. But it's on the order of when IDE autocomplete went from substring matches of single methods to autocompleting chains of method calls based on types. The agent stuff fails on anything but the most bite-size work when I've tried it.
Clearly some people see it as something more transformative than that. There have been other times when people saw something as transformative and it was so clearly nothing of value (NFTs, for example) that it was easy to ignore the hype train. The reason AI is challenging for me is that it's clearly not nothing, but it's also so far from the vision others have that it's not clear how realistic that vision is.
LLMs have mesmerized us because they are able to communicate meaning to us.
Fundamentally, we (the recipients of LLM output) generate the meaning from the words given. I.e., LLMs are great when the recipient of their output is a human.
But when their recipient is a machine, the model breaks down, because machine-to-machine requires deterministic interactions. This is the weakness I see, regardless of all the hype about LLM agents: fundamentally, LLMs are not deterministic machines.
LLMs lack a fundamental human capability of deterministic symbolization: the ability to create NEW symbols, with associated rules, which can deterministically model the worlds we interact with. They have a long way to go on this.
Bingo. Especially with the 'coding assistants', these companies are getting great insight into how software features are described and built, and how software is architected across the board.
It's very telling that we sometimes see "we won't use your data for training" and opt-outs, but never "we won't collect your data". "Training" is at best ill-defined.
Most likely they can identify very good software developers, or at least acquire this ability in the short term. That information has immediate value.
I see AI not replacing all workers but reducing head count. On a software team I could see a team of 8 reduced to a team of 4 with AI, especially in smaller, leaner companies.
You already see attorneys using it to write briefs, often to hilarious effect. These are clearly the precursor, though, to a much-reduced need for Jr/associate-level attorneys at firms.
An LLM wouldn't intentionally confuse "didn't" with "isn't"
"Life is awesome", said the frog, "the owners arranged a jacuzzi for me, it's warm and lovely in the water, not dangerous at all".
It shouldn't. It's propaganda spread by VCs and AI "thought leaders" who are finally seeing a glimmer of their fantastical imagination coming to life (it isn't).
LMAO it's too early and too small to see anything yet
Keep in mind this kind of drivel is produced by economists and the tail end of CS, who are desperately trying to stay relevant in the emerging workplace.
The wise will displace economists and consultants with LLMs, but the trend followers will hire them to prognosticate about the future impact, such that the net effect could be zero.
Right now AI's impact is the equivalent of giving the ancient Egyptians a couple of computer chips. People will eventually figure out what they are, but until then it will only be used as combs, paperweights, pendants etc.
I would say the use cases are only coming into view.
I’m starting to think most jobs are performative. Hiring is just managers wanting more people in the office to celebrate their birthdays.
And any important jobs won’t be replaced because managers are too lazy and risk averse to try AI.
We may never see job displacement from AI. Did you know bank teller jobs actually increased in the decades following the rollout of ATMs?
You should take time to learn what those jobs are for. You'd be surprised what it takes to keep a business running past any reasonable level of scale.
I’ve worked 10+ of those jobs, guy.