Investors expect AI use to soar. That's not happening

(economist.com)

45 points | by gaius_baltar 5 hours ago

67 comments

  • venturecruelty 4 hours ago

    Investors don't "expect" AI to soar, they NEED AI to soar. Why are we still engaging in this absolutely ridiculous kabuki theatre? This entire cracking edifice is propped up by fictitious capital and pipe dreams, and the music is about to stop. Turns out, you can't charge $20 a month for something no one wants and expect to get a trillion dollars out of it. Shocker!

    • strangattractor 4 hours ago

      Shocker! is correct.

      A popular belief these days is that the investors of 2000 ultimately got it right. Truth - they simply got it wrong. They dumped tons of money into things that had no hope of justifying an ROI. They thought adoption of the technology would happen at a pace that was unprecedented, if not impossible. They assumed things would happen in 3 years that actually took 20. Yes - Shocker!

  • hintymad 4 hours ago

    What is the definition of "soaring"? The charts in the article showed that the percentage of companies adopting AI for automation has increased 3X. At least 40% of companies pay for GenAI, and at least 10% of employees use GenAI daily. Combined with the fact that companies like OpenAI and Anthropic frequently run out of capacity, how is AI use not soaring?

    • qdog 2 hours ago

      There was a dip on the first chart in the article, and it also shows something like 9% of companies using it.

      What I wonder, beyond "using" AI, is what value the companies are actually seeing. Revenue at both OpenAI and Anthropic is growing rapidly at the moment, but it's not clear if individual companies are really growing their usage, or if it is everyone starting to try it out.

      Personally, I have used it sparingly at work, as the lack of memory seems to make it quite difficult to use for most of my coding tasks. I see other people spending hours or even days trying to craft sub-agents and prompts, but not delivering much, if any, output above average. Any output that looks correct but really isn't causes a number of headaches.

      For the VCs, one issue is the constant increase in compute. Currently it looks to me like every new release is only slightly better, but the compute and training costs increase at the same rate. The AI companies need end users to need their product so much that they can significantly increase the price. I think this is what they want to see in "adoption": demand so high that they can see a future of increasing prices.

    • orphea 4 hours ago

        > at least 10% of the employees use GenAI daily.
      
      Remember that this includes people who are forced to use it (otherwise they wouldn't meet KPIs and would expect conversations with HR).

      • anon7725 3 hours ago

        how much of this usage is replacing a web search or spelling/grammar checks with something orders of magnitude more costly?

    • swatcoder 4 hours ago

      I don't want to be all "did you read the article?" since that's against guidelines, but the text of the article (the stuff in between the graphics and ads) is kind of about exactly that.

      Adoption was widespread at first but seems to have hit a ceiling and stayed there for a while now. Meanwhile, there's been little evidence of major changes to net productivity or profitability where AI has been piloted. Nobody is pulling away with radical growth/efficiency for having adopted AI, and in fact the entire market of actual goods and services is mostly still just stagnating outside of the speculative investment being poured into AI itself.

      Investment isn't just about making a bet on whether a company/industry will go up or down, but about making the right bet about how much it will do so over what period of time. The scale of AI investment over the last few years was making the bet that AI adoption would keep growing very very fast and would revolutionize the productivity and profitability of the firms that integrated it. That's not happening yet, which suggests the bet may have been too big or too fast, leaving a lot of investors in an increasingly uncomfortable position.

      • strangattractor 3 hours ago

        I get confused about the word "adoption". By adoption, is it meant that a company tried AI, found it useful, and continues to use it? Just trying something out is not adoption in my mind. Companies try and abandon things all the time.

        It has been my experience that technology has to perform significantly better than people do before it gets massively adopted. Self-driving cars come to mind. Tesla has self-driving that almost works everywhere, but Waymo has self-driving that really works in certain areas. Adoption rates for consumers have been much higher with Waymo (I was surrounded by 4 yesterday) and they are expanding rather rapidly. I have yet to see a self-driving Tesla.

    • gtowey 4 hours ago

      It's the engagement fallacy all over again.

      Companies are shoving AI into everything and making it intrusive into everyone's workflow. Thus they can show how "adoption" is increasing!

      But adoption and engagement don't equal productive, useful results. In my experience they simply don't, and the bottom is going to fall out of all these adoption metrics when people see the productivity gains aren't real.

      The only place I've seen real utility is coding. All other tasks, such as Gemini for document writing, produce something that's about 80% ok and 20% errors and garbage. The work of going back through with a fine-toothed comb to root out the garbage is actually more work and less productive than simply writing the darn thing from scratch.

      I fear that the future of AI driven productivity is going to push a mountain of shoddy work into the mainstream. Imagine if the loan documents for your new car had all the qualities of a spam email. It's going to be a nightmare for the administrative world to untangle what is real from the AI slop.

    • bossyTeacher 4 hours ago

      - If Microsoft bundles Copilot with their standard Office product, you become a company that pays for AI even if you didn't opt in

      - Accidentally tapping the AI mode in Google search counts as an AI search. DDG doesn't even wait for you to tap and triggers an AI response. It still counts as AI use even if you didn't mean to use it

      - OpenAI, Google and Microsoft have been advertising heavily (usage will naturally go up)

      - Scammers using GenAI to scam increases AI usage and GenAI is GREAT for scammers

      - Using AI after a meeting to get a summary is nice, but not enough to make a visible impact on a company's output. Most AI usage falls in this bucket

      This tech was sold as civilisation-defining. Not GPT-X, but the GPT that is out now. Tech that was "ready to join the workforce", while the reality is that these tools are not reliable in the sense implied. They are not "workers" and won't change the output of your average company in any significant way.

      Sweet talking investors is easy, but walking the talk is another thing altogether. Your average business has no interest or time in supervising a worker that at random times behaves unpredictably and doesn't learn not to make mistakes when told off.

    • DougN7 4 hours ago

      So with 10X the employees using it and double the current companies (so nearly 100%), will it finally justify the investment? I’m guessing not.

    • outside1234 4 hours ago

      Those two sets of facts can be true at the same time.

      40% of companies and 10% of employees can be using AI daily, but just for a small number of tasks, and that usage can be leveling off.

      At the same time, AI can be so inefficient that servicing this small amount of usage is running providers out of capacity.

      This is a bad combination because it points to the economic instability of the current system. There isn't enough value to drive higher usage and/or higher prices and even if there was, the current costs are exponentially higher.

  • throwaway981120 4 hours ago

    > A survey by Dayforce, a software firm, finds that while 87% of executives use AI on the job, just 57% of managers and 27% of employees do. Perhaps middle managers set up AI initiatives to satisfy their superiors’ demands, only to wind them down quietly at a later date.

    The article quietly ignored two better explanations: the day-to-day work of executives can be automated more easily (Manna vibes), and/or the execs have a vested interest in AI succeeding so they can cut headcount, which makes them evangelists for AI.

    • fullshark 4 hours ago

      I think it's that employees are afraid to say they use AI and executives are eager to say they use AI. Managers of course occupy both worlds.

      • rvnx 4 hours ago

        There is a big compliance issue as well: in many corporations AI is strictly forbidden, so employees will claim they do not use AI at all, but they do.

        Medical doctors as well: officially 0%, but in reality?

        Also, many programmers hide the truth, because it is quite difficult to justify their salary (which was set in pre-AI times, when programming was much more difficult).

  • zkmon 3 hours ago

    A lot of the people who think AI is being used heavily are coders. It's like a blacksmith making a hammer for himself and thinking that everyone is using the hammer every day, all the time.

    Let's check agentic AI. Which agents do people mostly talk about? Aha - coding agents!

    • tim333 2 hours ago

      A lot of the non-coding use is large in quantity but low in usefulness, like the AI summaries that Google sticks in my searches. I actually quite like them, but I doubt I would use them much if I had to do something like click a button to make them appear, let alone pay.

  • AstroBen 4 hours ago

    "We're talking about systems that don't exist, and that we don't know how to build" - something to keep in mind from Ilya's interview yesterday

    People are captivated by good stories, and AI makes for one hell of a sci fi narrative

    It's hard to separate the maybe one day plausible fictional future from the on-the-ground reality

  • m463 an hour ago

    all new technology is overestimated in the short term, and underestimated in the long term.

    (soar - overestimated)

    for example, years ago in the era of Dragon NaturallySpeaking, it seemed ALL computers would momentarily be using speech recognition.

    and it didn't happen

    but quietly speech recognition started working in the background - call trees on the phone, and other places where a strict vocabulary could help. it quietly grew and nowadays it is everywhere.

  • lowlevel 4 hours ago

    Over and over again, we see there is no one willing to call a spade a spade.

  • _pdp_ 4 hours ago

    Anecdotally, I use a lot more AI than ever before - at least 5x more - though it's hard to measure.

    • petetnt 4 hours ago

      And to support the claim in the heading you are also invested in AI through your ChatBotKit project, so…

      • _pdp_ 3 hours ago

        I am sure everyone is invested in their own project - I just happen to be invested in a project that is somewhat connected to this topic.

        But yes, I do use a lot more AI than I used to 6 months ago - some of it internally built, much of it sourced externally. I bet I will be using even more AI going forward.

        I think it is inevitable!

      • rvnx 4 hours ago

        Based on what he is building, it feels like _pdp_ is actually passionate about AI himself, and then ChatBotKit is a by-product of this passion. So pretty sure he'd use AI as much and root for it, even if not involved with that specific project.

  • zkmon 4 hours ago

    There is something called the hype curve, and it doesn't always go up.

  • lateforwork 4 hours ago

    > Three years into the generative-AI wave, demand for the technology looks surprisingly flimsy.

    Let's compare to the adoption of the internet. Mosaic was released in 1993. Businesses adopted the internet progressively during the 90s, starting slow but accelerating toward the decade's end, with broad adoption of the internet as a business necessity by 2000.

    Three years is a ridiculously small amount of time for businesses to make dramatic changes.

    • swatcoder 4 hours ago

      Ironically, you're making the point you mean to be arguing against.

      The dot-com bubble didn't form and burst because the technology or opportunity of the web wouldn't be revolutionary. It formed and burst because investment grossly outpaced how fast the technology could mature into commercial value. That's pretty much what we're seeing here.

      LLMs, diffusion, etc. are radical new avenues for technology and will probably have made a huge impact on society and business when we look back in 20 years, but investors desperate for high yield in an otherwise stagnating economy flooded the engine, betting as though these dramatic changes would happen immediately rather than gradually.

      Unsurprisingly, to people who didn't put their chips on the table at all, this all-in bet on immediacy is proving more and more to be a losing one.

  • hamasho 4 hours ago

    Honestly, I sometimes use heavy thinking models only to avoid wasting tokens on my expensive pro plans. In many cases, I prefer using quicker models to discuss something, gain better ideas about the topic, do "my own research" using Google, then discuss again. ChatGPT Pro mode is helpful, but it's too slow, and either I have no idea whether what it says is true (even with sources) or I'm familiar enough with the topic that I can research faster myself.

    I use coding agents often, but I don't burn all the tokens out of my Claude Max plan and ChatGPT Business plan with two seats.

  • chrsw 4 hours ago

    It will probably take decades for machine learning to transform the way we live and work.

    • slibhb 4 hours ago

      Yes, just like computers and later the internet. The technology always precedes the cultural/economic changes by decades.

      • anon7725 4 hours ago

        Growth in the PC market and internet usage had a substantial bottom-up component. The PC, even without connectivity, was useful for word processing, games, etc. Families stretched their budgets to buy one in the 80's and 90's.

        The internet famously doubled in connectivity every 100 days during its expansion era. Its usefulness was blindingly obvious - there was no need for management to send out emails warning that they were monitoring internet usage, and you'd better make sure that you were using it enough. Can you imagine!

        We are at a remarkable point in tech. The least-informed people in an organization (execs) are pushing a technology onto their organizations. A jaw-droppingly enormous amount of capital is being deployed in essentially a "pushing on a rope" scenario.

      • thewebguyd 4 hours ago

        And sometimes it disappears entirely for a while because either culturally, the world isn't ready for it/to adapt to it, or it wasn't delivered in the right form.

        Google Glass comes to mind, which died 11 years ago and XR is only just now starting to resurface.

        Tablets also come to mind, pre-iPad, they more or less failed to achieve any meaningful adoption, and again sort of disappeared for a while until Apple released the iPad.

        Then you have Segway as an example of innovation failure which, unlike the others, never really returned in the same form; instead we now have e-scooters and e-bikes, which fit better into existing infrastructure and cultural attitudes.

        It's quite possible LLMs are just like those other examples, and the current form is not going to be the successful form the technology takes.

  • bossyTeacher 4 hours ago

    As soon as every big corp started stuffing their UIs with AI buttons, we all knew it was investors pushing for AI use to go sky high without a care for the nuances of the current state of AI. The reality is that AI usage isn't as impactful as it was promised. Where is the productivity increase in being able to generate a picture via some prompt? When deep research could contain hallucinated text or references, where is the productivity increase? It is undeniable that these tools have uses but when you look at all the investment made into this tech, the outcomes are not great.

    • MarkLowenstein 4 hours ago

      Example: new Yahoo! Mail AI summaries helpfully added to the top of each mail. Thanks, now I get to read each email twice! With the original text now placed in a variable location on the screen.

      Unfortunately it's the coders who are most excited to put themselves out of business with incredible code-generation facilities. The techies that remain employed will be the feature vibers with 6-figure salaries supplied by the efforts of the now-unemployed programmers. The cycle will thus continue.

  • cjbenedikt an hour ago

    All very interesting. Lean startup, anyone? As a "serial" startup founder, one thing that was always hammered into my brain was: customer discovery! Clearly something that was drowned out by all the money thrown at AI startups. Motto: we just buy our customers.

  • keeda 4 hours ago

    Maybe the archive link stripped it out, but it would be really useful to look at the actual sources, because TFA seems to be, uhh, "selective" in what stats it presents. For example, this source (Alex Bick at the Federal Reserve Bank of St. Louis) seems to be cited:

    https://www.genaiadoptiontracker.com/

    TFA presents the most pessimistic stat it could find: daily GenAI usage at work growing from 12.1% to 12.6% in a year. (Interestingly there was a dip to 9% in Nov 2024; maybe end-of-year holidays?)

    It does not mention that the same tracker also shows that overall usage (at and outside work, at least once in the last week) has steadily climbed from 44% to 54%. That is 10 percentage points of growth in a year. (This may also be why OpenAI reveals WAU rather than DAU; people mostly use it on a weekly rather than a daily basis.)

    Here is something even more interesting from the same authors at the St Louis Fed using the same data:

    https://www.stlouisfed.org/on-the-economy/2025/nov/state-gen...

    Really, read that article, it is short and a bit astounding. Money quote:

    > When we feed these estimates into a standard aggregate production model, this suggests that generative AI may have increased labor productivity by up to 1.3% since the introduction of ChatGPT. This is consistent with recent estimates of aggregate labor productivity in the U.S. nonfarm business sector. For example, productivity increased at an average rate of 1.43% per year from 2015-2019, before the COVID-19 pandemic. By contrast, from the fourth quarter of 2022 through the second quarter of 2025, aggregate labor productivity increased by 2.16% on an annualized basis. Relative to its prepandemic trend, this corresponds to excess cumulative productivity growth of 1.89 percentage points since ChatGPT was publicly released. ... ...

    > We stress that this correlation cannot be interpreted as causal, and that labor productivity is determined by many factors. However, the current results are suggestive that generative AI may already be noticeably affecting industry-level productivity.

  • worik 4 hours ago

    > the economic pay-off from AI...[may]... arrive more slowly, more unevenly and at a greater cost than implied by the current investment boom

    This is the point.

    This is what matters.

    A revolutionary technology birthed in a bonfire of cash

  • dmezzetti 4 hours ago

    AI can do some good things with the right expectations and people.

  • david927 5 hours ago

    I watched the Google interview with Ilya yesterday and this came up. There's a large disconnect between the evals and the real-world performance, and he admitted that the evals are targeted.

    There was a storm of hype the last couple weeks for Gemini 3 and everyone, correctly, rolled their eyes. Investors are demanding a return and it's not happening. They're just going to have to face reality at some point.

    • venturecruelty 4 hours ago

      Unfortunately, investors facing reality means homelessness for the rest of us. Maybe the real treasure was the billions of dollars we made along the way. :)

      • rvnx 4 hours ago

        My close friends got showered with so much money poured down by investors that it is indecent.

        Just because they happened to be in the right place, at the right time, and idling, they get paid 10M USD+ thanks to stock option vesting.

        Sounds like crypto^2; money is spread completely irrationally and unfairly (lucky folks who launched a ponzi get rewarded instead of jailed) and completely disconnected from actual efforts.

        In the long-term this can only lead to a very unhealthy society.

        Good that we won't need money anymore thanks to AGI, right?

        • fullshark 4 hours ago

          This is how VC, and on some level professional success, has always worked. Gotta be in the right place, knowing the right people, at the right time. Although the scale and speed are more absurd now.

    • techblueberry 5 hours ago

      I was wondering if AI was essentially the last hype cycle; that's sort of what it's been billed as, the tech that can do everything. But I guess robotics could be the next big thing to replace it, basically applied AI.

      • bsenftner 4 hours ago

        The scheduled hype cycles have AI+genetics turning each form of life into a programmable platform after robotics.

      • phkahler 4 hours ago

        >> I was wondering if AI was essentially the last hype cycle

        The next hype wants to be quantum computing, but it's just not there yet - never mind the lack of real-world applications.

        I thought nVidia would start promoting GPUs (whole data centers) to run classical simulations of QC to develop the applications while real hardware gets figured out.

        • katmannthree 4 hours ago

          If you want to go off trying to predict the cycle I’d suspect ag/weather tech. Political complexity aside it appears to be the biggest thing we’ll need to work on in the coming decades if we want to sustain the planet’s carrying capacity.

          Probably more likely though to be something novel that few took seriously before it demonstrates utility. And this is the issue for QC, we already know what it’s useful for: a handful of niche search algorithms. It’s a bit like fusion in that even if you work out the (very significant) engineering issues you’re left with something that while useful is far from transformative.

      • andersmurphy 4 hours ago

        You mean the last 5-year plan? VCs seem to lack so much imagination and are so prone to groupthink that we effectively have a top-down command economy with 5-year plans in tech. Interestingly, VR seems to punctuate every 15-year cycle.

        VR -> Cloud -> Crypto -> VR -> AI -> ?

    • htrp 4 hours ago

      > Google interview?

      did he do something other than that podcast?

    • otterdude 4 hours ago

      did you forget up is down and down is up?

  • AlexandrB 3 hours ago

    I like how the framing of the article assumes that AI is a revolutionary technology that everyone should be using and the adoption is just mysteriously slow. This was particularly funny:

    > In recent earnings calls, nearly two-thirds of executives at S&P 500 companies mentioned AI. At the same time, the people actually responsible for implementing AI may not be as forward-thinking, perhaps because they are worried about the tech putting them out of a job.

    Ah, those brave, forward-looking executives with their finger on the pulse of the future while their employees are just needlessly stalling adoption. Completely absent from the article is the possibility that the technology is not as revolutionary as claimed.

  • 1vuio0pswjnm7 5 hours ago

    Alternative to archive.is

       x="\"Mozilla/5.0 (Linux; Android 14) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/127.0.6533.103 Mobile Safari/537.36 Lamarr\""
       y=https://www.economist.com/finance-and-economics/2025/11/26/investors-expect-ai-use-to-soar-thats-not-happening
       busybox wget -U "$x" -O 1.htm $y
       firefox ./1.htm
    • p1mrx 4 hours ago

      > busybox wget -U $x -O 1.htm $y

      There's no way that could work. $x expands to multiple arguments.
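
      A minimal sketch of a fix (assuming busybox wget's -U flag takes the user-agent string, as in the parent): define the variable without the embedded escaped quotes and quote both expansions so each stays a single argument:

         # hypothetical corrected version of the parent snippet
         x="Mozilla/5.0 (Linux; Android 14) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/127.0.6533.103 Mobile Safari/537.36 Lamarr"
         y="https://www.economist.com/finance-and-economics/2025/11/26/investors-expect-ai-use-to-soar-thats-not-happening"
         busybox wget -U "$x" -O 1.htm "$y"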

    • gaius_baltar 5 hours ago

      Thanks, I didn't know this trick! Perhaps soon we'll need to keep lists of per-site white-listed user agents...

      • 1vuio0pswjnm7 2 hours ago

        "Perhaps soon we'll need to keep lists of per-site white-listed user agents..."

        Been doing this for many years now. It's a short list, small enough to be contained in the local fwd proxy config

           # economist.com
           http-request set-header user-agent "Mozilla/5.0 (Linux; Android 14) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/127.0.6533.103 Mobile Safari/537.36 Lamarr" if { hdr(host) -m end economist.com }
        
        I don't use curl, wget, browser extensions/add-ons, etc. except in HN examples. I don't have to worry about command-line arguments like "-A" or "-U". The proxy controls the HTTP headers.

  • dpedu 4 hours ago

    Is it not soaring? I can't think of a recent time a new technology was invented and I began using it almost every day, and I don't even consider myself that heavy of a user of AI.

    • da02 4 hours ago

      What are some tasks you use AI on?

  • ckbkr10 4 hours ago

    I am using it for Ansible, PHP, Java, C, Linux configuration issues, or general questions. Preparing Excel sheets, etc.

    It's cut the time I need to produce projects from a usual span of 4-20 days down to 1-2 days, with another 2-3 for testing. Of course I still bill the time it would have taken me, but for a professional it can be a great improvement.

    While my country will be slow to adopt - we haven't even adapted to smartphones yet, hooray Germany - it will have to adopt eventually (in 10 years or so).

    • keeda 4 hours ago

      > Of course I still bill the time it would have taken me but for a professional it can be a great improvement.

      This may be a flippant comment, but it actually represents one of the reasons it is difficult to track GenAI usage and impact!

      Multiple researchers have hypothesized (often based on discrepancies in data) that the gains from workers using GenAI are not necessarily propagated to their employers. E.g. any time savings may be dedicated to other professional or leisure pursuits.