What Will You Do When AI Runs Out of Money and Disappears?

(louwrentius.com)

46 points | by louwrentius 14 hours ago

45 comments

  • benlivengood 7 hours ago

    I dunno, GPT-OSS and Llama and Qwen and any of a half dozen other large open-weight models?

    I really can't imagine OpenAI or Anthropic turning off inference for a model that my workplace is happy to spend >$200/person/month on. Google still has piles of cash and no reason to turn off Gemini.

    The thing is, if inference is truly heavily subsidized (I don't think it is, because places like OpenRouter charge less than the big players for proportionally smaller models) then we'd probably happily pay >$500 a month for the current frontier models if everyone gave up on training new models because of some oddball scaling limit.

    • crimsoneer 7 hours ago

      Yeah, this is silly. Plenty of companies are hosting their own now, sometimes on-prem. This isn't going away.

    • iLoveOncall 7 hours ago

      > we'd probably happily pay >$500 a month for the current frontier models

      Try $5,000. OpenAI loses hundreds of billions a year, they need a 100x, not 2x.

      • gingersnap 7 hours ago

        But they are not losing 100x on inference from high-paying customers. Their biggest losses are free users plus training/development costs.

      • weirdmantis69 5 hours ago

        Why lie on a site where people know things.

      • filoleg 7 hours ago

        OpenAI loses hundreds of billions a year on inference? I strongly doubt it

      • ndriscoll 7 hours ago

        $60k/yr still seems like a good deal for the productivity multiplier you get on an experienced engineer costing several times that. Actually, I'm fairly certain that some optimizations I had codex do this week would already pay for that from being able to scale down pod resource requirements, and that's just from me telling it to profile our code and find high ROI things to fix, taking only part of my focus away from planned work.

        Another data point: I gave codex a 2 sentence description (being intentionally vague and actually slightly misleading) of a problem that another engineer spent ~1 week root causing a couple months ago, and it found the bug in 3.5 minutes.

        These things were hot garbage right up until the second they weren't. Suddenly, they are immensely useful. That said, I doubt my usage costs OpenAI anywhere near that much.

        • Marsymars 5 hours ago

          > $60k/yr still seems like a good deal for the productivity multiplier you get on an experienced engineer costing several times that.

          Maybe, but that's a hard sell to all the workplaces who won't even spring for >1080p monitors for their experienced engineers.

        • thot_experiment 6 hours ago

          That's a wildly different experience of frontier models than I've had; what's your problem domain? I had both Opus and Gemini Pro outright fail at implementing a dead-simple floating-point image transformation the other day because neither could keep track of when things were floats and when they were uint8.

          • ndriscoll 5 hours ago

            Low-level networking in some cloud applications, using gpt-5.2-codex medium. I've cloned like 25 of our repos on my computer for my team + nearby teams and worked with it for a day or so coming up with an architecture diagram annotated with what services/components live in what repos and how things interact from our team's perspective (so our services + services that directly interact with us). It's great because we ended up with a Mermaid diagram that's legible to me, but it's also a great format for it to use. Then I've found it does quite well at being able to look across repos to solve issues.

            It also made reference docs for all available debug endpoints, metrics, etc. I told it where our Prometheus server is, and it knows how to do PromQL queries on its own. When given a problem, it knows how to run debug commands on different servers via ssh or inspect our Kubernetes cluster on its own. I also had it make a shell script to figure out which servers/pods are involved for a particular client and check all of their debug endpoints for information (which it can then interpret). Huge time saver for debugging.

            I'm surprised it can't keep track of float vs uint8. Mine knew to look at things like struct alignment or places where we had slices (Go) on structures that could be arrays (so unnecessary boxing), in addition to things like timer reuse, object pooling/reuse, places where local variables were escaping to heap (and I never even gave it the compiler escape analysis!), etc. After letting it have a go with the profiler for a couple rounds, it eventually concluded that we were dominated by syscalls and crypto related operations, so not much more could be microoptimized.

            I've only been using this thing since right before Christmas, and I feel like I'm still at a fraction of what it can do once you start teaching it about the specifics of your workplace's setup. Even that I've started to kind-of automate by just cloning all of our infra teams' repos too. Stuff I have no idea about it can understand just fine. Any time there's something that requires more than a super pedestrian application programmer's knowledge of k8s, I just say "I don't really understand k8s. Go look at our deployment and go look at these guys' terraform repo to see all of what we're doing" and it tells me what I'm trying to figure out.

            • thot_experiment 2 hours ago

              Yeah, wild. I don't really know how to bridge the gap here, because I've recently been continuously disappointed by AI. Gemini Pro wasn't even able to solve a compiler error the other day, and the solutions it was suggesting were insane (manually migrating the entire codebase) when the actual fix was like a 0.0.xx compiler version bump. I still like AI a lot for function-scale autocomplete, but I've almost stopped using agents entirely because they're almost universally producing more work for me and making the job less fun; I have to do so much handholding for them to make good architectural decisions, and I still feel like I end up on shaky foundations most of the time. I'm mostly working on physics simulation and image processing right now. My suspicion is that there's just so many orders of magnitude more cloud-app plumbing code out there that the capability is really unevenly distributed. Similarly, with my image processing stuff, my suspicion is that almost all the code it was trained on works in 8-bit, and it's just not able to get past its biases and stop itself from randomly dividing things that are already floats by 255.
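
              A minimal sketch of the dtype mix-up being described, assuming numpy-style image arrays (the numbers and helper here are illustrative, not from either commenter):

                import numpy as np

                # An image that is *already* float32 in [0, 1] (e.g. after an earlier conversion).
                img = np.random.rand(64, 64, 3).astype(np.float32)

                def to_float(image):
                    # Only rescale when the data really is 8-bit; floats pass through untouched.
                    if image.dtype == np.uint8:
                        return image.astype(np.float32) / 255.0
                    return image.astype(np.float32)

                # The failure mode described above: dividing by 255 a second time, which
                # silently squashes the image into [0, 1/255] instead of raising any error.
                broken = img / 255.0

                print(to_float(img).max(), broken.max())  # ~1.0 vs ~0.004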

  • apf6 6 hours ago

    > it's the running costs of these major AI services that are also astronomical

    There are wildly different reports about whether the cost of just inference (not the training) is expensive or not...

    Sam Altman has said “We’re profitable on inference. If we didn’t pay for training, we’d be a very profitable company.”

    But a lot of folks are convinced that inference prices are currently being propped up by burning through investor capital?

    I think if we look at open source model hosting, it's pretty convincing. Look at, say, https://openrouter.ai/z-ai/glm-4.7 . There are about 10 different random API providers competing on price, and they'll serve GLM 4.7 tokens at around $1.50-$2.50 per million output tokens (which, by the way, is a tenth of the cost of Opus 4.5).

    I seriously doubt that all these random services that no one has ever heard of are also being propped up by investor capital. It seems more likely that $1.50 - $2.50 is the "near cost" price.

    If that's the actual cost, and considering that the open source models like GLM are still pretty useful when used correctly, then it's pretty clear that AI is here to stay.
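
    As a rough sanity check (with assumed usage numbers, not figures from this thread), here's what that "near cost" price level implies for a fairly heavy individual user:

      # Back-of-the-envelope: monthly cost at open-weight "near cost" token pricing.
      price_per_mtok = 2.00              # USD per million output tokens (middle of $1.50-$2.50)
      output_tokens_per_day = 500_000    # assumed heavy agentic-coding usage
      working_days_per_month = 21

      monthly_mtok = output_tokens_per_day * working_days_per_month / 1_000_000
      print(f"~${monthly_mtok * price_per_mtok:.0f}/month")  # ~$21/month at these assumptions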

    • UncleEntity 6 hours ago

      >> Sam Altman has said “We’re profitable on inference. If we didn’t pay for training, we’d be a very profitable company.”

      Any individual Sunday service is nearly cost-free if we don't factor in the 100+ years it took to build the church...

      • apf6 5 hours ago

        Lol, anyway, the point is that even in a scenario where all the major models disappeared tomorrow (including OpenAI's, Anthropic's, etc.), we would still keep using the existing open source models (GLM, DeepSeek, Qwen) for a long, long time.

        There's no scenario where AI goes away completely.

        I don't think the "major AI services go away completely" scenario is realistic at all when you look at those companies' revenue and customer demand, but that's a different debate I guess.

        • blibble 3 hours ago

          > There's no scenario where AI goes away completely.

          the scenario is if training becomes impossible (for any reason), then the currently available models quickly become out of date

          say this had happened 30 years ago

          today, would you be using an "AI" that only supported up to COBOL?

  • program_whiz 5 hours ago

    Perhaps this is a helpful model, rather than worrying about the "billions spent" and whether it's inference vs training.

    How much would it cost you to deploy a model that you and maybe a few coworkers could effectively use? Probably $400k to buy all the hardware required to host a top-tier model that could do a few hundred tokens per second for 10 concurrent users. That's $40k per person. Amortize the hardware over 5 years and that's $8k per person per year (roughly), with no training costs (that's just you buying hardware and running it yourself). So that means you need ~$670 per user per month just to cover the hardware to run the model (with no staffing costs, internet, taxes, electricity, hosting, housing, etc.).

    So, just food for thought, but the $200/month Claude Code plan is probably still losing money even just on inference.

    Since they are in the software realm, they are probably shooting for a 90% profit margin. Using the above example, that would be ($670 + R&D + opex) x 10 per user. My guess is that, assuming no more training (which probably can never be profitable at current rates), they need around $20k per month per user, which is why that number was floated by OpenAI previously.
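
    A quick sketch of that arithmetic (the R&D/opex figure is an assumption, added only to make the 90%-margin step concrete):

      # Self-hosting back-of-the-envelope from the comment above.
      hardware_cost = 400_000          # USD for a box serving ~10 concurrent users (assumed)
      users = 10
      amortization_years = 5

      hardware_per_user_month = hardware_cost / users / amortization_years / 12
      print(f"hardware only: ~${hardware_per_user_month:.0f}/user/month")  # ~$667

      # Hypothetical R&D + opex per user, then a ~90% margin (price = cost x 10).
      rnd_plus_opex = 1_300            # USD/user/month, assumed for illustration
      price = (hardware_per_user_month + rnd_plus_opex) * 10
      print(f"implied price: ~${price:,.0f}/user/month")  # just under $20k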

    • estimator7292 4 hours ago

      After a hypothetical AI crash, the cost of hardware will plummet. It will suddenly become quite affordable to spin up a GPU or five on-prem to host a couple of models for internal use.

      The only reason hardware is so expensive now is to scalp the hyperscalers. Once that demand crashes, the supply will skyrocket and prices will crash.

      • program_whiz an hour ago

        Fair point -- but my overall point about how much users may have to pay to make these companies profitable stands. Maybe it works if prices stay depressed for years, but these companies are doing their buildout at current prices, and they need to make returns on the hardware they are buying now. I suppose if they could bank on prices coming down by a factor of 10 in 2 to 3 years, then the current price ($200 per month) might be profitable (disregarding training, employees, power, etc.).

  • davidfiala 7 hours ago

    Missing Option 3) hardware and software continue to evolve, and AI becomes cost-efficient at the same price and eventually at even lower prices.

    • UncleEntity 6 hours ago

      There's no reason I can think of where this isn't the case.

      I mean, we're not even up to the "Model T" era of AI development and more like in the 'coach-built' phase where every individual instance needs a bunch of custom work and tuning. Just wait until they get them down to where every Teddy Ruxpin has a full LLM running on a few AA batteries and then see where the market lands.

      I always imagine these AI discussions in the context of a bunch of horses discussing these 'horseless carriages' circa 1900...

  • t0mas88 7 hours ago

    The author may have a point, but the handwavy numbers read as if he has no idea how accounting works. Seems like he doesn't understand capex vs opex and how they influence profitability (and their cashflow effects)

  • pixl97 6 hours ago

    As a business in our current age you are stuck in a valley between two wildly different risks.

    1. AI disappears, goes up in price, etc. All the money you've spent goes up in smoke, or you have to spend a lot more money to keep the engine running.

    2. AI does not disappear, becomes cheaper, and eats your business's primary revenue generation for lunch.

    Number 1 could happen tomorrow. Number 1 could happen after number 2. Number 1 may never happen.

    Also expect that even if the AI market crashes, AI has already massively changed the economy, that at least some investment will still go into making AI more efficient, and that at any point number 2 could spring out of nowhere yet again.

  • yellowapple 5 hours ago

    > Self-hosting an AI with your own hardware is probably just as cost-prohibitive, even if you don't value your time. In part because a ton of people will get this idea at the same time, impacting hardware prices even more. And the operating costs of AI seem significant. Would it even be possible to setup your own AI and achieve the same productivity level?

    I know this is probably an annoying question, but… has the author actually tried self-hosting an AI with one's own hardware? I have; ollama (and various frontends thereof) makes it straightforward, and it's absolutely not cost-prohibitive. I've run my share of LLMs even on laptops without dedicated GPUs at all, and while the experience wasn't great compared to the commercial options, it wasn't outright unusable, either. Locally-hosted LLMs are already finding their way into various applications; that's only going to get more viable over time, not less (unless the computing hardware industry takes a catastrophic nosedive, in which case AI affordability is arguably the least of our worries).

    I'm sure the author understands this and is just being hyperbolic in the article's title, but the AI bubble bursting ≠ AI disappearing, for the same reason the dotcom bubble bursting ≠ the World Wide Web disappearing. The bubble will burst when AI shifts from being novel to being mundane, just as with any other technology-related bubble — and that entails a degree of affordability and ubiquity that's mutually exclusive with any notion of AI “disappearing”. Hopefully it'll mean companies being less motivated to shove AI “features” down everyone's throats, but the virtually-intelligent cat is already out of Pandora's box: the technology's here to stay, and I think it's presumptuous to think the race to the bottom w.r.t. cost is anywhere near the finish line.

    • estimator7292 4 hours ago

      If you have a Linux machine, you can just install ollama, ollama-cuda, or ollama-rocm. That's it. It runs out of the box. If your GPU is supported, that Just Works, too. Usually, anyway.

      I have an old dual Xeon server from about 2015. 32 2.4GHz cores and 128GB of RAM. It runs models painfully slow (and loud) but they run just fine. My modern Ryzen system from last year works out of the box with full AMD GPU support.

      I have yet to find a situation where ollama doesn't work at all out of the box. It literally just turns on and goes. Maybe slow, maybe without a GPU, but by god you'll have an LLM running.
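
      To give a sense of how little is involved once it's running, here's a minimal sketch of querying a local ollama server over its HTTP API (assuming the default port 11434; the model name is a placeholder for whatever you've pulled):

        import json
        import urllib.request

        # Ask a locally hosted model a question via ollama's /api/generate endpoint.
        payload = {
            "model": "llama3.1",   # placeholder; substitute any model you've pulled
            "prompt": "Summarize the difference between capex and opex in two sentences.",
            "stream": False,       # return one JSON object instead of a token stream
        }
        req = urllib.request.Request(
            "http://localhost:11434/api/generate",
            data=json.dumps(payload).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        with urllib.request.urlopen(req) as resp:
            print(json.loads(resp.read())["response"])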

  • pvab3 8 hours ago

    Training is the expensive part here. It seems much more likely that the training of these models slows down drastically and is written off as a sunk cost, a few companies continue running inference on years-old models, and the free versions go away.

    • iLoveOncall 7 hours ago

      This is addressed in the very first sentence of the article that you obviously didn't read.

      • pvab3 6 hours ago

        No it's not. It never makes any distinction between training and inference. It just lumps it all together as "running" the models.

      • gingersnap 7 hours ago

        But he's not wrong. Training + inference on free customers is the black hole here.

      • crazygringo 5 hours ago

        Except it's not. And the footnote that might be expected to clarify turns out to be a joke footnote.

  • nurumaik 4 hours ago

    > And there seems to be no realistic path to profitability

    Ads, obviously

  • emsign 7 hours ago

    Finally buy a new PC.

    • ta9000 7 hours ago

      This made me smile and sigh at the same time. Hoping prices improve soon.

  • rightbyte 5 hours ago

    I think there is too much focus on the article and too little on the fact that the host is some sort of DIY solar-powered server.

  • keernan an hour ago

    Go back to asking people for help deciphering the Bash Manual

  • hulitu 4 hours ago

    Nothing. AI hasn't changed anything.

    • sph 2 hours ago

      Before AI: open Emacs, write code.

      After AI: open Emacs, write code.

  • notenlish 6 hours ago

    Buy RAM

  • thatguy0900 8 hours ago

    Try to buy some RAM and cheap used computer parts, hopefully

    • cornhole 8 hours ago

      I can’t wait to put 256GB of RAM in every system I possibly can

  • cyanydeez 4 hours ago

    At work, I had them purchase 2x 48GB last-gen A6000s.

    For the valuable kick-start use case it pays off. It can't do all the magic bootstraps, but for baseline technical questions it's perfect. Will put in a RAG search eventually.

    I'm not optimistic any use case will come to substantiate today's valuations. But the intertwined fascist businesses are going to stunt a lot of people trying to chain their product to 3rd parties.

  • partomniscient 6 hours ago

    I've never used AI except for messing around with Stable Diffusion in its early days (my then-current graphics card didn't have enough RAM to run it); I played with it a bit after an upgrade and that was it.

    Never used an LLM or anything explicitly.

    Got annoyed when I had to deal with AI chatbots as front-line customer service - although that only happened once or twice in the last couple of months.

    So basically, keep doing what I'm doing.

    I like AI for specifically targeted applications, e.g. 100,000+ AI "eyeballs" vs. a few hundred for diagnostic imaging, working out whether there's something to worry about or not. I hate the idea of generalised AI, LLMs, etc.

    Lowering the bar to enable 'creative output' from non-creative individuals just fucks up the world, because natural talent is replaced by unnatural talent, especially in (late) capitalism, where money is worth more than human experience to those few control-freak managers.

    I'm old. I even earnt enough to buy a house with a lawn over 4 years ago during my (pre-AI) career as a software developer. Get off my damn lawn.

  • freejazz 3 hours ago

    Not care, as I do not use it at all.