37 comments

  • 0000000000100 2 hours ago

    Not egregious API spending, but ChatGPT Pro has been one of the best investments our company has paid for.

    It is fantastic at reasonable-scale ports and refactors, even with complicated subject matter like insurance. We have a project at work where Pro has saved us hours just trying to understand the overcomplicated system that is currently in place.

    For context, it’s a salvage project with a wonderful mix of Razor pages and a partial migration to Vue 2 / Vuetify.

    It’s best with logic, but it doesn’t do great with understanding the particulars of UI.

    • neuronic 2 hours ago

      How are you getting these results? Even with grounding in sources, careful context engineering, and whatever other technique comes to mind, we are just getting sloppy junk out of all the models we have tried.

      The sketchy part is that LLMs are super good at faking confidence and expertise while randomly injecting subtle but critical hallucinations. This ruins basically all significant output. Double-checking and babysitting the results is a huge time and energy sink. Human post-processing negates nearly all benefits.

      It's not like there is zero benefit to it, but I am genuinely curious how you get consistently correct output for a "complicated subject matter like insurance".

      • gamblor956 a minute ago

        A lot of programmers that say that LLMs are awesome tend to be inexperienced, not good programmers, or just gloss over the significant amount of extra work that using LLMs requires.

        Programmers tend to overestimate their knowledge of non-programming domains, so the OP is probably just not understanding that there are serious issues with the LLM's output for complicated subject matters like insurance.

      • bdangubic an hour ago

        I genuinely think the biggest issue with LLM tools is that most people expect magic, because first attempts at some simple things feel magical. However, they take an insane amount of time to develop expertise in. What is confusing is that SWEs generally spend immense amounts of time learning the tools of the trade, but this seems to escape a lot of people when it comes to LLMs. On my team, every developer is using LLMs all day, every day. On average, based on sprint retros, each developer spends no less than an hour each day experimenting/learning/reading… how to make them work. The realization we made early is that when it comes to LLMs there are two large groups:

        - the group that sees them as invaluable tools capable of being an immense productivity multiplier

        - the group that tried things here and there and gave up

        We collectively decided that we want to be in the first group, and we were willing to put in the time to get there.

        • lomase 35 minutes ago

          I have been in teams that do this and in teams that don't.

          I have not seen any tangible difference in the output of either.

      • cjbarber an hour ago

        What are you trying to use LLMs for and what model are you using?

      • oblio an hour ago

        > It's not like there is zero benefit to it, but I am genuinely curious how you get consistently correct output for a "complicated subject matter like insurance".

        Most likely by trying to get a promotion or bonus now and getting the hell out of Dodge before anyone notices those subtle landmines left behind :-)

        • fn-mote 19 minutes ago

          Cynical, but maybe not wrong. We are plenty familiar with ignoring technical debt and letting it pile up. Dodgy LLM code seems like more of that.

          Just like tech debt, there's a time for rushing. And if you're really getting good results from LLMs, that's fabulous.

          I don't have a final position on LLMs, but it has only been two days since I worked with a colleague who definitely had no idea how to proceed when they were off the "happy path" of LLM use, so I'm sure there are plenty of people getting left behind.

      • kace91 an hour ago

        Could you give an example of a prompt?

        • yeasku 14 minutes ago

          You are a top stackoverflow contributor with 20 years of experience in...

  • scuff3d an hour ago

    While it has its uses, I have yet to see a single use case, or combination of use cases, that warrants the insane spending. Not to mention the environmental damage and widespread theft and copyright infringement required to make it work.

    • duped 39 minutes ago

      The people funding this seem to believe that, firstly, text inference and gradient descent can synthesize a program that can operate on information tasks as well as or better than humans; secondly, that the only way of generating the configuration data for those programs is by powering vast farms of processors doing matrix arithmetic, fed by the world's most complex supply chain and tethered to a handful of geopolitically volatile places; thirdly, that those farms have power demands comparable to our biggest metropolises; and finally, that if they succeed, they'll have unlocked economic power amplification that hasn't been seen since James Watt figured out how to move water out of coal mines a bit quicker.

      Oh and the really fucky part is half of them just want to get a bit richer, but the other half seem to be in a cult that thinks AI's gross disruption of human economies and our environment is actually the most logically ethical thing to do.

  • rubyfan an hour ago

    Is it possible all this capital would be better deployed creating value through jobs that leverage human creativity?

    • colkassad an hour ago

      Meat-based LLMs trained for billions of years are underrated! Too bad they need healthcare (and sleep).

  • kingo55 2 hours ago

    I look forward to the cheap compute flooding the market when the music stops.

    • lomase 27 minutes ago

      People are still waiting for GPUs to be cheap after the blockchain bubble.

  • naveen99 an hour ago

    It’s not like anyone is going into debt to pay for GPUs, though. So it’s probably OK. Now if banks start selling 30-year mortgages for GPUs, I might get a little worried.

    • noosphr an hour ago

      People act like big tech didn't have a mountain of cash they didn't know what to do with. Each of the big players has around 100 billion that's just sitting there doing nothing.

      • prewett 5 minutes ago

        Well, Apple spends some of that pre-paying TSMC for their next node in exchange for exclusivity...

  • profsummergig 2 hours ago

    (1999) - "Spending on Amazon warehouses Is at Epic Levels. Will It Ever Pay Off?"

    • afavour 29 minutes ago

      (2000) - “Spending on Kozmo warehouses is at epic levels. Will it ever pay off?”

      I believe the relevant term here is “survivorship bias”.

    • simonw 2 hours ago

      Amazon weren't spending a single digit percentage of GDP on GPUs with a shelf life measured in just a few years though.

      • kanwisher 43 minutes ago

        but collectively there was a single-digit-percentage spend on things like fiber that ended up paying off for the public later

        • layoric 19 minutes ago

          The ongoing costs via power consumption are on a completely different scale.

    • SaberTail an hour ago

      I'd suggest a better analogy would be telecommunications fiber[1].

      [1] https://internethistory.org/wp-content/uploads/2020/01/OSA_B...

      • lomase 28 minutes ago

        It's not similar at all.

        Even the smallest and poorest countries in the world invested in their fiber networks.

        Only China and the US have money to create models.

        • ACCount37 5 minutes ago

          In that, it's closest to the semiconductor situation.

          Few companies and very few countries have the bleeding edge frontier capabilities. A few more have "good enough to be useful in some niches" capabilities. The rest of the world has to get and use what they make - or do without, which isn't a real option.

  • pizlonator an hour ago

    Someone should create a tracker that measures the number of bearish AI takes that make the front page of HN each day.

  • techblueberry 3 hours ago

    No

  • option an hour ago

    Yes

  • mwkaufma 2 hours ago

    It's almost tiresome to keep citing Betteridge's law of headlines, but editors at legacy publications keep it relevant. If there was any compelling evidence, they wouldn't have to phrase it as a hypothetical.

  • g42gregory 41 minutes ago

    Are we still getting AGI in 2026, per OpenAI?

    Based on AGI by 2026, they convinced the US government to block high-end GPU sales to China. They said they only needed 1-2 more years to hold them off; then AGI arrives and OpenAI/the US rules the world. Is this still the plan? /s

    If AGI does not materialize in 2026, I think there might be trouble, as China develops alternative GPUs and NVIDIA loses that market.

    • lomase 26 minutes ago

      Altman says that in a few years ChatGPT 8 will solve quantum physics.

      • RajT88 22 minutes ago

        I mean, I'll take it if it comes true.

        insert Rick & Morty "Show me what you got" gif here

  • hettygreen an hour ago

    Pay off for whom? After learning about "The Gospel" [0], does anyone else wonder whether spending on AI is actually just an arms race?

    [0] https://en.wikipedia.org/wiki/AI-assisted_targeting_in_the_G...