The AI Investment Boom

(apricitas.io)

125 points | by m-hodges 11 hours ago

154 comments

  • hn_throwaway_99 9 hours ago

    Reading this makes me willing to bet that this capital intensive investment boom will be similar to other enormous capital investment booms in US history, such as the laying of the railroads in the 1800s, the proliferation of car companies in the early 1900s, and the telecom fiber boom in the late 1900s. In all of these cases there was an enormous infrastructure (over) build out, followed by a crash where nearly all the companies in the industry ended up in bankruptcy, but then that original infrastructure build out had huge benefits for the economy and society as that infrastructure was "soaked up" in the subsequent years. E.g. think of all the telecom investment and subsequent bankruptcies in the late 90s/early 00s, but then all that dark fiber that was laid was eventually lit up and allowed for the explosion of high quality multimedia growth (e.g. Netflix and the like).

    I think that will happen here. I think your average investor who's currently paying for all these advanced chips, data centers and energy supplies will walk away sorely disappointed, but this investment will yield huge dividends down the road. Heck, I think the energy investment alone will end up accelerating the switch away from fossil fuels, despite AI often being portrayed as a giant climate warming energy hog (which I'm not really disputing, but now that renewables are the cheapest form of energy, I believe this huge, well-funded demand will accelerate the growth of non-carbon energy sources).

    • aurareturn 9 hours ago

      I'm sure you are right. At some point, the bubble will crash.

      The question is when the bubble will crash. We could be in the 1995 equivalent of the dotcom boom rather than 1999. If so, we have 4 more years of high growth, and even after the crash, the market will still be much bigger in 2029 than in 2024. Cisco was still 4x bigger in 2001 than in 1995.

      One thing that is slightly different from past bubbles is that the more compute you have, the smarter and more capable the AI becomes.

      One gauge I use to determine if we are still at the beginning of the boom is this: Does Slack sell an LLM chatbot solution that is able to give me reliable answers to business/technical decisions made over the last 2 years in chat? We don't have this yet - most likely because it's still too expensive to do this much inference with such a large context window. We still need a lot more compute and better models.

      Because of the above, I'm in the camp that believes we are actually closer to the beginning of the bubble than to the end.

      Another thing I would watch closely for signs of the bubble popping is whether LLM scaling laws are quickly breaking down, such that more compute no longer yields more intelligence in an economical way. If so, I think the bubble would pop. All eyes are on GPT5-class models for signs.

      • vladgur 9 hours ago

        Re: Slack chat:

        Glean.com does it for the enterprise I work at: it consumes all of our knowledge sources - Slack, Google Docs, wikis, source code - and provides answers to complex, specific questions in a way that's downright magical.

        I was converted into a believer when I described an issue to it, gave it pointers to a source file in our online git repo, and it pointed me to another repository, which my team did not own, that controlled DNS configs we were not aware of. These configs were the reason our code did not behave as we expected.

        • _huayra_ 9 hours ago

          This is the main "killer feature" I've personally experienced from GPT things: a much better contextual "search engine-ish" tool for combing through and correlating different internal data sources (slack, wiki, jira, github branches, etc).

          AI code assistants have been a net neutral for me (they get enough idioms in C++ slightly incorrect that I have to spend a lot of time just reading the generated code thoroughly), but being able to say "tell me what the timeline for feature X is" and have it comb through a bunch of internal docs / tickets / git commit messages, etc, and give me a coherent answer with links is amazing.

          • aaronblohowiak 5 hours ago

            >they get enough idioms in C++ slightly incorrect

            this is part of why I stay in python when doing ai-assisted programming; there's so much training information out there for python, and I _generally_ don't care if it's slightly off-idiom - it's still probably fine.

          • aurareturn 9 hours ago

            This is partly why I believe the OS makers - Apple, Microsoft, Google - have a huge advantage in the future when it comes to LLMs.

            They control the OS so they can combine and feed all your digital information to an LLM in a seamless way. However, in the very long term, I think their advantage will go away because at some point, LLMs could get so good that you don't need an OS like iOS anymore. An LLM could simply become standalone - and function without a traditional OS.

            Therefore, I think the advantage for iOS, Android, and Windows will increase in the next few years, but diminish after that.

            • thwarted 3 hours ago

              An LLM is an application that runs on an operating system like any other application. That the vendor of the operating system has tied it to the operating system is purely a marketing/force-it-onto-your-device/force-it-in-front-of-your-face play. It's forced bundling, just like Microsoft did with Internet Explorer 20 years ago.

              • aurareturn 3 hours ago

                I predict that OpenAI will try to circumvent iOS and Android by making their own device. I think it will be similar to Rabbit R1, but not a scam, and a lot more capable.

                They recently hired Jony Ive on a project - it could be this.

                I think it'll be a long term goal - maybe in 3-4 years, a device similar to the Rabbit R1 would be viable. It's far too early right now.

        • aurareturn 9 hours ago

          Thanks. I didn't know that existed. But does it scale? Would it still work for large companies with many millions of Slack messages?

          I suppose one reason Slack doesn't have a solution yet is that they're having a hard time getting it to work for large companies.

          • hn_throwaway_99 9 hours ago

            Yeah, Glean does this and there are a bunch of other competitors that do it as well.

            I think you may be confused about the length of the context window. These tools don't pull all of your Slack history into the context window. They use a RAG approach to index all of your content into a vector DB, then when you make a query only the relevant document snippets are pulled into the context window. It's similar for example to how Cursor implements repository-wide AI queries.
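
            Roughly, the pattern looks like this - a minimal sketch in Python with a toy stand-in for the embedding model (this is the generic RAG shape, not Glean's or Cursor's actual pipeline):

              # Toy RAG sketch: embed snippets once at index time, then pull only
              # the top-k most similar ones into the prompt at query time.
              # embed() is a placeholder so the sketch runs standalone - its
              # similarity scores are meaningless; a real embedding model is what
              # makes retrieval semantic.
              import numpy as np

              def embed(text: str) -> np.ndarray:
                  rng = np.random.default_rng(abs(hash(text)) % (2**32))
                  v = rng.normal(size=128)
                  return v / np.linalg.norm(v)  # unit-norm: dot product = cosine

              # Index time: embed every snippet into a matrix (the "vector DB").
              snippets = [
                  "DNS configs for service X live in the infra-dns repo.",
                  "Q3 decision: billing moved to the new API.",
                  "Lunch is at noon on Fridays.",
              ]
              index = np.stack([embed(s) for s in snippets])

              # Query time: only the top-k snippets go into the context window.
              def retrieve(query: str, k: int = 2) -> list[str]:
                  scores = index @ embed(query)
                  return [snippets[i] for i in np.argsort(scores)[::-1][:k]]

              question = "Where are the DNS configs?"
              context = "\n".join(retrieve(question))
              prompt = f"Answer from this context:\n{context}\n\nQ: {question}"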

            • aurareturn 9 hours ago

              I'm aware that one can't feed millions of messages into an LLM all at once. The only way to do this now is a RAG approach. But RAG has pros and cons and can miss crucial information. I think the context window still matters a lot: the bigger the window, the more information you can feed in, and the quality of answers should increase.

              The point I'm trying to make is that increasing the context window will require more compute. Hence, we could still be just at the beginning of the compute/AI boom.

              • reissbaker 3 hours ago

                We might be even earlier — the 90s was a famous boom with a fast bust, but to me this feels closer to the dawn of the personal computer in the late 70s and early 80s: we can automate things now that were impossible to automate before. We might have a long time before seeing diminishing returns.

        • mvdtnz 3 hours ago

          My workplace uses Glean, and since it was connected to Slack it has become significantly worse. It routinely gives incorrect or VERY incomplete information, misattributes work to developers who may have casually mentioned a project at some point, and worst of all presents jokes or sarcastic responses as fact.

          Not only is it an extremely poor source of information, it has ruined the company's Slack culture as people are no longer willing to (for lack of a better term) shitpost knowing that their goofy sarcasm will now be presented to Glean users as fact.

          • dcsan 2 hours ago

            Maybe have some shitposting channels that are off limits to Glean?

      • HarHarVeryFunny 8 hours ago

        > the more compute you have, the smarter and more capable the AI becomes

        Well, this is taken on faith by OpenAI/etc, but obviously the curve has to flatten at some point, and appears to already be doing so. OpenAI are now experimenting with scaling inference-time compute (GPT-O1), but have said that it takes exponential increases in compute to produce linear gains in performance, so it remains to be seen if customers find this a worthwhile value.

        • aurareturn 8 hours ago

          GPT-o1 does demonstrate my point: the more compute you have, the smarter the AI.

          If you run chain of thought on an 8B model, it becomes a lot smarter too.

          GPT-o1 isn't GPT-5 though. I think OpenAI will have a chain-of-thought model for GPT5-class models as well. They're separate from normal models.

          • HarHarVeryFunny 8 hours ago

            There is only so much that an approach like O1 can do, but anyway, in terms of AI boom/bust the relevant question is whether this is a viable product. All sorts of consumer products could be improved by making them a lot more expensive, but there are cost/benefit limits to everything.

            GPT-5 and Claude-4 will be interesting, assuming these are both pure transformer models (not COT), as they will be a measure of how much benefit remains to be had from training-set scaling. I'd expect gains will be more against narrow benchmarks than in the overall feel of intelligence (LLM arena score?) one gets from the model.

            • aurareturn 2 hours ago

              I think OpenAI has already proven that it's a viable product. Their gross margins must be decent. I doubt they're losing money on every token of inference.

              • HarHarVeryFunny 2 hours ago

                I don't think they've broken out O1 revenue, but it must be very small at the moment since it was only just introduced. Their O1-preview pricing doesn't seem to reflect the exponential compute cost, so perhaps it is not currently priced to be profitable. Overall, across all models and revenue streams, their revenue does exceed inference costs ($4B vs $2B), but they still are projected to lose $5B this year, $14B next year, and not make a profit until 2029 (and only then if they've increased revenue by 100x ...).

                Training costs are killing them, and it's obviously not sustainable to keep spending more on research and training than the revenue generated. Training costs are expected to keep growing fast, while revenue per token in/out is plummeting - they need massive inference volume to turn this into a profitable business, and need to pray that this doesn't turn into a commodity business where they are not the low cost producer.

                https://x.com/ayooshveda/status/1847352974831489321

                https://x.com/Gloraaa_/status/1847872986260341224

      • jackcosgrove 4 hours ago

        > One gauge I use to determine if we are still at the beginning of the boom is this

        Has your barber/hairdresser recommended you buy NVDA?

        • arach 3 hours ago

          There was an NVDA earnings watch party in NY this summer and Jensen signed some boobs earlier this year. There are some signs but still room to run

      • Terr_ 2 hours ago

        > Does Slack sell an LLM chatbot solution that is able to give me reliable answers to business/technical decisions made over the last 2 years in chat?

        Note that the presence of such a feature isn't the same as whether it's secure enough for normal use.

        In particular, anything anyone said in the last 2 years in chat could poison the LLM into exfiltrating your data or giving false results chosen by the attacker, because of the fundamental problems of LLMs.

        https://promptarmor.substack.com/p/data-exfiltration-from-sl...

      • tim333 3 hours ago

        You can never really tell, though some market tea-leaf readers I follow seem to think a few months from now, after a bit more of a run-up in the market. Here's one random datapoint, on mentions of "soft landing" in Bloomberg: https://x.com/bravosresearch/status/1848047330794385494

        • aurareturn 3 hours ago

          I read through a few pages of tweets from this author and it looks just like another perpetual doomsday pundit akin to Zerohedge.

          • tim333 2 hours ago

            Well yeah there may be a bit of that. I find them quite interesting for the data they bring up like the linked tweet but I don't really have an opinion as to whether they are any good at predicting things.

            I was thinking, re the data in the tweet: there were a lot of mentions of "soft landing" before the dot-com crash, before the 2006 property crash, and now. It's quite likely an easy-money policy preceded each of them. Government policy mostly focuses on consumer price inflation and unemployment, so they relax when those are both low, then hit the brakes when inflation goes up; then it moderates and things look good, similar to now. But that ignores that easy money can also inflate asset prices - dot-com stocks, houses in '06, or money-losing AI companies now. And then at some point that ends and the speculative asset prices go down rather than up, leaving people thinking: oh dear, we've borrowed to put loads of money into that dotcom/house/AI thing, and now it's not worth much and we still have the debts...

            At least that's my guess.

    • ben_w 5 hours ago

      > but now that renewables are the cheapest form of energy, I believe this huge, well-funded demand will accelerate the growth of non-carbon energy sources

      I think the renewables would have been built at the same rate anyway precisely because they're so cheap; but nuclear power, being expensive, would not be built if this bubble had not happened, and somehow nuclear does seem to be getting some of this money.

      • synergy20 5 hours ago

        Based on my reading, isn't nuclear power much cheaper overall compared to wind, solar, etc.?

        • ViewTrick1002 4 hours ago

          Not at all. Old, paid-off nuclear plants are competitive, but new builds are insanely expensive, leading to $140-220/MWh prices for the ratepayers before factoring in grid stability and transmission costs.[1]

          The US has zero commercial reactors under construction and this is for one reason: economics.

          The recent announcements from the hyperscalers are PPAs: if the company building the reactor can provide power at the agreed price, the hyperscaler will take it off their hands. This creates a more stable financial environment for the developer to get funding.

          The hyperscalers are not investing anything of their own. For a recent example, NuScale, another SMR developer, essentially collapsed when its Utah deal fell through - when nice renders and PowerPoints met real-world costs and deadlines. [2]

          [1]: https://www.lazard.com/media/gjyffoqd/lazards-lcoeplus-june-...

          [2]: https://iceberg-research.com/2023/10/19/nuscale-power-smr-a-...

          • synergy20 14 minutes ago

            Thanks! I always thought it was due to people's safety concerns rather than economics. After all, nuclear plants are quite 'popular' in Europe, and in China too these days.

          • floren 3 hours ago

            > leading to $140-220/MWh prices for the ratepayers

            I'm on PG&E, I wish I could get my electricity for only $0.14/kWh

            • ViewTrick1002 3 hours ago

              That cost is excluding grid stability and transmission costs.

              From what I’ve understood PG&E’s largest problem is the massive payouts and infrastructure upgrades needed from the wildfires, not the cost of the electricity itself.

        • atomic128 5 hours ago

          Yes, that's right. See the recent discussion here:

          https://news.ycombinator.com/item?id=41860341

          Basically, nuclear fission is clean baseload power. Wind and solar are not baseload power sources. They don't really compete. See discussion here: https://news.ycombinator.com/item?id=41858892

          Furthermore, we're seeing interest (from Google and Amazon and Dow Chemical) in expensive but completely safe TRISO (HALEU) reactors (https://www.energy.gov/ne/articles/triso-particles-most-robu...). These companies want clean baseload power, with no risk of meltdown, and they're willing to pay for it. Here's what Amazon has chosen: https://x-energy.com/fuel/triso-x

          TRISO (HALEU) reactors use more than 1.5 times the natural uranium per unit of energy produced because the higher burnup is offset by higher enrichment inputs (see page 11 at https://fuelcycleoptions.inl.gov/SiteAssets/SitePages/Home/1...), and the fuel is even more expensive to manufacture, but they are completely safe. This is a technology from the 1960's but it's attractive now because so much money is chasing clean baseload nuclear fission for data centers.

          These "impossible to melt down" TRISO small modular nuclear fission reactors are what Elon Musk was talking about on the campaign trail last week, when he said:

            ELON MUSK: "The dangers of nuclear power are greatly
            overstated. You can make a nuclear reactor that is
            literally impossible to melt down even if you tried to
            melt it down. You could try to bomb the place, and it
            still wouldn't melt down. There should be no regulatory
            issues with that. There should be significant nuclear 
            reform."
          
          https://x.com/AutismCapital/status/1847452008502219111

          • ViewTrick1002 4 hours ago

            > Basically, nuclear fission is clean baseload power. Wind and solar are not baseload power sources. They don't really compete.

            This means you don't understand how the grid works. California's baseload is ~15 GW while it peaks at 50 GW.

            New-build nuclear power is wholly unsuitable for load-following duty due to the economics. It is an insane proposition even when running at 100% 24/7, and even worse when it has to adapt.

            Both nuclear power and renewables need storage, flexibility or other measures to match their inflexibility to the grid.

            See the recent study where it was found that nuclear power needs to come down 85% in cost to be competitive with renewables, due to both options requiring dispatchable power to meet the grid load.

            > The study finds that investments in flexibility in the electricity supply are needed in both systems due to the constant production pattern of nuclear and the variability of renewable energy sources. However, the scenario with high nuclear implementation is 1.2 billion EUR more expensive annually compared to a scenario only based on renewables, with all systems completely balancing supply and demand across all energy sectors in every hour. For nuclear power to be cost competitive with renewables an investment cost of 1.55 MEUR/MW must be achieved, which is substantially below any cost projection for nuclear power.

            https://www.sciencedirect.com/science/article/pii/S030626192...

            > These companies want clean baseload power, with no risk of meltdown, and they're willing to pay for it. Here's what Amazon has chosen

            The recent announcements from the hyperscalers are PPAs: if the company building the reactor can provide power at the agreed price, the hyperscaler will take it off their hands. This creates a more stable financial environment for the developer to get funding.

            The hyperscalers are not investing anything of their own. For a recent example, NuScale, another SMR developer, essentially collapsed when its Utah deal fell through - when nice renders and PowerPoints met real-world costs and deadlines.

            https://iceberg-research.com/2023/10/19/nuscale-power-smr-a-...

            > with no risk of meltdown

            Then we should be able to remove the enormous subsidy the Price Anderson act adds to the industry right? Let all new reactors buy insurance for a Fukushima level accident in the open market.

            Nuclear powerplants are currently insured for ~0.05% of the cost of a Fukushima style accident and pooled together the entire US industry covers less than 5%.

            https://en.wikipedia.org/wiki/Price%E2%80%93Anderson_Nuclear...

      • atomic128 5 hours ago

        I want to point out to anyone who's interested in the nuclear angle that even before the AI data center demand story arrived, the uranium market was facing a persistent undersupply for the first time in its many decades of history. As a result, the (long-term contract) price of uranium has been steadily rising for years: https://www.cameco.com/invest/markets/uranium-price

        After Fukushima (https://news.ycombinator.com/item?id=41768726), Japanese reactors were shut down and there was a glut of uranium available in the spot market. Simultaneously, Kazatomprom flooded the market with cheap ISR uranium. The price of uranium fell far below the cost of production and the mining companies were obliterated. The few miners that survived via their long-term contracts (primarily Cameco) put their less efficient mines into care and maintenance.

        Now we're seeing the uranium mining business wake up. But after a decade of bear-market conditions the miners cannot serve the demand: they've underinvested, they've lost skilled labor, they've shrunk. The rebound in uranium supply will be slow, much slower than the rebound in demand. This is because uranium mining is an extremely difficult process. Look at how long NexGen Energy's Rook 1 Arrow mine has taken to develop, and that's prime ore (https://s28.q4cdn.com/891672792/files/doc_downloads/2022/03/...). Look at Kazatomprom's slowing growth rate (https://world-nuclear-news.org/Articles/Kazatomprom-lowers-2...), look at the incredible complexity of Cameco's mining operations: https://www.petersenproducts.com/articles/an-inflatable-tunn...

        Here is a discussion of the uranium mining situation: https://news.ycombinator.com/item?id=41661768 (including a very risky method of profiting from the undersupply of uranium, stock ticker SRUUF, not recommended). Note that Numerco's uranium spot price was put behind a paywall last week. You can still get the intra-day spot uranium price for free here: https://www.yellowcakeplc.com/

        • ben_w 3 hours ago

          Uranium, at least the un-enriched kind you can just buy, was never the problem.

          Even the peak of that graph (136… er, USD per lb?) is essentially a rounding error compared to everything else.

          0.00191 USD/kWh? Something like that, depends on the type of reactor it goes in.
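
          If anyone wants to sanity-check that order of magnitude, here's a rough back-of-envelope in Python - all figures are generic light-water-reactor assumptions I'm supplying, not numbers from the thread:

            # Uranium fuel cost per kWh(e), back-of-envelope.
            PRICE_PER_LB_U3O8 = 136.0    # USD/lb, roughly the peak on that chart
            KG_PER_LB = 0.4536
            U_FRACTION = 0.848           # uranium mass fraction of U3O8
            FEED_RATIO = 9.0             # kg natural U per kg of ~4.5%-enriched fuel
            BURNUP_KWH_TH = 45_000 * 24  # 45 GWd/t burnup = 1,080,000 kWh(th)/kg fuel
            EFFICIENCY = 0.33            # thermal-to-electric conversion

            usd_per_kg_u = PRICE_PER_LB_U3O8 / KG_PER_LB / U_FRACTION  # ~$354/kg U
            kwh_e_per_kg_u = BURNUP_KWH_TH * EFFICIENCY / FEED_RATIO   # ~40,000 kWh(e)
            print(usd_per_kg_u / kwh_e_per_kg_u)  # ~0.009 USD/kWh at the peak price;
                                                  # ~0.002 at a ~$30/lb uranium price

          So a figure like 0.00191 corresponds to a lower uranium price, but even at the peak it's under a cent per kWh - a rounding error either way.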

          • atomic128 3 hours ago

            You are correct. This is one of the advantages of nuclear power.

            The fuel is a tiny fraction of the cost of running the plant. See discussion here, contrasting with natural gas: https://news.ycombinator.com/item?id=41858892

            It is also important that the fuel is physically small so you can (and typically, do) store years of fuel on-site at the reactor. Nuclear is "secure" in the sense that it can provide "energy security".

            • ben_w 2 hours ago

              It would only be an advantage if everything else in the power plant wasn't so expensive.

              And I'm saying that as someone who finds all this stuff cool and would like to see it used in international shipping.

              • atomic128 2 hours ago

                Discussed at length here: https://news.ycombinator.com/item?id=41863388

                I already linked this above, twice. I know it's a hassle to read, it's Sunday afternoon, so don't worry about it.

                It's not important whether you as an individual get this right or not, as long as society reaches the correct conclusion. Thankfully, we're seeing that happen, a worldwide shift toward the adoption of nuclear power.

                Have a pleasant evening!

    • fsndz an hour ago

      Building robust LLM-based applications is token-intensive. You often have to plan for the parsing and digestion of a lot of tokens for summarization or even retrieval-augmented generation. Even the mere generation of marketing blog posts consumes a lot of output tokens in most cases. Not to mention that robust cognitive architectures often rely on generating several samples for each prompt, custom retry logic, feedback loops, and reasoning tokens to achieve state-of-the-art performance - all of which are heavily token-intensive.

      Luckily, the cost of intelligence is quickly dropping. GPT-4o, one of OpenAI's most capable models, is now priced at $2.5 per million input tokens and $10 per million output tokens. When GPT-4 Turbo launched in November 2023, the cost was respectively $10/1M input tokens and $30/1M output tokens. That's a huge $7.5/1M input tokens and $20/1M output tokens reduction in price. https://www.lycee.ai/blog/drop-o1-preview-try-this-alternati...
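
      To make the multipliers concrete, here's a quick cost sketch - the per-token prices are the current ones quoted above; the context size, sample count, and retry rate are illustrative assumptions of mine:

        # Why "robust" LLM pipelines are token-hungry: RAG context, multiple
        # samples, and retries all multiply the cost of a single call.
        INPUT_PER_M = 2.50    # USD per 1M input tokens
        OUTPUT_PER_M = 10.00  # USD per 1M output tokens

        def request_cost(in_tok: int, out_tok: int) -> float:
            return in_tok / 1e6 * INPUT_PER_M + out_tok / 1e6 * OUTPUT_PER_M

        base = request_cost(in_tok=4_000, out_tok=800)  # 4k of context, 800 out
        per_answer = base * 3 * 1.5  # 3 samples per prompt, 1.5x average retries
        print(f"${per_answer:.3f} per answer")                   # ~$0.081
        print(f"${per_answer * 100_000:,.0f} per 100k answers")  # ~$8,100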

    • jacobgorm 4 hours ago

      Railroads and computer networks create network effects, I am not sure the same is true for data centers full of hardware that becomes outdated very quickly.

      • CSMastermind 4 hours ago

        If they're building new power plants to support all those data centers, then that power generation capacity might be put to good use doing something else.

    • candiddevmike 9 hours ago

      What will be the advantage of having a bunch of obsolete hardware? All I see is more e-waste.

      • aurareturn 9 hours ago

        >What will be the advantage of having a bunch of obsolete hardware? All I see is more e-waste.

        The energy build-out and data centers are not wasted. You can swap out A100 GPUs for B200 GPUs in the same datacenter. A100s will be 5 years old when Blackwell is out - which is just about right for how long datacenter chips are expected to last.

        I do, however, think that the industry will move to newer hardware faster to squeeze out as much efficiency as possible given the energy bottleneck. Therefore, I expect TSMC's N2 node to see huge demand. In fact, TSMC themselves have said designs for N2 far outnumber N3 at the same stage of the node - most likely because AI companies want to increase efficiency given the lack of electricity.

      • Mengkudulangsat 30 minutes ago

        All those future spare GPUs will make video game streaming dirt cheap.

        Even poor people can enjoy 8k gaming on a phone soon.

      • iTokio 9 hours ago

        Well, at least they are paving the way to more efficient hardware. GPUs are way, way more energy-efficient than CPUs, and parallel architectures are the only remaining way to scale compute.

        But yes, a lot of energy wasted in the growing phase.

        • dartos 9 hours ago

          GPUs are different than CPUs.

          They’re way more efficient at matmuls, but start throwing branching logic at them and they slow down a lot.

          Literally a percentage of their cores will no-op while others are executing a branch, since all cores in a warp run in lockstep.
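
          A toy simulation of the idea (pure Python, nothing like real GPU scheduling - just to show where the wasted cycles come from):

            # Toy SIMT model: a 32-thread "warp" hits a branch, and the warp
            # executes both sides in sequence, masking off (no-op'ing) the
            # threads that didn't take that side.
            WARP_SIZE = 32
            takes_if = [t % 2 == 0 for t in range(WARP_SIZE)]  # half go each way

            cycles = 0
            for active in (sum(takes_if), WARP_SIZE - sum(takes_if)):
                if active:       # the whole warp spends these cycles,
                    cycles += 2  # but only `active` threads do useful work

            print(cycles)  # 4 cycles where each thread needed only 2:
                           # one divergent branch halves utilization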

        • aurareturn 6 hours ago

          >But yes, a lot of energy wasted in the growing phase.

          Why exactly is energy wasted during this phase?

          Are you expecting hardware to become obsolete much faster? But that only depends on TSMC's node cadence, which is still 2-3 years. Therefore, AI hardware will still be bound to TSMC's cadence.

      • almost_usual 9 hours ago

        In the case of dark fiber the hardware was fine; wavelength-division multiplexing arrived and increased capacity by 100x in some cases, crashing demand for new fiber.

        I think OP is suggesting AI algorithms and training methods will improve, resulting in enormous performance gains with existing hardware, causing a similar surplus of infrastructure and crash in demand.

        • llamaimperative 9 hours ago

          How much of current venture spending is going into reusable R&D that can be carried forward in time, the way the physical infrastructure in those examples was?

      • goda90 9 hours ago

        Even if the hardware quickly becomes outdated, I'm not sure it'll become worthless so quickly. And there's also the infrastructure of the data center and new electricity generation to power them. Another thing that might survive a crash and carry on to help the future is all the code used to support valuable use cases.

      • CamperBob2 9 hours ago

        Do you expect better hardware to suddenly start appearing on the market, fully-formed from the brow of Zeus?

      • wslh 9 hours ago

        I don't think the parent was specifically referring to hardware alone. The 'rails' in this context are also the AI algorithms and the underlying software. New research and development could lead to breakthroughs that allow us to use significantly less hardware than we currently do. Just as the dot-com crash wasn’t solely about the physical infrastructure but also about protocols like HTTP, I believe the AI boom will involve advancements beyond just hardware. There may be short-term excess, but the long-term benefits, particularly on the software side, could be immense.

      • bee_rider 9 hours ago

        Maybe the bust will be so rough that TSMC will go out of business, and then these graphics cards will not go obsolete for quite a while.

        Like Intel and Samsung might make a handful of better chips or whatever, but neither of their business models really involve being TSMC. So if the bubble pop took out TSMC, there wouldn’t be a new TSMC for a while.

    • HarHarVeryFunny 9 hours ago

      Similarly, I like to compare AI (more specifically LLM) "investment" to the cost of building the Channel Tunnel between the UK and France. The original investors lost their shirts, but once built it is profitable to operate.

      • tim333 4 hours ago

        I was an original investor. I still have shirts and shares in it but they could have done better.

    • bwanab 9 hours ago

      While I agree with your essential conclusions, I don't think the automobile companies really fit. Many of the early 1900s companies (e.g. Ford, GM, Mercedes, even Chrysler) are still among the largest auto companies in the world.

      • throwaway20222 9 hours ago

        There were hundreds of failed automotive companies and parts suppliers though. I think the argument is that many will die, some will survive and take all (most)

        • aurareturn 8 hours ago

          But that happens in every bubble. Over investment, consolidation, huge winners in the end, and maybe eventually a single monopoly.

          • danielmarkbruce 4 hours ago

            There isn't a rule as to how it plays out. No huge winners in cars, no huge winners in rail. Lots of huge winners in internet.

            • aurareturn 2 hours ago

              There were huge winners in cars. Ford and GM have historically been huge companies. Then oil companies became the biggest companies in the world mostly due to cars.

      • cloud_hacker 9 hours ago

        > While I agree with your essential conclusions, I don't think the automobile companies really fit. Many of the early 1900s companies (e.g. Ford, GM, Mercedes, even Chrysler) are still among the largest auto companies in the world.

        American automotive companies filed for bankruptcy multiple times.

        The American government had to step in to back them up and bail them out.

      • nemo44x 9 hours ago

        That phase is called consolidation. It’s part of the cycle. The speculative over leveraged and mismanaged companies get merged into the winners or disappear if they have nothing of value.

      • grecy 8 hours ago

        A couple of them went bankrupt and got bailouts.

    • from-nibly 5 hours ago

      The problem is that all that malinvestment will get bailed out by us regular schmucks. Get ready for the hamster wheel to start spinning faster.

    • jiggawatts 4 hours ago

      Something people forget is that a training cluster with tens of thousands of GPUs is a general purpose supercomputer also! They can be used for all sorts of numerical modelling codes, not just AI. Protein folding, topology optimisation, route planning, satellite image processing, etc…

      We bought a lot of shovels. Even if we don’t find more gold, we can dig holes for industry elsewhere.

      • fhdsgbbcaA an hour ago

        I think there is an LLM bubble for sure, but I'm very bullish on the ease with which one can generate new specialized models for various tasks that are not LLMs.

        For example, there’s a ton of room for developing all kinds of low latency, highly reliable, embedded classifiers in a number of domains.

        It’s not as gee-whiz/sci-fi as an LLM demo, but I think potentially much bigger impact over time.

    • kjkjadksj 5 hours ago

      Not all of that infrastructure gets soaked up, plenty is abandoned. Look at the state of american passenger rail for example and how quickly the bottom of that industry dropped out. Many old rail right of ways sit abandoned today. Likewise with telecoms, e.g. the microwave network that also sits abandoned today.

  • m101 4 hours ago

    There is a comment on this thread about this being like the railroads, but this is nothing like the railroads except insofar as it costs a lot of money.

    The railroads have lasted decades and will remain relevant for many more decades. They slowly wear out, and they are the most efficient form of land transport.

    These hardware investments will all be written off in 6 years' time and won't be worth running given the power costs relative to their output. They will be junked.

    There's also the extra risk that for some reason future AI systems just don't run efficiently on current gen hardware.

    • tim333 3 hours ago

      Some stuff like the buildings and power supplies will probably remain good. But yeah, probably new chips in a short while.

      • jacurtis 2 hours ago

        Power plants and power infrastructure are probably an example of a positive consequence that comes from this.

        We have been terrified to whisper the words "nuclear power" for decades now, but the AI boom is likely to put enough demand on the power grid that it forces us to face this reality and make appropriate buildouts.

        Even if the AI Boom crashes, these power plants will have positive impacts on the country for decades, likely centuries to come. Keeping bountiful power available and likely low-cost.

        • WillyWonkaJr 2 hours ago

          It is so bizarre that reducing pollution was not a sufficient driver to build more nuclear power, but training AI models is.

  • uxhacker 9 hours ago

    It feels like there’s a big gap in this discussion. The focus is almost entirely on GPU and hardware investment, which is undeniably driving a lot of the current AI boom. But what’s missing is any mention of the software side and the significant VC investment going into AI-driven platforms, tools, and applications. This story could be more accurately titled ‘The GPU Investment Boom’ given how heavily it leans into the hardware conversation. Software investment deserves equal attention

    • _delirium 9 hours ago

      Is there a good estimate of the split? My impression from AI startups whose operations I know something about is that a majority of their VC raise is currently going back into paying for hardware directly or indirectly, even though they aren’t hardware startups per se, but I don’t have any solid numbers.

      • bbor 3 hours ago

        I think your analysis is spot on, based on my expertise of "reading too much Hacker News every day since March 2023". This Bain report[1] barely mentions "Independent Software Vendors", and when it does, it's clearly an afterthought; the only software-focused takes I can find[2] are extremely speculative, e.g.

          "For every dollar invested in hardware, we expect 8-to-20 times the amount to be spent on software... While the initial wave of AI investment lays the foundational infrastructure, the next wave is clearly set to capitalise on the burgeoning AI software market."
        
        I'm hoping someone more knowledgeable about capital markets can fill us in here, I'd be curious to see some hard numbers still! Maybe this is what a Bloomberg terminal does...?

        Regardless, I think this makes a lot of sense; there's no clear scientific consensus on the path forward for these models other than "keep going?", so building out preparatory infrastructure is seen as the clear, safe move. As the common refrain goes: "in a gold rush, sell shovels!"

        As a big believer in the upcoming cognitive era of software and society, I would only add a short bit onto the end of that saying: "...until the hydraulic mining cannons[3] come online."

        [1] https://www.bain.com/insights/ais-trillion-dollar-opportunit...

        [2] https://www.privatebankerinternational.com/comment/is-ai-sof...

        [3] https://en.wikipedia.org/wiki/Hydraulic_mining

    • aurareturn 9 hours ago

      I think GPUs and datacenters are to AI what fiber was to the dotcom boom.

      A lot of LLM-based software is uneconomical because we don't have enough compute and electricity for what it's trying to do.

      • bee_rider 9 hours ago

        The actual physical fiber was useful after the companies popped though.

        GPUs are different, unless things go very poorly, these GPUs should be pretty much obsolete after 10 years.

        The ecosystem for GPGPU software and the ability to design and manufacture new GPUs might be like fiber. But that is different because it doesn’t become a useful thing at rest, it only works while Nvidia (or some successor) is still running.

        I do think that ecosystem will stick around. Whatever the next thing after AI is, I bet Nvidia has a good enough stack at this point to pivot to it. They are the vendor for these high-throughput devices: CPU vendors will never keep up with their ability to just go wider, and coders are good enough nowadays to not need the crutch of lower latency that CPUs provide (well actually we just call frameworks written by cleverer people, but borrowing smarts is a form of cleverness).

        But we do need somebody to keep releasing new versions of CUDA.

        • aurareturn 9 hours ago

          But computer chips have always had limited useful lifetimes, because newer chips are simply faster and more efficient. The datacenter build-outs and the increase in electricity capacity will always be useful.

    • pfisherman 9 hours ago

      Also, they talk a lot about data centers and cloud compute, but do not mention chips for on-device inference.

      Given where mobile sits in the hierarchy of interfaces, that is where I would be placing my bets if I were a VC.

  • joshdavham 6 hours ago

    I'm curious how this will affect cloud costs for the rest of us. On the one hand, we may get some economies of scale, but on the other hand, cloud resources being used up by others may drive prices up. Does anyone have any guesses as to what will happen?

    • aurareturn 5 hours ago

      I doubt it will increase cost for traditional CPU-based clouds. Just take a look at Ampere's 192-core and AMD's 192-core CPUs. Their efficiency will continue to drive down traditional cloud $/perf.

      • joshdavham 2 hours ago

        > I doubt it will increase cost for traditional CPU-based clouds.

        Yeah, I think you're right about that. But what about GPUs? Will they benefit from economies of scale, or the opposite?

  • GolfPopper 5 hours ago

    I've yet to find an "AI" that doesn't seamlessly hallucinate, and I don't see how "AIs" that hallucinate will ever be useful outside niche applications.

    • zone411 3 hours ago

      Confabulations are decreasing with newer models. I tested confabulations based on provided documents (relevant for RAG) here: https://github.com/lechmazur/confabulations/. Note the significant difference between GPT-4 Turbo and GPT-4o.
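
      The general shape of such a test is simple - something like this sketch, where ask_llm() is a hypothetical stand-in for whatever model API you call (the idea, not necessarily the linked benchmark's exact harness):

        # Document-grounded confabulation check: ask about a fact the provided
        # document does NOT contain, and see whether the model declines.
        def ask_llm(prompt: str) -> str:
            # Hypothetical stand-in: swap in a real model client here.
            return "not stated"  # canned reply so the sketch runs standalone

        doc = "The deploy pipeline runs on Jenkins and pushes to staging at 2am."
        question = "What time does the pipeline push to production?"  # not in doc

        answer = ask_llm(
            f"Answer ONLY from this document, or say 'not stated'.\n"
            f"Document: {doc}\nQ: {question}"
        )
        confabulated = "not stated" not in answer.lower()  # True = made it up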

    • edanm 4 hours ago

      You don't really need to imagine this though - generative AI is already extremely useful in many non-niche applications.

    • Ekaros 5 hours ago

      I believe there is a lot of content creation where quality really does not matter. And hallucinations don't really matter either, unless they are legally actionable - that is, something like hate speech or libel.

      Churning out dozens of articles, social media posts, and maybe even videos - hallucinations really don't matter at scale. And such content is already generating enough views to make it a somewhat viable strategy.

      • xk_id 16 minutes ago

        It amazes me the level of nihilism needed to talk about this with casual indifference.

      • flashman an hour ago

        > quality really does not matter

        What an inspiring vision for the future of news and entertainment.

    • dragonwriter 5 hours ago

      Humans also confabulate (a better metaphor for AI errors than hallucination) when called on to respond without access to the ground truth, and most AI models have only a limited combination of access to ground truth and the ability to use that access for checking.

    • jacurtis 2 hours ago

      I've never met a human that doesn't "hallucinate" either (either intentionally or unintentionally). Humans either intentionally lie or will fill in gaps in their knowledge with assumptions or inaccurate information. Most human generated content on social media is inaccurate, to an even higher percentage than what ChatGPT gives me.

      I guess humans are worthless as well since they are notoriously unreliable. Or maybe it just means that artificial intelligence is more realistic than we want to admit, since it mimics humans exactly as we are, deficiencies and all.

      This is kind of like the self-driving car debate. We don't want to allow self-driving cars until we can guarantee that they have a zero percent failure rate.

      Meanwhile we continue to rely on human drivers which leads to 50,000 deaths per year in America alone, all because we refuse to accept a failure rate of even one accident from a self-driving car.

      • tim333 an hour ago

        It's not quite the case with cars though - people are OK with Waymos, which are not zero-accident but probably safer than human drivers. The trouble with other systems like Tesla FSD is that they are probably not safer than a human yet if you don't have a human there nannying them.

        Similarly I think people will be ok with other AI if it performs well.

  • wslh an hour ago

    I wonder what lessons the current hardware-intensive AI boom could learn from the history of proof-of-work (PoW) mining, particularly regarding energy consumption, hardware specialization, and market dynamics.

  • jimmySixDOF 9 hours ago

    I remember listening to Dr Robert Martin, who was leading Bell Labs in the late 90s, talking about how bandwidth capacity was pushing toward infinity while cost per bit was pushing toward zero - and we all know how that ended for the optical capacity builders of that era once the bubble popped. Is there a case for demand for intelligence being inexhaustible? Is there a case, as Sama says, for the cost of intelligence as an input to a system converging with the price of the electricity needed to power the GPU behind it in the data center? Yes and yes. Still, the same could be said for a bit of bandwidth.

    • torginus 3 hours ago

      I am very skeptical of the positive effects of infinite intelligence on the living standards of knowledge workers.

      On the more pessimistic end, AI will replace us and we'll be sent to the coal mines.

      On the possibly most optimistic end, living standards are a composite of many things rooted in reality, so I'd say the actual cap is about a doubling of life quality - which is not nothing, but not unprecedented if we look at the past century and a half.

    • dangerwill 4 hours ago

      Generated text != intelligence.

  • andxor 2 hours ago

    I don't take financial advice from HN and it has served me well.

  • openrisk 4 hours ago

    There is an interesting contrast between the phenomenal investment boom versus functionally zero job growth...

    > Even the software publishers and computing infrastructure industries at the forefront of this AI boom have seen functionally zero net employment growth over the last year - the dismal job market that has beleaguered recent computer science graduates simply has not improved much.

    ... which may explain, in broad brush, the polarised HN attitude: bitter cynics on one side and aggressive zealots on the other.

  • apwell23 9 hours ago

    > AI products are used ubiquitously to generate code, text, and images, analyze data, automate tasks, enhance online platforms, and much, much, much more—with usage expected only to increase going forward.

    Why does every hype article start with this? Personally, my Copilot usage has gone down while coding. I tried and tried, but it always gets lost and starts spitting out subtle bugs that take me more time to debug than if I had written the code myself.

    I always have this feeling of 'this might fail in production in unknown ways' because I might not have checked the code thoroughly enough. I know I am not the only one; my coworkers and friends have expressed similar feelings.

    I even tried the new 'chain of thought' model, which for some reason seems to be even worse.

    • bugbuddy 9 hours ago

      This just reminded me that I'd forgotten I have a Copilot subscription. It has not made any useful code suggestions in months, to the point of fading from my memory. I just logged in to cancel it. Now I need to check my other subscriptions to see which I can cancel or reduce to a lower tier.

    • sksxihve 9 hours ago

      Because they all use AI to write the articles.

      • Ekaros 6 hours ago

        There is a market for AI, and it is exactly these articles, and maybe the pictures attached to them. Soon it could be some videos as well. But how far it goes beyond that is a very good question.

      • __MatrixMan__ 5 hours ago

        AI trained on a web that's primarily about selling things

    • whiplash451 4 hours ago

      My experience is similar. I used Claude for a coding task recently and it drove me into an infinite number of rabbit holes, each one seeming worse than the previous. All the while it was unable to stop and say: I'm sorry, I actually don't know how to help you.

    • osigurdson 9 hours ago

      My feeling is (current) AI is more of a teacher than an implementor. It really does help when learning about something new or to give you ideas about directions to take. The actual code however still needs to be written by humans for the most part it seems.

      AI is a great tool and does speed things up massively, it just doesn't align with the magical thought that we provide the ideas and AI does all of the grunt work. In general, always better to form mental models about things based on actual evidence as opposed to fantasy (and there is a lot of fantasy involved at the moment). This doesn't mean being pessimistic about potential future advancements however. It is just very hard to predict what the shape of those improvements will be.

    • falcor84 9 hours ago

      From my experience, it is getting better over time, and I believe that there's still a lot of relatively low hanging fruit, particularly in terms of integrating the LLM with the language server protocol and other tooling. But having said that, at this point in time, it's just not good enough for independent work, so I would suggest using it only as you would pair-program with a mid-level human developer who doesn't have much context on the project, and has a short attention span. In particular, I generally only have the AI help me with one function/refactoring at a time, and in a way that is easy for me to test as we go, and am finding immense value.

      • dangerwill 5 hours ago

        I think some of the consternation we see from the anti-LLM crowd (of which I'm one) is this line of reasoning. These LLMs produce fine code when the code you are asking for is in their training set. So they can be better than a mid-level dev, and much faster, in narrow, known contexts. But with no feedback to warn you, if you ask for code they have little or no data on, they are much worse than a rubber duck.

        That, and tech's status inflation means that when we talk about "mid-level" engineers, we are really talking about engineers with a couple of years of experience who have just graduated to the training-wheels phase of producing production code. LLMs are still broadly aimed at removing the need for what I would just call junior engineers.

        • whiplash451 4 hours ago

          That and the fact that code does not live in a standalone bubble, but in a complex setup of OSes, APIs, middleware and other languages. My experience trying to use Claude to help me with that was disappointing.

    • righthand 9 hours ago

      I see the same results with my TabNine + template generator + language server setup as I do with things like Copilot. I get TabNine issues when the code base isn't huge. I also think tossing away language servers and template generators for just an LLM will lead to chasing the "proper predictive path". Most of the time the LLM will spit out the create-express/react template for you; when you ask it to customize, it will guess using the most common patterns. Do you need something to guess for you?

      It’s also getting worse because people are poisoning the well.

    • badgersnake 9 hours ago

      It’s kinda true though. They are increasingly used for those things. Sure, the results are terrible and doing it without AI almost always yields better results but that doesn’t seem to stop people.

      Look at this nonsense for example: https://intouch.family/en

      • anon7725 9 hours ago

        That’s one of the saddest bits of AI enshittification yet.

      • 123yawaworht456 5 hours ago

        holy shit, if that isn't satire... wow, just fucking wow.

    • drowsspa 9 hours ago

      Yeah, it's actually frustrating that even when writing Go code, which is statically typed, it keeps messing up the argument order. That would seem to me a pretty easy thing to generate.

      Although it's much better when writing standard REST and gRPC APIs

    • bongodongobob 9 hours ago

      Well, I have the exact opposite experience. I don't know why people struggle to get good results with LLMs.

      • amonith 9 hours ago

        Seriously though, what are you doing? Every single example throughout the internet that tries to show how good AI is at programming uses mind-bogglingly simplistic examples, and it's getting annoying. It sure is a great learning tool when you're trying to do something experimental in a new stack or a completely new project, I'll give you that. But once you reach the skill level where someone would hire you to be an X developer (which most developers disagreeing with you are: mid+ developers of some stack X), the thing becomes a barely useful autocomplete. Maybe that's the problem? It's just not a tool for professional developers?

        • FeepingCreature 5 hours ago

          I mean, let me just throw in an example here: I am currently working on https://guesspage.github.io , which is basically https://getguesstimate.com but for flowtext instead of a spreadsheet. The site is ... 99.9% Claude Sonnet written. I have literally only been debugging and speccing.

          Sonnet can absolutely get very confused and break things. And there were tasks where I had a really hard time getting it to do the right thing, or understand what I wanted. But I need you to understand: Sonnet made this thing for me in two and a half days of part-time prompting. That is probably ten times faster than it would have taken me on my own, especially as I have absolutely no design ability.

          Now, is this a big project? No, it's like 2kloc. But I don't think you can call it "simple" exactly. It's potentially useful technology. This sort of "just make this small tool exist for me" is where I see most of the value for AI in the next year. And the definition of "small tool" can stretch surprisingly far.

          • hnthrowaway6543 4 hours ago

            This is a simple project. Nobody is disputing that GenAI can automate a large chunk of the initial setup work, which dominates the time spent on small projects like this. But 99.999% of professional, paid software development is not working on the basic React infrastructure for a 2,000 loc javascript app.

            Also your Google Drive API key is easily discoverable with about 15 seconds of looking at the JS source code -- this is something a professional software developer would (hopefully) have picked up without you asking, but an LLM isn't going to tell you that you shouldn't ship the `const API_KEY = ...` code as a file to the client, because you didn't ask.

            • FeepingCreature 4 hours ago

              > This is a simple project.

              I mean, it would have taken me a lot longer on my own. Sure it's not a huge project, I agree; I wouldn't call it entirely trivial.

              > Also your Google Drive API key is easily discoverable with about 15 seconds of looking at the JS source code

              No, I'm aware of that. That's deliberate. There's no way to avoid it for a serverless webapp. (Note that Guesspage is entirely hosted on Github Pages.) All the data stored is public anyways, the key is limited to only have permission to access the stored data, and you still have to log in and grab a token that is only stored in your browser and cannot be accessed from other sites. Literally the only unique thing you can do with it is trigger a login request on your own site that looks like it comes from Guesspage; and you can do that just as easily by creating a new API key and setting its name to "Guesspage".

              The AI actually told me that was unsafe, and I corrected it. To the best of my understanding, the only thing that you can do with the API key is do Google Drive uploads to your own drive or that of someone who lets you that look to Google as if my app is triggering them. If there's a danger that can arise from that, and I don't think there is, then it's on me, not on Sonnet.

              (It's also referer domain limited, but that's worthless. If only there was a way to cryptographically sign a referer...)

        • Viliam1234 8 hours ago

          I am happy with the LLMs, but I have only tried them on small projects done in my free time.

          As a back-end developer I am not familiar with the latest trends in JavaScript and CSS, and frankly I do not want to spend my time studying them. An LLM can generate an interactive web game based on my description. I review the code; it is usually okay, and sometimes I suggest an improvement. I could have done all of that - but it would take me a week, and the LLM does it in seconds. So it is the difference between a hobby project done or not done.

          I also tried an LLM at work - not to code, but to explain some complex topics that were new to me. Once it provided a great high-level description that was very useful. And once it provided a great explanation... which was a total lie, as I found out when I tried to do a hello-world example. I still think the 50% success rate is great, as long as you can quickly verify the output.

          In short, we need to know the strengths and the weaknesses, and use LLMs accordingly. Too much trust will get you burned. But properly used, they can save a lot of time.

      • hnthrowaway6543 9 hours ago

        LLMs are great for simple, common tasks, i.e. CRUD apps, RESTful web endpoints, unit tests, for which there's an enormous amount of examples and not much unique complexity. There's a lot of developers whose day mostly involves these repetitive, simple tasks. There's also a lot of developers who work on things that are a lot more niche and complicated, where LLMs don't provide much help.

        • danenania 9 hours ago

          In my experience this underrates them. They can do pretty complex tasks that go well beyond your examples if prompted correctly.

          The real limiting factor is not so much task complexity as the level of abstraction and indirection. If you have code that requires following a long chain of references to understand, LLMs will struggle to work with it.

          For similar reasons, they also struggle with:

          - generic types

          - inheritance hierarchies

          - long function call chains

          - dependency injection

          - deeply nested structures

          They're also bad at counting, which can be an issue when dealing with concurrency. For example: you started 5 operations concurrently at different points in your program and now need to block while waiting for 5 corresponding success or failure messages. Unless your code explicitly uses the number 5 somewhere, an LLM is often going to fail at counting the operations.
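
          A hypothetical TypeScript sketch of that failure mode (illustrative names throughout): the count never appears as a literal, so it has to be recovered from program structure.

            // Operations get started at different points in the program;
            // nothing in the code ever says "5" explicitly.
            const pending: Promise<string>[] = [];

            function startOperation(name: string): void {
              pending.push(
                fetch(`https://example.com/jobs/${name}`).then(
                  () => `${name}: ok`,
                  (err) => `${name}: failed (${err})`
                )
              );
            }

            startOperation("a");
            startOperation("b");
            // ...much later, possibly in another module...
            startOperation("c");
            startOperation("d");
            startOperation("e");

            // Block until every started operation has reported success or
            // failure, however many there turned out to be; no hardcoded count.
            const results = await Promise.all(pending);
            console.log(results);

          Collecting the pending operations in one place makes the count explicit; code that instead tallies completion messages by hand at several call sites is exactly where LLMs tend to lose track.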

          All in all, the main question, I think, in determining how well an LLM can do a task is whether the limiting factor is knowledge or abstraction. If it's knowledge (the intricacies of some arcane OS API, for example), an LLM can do very well with good prompting, even on quite large and complex tasks. If it's abstraction, it's likely to fail in all kinds of seemingly obvious ways.

          • layer8 an hour ago

            > If it's knowledge (the intricacies of some arcane OS API, for example), an LLM can do very well

            Only if that knowledge is sufficiently represented in the training data or on the web. If, on the other hand, it's knowledge that isn't well (or at all) represented, and instead requires experience or experimentation with the relevant system, LLMs don't do very well. I regularly fail when applying LLMs to tasks that turn out to require such “hidden” knowledge.

        • 101008 9 hours ago

          Yeah, exactly this. If I ask Cursor to write the serializer for a new Django model, it does it (although sometimes it invents fields that do not exist). It saves me 2 minutes.

          When I ask it to write a function that should do something much more complex, it usually does something so bad that it costs me more time: it confuses me, and I have to go back to my original reasoning (after trying to understand what it did).

          What I found useful is to ask it to explain what a function does in a new codebase I am exploring, although I have to be very careful because a lot of the time it invents or skips steps that are crucial.

          • dartos 9 hours ago

            See, I recently picked up the Ash framework for Elixir, and it does all that too, but in a declarative, precise language that codegens the implementation in a deterministic way.

            It just does the job that cursor does there, but better.

            Maybe us programmers should focus on making higher order programming tools instead of black box text generators for existing tools.

        • apwell23 9 hours ago

          > LLMs are great for simple, common tasks, i.e. CRUD apps, RESTful web endpoints

          I gave it a YAML file and asked it to generate a JSON call to a REST API. It missed a bunch of keys and made up a random new one. I threw out the whole thing and did it with awk/sed.

      • thuuuomas 9 hours ago

        Would you feel comfortable pushing generated code to production unaudited?

        • charrondev 9 hours ago

          For my part, I have a company subscription for Copilot and I just use the line-based autocomplete. It's mildly better than the built-in autocomplete. I never have it do more than that, though, and probably wouldn't buy a license for myself.

        • bongodongobob 9 hours ago

          Would you feel comfortable pushing human code to production unaudited?

          • dijksterhuis 9 hours ago

            depends on the human.

            but i would never push llm generated code. never.

            -

            edit to add some substance:

            if it’s someone who

            * does a lot of manual local testing

            * adds good unit / integration tests

            * writes clear and well documented PRs

            * knows the code style, and when to break it

            * tests themselves in a staging environment, independent of any QA team or reviews

            * monitors the changes after they’ve gone out

            * has repeatedly found things in their own PRs and asked to hold off release to fix them

            * is reviewing other people’s PRs and spotting things before they go out

            yea, sure, i’ll release the changes. they’re doing the auditing work for me.

            they clearly care about the software. and i’ve seen enough to trust them.

            and if they got it wrong, well, shit, they did everything good enough. i’m sure they’ll be on the ball when it comes to rolling it back and/or fixing it.

            an llm does not do those things. an llm *does not care about your software* and never will.

            i’ll take people who give a shit any day of the week.

            • amonith 9 hours ago

              I'd say it depends more on "the production" than the human. There are legal means to hold people accountable for their actions ("gross negligence" and all that), so you can basically always trust that people will fix what they messed up, given the chance. So if you can afford for production to be broken (e.g. the downtime will just annoy some people), you might as well allow your team to deploy straight to prod without audits. It's not that rare, actually.

          • candiddevmike 9 hours ago

            Only on Fridays before a three day weekend.

      • threeseed 6 hours ago

        I just asked Claude to generate some code using the SAP SuccessFactors API.

        Every single example was completely useless. The code wouldn't compile, it would invent methods and variables, and the accompanying instructions were incoherent. All whilst gaslighting me along the way.

        I have also previously tried using it with some Golang code and it would constantly add weird statements e.g. locking on non-concurrent operations.

        LLMs are great when you are doing the same things as everyone else. Step outside of that and it's far more trouble than it's worth.

  • SubiculumCode 9 hours ago

    Question: It seems like the requirement to become an investor in a startup is a multimillion-dollar pool of money...which is not what a salaried professional has at their disposal. Yet a lot of the 100x opportunities come before public offerings, making a whole class of investment unavailable to modest but knowledgeable professionals. I know that sometimes, through personal connections, pools of investment money are built across individuals, but as something available to the general public, it seems like, no, it is not.

  • shortrounddev2 9 hours ago

    I can't wait for the AI bubble to be over so HN can talk about something else

    • tim333 an hour ago

      Of the top 30 HN stories of the last month (https://hn.algolia.com/?dateRange=pastMonth&page=0&prefix=fa...)

      only 6 were AI, the highest being "OpenAI to become for-profit" coming at number 10. Top story was "Bop Spotter" followed by Starship and click to cancel.

    • nineteen999 6 hours ago

      You can always join the WordPress tantrum discussions if you're getting fatigued; that one's really been jumping around here lately.

      • kevindamm 3 hours ago

        I'm designing an extension of datalog instead.

    • throwaway314155 9 hours ago

      I will take AI bubble over crypto bubble any day of the week.

      • dangerwill 5 hours ago

        Yes, I agree wholeheartedly, and I actively dislike the concept of LLMs for anything real. But nothing could be worse than the flood of outright scams and attempts at rent-seeking/middleman creation that crypto was. At least LLMs have some potential use cases (and line-level autocomplete is genuinely better now because of LLMs).

      • Melting_Harps 9 hours ago

        > I will take AI bubble over crypto bubble any day of the week.

        I've been in the former since '21 and have seen every single cycle in the latter since 2011. I can assure you there is more dumb money in the former than in the latter, just by scale alone. At least in the latter, whether it was ICOs or NFTs or whatever, mal-investment was promptly punished (rugpulls/exit scams), while companies like Intel get to stay in zombie mode because of the corpo-welfare the US doles out, all while shaming everyone else to be prudent with their investments as these corps and banks spend like drunken sailors and try to strangle the latter out of existence (rightly so in most cases, as most crypto is a total scam).

        With that said, what you will see emerge are some firmly established players in both fields that will have the staying power to change how the industry is shaped around them: Nvidia and Bitcoin are comparable to one another in that respect.

        Both have had crazy volatility, but the staying power, and the fact that they remain firmly at the center of their respective industries, is rather telling; you simply don't see what these technologies offer because of the hype and the boom-and-bust cycles.

        As a person who directly benefits from this: I can assure you most of these VCs are exit liquidity, just as most foolish people were for the alt scams of yore, except the US economy (likely all of the Western world at this point) isn't entirely reliant on the promise of vapourware with 'crypto' in any capacity, whereas the same cannot be said about the theatrics of Jensen's Nvidia.

        Source: I build data center infrastructure for these mega corps doing 'AI', I'm doing an MSc in CS (Big Data), and I've been a Bitcoiner since Satoshi was still on BTF.

        • throwaway314155 9 hours ago

          You might be right, but the crypto people were/are basically religious in their responses. I see a few loons for LLM's talking about how AGI is near, and of course there's the EA/LessWrong people talking about doomsday. But none of them were as staunchly dug in _and_ misinformed as the crypto folks.

          edit: if it isn't clear I'm a staunch opponent of cryptocurrency in any form.

          • Melting_Harps 9 hours ago

            > edit: if it isn't clear I'm a staunch opponent of cryptocurrency in any form.

            It's very clear, but your exposure to zealots on both sides shouldn't deter you from being objective and seeing what these technologies actually offer. That's why I wrote that part in the second-to-last paragraph.

            HN has such misinformed vitriol for any technology it didn't ordain itself; you seem to be of that cohort. What's odd is that the very same people who gave you VC/SV-funded startup land are all major backers of this technology.

            I can summarize this in one phrase: you collectively seem not to know what you don't know, and you make leaps of logic and misinformed judgments from that POV.

            • throwaway314155 9 hours ago

              I have little interest in the ycombinator legacy of hustle culture, growth hacking, get-millions-for-glorified-todo-app, etc. As far as I can tell, it is effectively the underlying reason why crypto and AI get hyped up to the point where we can't have reasonable discussions about them in forums.

              I'm just here because it happens to be where like-minded people (_sometimes_) hang out.

              • Melting_Harps 8 hours ago

                > I have little interest in the ycombinator legacy of hustle culture, growth hacking, get-millions-for-glorified-todo-app, etc. As far as I can tell it is effectively the underlying reason for why crypto and AI get hyped up to the point where we can't have reasonable discussions about them in forums.

                Ohh, well... I'm guilty of drinking that kool-aid, unfortunately, and was a bootstrapping fintech founder with the battle scars to show for it -- those scars show in this discussion thus far, by the way.

                But I get it, and I'm trying to be as amicable about this as I can, especially because I know no one can deny us anymore on the BTC side at this point. And on the AI side... well, money-hype booms and AI-doomer pr0n cycles aside, we are actually building amazing amounts of compute that can hopefully yield amazing results -- something like Starship-recovery-system levels of advancement across many other industries/sectors one day. Probably very far off in the future, but that's the start, right?

                Every forest starts with a sapling, kind of thing, and these two technologies emerged from a time when I was in my formative development period, so I saw them as something more than just the marketing side -- AI is marketing; ML is really just stats with code to back it up, after all. Bitcoin is just a FOSS network using token-based cryptographic key encryption.

                Stop using throwaways if you want real conversations; it might help you have the conversations you're looking for. :)

          • Melting_Harps 9 hours ago

            zifpanachr23 said:

            > AI people sound more dug in to be honest from my perspective. But I guess that's cause the crypto stuff tends to be less overtly religious and more overtly batshit crazy politics and economics, which I'm much more used to dealing with haha. And mostly, everyone has figured out the scam by now on the crypto side.

            > The AI people freak me out cause they are all talking eschatology and shit as if they have stumbled upon the literal ark of the covenant like in raiders of the lost ark or something.

            > It's a really great act to be honest. They've been clearly studying a lot of the more dishonest American religious culture of the last couple of decades.

            I'm going to commit an HN faux pas to prove a point and show you why I think Bitcoin has a valid use case here alone: I decided to repost what he said because there are valid points here that are worth discussing.

            Had I the inclination, I could hash this into the blockchain for all to see what this poster wrote, for the aforementioned reasons, for as long as the mainchain continues to be maintained, protected, and supported.

            This has great utility. If you cannot get past 'those crazies offend me and my disposition' and stop there, you fail to see why, and what, this technology can already do -- create an actual immutable archive of all human history, if we desire it.
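
            Mechanically, "hashing it into the blockchain" means committing a digest of the text on-chain. A minimal sketch of just the digest step (assuming Node's built-in crypto; building the transaction that would carry the digest, e.g. in an OP_RETURN output, is out of scope here):

              import { createHash } from "node:crypto";

              // The text must be preserved byte-for-byte, or the digest won't match.
              const repostedComment = "AI people sound more dug in to be honest...";

              // 32-byte SHA-256 digest, small enough to embed in a transaction.
              const digest = createHash("sha256")
                .update(repostedComment, "utf8")
                .digest("hex");

              console.log(digest);

              // Anyone holding the original text can recompute the digest and
              // compare it to the on-chain copy: a match proves the text existed,
              // unaltered, no later than the block's timestamp.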

            But to his point: yes, its roots in crypto-anarchism (which started in CA at the inception of the rise of modern SV, btw) have many of you questioning the 'sanity' and 'motives' behind this technology, and you assume they are all the same. But rest assured, there is a reason for the brain drain from all of tech/STEM/finance during my era and time in Bitcoin.

            Most of them are now far wealthier than they ever were working in academia or private industry -- if you think money is a measure of one's success. I don't, but most of you do.

            The AI people strike me as spanning the same range: from the corpos that banking and academia introduced into Bitcoin (the Gavins, Hearn) to total con men like Ver, and sprinkled in there are the cult members you mentioned who honestly think their techno-utopian transhumanist dreams are being built one LLM update at a time. It's sad... it's the same thing, just different names/faces.

            • dangerwill 4 hours ago

              No one cares that blockchains are immutable; immutability doesn't mean the information written there is correct, just that a given piece of content was written on a given date. You could find proof of the biggest scandal of all time and post it on the blockchain so "the man" can't stop the word from getting out, but 99.99999999% of readers would read a version served by a simple web server, with the value cached in a closed database or in memory. If the government wants to take that down, it can. And if it does go down, no one will have saved a link to the blockchain entry. On the flip side, I could write obvious falsehoods in the same way. Blockchain provides no value for legal attestation or information distribution.

              In practice, the vast majority of blockchain ledgers record the history of scams, penny-stock-style trading, and money laundering attempts.

    • bugbuddy 9 hours ago

      I think it will burst when the Fed realizes inflation is not done and starts raising again in 6 months. They can only feed the bubble for so long before the common people have had enough of rising prices.

      • almost_usual 9 hours ago

        The Fed raising rates will increase inflation at this point (and further increase fiscal deficits); nothing stops that train.

        Arguably if the investment here works out we’ll see deflation through extreme technical advancements.

        • bugbuddy 9 hours ago

          No, raising rates would bring the economy to a slower pace and reduce private-sector consumer demand. Private-sector investment can continue to increase, but at some point that too will hit a brick wall. Public-sector spending depends on which type of big-ego people get to make decisions. Given the extreme excesses so far, it could go either way. The now-extinct fiscal conservatives might finally make a return, but don't hold your breath.

          • almost_usual 9 hours ago

            Raising rates works at the beginning of high, fiscal-deficit-driven inflation by slowing demand and bank lending.

            But raising interest rates and keeping them high in an environment where runaway government deficits and high government debts are causing inflation runs the risk of exacerbating inflation.

            You have high interest rates on a large amount of government debt which continues to push _more_ money into the economy.
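
            For scale (illustrative numbers, not figures from the thread): on roughly $35T of federal debt, a 5% average rate works out to about $35T × 0.05 ≈ $1.75T a year of interest income flowing to bondholders, versus about $1.05T at 3% -- that is the channel being described.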

            The Fed doesn’t have any real options at this point but to lower rates.

            • kibwen 3 hours ago

              > keeping them high

              At some point we need to address the elephant in the room and ask people specifically what they mean by "high" rates, because 5% isn't particularly high by historical terms, it's only high for people who never paid attention to interest rates before 2010.

            • bugbuddy 7 hours ago

              Public sector demand is a much smaller percentage of the overall economy. If raising rates did not slow the economy because of high government deficit spending, then we would certainly be living in a much different world: a command economy with the government running everything. That's not yet the world we live in.

              Another possibility is that the rate is still not high enough and needs to be raised much, much higher to stop inflation. I think rates need to be in the 6 to 7 percent range to really stop inflation. This is just a pause. It will come back with a vengeance.

              • bubbleRefuge 3 hours ago

                Ask Argentina about that. They finally started reducing rates, and it's working somewhat.

            • bubbleRefuge 3 hours ago

              Wow! Rare to see someone get it. MMT follower? I would add that the money printing is being distributed disproportionately to the wealthy under a Democrat regime. Pretty sad.

    • j_timberlake 5 hours ago

      People are going to be talking about AI for the rest of your life, but feel free to go join an Amish community or live in the woods, maybe get a job as a Firewatch.