The AI boom is causing shortages everywhere else

(washingtonpost.com)

344 points | by 1vuio0pswjnm7 a day ago

562 comments

  • singingwolfboy 21 hours ago
    • mrjay42 20 hours ago

      This does not work for me.

      It's a captcha loop that never ends.

      • diogenes_atx 10 hours ago

        This may be a DNS issue. I had the same problem with NextDNS. After switching my DNS servers to Cloudflare DNS (1.1.1.1, 1.0.0.1), it works fine.

        • Zanfa an hour ago

          It’s not. They’ve blocked some countries entirely.

          • stevekemp 24 minutes ago

            Yeah here in Finland the archive site seems to come and go on a monthly basis.

        • LoganDark 4 hours ago

          It's always DNS!

      • tempodox 2 hours ago

        Anecdata: When it happens to me, disabling the VPN helps.

      • quinncom 11 hours ago

        When this happens to me, it's either because I'm connected to a VPN, or I'm using Cloudflare's public DNS server.

      • b00ty4breakfast 11 hours ago

        I was having this problem earlier with different URLs I was searching for but the link above actually worked for me.

      • esseph 12 hours ago

        You have some extension issues going on, maybe. Seems to work fine in FF and Chrome.

      • YeGoblynQueenne 9 hours ago

        I bet all the downvoting helped with that.

  • rossdavidh 14 hours ago

    "JPMorgan calculated last fall that the tech industry must collect an extra $650 billion in revenue every year — three times the annual revenue of AI chip giant Nvidia — to earn a reasonable investment return. That marker is probably even higher now because AI spending has increased."

    That pretty much tells you how this will end, right there.

    • onion2k 13 hours ago

      Nvidia invests $100bn in OpenAI, who buy $100bn of Nvidia chips, who invest the $100bn revenue in OpenAI, who buy $100bn in Nvidia chips, and round it goes. That's an easy $600bn increase in tech industry revenue right there.
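
      A toy sketch of that arithmetic in Python (hypothetical flows, not real accounting; the six round trips are just an assumption to reach the $600bn figure):

        cash = 100          # $bn that keeps moving back and forth
        booked_revenue = 0
        for _ in range(6):  # six round trips of the same money
            booked_revenue += cash  # each chip sale books $100bn of Nvidia revenue
        print(booked_revenue)       # -> 600 ($bn of "new" revenue from one $100bn)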

      • DavidPiper 4 hours ago

        The classic "Where's the $20 you owe me?" sketch: https://www.youtube.com/watch?v=s-ycvJC-qIQ

      • wiseowise 11 hours ago
        • sssilver 4 hours ago

          An important thing that this joke misses is that both economists now also owe federal income tax and social security tax.

          • RobotToaster 10 minutes ago

            If they routed the payments through a shell company it would be written off as business entertainment expenses.

          • TedDoesntTalk 3 hours ago

            Were the payments considered income?

        • satvikpendem 2 hours ago

          The replies explain all there is to explain in that example. If each economist thinks that eating shit is worth $100 then, well, that's what it's worth.

          • latexr 23 minutes ago

            It is fascinating that someone can tell an obvious joke with an obvious point, where the characters themselves spell out what’s wrong, and yet we can be certain someone will genuinely believe and defend that “no no, actually eating a random pile of shit you found on the floor makes sense and is worth it”.

            Has it occurred to you, especially since one of the economists in the joke admits they feel they ate shit for nothing, that they actually do not feel the exercise was worth it? Have you never spent money on something, thinking it would be worth it, then afterwards realised it was a waste of money? Have you also never taken a job and then realised “I didn’t charge enough for the trouble”?

            I’m reminded of a bit of news I heard a while back, where one teenager challenged a friend to eat rat shit they found on the street. The eater died shortly after, because the poop contained rat poison. I doubt any of them found it worth it.

            • charcircuit a few seconds ago

              It's not an obvious joke. It seems closer to a puzzle in that the reader must discover that the $100 was for entertainment.

      • dickersnoodle 13 hours ago

        Someone in management read and misunderstood "The Velocity of Money" (https://en.wikipedia.org/wiki/Velocity_of_money)?

        • collingreen 13 hours ago

          Or understood way too well. I promise it isn't Altman that will be left destroyed or jobless if this crashes.

      • nosuchthing 10 hours ago

        Mass layoffs and the wellness camp industry will easily account for $600+ billion a year in contracts, for at least a few years.

        https://www.teenvogue.com/story/rfk-wellness-farms-us-disabi...

      • rogerthis 10 hours ago

        Smells like "Banco Master" (Brazil) scandal.

      • suhputt 9 hours ago

        investment is not revenue tho... this is equivalent to nvidia buying its own chips...

        • 9rx 9 hours ago

          It is equivalent under scrutiny, but casually looking at the books and seeing Nvidia making a sale to Nvidia sticks out like a sore thumb a lot more than Nvidia making a sale to OpenAI. The latter is much more likely to pass as revenue.

      • scotty79 10 hours ago

        Either Nvidia eventually runs out of chips for OpenAI to buy or OpenAI runs out of equity Nvidia can buy.

        • RobotToaster 5 minutes ago

          The chips OpenAI is buying don't even exist yet; they're hypothetical chips that will potentially exist at some point in the future.

    • zeroonetwothree 13 hours ago

      Total US GDP is ~31 trillion, so that's only like 5%. I think it's conceivable that AI could result in ~5% of GDP in additional revenue. Not saying it's guaranteed, but it's hardly an implausible figure. And of course it's even less considering global GDP.

      • crazygringo 13 hours ago

        Yup. If you follow the links to the original JP Morgan quote, it's not crazy:

        > Big picture, to drive a 10% return on our modeled AI investments through 2030 would require ~$650 billion of annual revenue into perpetuity, which is an astonishingly large number. But for context, that equates to 58bp of global GDP, or $34.72/month from every current iPhone user...
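
        A rough back-of-envelope of that framing in Python (the global GDP and iPhone install-base figures here are outside assumptions, not from the note):

          annual_revenue = 650e9    # JPMorgan's required annual revenue, $
          global_gdp     = 112e12   # assumed global GDP, $
          iphone_users   = 1.56e9   # assumed active iPhone base

          print(annual_revenue / global_gdp * 1e4)   # ~58 basis points
          print(annual_revenue / 12 / iphone_users)  # ~$34.7 per user per month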

        • doodlebugging 13 hours ago

          > or $34.72/month from every current iPhone user...

          As a current iPhone user, I'm not signing up for that especially if it is on top of the monthly cell service fee.

          I do realize though that you were trying to provide useful context.

          • manwe150 12 hours ago

            But think about it this way: something simple like Slack charges $9/month/person and companies already pay that on many employees' behalf. How hard would it be to imagine all those same companies (and lots more) paying $30/month/employee for something something AI? Generating an extra $400 per year in value, per employee, isn't that much extra.

            • steveBK123 7 hours ago

              $35/iPhone user is not “per corporate white collar worker”.

              Think outside the coastal high paid SWE bubble and realize vast swathes of people use 5 year old phones on a $25/phone family mobile plan.

              Retirees, youth, blue collar, lots of people who don’t want/need AI or wouldn’t fork out $140 for their family of 4 to access it.

              $35/head is a pretty high bar if you compare it to per-capita total streaming spend across music and movies, across all providers, for example.

              • csomar 2 hours ago

                $35/head is possible, but it has to provide tangible value to the user (beyond coding), which many pro-AI people fail to recognize. People pay a lot for other stuff (e.g. their phone plan). Being digital or physical is not the issue here; what matters is the value perceived by the user.

            • johnvanommen 11 hours ago

              > Generating an extra $400 per year in value, per employee, isn't that much extra.

              I agree, and would add that it’s contributing to inflation in hard assets.

              Basically:

              * it’s a safe bet that labor will have lower value in 2031 than it has today

              * if you have a billion to spend, and you agree, you will be inclined to put your wealth into hard assets, because AI depends on them

              In a really abstract way, the world is not responsible for feeding a new class of workers: robots.

              And robots consume electricity, water, space, and generate heat.

              Which is why those sectors are feeling the effects of supply and demand.

              • YokoZar 9 hours ago

                > * it’s a safe bet that labor will have lower value in 2031 than it has today

                If AI makes workers more productive, labor will have higher value than it has today. Which specific workers are winning in that scenario may vary tremendously, of course, but I don't think anyone is seriously claiming AI will make everyone less productive.

                • z2 8 hours ago

                  The value of labor, i.e. wages, depends on labor demand (the marginal product of labor) and bargaining power, not output per worker. If AI is a substitute for many tasks, the marginal value of an additional worker, and what a company is willing to pay for their work, can fall even if each remaining worker is more productive.

                  • YokoZar 7 hours ago

                    What you're forecasting is a scenario where total output has substantially increased but no one's hiring or able to start their own business. Instant massive recession is by no means a "sure bet" with technological improvements, especially those that make more kinds of work possible than before.

                    • z2 7 hours ago

                      I'm not forecasting that, and it's a virtual strawman in the face of my much narrower claim: that wages depend on marginal labor demand and bargaining power, not average output per worker. If AI substitutes for labor, the marginal value of adding another worker in many roles can fall. That can mean fewer hires or lower wages in some categories, not 'no hiring' or an instant massive recession. I have no idea what the addressable market or demand for our more productive economy is, but for the record I do hope it's high to support new businesses and a bigger pie in general!

                      • YokoZar 3 hours ago

                        Forgive me, I was responding to the original claim that "it’s a safe bet that labor will have lower value in 2031 than it has today".

                • _DeadFred_ 4 hours ago

                  Tech Company: At long last, we have created Manna from classic sci-fi novel Don't Create Manna

                  https://marshallbrain.com/manna1

                • keybored 7 hours ago

                  > If AI makes workers more productive, labor will have higher value than it has today.

                  Workers being more productive does not necessarily translate to workers getting more leverage or a larger piece of the pie.

              • whattheheckheck 11 hours ago

                The world IS responsible for handling the people. That's the whole fucking reason we made society: to take care of children. Nothing is inevitable. It serves the interests of the few.

                • tjwebbnorfolk 10 hours ago

                  "The world" isn't responsible for anything. The world simply exists, and owes you nothing.

                  • blueone 7 hours ago

                    I think they meant “society.” Society does, in fact, owe the people something, especially if we, the people, are expected to live by the rules, social norms, and expectations imposed by society.

                    • tjwebbnorfolk 6 hours ago

                      Yea, like anything, you get out what you put into it. I wouldn't describe that as society "owes" me something.

                      • CrimsonCape 5 hours ago

                        "No taxation without representation" is a perfectly reasonable stance.

                      • gspr an hour ago

                        What's society for then?

                  • eli_gottlieb 6 hours ago

                    Oh man you're not gonna like how we all treat you after internalizing that kinda talk.

                  • samrus 8 hours ago

                    What you're describing is a low-trust society. If you disregard the social contract like that, then people won't owe "the world" anything either. Collaboration and civics go out the window. If you want to look at what kind of a shithole that libertarian nonsense leads to, try taking a stroll in SF at night.

                    • tjwebbnorfolk 6 hours ago

                      You get out of society what you put into it. If I want a seamless web of deserved trust, then of course I need to contribute to that.

                      I don't consider that to be saying that society "owes" me something. I regard it mutually beneficial, not some kind of debt/debtor relationship.

                      • bombcar 5 hours ago

                        This is an important framing - we talk so much of "rights" but if you have a right to something, that means someone or someones have a duty to provide it.

                        • latexr 28 minutes ago

                          No, no it does not. If we say everyone has a right to clean air and water, no one else has a duty to provide it. Those are given to us for free by the planet. The issue is that rich assholes (and poor assholes who only think of getting rich) take that away from everyone else by polluting what is common to everyone.

                        • _DeadFred_ an hour ago

                          Man I am on the wrong tech site.

                          Where are all the geeks that grew up on Trek and want to create a better future where society provides for its citizens?

                      • gspr an hour ago

                        > I don't consider that to be saying that society "owes" me something. I regard it mutually beneficial, not some kind of debt/debtor relationship.

                        You know, in phrases like "you owe it to your spouse/sibling/friend/self to...", people aren't talking about formal debt. Please try to keep that kind of meaning in mind when people say that society owes its people.

                • mahirsaid 10 hours ago

                  Humans collectively are responsible for the end results of innovations and achievements; otherwise, who are you doing all this for? Wars are an extreme form of disagreement amongst a large body of opposing opinions or perspectives, IMHO. Earth (the world!) simply exists, with or without you. As a bio-organism/byproduct of this planet, you have an obligation to do good by it. Have you not watched Star Wars?

            • doodlebugging 12 hours ago

              Most people in the economy do not use Slack. That tool may be most beneficial to the people who stand to lose jobs to AI displacement. Maybe after everyone is pink-slipped for an LLM or AI chatbot tool, the total cost to the employer is reduced enough that they are willing to spend part of the money saved by eliminating warm bodies on AI tools, and willing to pay a higher per-employee price.

              I think with a smaller employee pool, though, it is unlikely that it all evens out without the AI providers holding the users hostage for quarterly profits' sake.

            • SoftTalker 6 hours ago

              They will pay it but lay off the number of employees needed to balance it out, and just expect the remaining ones to make up for it with their new AI subscriptions.

            • zozbot234 12 hours ago

              That AI will have to be significantly preferable to the baseline of open models running on cheap third-party inference providers, or even on-prem. This is a bit of a challenge for the big proprietary firms.

              • johnvanommen 11 hours ago

                > the baseline of open models running on cheap third-party inference providers, or even on-prem. This is a bit of a challenge for the big proprietary firms.

                It’s not a challenge at all.

                To win, all you need is to starve your competitors of RAM.

                RAM is the lifeblood of AI, without RAM, AI doesn’t work.

                • ndriscoll 11 hours ago

                  Assuming high bandwidth flash works out, RAM requirements should be drastically reduced as you'd keep the weights in much higher capacity flash.

                  > Sample HBF modules are expected in the second half of 2026, with the first AI inference hardware integrating the tech anticipated in early 2027.

                  https://www.tomshardware.com/tech-industry/sandisk-and-sk-hy...

                  • fc417fc802 3 hours ago

                    How does HBF compare to the discontinued 3D XPoint?

                    • zozbot234 3 hours ago

                      HBF is NAND and integrated in-package like HBM. 3D XPoint or Optane would be extremely valuable today as part of the overall system architecture, but they were power-intensive enough that this particular use probably wouldn't be feasible.

                      (Though maybe it ends up being better if you're doing lots of random tiny 4k reads. It's hard to tell because the technology is discontinued as GP said, whereas NAND has kept progressing.)

          • nomercy400 an hour ago

            That's about an extra iPhone every 3-4 years.

          • creative_name3 12 hours ago

            A lot of iPhone users will be given a subscription via their job. If they still have a job at that point.

            • doodlebugging 12 hours ago

              This is true, though I think even if the employer provides all this on a per-employee basis, the number of eligible employees, after everyone who stands to lose a job to AI tools is gone, will be low enough that each remaining employee will need to add a lot of value for this to be worth it to the employer, so the stated number is probably way too low. Ordinary people may just migrate from Apple products to something more affordable or, in the extreme case, walk away from the whole surveillance economy. Those people would not buy into any of this.

            • watwut 11 hours ago

              Why are they not getting the iPhones paid by employers now?

          • scotty79 10 hours ago

            It could be priced into your App Store purchases like Apple's 30% cut is, and you wouldn't notice.

            • doodlebugging 2 hours ago

              This is true but unfortunately for Apple I don't buy anything from the app store except for a minimal iCloud subscription for temporary photo storage. I am in the process of unwinding that subscription in favor of local storage and periodic sync. I haven't been diligent about syncing things in the past so I did buy a subscription for photo storage to avoid losing photos. I know that lots of people buy apps for all kinds of things. I'm not one of those people though.

          • DrProtic 12 hours ago

            Why did you even say you wouldn't subscribe? It's not relevant in the slightest.

        • bunnie 4 hours ago

          $650bb / ($34.72 * 12 months) = 1.56 billion users.

          That's far larger than the population of the USA (unclear to me if that 650bb number is global or USA only) but by sheer scale this is assuming that these companies can collect that fee from a global customer base - including users in developing economies, EU, China, etc. and after the middleman fees are accounted for.

          The comments in this thread seem to be thinking within the context of 'the poorest in their nation'. This calculation assumes collecting this fee from among 'the poorest in the world'.

          Sure, 1.56bb users could also be interpreted as 'the wealthiest 20% of the world'. But the tail is especially long on this curve given how wealth is concentrated in a small percentage of the global population (1% of users have 50% of wealth).

          • nerdix 3 hours ago

            Microsoft, Google, Apple, Amazon, Nvidia, etc have been able to collect large amounts of revenue from a global customer base so I don't think the assumption was that unreasonable.

            Obviously, China will protect its homegrown AI industry. Current geopolitics trending towards US decoupling in Europe might slow it. But under the old status quo, US AI would have been rapidly adopted in the EU (and it still might. It depends greatly on how much of the Trump Doctrine outlasts the current administration).

            Developing countries eventually adopt new technologies. First they adopted personal computers and became customers of Microsoft, then they adopted the Internet and became customers of Google, then they adopted smartphones and became customers of Apple. Eventually they will adopt AI and become customers of someone. The question is whether it will be US tech or Chinese tech.

        • theptip 10 hours ago

          Personally I would be astonished if LLMs percolating through the global economy doesn’t give a 50bp bump from here on out.

          Even if scaling hit a wall, commoditizing what we have now would do it. We have so much scaffolding and organizational overhang with the current models, it’s crazy.

          • SamPatt 8 hours ago

            Agreed. Applying the intelligence we already have more broadly will have a huge impact. That's been true for a while now, and it keeps getting more true as models keep getting better.

      • direwolf20 12 hours ago

        It's conceivable to us working in white-collar knowledge jobs, where our input and output is language. Will it also mean 5% more homes get built by carpenters?

        • bee_rider 11 hours ago

          It might provide cover to lay off more than 5% of us (the LLM can create a work-like text product that, as far as upper management can tell, is indistinguishable from the real thing!), then we will have to go find jobs swinging hammers to build houses. Well, somebody’s got to do it.

          • stinkbeetle 6 hours ago

            The idea that companies need "cover" to perform layoffs (particularly in the US) doesn't make sense to me. Tech companies, all companies lay people off regularly. (To a first order approximation) if a worker is a net positive to a company then the company will want to keep them, and if they are not then the company will want to get rid of them. AI or no AI.

            • direwolf20 4 minutes ago

              Seems like the cover might be for investors. If a company is shrinking but you don't want investors to know it's shrinking, you can say you're improving productivity with AI.

            • maigret 2 hours ago

              I’ve seen many essential people being laid off for stupid reasons, the gp reason above being part of the story for some. Finance runs the world not tech. Tech is only welcome when it helps finance else it is marginalized.

        • roenxi 9 hours ago

          That seems pretty reasonable, yes. That is like asking if putting a low-cost Ops Research specialist in every company could make a 5% difference in operations - yes it could. Making resource-efficient decisions is not something that comes naturally to humans and having a system that consistently makes high quality game-theoretic recommendations would be huge.

          Bunch of tiny companies would love to hire a mathematician to optimise what they are doing to get a 5-10% improvement. Unfortunately a 5-10% improvement in a small business can't justify the cost of hiring another person, and good mathematicians with business sense and empathy are a rare commodity.
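
          For a concrete (toy) sense of what that kind of optimisation looks like, here is a minimal product-mix linear program with made-up numbers, in Python via scipy:

            from scipy.optimize import linprog

            # Maximise 40a + 30b profit under machine- and labour-hour limits
            # (all numbers hypothetical); linprog minimises, so negate the objective.
            c = [-40, -30]
            A_ub = [[2, 1],   # machine-hours: 2a + 1b <= 100
                    [1, 2]]   # labour-hours:  1a + 2b <= 80
            b_ub = [100, 80]

            res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                          bounds=[(0, None), (0, None)], method="highs")
            print(res.x, -res.fun)  # optimal mix (40, 20), weekly profit 2200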

          • nradov 9 hours ago

            If that seems reasonable to you then you don't know anything about residential construction. The problems that homebuilders face aren't amenable to mathematical solutions. They have to deal with permitting issues, corrupt / incompetent government officials, supplier delays, bad weather, flakey workers, etc. The notion of a 5% improvement from LLM is ludicrously naive.

            • roenxi 8 hours ago

              The first 2 are very LLM amenable, the last 3 are very mathematical-solution amenable (optimising around issues like that is basically what Ops Research does). I don't see what your argument is here.

              The list of people claiming that maths won't work who then get bulldozed by mathematicians is long.

              • direwolf20 8 hours ago

                How will the LLM bypass the corrupt government official?

                • roenxi 7 hours ago

                  Because they make it much easier to audit what decisions are being made and how reasonable they were. Corruption relies on not being too well known - once people can start pointing to specific decisions rather than a general "we know there is corruption here somewhere" it is hard to sustain.

                  • hattmall 4 hours ago

                    It's not like people don't know who they are though? It's not some secret formula of who is corrupt. It's everyone that's been in position for any length of time. If you don't yield to the corruption you won't be in your job long. The degree of corruption is variable and perhaps the LLM could find the most efficient wheel to grease and person to lean on but then you just have the next company doing more of the same.

                  • maigret 2 hours ago

                    LOL are you looking at the news lately. Everything is blatantly in the open now.

          • jopsen 9 hours ago

            For lots of jobs, like daycare, teaching, and cleaning, the material costs are near zero and your ability to increase productivity using technology is very low.

            You can reduce quality of cleaning. But it's very hard to clean faster and better at the same time.

            These industries are not going to be optimized by an AI. The only optimization is lower overhead or lower salaries.

            Sure, we could have robots in daycare, but I don't think lack of AI is why my wife would have concerns :)

            • cman1444 7 hours ago

              Of course there are jobs that don't get a productivity boost from AI. The question is whether, across the entire economy, there will be a 5% GDP boost.

              Teachers, cleaners, and daycare workers may see 0% gains, but don't be surprised if that is made up for by 10% gains in the productivity of tech, law, marketing, advertising, manufacturing, government, etc. (okay, maybe not government).

      • mykowebhn 13 hours ago

        Have you ever seen US GDP go up 5% yearly for several years?

        • matthewaveryusa 13 hours ago

          That's the bet! The last time we had that kind of growth was for a few years during the dotcom boom, followed by a lost decade of growth in tech stocks.

        • geysersam 13 hours ago

          Doesn't have to go up. It's also fine if they replace other parts of the economy.

          • mwwaters 9 hours ago

            In the expenditures for the economy making up GDP, not a lot of it screams “AI-able.” Page 9 here breaks down GDP on expenditure basis.

            https://www.bea.gov/sites/default/files/2026-01/gdp3q25-upda...

            Given how much of the spending is hard goods and simply not AI-able (rent, most of housing new construction, most of other goods, most health care, much of other services), the replacement theory would require a massive displacement.

          • johnvanommen 11 hours ago

            exactly.

        • crazygringo 13 hours ago

          The quote is about a one-time increase in growth of 5 percentage points. Not multiple years or forever.

          Or obviously it can be spread out, e.g. ~1% additional increase over 5 years.

          • blks 13 hours ago

            It cannot be sustained with just one-time growth. Capital always has to grow, or it will decrease. If this bubble actually manages to deliver interest, this will lead to the bubble growing even larger, driving even more interest.

        • dkasper 13 hours ago

          China did it. It’s not inconceivable.

          • Retric 13 hours ago

            China’s GDP per capita fell for the first 40 years of CCP rule, making it way easier to have constant growth after that period. https://en.wikipedia.org/wiki/Economic_history_of_China_(191...

            Developed countries have slow growth because they need to invent the improvements not just copy what works from other countries.

            • paulorlando 13 hours ago

              The chart you listed is for the years before the CCP won the civil war in 1949. But agreed that many of the problems overcome were also problems that were created after the war.

              • Retric 12 hours ago

                https://en.wikipedia.org/wiki/Communist-controlled_China_(19...

                Starting at 1949 is overly generous IMO, but yes the purges that followed didn’t help.

                • woooooo 8 hours ago

                  Japan controlled much more of China than the communists did before 1945. And having half your country occupied is bad for GDP. You made a mistake and believed some propaganda here.

                  • Retric 5 hours ago

                    1945 is before 1949.

                    Chinese GDP was higher during WWII than over the next several years; the actual minimum, 1959 to 1961, was well into communist rule. Literally, CCP rule was worse than the anarchy of civil war; it's right up there with the insanity of Pol Pot.

                    • maxglute 4 hours ago

                      This is such a historically stupid claim, it's not-even-wrong tier.

                      There was no GDP data under KMT - it wasn't even formally calculated.

                      CCP started GDP calculations, but using soviet MPS GDP accounting system that basically omitted services and lowballed production prices.

                      The only GDP data we have that is pseudo normalized are via estimates like Maddison project. Even they don't bother to recompose China/KMT data during WW2. The TLDR is prewar peak 1939 data (right before JP invasion) around 288B, PRC took over in 1949, GDP was 245B in 1950, grew to 306B by 1952. GLF tanked GDP from 460b to 350B... i.e. the worst case scenario of GLF floor was still 40% larger than 1950.

                      E: Note the wiki data links to Our World in Data, which pulls from Maddison; in table form the KMT/WW2 data is not available, so it just pulls from the closest data points (1938/1950) and naively extrapolates per capita, because KMT data doesn't exist.

                      • Retric 4 hours ago

                        GDP isn't just some arbitrary abstraction; it's the amount of goods and services produced by an economy.

                        At the low end of economic output, starvation or the lack thereof is a strong indication of GDP. You do need to adjust for exports and imports, but you don't need a particularly deep insight into the economy beyond that.

                        • maxglute 3 hours ago

                          Of course GDP is an arbitrary abstraction; it's literally derived from arbitrary systems of measurement, which is why the Soviets had the MPS system and the West had SNA, and each gets to decide what to value and how much... arbitrarily. And even when they calculate it, a lot of it is guesstimate, because no one has perfect or even good data, especially 80 years ago in developing countries.

                          > starvation or the lack thereof is a strong indication of GDP

                          No, that's just an indicator that some cohort starved due to distribution failure. And to be blunt... that cohort was rural peasants doing mostly subsistence-agriculture-tier production that does not count for much towards GDP. An urban worker in industry can generate 10x the GDP surplus of farmers in a commune.

                          Hence starvation (mostly rural) carries disproportionately less GDP weight than urban worker productivity. An economy losing millions of peasants while still modernizing/industrializing can easily maintain higher total GDP than a peaceful agrarian society. AKA the CCP speed-running the first 5-year plan post-WW2 raised the GDP floor so much that they could unalive tens of millions of peasants and still have higher GDP vs pre/post war, which was, incidentally, also not a peaceful agrarian society but an even messier interregnum shitshow with significantly worse state capacity than the relatively unified postwar PRC under the CCP. Republican-era KMT (during anarchy/civil war) simply couldn't organize fragmented China to be as productive as the PRC under the CCP, which could lose millions of peasants with marginal productivity of labour near zero and still do massively better in GDP/economic terms.

                        • csomar 2 hours ago

                          GDP is an arbitrary abstraction: https://en.wikipedia.org/wiki/Imputed_rent

                          We could have simply not calculated imputed rent (or all of rents).

          • nradov 9 hours ago

            Did China really do it though? We can clearly see that China has achieved huge economic growth since Deng Xiaoping took control. But the specific numbers can't be taken at face value: Communist Party officials at every level heavily manipulate the official economic data to meet their annual goals, and no independent auditing is allowed.

          • fatherwavelet 11 hours ago

            In 1979, median income in China was $100 USD a year.

            In 1979, median income in the US was $16,530 USD a year.

            Not exactly an apples to apples comparison.

          • hyperbovine 7 hours ago

            By pulling ten million people a year from farms into factories and ploughing 40% of GDP into infrastructure and education. Sounds like a sound analogy to me.

          • eatsyourtacos 13 hours ago

            Yeah but China actively works in the best interest of their entire population.

            • brutalc 13 hours ago

              Huh? No they don’t.

              • direwolf20 12 hours ago

                In what way? Bring some substance instead of a vague rebuttal

                • b00ty4breakfast 11 hours ago

                  They're for those within the population that are willing to submit themselves to the whim of the state and whose prosperity in some way directly benefits the oligarchs that run the state.

                  Certainly, as just a few examples, they are not for the well-being of the Uyghur population or pro-democracy activists or journalists investigating human rights violations or supporters of Tibetan independence.

                  • irishcoffee 9 hours ago

                    Oh and Covid, don’t forget Covid.

                    • direwolf20 9 hours ago

                      The population's best interest is to never get COVID

      • pier25 6 hours ago

        It's plausible 5-10 years from now. I believe the entire AI revenue in 2025 was like $50B. It's not going to 10x in a year or two.

      • nativeit an hour ago

        I'm sorry, but 5% of GDP is an absurd figure. You're saying $1 out of every $20 that moves through our economy should be spent on AI? That seems insane to me.

      • blks 13 hours ago

        So for that, GDP has to show growth of over 5% on top of other growth sources (so total yearly growth would be pretty high). I doubt this will materialise.

      • lowbloodsugar 13 hours ago

        You're saying that the entire increase in US GDP goes into the pockets of like 5 companies.

        • johnvanommen 11 hours ago

          Or we’re seeing a world where corporations dwarf countries.

          Apple will be around in a hundred years.

          Will the USA?

          • nradov 9 hours ago

            Tech companies never last. Apple will miss a disruptive innovation or make a key strategic error causing them to lose their dominant spot. Look at the top tech companies 50 years ago: how are they doing today?

          • whattheheckheck 11 hours ago

            It's like the transition from monarchies to nation-states.

            By the 19th century, the rise of nation-states accelerated due to the spread of nationalism, the decline of feudal structures, and the unification of countries like Germany (1871) and Italy (1861). Centralized governments, uniform laws, national education systems, and a sense of collective identity became defining features. The French Revolution (1789) played a pivotal role by promoting citizenship, legal equality, and national sovereignty over dynastic rule

            Maybe in 2300 they'll say something similar about nationalism

          • bdangubic 8 hours ago

            There is exactly a 0.00% chance Apple will be around in 50 years, let alone 100.

      • turtlesdown11 10 hours ago

        I love HN, you can't get stuff like this anywhere else, the DKE from posters here - you can't get it anywhere else!

    • robotnikman 14 hours ago

      The future is not looking bright at all....

      I only have a meme to describe what we are facing https://imgur.com/a/xYbhzTj

      • weavie 10 hours ago

        As someone in the UK for whom that link is blocked, I wonder if that meme is doubly apt.

      • butterlettuce 13 hours ago

        I was expecting to see Mark Baum on the phone saying "hey, we're in a bubble".

      • dmix 12 hours ago

        > The future is not looking bright at all....

        The tech industry going through a boom and settling back down at a higher place than before isn't the end of the world. They all start merging together soon.

        • marcyb5st 11 hours ago

          I am more afraid of AI actually delivering what CEOs are touting. People that are now working will become unemployable and will have to pivot to something else, overcrowding those other sectors and driving wages down.

          If that comes to pass, you will work the same or more for less money than now.

          Basically a jump back to a true plutocracy, since only a few people will siphon off the wealth generated by AI, and that wealth will give them substantial temporal power.

          • dmix 7 hours ago

            This is basically just a standard cliche doomer prediction about any new development.

          • monkaiju 8 hours ago

            I mean, I just don't see any evidence of that happening. TBF I'm a SWE so I can only speak to that segment, but it's literally worse than useless for working with anything software-related that's non-trivial...

            • anonzzzies 5 hours ago

              I see that sentiment here all the time and I don't understand what you must be doing; our projects are far from trivial and we get a lot of benefit from it in the SWE teams. Our software infra was always (almost 30 years) built to work well with outsourcing teams, so maybe that is it, but I cannot understand how you can be getting results quite that bad.

              • yallpendantools an hour ago

                Butting in here but as I have the same sentiment as monkaiju: I'm working on a legacy (I can't emphasize this enough) Java 8 app that's doing all sorts of weird things with class loaders and dynamic entities which, among others, is holding it in Java 8. It has over ten years of development cruft all over it, code coverage of maybe 30-40% depending on when you measure it in the 6+ years I've been working with it.

                This shit was legacy when I was a wee new hire.

                Github Copilot has been great in getting that code coverage up marginally but ass otherwise. I could write you a litany of my grievances with it but the main one is how it keeps inventing methods when writing feature code. For example, in a given context, it might suggest `customer.getDeliveryAddress()` when it should be `customer.getOrderInfo().getDeliveryInfo().getDeliveryAddress()`. It's basically a dice roll if it will remember this the next time I need a delivery address (but perhaps no surprises there). I noticed if I needed a different address in the interim (like a billing address), it's more likely to get confused between getting a delivery address and a billing address. Sometimes it would even think the address is in the request arguments (so it would suggest something like `req.getParam('deliveryAddress')`) and this happens even when the request is properly typed!

                I can't believe I'm saying this but IntelliSense is loads better at completing my code for me as I don't have to backtrack what it generated to correct it. I could type `CustomerAddress deliveryAddress = customer` let it hang there for a while and in a couple of seconds it would suggest to `.getOrderInfo()` and then `.getDeliveryInfo()` until we get to `.getDeliveryAddress()`. And it would get the right suggestions if I name the variable `billingAddress` too.

                "Of course you have to provide it with the correct context/just use a larger context window" If I knew the exact context Copilot would need to generate working code, that eliminates more than half of what I need an AI copilot in this project for. Also if I have to add more than three or four class files as context for a given prompt, that's not really more convenient than figuring it out by myself.

                Our AI guy recently suggested a tool that would take in the whole repository as context. Kind of like sourcebot---maybe it was sourcebot(?)---but the exact name escapes me atm. Because it failed. Either there were still too many tokens to process or, more likely, the project was too complex for it still. The thing with this project is although it's a monorepo, it still relies on a whole fleet of external services and libraries to do some things. Some of these services we have the source code for but most not so even in the best case "hunting for files to add in the context window" just becomes "hunting for repos to add in the context window". Scaling!

                As an aside, I tried to greenfield some apps with LLMs. I asked Codex to develop a minimal single-page app for a simple internal lookup tool. I emphasized minimalism and code clarity in my prompt. I told it not to use external libraries and rely on standard web APIs.

                What it spewed forth is the most polished single-page internal tool I have ever seen. It is, frankly, impressive. But it only managed to do so because it basically spat out the most common Bootstrap classes and recreated the W3Schools AJAX tutorial and put it all in one HTML file. I have no words and I don't know if I must scream. It would be interesting to see how token costs evolve over time for a 100% vibe-coded project.

                • stackbutterflow 3 minutes ago

                  Copilot is notoriously bad. Have you tried (paid plans) codex, Claude or even Gemini on your legacy project? That's the bare minimum before debating the usefulness of AI tools.

            • int_19h 3 hours ago

              What products and what models have you tried?

        • dijit 12 hours ago

          There has never been an industry that does that consistently (that wasn't government subsidised at least).

          We got lucky with the dotcom bubble.

          There's no guarantee of anything, and it's totally possible for the industry to collapse and stay that way.

          • cootsnuck 8 hours ago

            They got the current administration to ban state level regulation for them. Not to mention various defense contracts. They are government subsidized.

      • palmotea 12 hours ago

        > I only have a meme to describe what we are facing https://imgur.com/a/xYbhzTj

        I don't recognize that cartoon and there's no audio. I'm going to need help with that one.

        • robotnikman 11 hours ago

          Make sure you click the unmute button, imgur mutes by default it seems.

        • falloutx 11 hours ago

          Smiling Friends

    • tototrains 2 hours ago

      The question is, how much are they willing to pay for the kind of surveillance and control that they hope this data processing will get them?

      The economics are not all in profit.

    • rishabhaiover 13 hours ago

      > "must collect an extra $650 billion in revenue every year" paired with the idea that automating knowledge work can cause a short-term disruption in the economy doesn't seem logical to me.

      I find it funny that Microsoft is scaling compute like crazy and their own products like Copilot are being dwarfed by the very models they wish to serve on that compute.

    • raincole 7 hours ago

      It sounds about right. Coffee's global revenue is $500b/yr. I can totally imagine AI bringing in twice that or more.

      • mrweasel an hour ago

        That's not unreasonable, but if you can't do it without losing money then there's going to be a problem.

        The problem isn't that AI/LLMs can't be useful or generate revenue; the problem is still the cost. We're nowhere near production-ready AI. It can sort of do coding and some medical stuff, but we're not at a level of technology where the potential is fully realized. How much are investors willing to pour into more research?

        We're looking at OpenAI contemplating ads and erotic chatbots. Those aren't the ideas of a business that's successfully generating profit.

        Revenue is pointless without eventual profit.

    • kristianp 11 hours ago

      If 1 or 2 of the 5 big spenders starts having big losses, things will be interesting. Their market caps will be a fraction of the current overinflated values.

      Meanwhile Apple is only spending 1 billion a year to use Google's models.

    • WarmWash 7 hours ago

      It tells you that 500 million people will be paying $60-$80/mo for AI. Something they find as indispensable as a cell phone or internet bill.

      The numbers actually work really well, (un)fortunately.

      • anonymous908213 7 hours ago

        I don't know how you can write down those numbers and come to the conclusion they sound reasonable at all. Corporations literally can't give this trash away for free without consumers being unhappy about it (eg. the Copilot malware infesting every aspect of Windows). ChatGPT had 800m MAU at one report, but that's a chat interface and free. Do you really believe over half of those users are going to convert from "free" to paying $60/mo for access to the chat interface, when all potential applications for actually improving their lives are failing badly? I think you are out of touch with the finances of non-tech-industry workers if you think they will.

        • WarmWash 7 hours ago

          I don't know a single person in my (non-tech!) life that doesn't use AI, shy of toddlers and geriatric people.

          The famous MIT study (95% of AI initiatives fail, remember that one?) actually found that pretty much every worker was using AI almost daily, but used their personal accounts (hence the corporate ones not being used).

          If you are brand new to the tech world, and this is your first new product cycle, the way it works is that there is a free-cool-we're-awesomely-generous phase, and then when you are hooked and they are entrenched, the real price comes to fruition. See...pretty much every tech start-up burning runway cash.

          Right now they are getting us hooked, and like the dumbasses consumers are, they will become totally dependent and think it will stay this cheap.

          • hattmall 4 hours ago

            I use AI frequently. I am frequently let down, occasionally satisfied, and very rarely impressed. My results seem typical of everyone else I know. It's a free and widely promoted tool that has the potential to be useful, so of course people will use it. The feature I find most useful is not it providing me new knowledge; it's formalizing something I wrote or summarizing some other text that I am going to read anyway, or can at least reference as needed to confirm the output. This is also where the local models excel.

            I also often see people post AI-generated advice and answers in Facebook groups that are simply incorrect, and they get roasted, with hundreds of people chiming in on how far you can trust ChatGPT.

            I just can't see regular people paying more than (Netflix + HBO + Prime + WM+) for an AI subscription. I think you would see tons of competitors pop up if that were at all viable.

          • jasonfarnon 3 hours ago

            "If you are brand new to the tech world, and this is your first new product cycle, the way it works is that there is a free-cool-we're-awesomely-generous phase, and then when you are hooked and they are entrenched, the real price comes to fruition. See...pretty much every tech start-up burning runway cash."

            That has indeed been the strategy, but it's not like it always or even usually works out. We've seen plenty of companies that try to raise their prices and people aren't hooked. (Though I am almost certain in this case at least professionals if not the general public will indeed be hooked.)

          • hunterpayne 4 hours ago

            > actually found that pretty much every worker was using AI almost daily

            What they found is that people search the Internet for things and an AI bot is right there. What they didn't find is people using Vibe coded apps, learning from AI or buying AI services. They did find companies buying AI services, but as an experiment. Also, blaming AI is easy when someone messes up and costs a customer or sale. The more that happens, the sooner the company stops experimenting. If that happens in a widespread way, then this bubble collapses.

          • hackable_sand 3 hours ago

            You sound very naive.

          • enraged_camel 5 hours ago

            A good way to think about it is that ChatGPT is well on its way to becoming a verb like Google did. Doesn't roll off the tongue as easily but in terms of brand awareness it feels ubiquitous.

        • keybored 7 hours ago

          > I don't know how you can write down those numbers and come to the conclusion they sound reasonable at all.

          Half this board is in the most hyped echo chamber I’ve ever seen.

        • jgalt212 7 hours ago

          > ChatGPT had 800m MAU at one report, but that's a chat interface and free. Do you really believe over half of those users are going to convert from "free" to paying $60/mo for access to the chat

          Even if these things worked great for everyone, the percentage of free users who convert to paid users is in the low single digits. For OpenAI to have any chance of breaking even in the consumer space, they need to develop an ad business that makes something like 20-25% of what Google does. That's a tall order given that Google doesn't make as good dough from search anymore, as SERP clicks are down 80% with AI summaries being good enough for most.

          • lopis 2 hours ago

            And let's not forget that for the bubble to sustain itself, people who currently use different LLMs would need to create a separate account with each one. There's absolutely no way most people will pay for more than one LLM unless they have a lot of disposable income.

      • jgalt212 7 hours ago

        fair enough, but the consumer is already stretched. So where is this $60-80 per month X 500MM consumers coming from?

        • WarmWash 7 hours ago

          Consumer spending is strong and growing, don't listen to dregs milking upvotes on the internet, people will easily come up with 4-5 hours of minimum wage pay in a month to cover the cost of the thing they use many times a day.

          • SoftTalker 6 hours ago

            I don't use AI for anything in my private life, only at work. And I can't really imagine what it could do for me. In no scenario am I paying a monthly subscription for it.

            • WarmWash 5 hours ago

              I know someone who doesn't have a smart phone

          • superb_dev 4 hours ago

            I don’t know how this can be true when almost everyone I know is struggling right now

      • wasmainiac 3 hours ago

        Wait how? Show your work. I still see at least a $600bn gap.

        > Something they find as indispensable as a cell phone or internet bill.

        Source?

    • mahirsaid 10 hours ago

      It's crazy to me how many flags are being thrown in this investment spree. We're repeating the same mistakes as before (2000). Big companies will be hit hard when they can't show anything for what they spent shareholders' money on. The run will be large and impactful.

      • mahirsaid 10 hours ago

        If you analyze what's happening right now in the tech industry, you can't help but think there's something deeper than what's being talked about in plain sight. There is a clear panic amongst the large tech firms, and the root cause of that panic is still unclear; simply saying these companies want to be first in this new revolution isn't enough to draw a conclusion. Amongst the top tech firms there still sit the original founders who, as we all know, changed the way we live today. Saying they misunderstand what's happening right now, and that they're foolish, is too simple. They of all people would know it's a bad idea to go all in like this. The underlying competitive nature of it, whether it has to do with China or other competing markets, isn't being talked about, and beyond that: what exactly is the strategy here?

        • devmor 10 hours ago

          > They of all people in the world would know it's a bad idea to go all in, in this manner.

          Or this kind of financial crash is exactly what they want. If they can drive the markets to failure, only the largest companies can hold on - and acquire more of the failing companies in the process.

          • mahirsaid 10 hours ago

            Day by day it's seeming this way. They seem to want to flush out the remaining competitors. Dictators are old news; Umbrella Corporation(tm) is the new form of totalitarianism/authoritarianism.

      • antonvs 8 hours ago

        > Repeating the same mistakes as before (2000).

        The issue is that every company in a position to do so is trying to stake a claim in a new market. Not every company will win. No-one has a surefire way of identifying "mistakes" ahead of time.

        What alternative do you think would work better, short of central planning?

    • throw_llm 7 hours ago

      On the other hand many who work on AI refer to it as 'building God' so all things taken together, it's all reasonable.

    • belter 12 hours ago

      I ran the numbers on hyperscaler AI capex and the math is not going to work out.

      With these assumptions:

      – Big 4 keep spending at current pace for 3 more years

      – Returns only start showing after approx. 2 years

      – Heavy competition with around 20% operating margin on AI and Cloud

      – Use of 9% cost of capital

      This is the current reality:

      AWS approx. $142B/yr

      Azure approx. $132B/yr

      Google Cloud around $71B/yr

      Combined, that's about $330B to $340B in annual cloud revenue today.

      And let's say a global public cloud market of about $700B total today.

      To justify the current capex trajectory under those assumptions, by year 3 the big hyperscalers would need roughly $800B to $900B in new annual revenue just to earn a normal return on the capital being deployed.

      That implies combined hyperscaler cloud and AI revenue going from: $330B today to $1.2T within 3 years :-))

      In other words...Cloud would need to roughly do 4× in a very short window, and the incremental revenue alone would exceed the entire current global cloud market.

      So for the investment wave to make financial sense, at least one of these must be true:

      1 Cloud/AI spending globally explodes far beyond all prior forecasts

      2 AI massively increases revenue/profit in ads, software, commerce and not just cloud

      3 A winner takes all outcome where only 1 or 2 players earn real returns

      4 Or a large share of this capex never earns an economic return and is defensive

      People keep modeling this like normal cloud growth. But what we have is insanity
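
      A minimal sketch of that back-of-envelope in Python (the combined capex run-rate is my assumption; the margin and cost of capital are the ones stated above):

        capex_per_year  = 600e9   # assumed combined hyperscaler AI capex, $/yr
        years           = 3
        cost_of_capital = 0.09
        op_margin       = 0.20

        capital_deployed   = capex_per_year * years
        required_op_profit = capital_deployed * cost_of_capital  # a "normal return"
        required_revenue   = required_op_profit / op_margin      # at 20% margin

        print(required_revenue / 1e9)  # ~810 ($bn/yr of new revenue, before
                                       # depreciation, which would push it higher)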

      • johnvanommen 11 hours ago

        > That implies combined hyperscaler cloud and AI revenue going from: $330B today to $1.2T within 3 years :-))

        You’re ignoring the fact that gaming is going to the cloud.

        That industry is bigger than Hollywood.

        Desktop computers will invariably follow.

        The RAM shortage will drive the transition.

        For instance, my wife uses her personal laptop about four days a year.

        People like that won’t be buying personal desktops or laptops, five years from now. The RAM shortage will drive a transition into thin clients.

        I already see it with our kids. They use an iPhone, unless they need to type. Then they use an iPad with a BT keyboard.

        • api 7 hours ago

          The RAM shortage is extremely temporary. It’ll last as long as it takes for new capacity to come online. RAM shortages and price spikes have happened many times before.

          Eventually China will catch up in EUV fabrication and flood the market with cheap silicon. When that happens a terabyte of RAM will cost what 128gb costs now.

        • qwertycrackers 9 hours ago

          Cloud gaming is crap and any actual gamer will tell you that. The niche of gamers casual enough to not care about playing over network latency but serious enough to pay real money for cloud gaming is microscopic.

          • WarmWash 7 hours ago

            > The niche of gamers casual enough to not care about playing over network latency

            In the saddest way possible, the niche is the gamers playing on desktops with Ethernet connections.

            The majority of gamers are buying booster packs on mobile games.

            • int_19h 3 hours ago

              Yes, but that majority doesn't need cloud gaming precisely because those games run just fine on their phone - there's no benefit in putting them in the cloud, that was supposed to be for fancy stuff where you need a beefy GPU for the eye candy.

          • toephu2 8 hours ago

            It's not 2023 anymore. Have you tried cloud gaming in 2026? I can barely tell it's connected to the cloud.

            • amlib 8 hours ago

              Yes, it's amazing because it's streaming directly from a computer in the room behind me. :)

            • johnvanommen 6 hours ago

              And the increases in network speed are one of the last bastions of Moore's Law.

              • SJC_Hacker 5 hours ago

                > And the increases in network speed are one of the last bastions of Moore's Law.

                Throughput has increased but latency hasn’t changed much

                Latency hasn’t decreased substantially since the late 90s when I remember getting sub 50 ms ping in Quake III from my dorm room in college

        • kmbfjr 7 hours ago

          That may be true, but all of this can be done today without the massive capex and without “AI”.

        • turtlesdown11 10 hours ago

          What amount of the gaming industry do you think will go to AI providers and not game developers?

          You think we'll move gaming and desktop computing into the cloud on the timeline of the poster above (2-4 years)?

          Just not realistic.

        • kg 11 hours ago

          Even if gaming goes to the cloud, how are they going to run the massive existing library of video games on the dedicated AI inference hardware that everyone is buying right now? Seems like that pivot would require even more spending.

          • devmor 10 hours ago

            And how are they going to get sub-5ms round trip latency into the average consumer’s home to avoid people continuing to see cloud gaming as a janky gimmick that feels bad to use?

      • coffeemug 10 hours ago

        Azure revenue is growing at 39% year over year. If Microsoft can sustain this growth, in four years Azure will be ~3.73x its current size. This is of course very difficult, but you really don’t need a deus ex machina to hit 4x growth under your assumptions.
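
        The multiple is just compound growth (assuming the 39% rate actually holds every single year):

            # ~3.73x after four years of 39% year-over-year growth
            growth = 0.39
            years = 4
            print(f"{(1 + growth) ** years:.2f}x")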

        • mwwaters 9 hours ago

          The issue in the late-90s was all the investment created a lot of real revenue for telecoms and other companies. Even though there were a lot of shenanigans with revenue, a lot of real money was spent on fiber and tech generally.

          But the real money was investment that didn’t see a return for the investor. The investments needed to have higher final consumption (such as through better productivity or through displacing other costs) to pay back the investment.

    • echelon 4 hours ago

      > JPMorgan calculated last fall that the tech industry must collect an extra $650 billion in revenue every year

      If you take half the software engineers in the US and replace them with AI, you're halfway there. And why stop with software?

      There's a reason for the fervor and excitement of these companies.

      The positive outlook is that you don't fire folks - they have even more work to do. But we'll see how it pans out in practice.

    • mullingitover 13 hours ago

      I read "Devil Take the Hindmost: A History of Financial Speculation" last year, and the current AI bubble is like getting a front row seat to the next edition being written.

      The really stupid bubbles end up getting themselves metastasized into the public retirement system, I'm just waiting for that to start any day now.

      • chasd00 8 hours ago

        > The really stupid bubbles end up getting themselves metastasized into the public retirement system, I'm just waiting for that to start any day now.

        Not sure what you mean exactly but every single 401k is tied into this.

    • tempodox 2 hours ago

      > That pretty much tells you how this will end, right there.

      It seems so blatantly obvious, yet nobody wants to listen, and practically everyone knows better. We live in interesting times.

    • kryogen1c 14 hours ago

      The question is not "is it a bubble". Bubbles are a desirable feature of the American experiment. The question is "will this bubble lay the foundation for growth and destroy some value when it pops, or will it only destroy value"

      https://www.oaktreecapital.com/insights/memo/is-it-a-bubble

      • rubenflamshep 11 hours ago

        Pretty good article until the bizarre post-script where they fall back on the tired "people derive meaning from their work" for why UBI is bad.

        • WalterBright 11 hours ago

          Meaning or not, UBI doesn't work because the math doesn't work.

          > bizarre

          It isn't bizarre at all. Without work people devolve into playing video games and smoking pot in their mom's basement.

          I remember summer vacations from school. It was great for a while, but soon I was looking forward to getting back to school.

          • Marazan 5 minutes ago

            > Without work people devolve into playing video games and smoking pot in their mom's basement.

            Skill issue

          • rubenflamshep 10 hours ago

            > Without work people devolve into playing video games and smoking pot in their mom's basement.

            I have no problem finding fulfilling and meaningful projects outside of my work! There are many people like me :)

            • WalterBright 10 hours ago

              > There are many people like me

              I'm sure there are. Doesn't mean most people are like that. Consider retirees. Some find meaningful activities, many just rot away out of not having a purpose.

              What percentage of people currently living off of welfare are doing meaningful work?

              • danparsonson 7 hours ago

                > What percentage of people currently living off of welfare are doing meaningful work?

                Do you have that number? Do you have any numbers to back up your claims or are you just talking about what works for you?

                • WalterBright 5 hours ago

                  According to google: "Some reports indicate that 26.8% to 28.6% of households on welfare have earned income, which sometimes reflects a focus on households with no work-eligible adults (elderly, disabled)."

              • eli_gottlieb 6 hours ago

                > What percentage of people currently living off of welfare are doing meaningful work?

                Most of them, since the vast majority of "welfare" programs exclusively assist people who are in work.

          • olyjohn 11 hours ago

            I have been off work for over 6 months now. I have been doing so many projects, and exploring so many places, working out, eating healthy, learning, and spending very little money doing so. I actually even quit smoking pot after doing it daily for 10 years. It's been amazing, and I'd rather never go back to work. I don't get how people can get so bored. There's so much to do and see.

            • WalterBright 10 hours ago

              Best wishes to you! I'm retired myself, but I work full time (on D). Yale is hosting a symposium on D in April, and I'll be a speaker at it.

            • tayo42 4 hours ago

              I generally agree, but I think for some of the most interesting problems in computer science you need resources that only companies can provide, and that's basically work.

          • devmor 10 hours ago

            Your anecdote is not compliant with reality. Every test of UBI so far shows that people continue to work.

            • BuyMyBitcoins 8 hours ago

              There’s no way to test UBI without implementing it fully. Any experiment that gives people a no-strings-attached stipend isn’t accounting for the fact that the money has a negligible impact on the economy and produces no meaningful change in the workforce. Plus, all of these experiments are time-bound. Participants know the payments will stop.

              I also get the feeling that such experiments just prove that giving people money makes them happier. But there’s nothing to account for the fact that prices in the market haven’t changed, the tax structure hasn’t changed, and no goods or services experienced any shortages.

            • WalterBright 10 hours ago

              I suspect it was because the UBI wasn't enough to live on.

              • devmor 4 hours ago

                So you believe that the entire driving factor of the consumer goods market would mysteriously disappear if people had enough money to not worry about missing rent?

          • keybored 7 hours ago

            Always such glowing recommendations of human kind from techies.

            People devolve like that when they have no purpose or opportunities. Which I’m sure would happen with the real goal of UBI: barely subsistence support in order to grow a larger pool of reserve labor while the rich (who are not degenerate at all[1]) live large.

            [1] https://news.ycombinator.com/item?id=46929869

          • krapp 11 hours ago

            >Without work people devolve into playing video games and smoking pot in their mom's basement.

            Some people might, others wouldn't. Not everyone is a pot-smoking teenager.

            • WalterBright 10 hours ago

              People like ice cream, too. But not everyone.

          • watwut 10 hours ago

            Hard-working billionaires famous for successfully working have devolved into abuse islands, real saltiness over anyone saying sexual harassment is wrong, and basically a conspiracy to end democracy.

            The UBI guy playing games in his mom's basement comes across as harmless in comparison.

        • api 7 hours ago

          UBI doesn’t mean people don’t work. It means work is partially decoupled from basic needs.

          People would work for two reasons. One is to make extra money and afford a lifestyle beyond what UBI provides. The second is to… do things that are meaningful. If people derive meaning from work then that’s why they’ll work.

          Some people will just sit around on UBI. Those are the same people who sit around today on welfare or dead end bullshit jobs that don’t really produce much value.

          I’m not totally sold on UBI but there’s a lot of shallow bad arguments against it that are pretty easy to dismiss.

        • falloutx 11 hours ago

          Governments will collapse before we're at a moment where UBI is needed. Billionaires and companies hardly pay any tax, and if white-collar jobs die down, there is no guarantee that the government will even have money to wipe its butt.

      • toomuchtodo 14 hours ago

        What can we use fields of GPUs for next?

        • thefounder 13 hours ago

          Whatever happened to crypto/blockchain ASICs?

          • dijit 12 hours ago

            Nothing happened to them, they're still around; just consolidated into industrial operations.

            The "twist" is they rot as e-waste every 18 months when newer models arrive, generating roughly 30,000 metric tonnes of eWaste annually[0] with no recycling programmes from manufacturers (like Bitmain)... which is comparable to the entire country of the Netherlands.

            Turns out the decentralised currency for the people is also an environmental disaster built on planned obsolescence. Who knew.

            [0]: https://www.sciencedirect.com/science/article/abs/pii/S09213...

            • SJC_Hacker 5 hours ago

              > Turns out the decentralised currency for the people is also an environmental disaster built on planned obsolescence. Who knew.

              Only proof-of-work systems, such as Bitcoin. Proof-of-stake systems such as Ethereum are a lot less energy intensive.

              • dijit 2 hours ago

                Ethereum has a similar e-waste problem.

        • mike_hearn 13 hours ago

          AI, obviously! A bubble doesn't mean demand vanishes overnight. There is - at current price points - much more demand than supply. That means the market can tolerate price hikes whilst keeping the accelerators busy. It seems likely that we're still just at the start of AI demand as most companies are still finding their feet with it, lots of devs still aren't using it at all, lots of business workflows that could be automated with it aren't and so on. So there is scope for raising prices a lot as the high value use cases float to the top, maybe even auctioning tokens.

          Let's say tomorrow OpenAI and Anthropic have a huge down round, or whatever event people think would mark the end of the bubble. That doesn't mean suddenly nobody is using AI. It means they have to rapidly reduce burn e.g. not doing new model versions, laying off staff and reducing the comp of those that remain, hiking prices a lot, getting more serious about ads and other monetized features. They will still be selling plenty of inferencing.

          In practice the action is mostly taking place out of public markets. We won't necessarily know what's happening at the most exposed companies until it's in the rear view mirror. Bubbles are a public markets phenomenon. See how "ride sharing"/taxi apps played out. Market dumping for long periods to buy market share, followed by a relatively easy transition to annual profitability without ever going public. Some investors probably got wiped along the way but we don't know who exactly or by how much.

          Most likely outcome: the AI bubble will deflate steadily rather than suddenly burst. Resources are diverted from training to inferencing, new features slow down, new models are weaker and more expensive than expected, and the old models are turned off anyway. That sort of thing. People will call it enshittification but it'll really just be the end of aggressive dumping.

          • DrewADesign 13 hours ago

            There may not be that much demand at a price that yields profit. Demand at current heavily subsidized “the first dose is always free” prices is not a great indicator unless they find some way to make themselves indispensable for a lot of tasks for a lot of people. So far, they haven’t.

            • mike_hearn 11 hours ago

              Yes if/when prices rise there'll be demand destruction but I think demand will keep rising for the foreseeable future anyway even incorporating that. Lower value use cases like vibe coding hobby apps might fall by the wayside because they become uneconomic but the tokens will be soaked up by bigger enterprises that have found ways to properly integrate it at scale into their businesses. I don't mean Copilot style Office plugins but more business-specific stuff that yields competitive advantage.

              • DrewADesign 8 hours ago

                You're just repeating their predictions. Investors are starting to get nervous that there's no real proof these things can justify burning a Mt. Everest-sized pile of $100 bills.

                • mike_hearn 21 minutes ago

                  Yes it's only a prediction based on what I'm seeing. And I'm not disagreeing with the investors that there's overinvestment right now. Prices need to rise, spending on R&D needs to fall for this stuff to make economic sense. I'm only arguing that there's plenty of demand, and assuming price rises happen smoothly over not too short of a period, any demand destruction at the lower levels will be quickly counter-balanced by demand creation at higher value-add levels.

                  It's also possible non-tech industries just have a collective imagination failure and can't find use cases for AI, but I doubt it.

            • Krei-se 10 hours ago

              I find myself using dumber free models more, as they reply instantly and keep me learning.

              Some local models run well already too and do the job. Not sure if I would pay any money when a discarded Mac can run these just fine already.

              This may turn out like trying to make people game over streaming.

          • dickersnoodle 13 hours ago

            "much more demand than supply"? Demand from who?

            • joe_fishfish 13 hours ago

              The demand from middle managers trying to replace their dev teams with Claude Code, mainly.

          • blks 13 hours ago

            Please respect other users of hacker news and don’t generate your replies with LLM

            • zozbot234 13 hours ago

              FWIW, GP doesn't look like clanker speak to me. It's a bit too smooth and on-point for that.

            • mike_hearn 12 hours ago

              I never use LLMs to write for me (except code).

        • narrator 13 hours ago

          Anyone who regularly tries to rent GPUs on VPS providers knows that they often sell out. This isn't a market with lots of capacity nobody needs. In the dot.com bubble there was lots of dark fiber nobody was using. In this bubble, almost every high-end GPU is being used fully by someone.

        • ornornor 13 hours ago

          Heating!

        • zozbot234 13 hours ago

          Can they run Crysis?

        • dragontamer 14 hours ago

          We can use the GPUs for research (64-bit scientific compute), 3d graphics, a few other things. We programmers will reconfigure them to something useful.

          At least, the GPUs that are currently plugged in. A lot of this bullshit bubble crap is because most of those GPUs (and RAM) are sitting unplugged in a warehouse, because we don't even have enough power to turn all of them on.

          So if your question is how to use a GPU... I got plenty of useful non-AI related ideas. But only if we can plug them in.

          I wouldn't be surprised if many of those GPUs are just e-waste, never to turn on due to lack of power.

          • cogman10 13 hours ago

            > I wouldn't be surprised if many of those GPUs are just e-waste, never to turn on due to lack of power.

            That's my fear.

            The problem is these GPUs are specifically made for datacenters, so it's not like your average consumer is going to grab one to put into their gaming PC.

            I also worry about what the pop ends up doing to consumer electronics. We'll have a bunch of manufacturers with capacity they can no longer use to create products that people want to buy, and a huge glut of second-hand goods that these liquidated AI companies will want to unload. That will put chip manufacturers in a place where they'll need to get their money primarily from consumers if they want to stay in business. That's not the business model they've operated on up until this point.

            We are looking at a situation where we have a bunch of oil derricks ready to pump, but shut off because it's too expensive to run the equipment, making it not worth the energy.

            • dragontamer 12 hours ago

              That's fine. Servers are where we programmers are best at repurposing things. Just a bunch of always-on boxes doing random crap in the background.

              Servers can (and do!!) use 10+ year old hardware. Consumers are kind of the weird ones, so impatient they need the latest and greatest.

          • throwaway0123_5 12 hours ago

            > 3d graphics

            Seems like the G in GPU is very obsolete now:

            https://www.tomshardware.com/news/nvidia-h100-benchmarkedin-...

            > As it turns out Nvidia's H100, a card that costs over $30,000 performs worse than integrated GPUs in such benchmarks as 3DMark and Red Dead Redemption 2

          • jdiez17 13 hours ago

            I predict there's going to be a niche opening up for companies to recycle the expensive parts of all this compute hardware that AI companies are currently buying, which will probably be obsolete/depreciated/replaced in the next 2-5 years. The easiest example is RAM chips. There will be people desoldering those ICs and putting them on DDR5 sticks to resell to the general consumer market.

          • bee_rider 12 hours ago

            It’ll be interesting to see what people come up with to get conventional scientific computing workloads to work on 16 bit or smaller data types. I think there’s some hope but it will require work.

          • themafia 13 hours ago

            The government is going to use them.

            The flock cameras are going to be fed into them.

            The bitcoin network will be crashed.

            A technological arms race just occurred in front of your eyes for the past 5 years and you think they're going to let the stockpile fall into civilian hands?

            • dragontamer 12 hours ago

              In 2 years the next generation of chips will be released and these chips will be obsolete.

              That's truly e-waste. Now in practice, we programmers find uses for 10+ year old hardware as cheap webhosts, compiler/build boxes, Bamboo, unit tests, fuzzers and whatever. So as long as we can turn them on, we programmers can and will find a use.

              But because we are power constrained, when the more efficient 1.8nm or 1.5nm chips get released (and when those chips use 30% or less power), no one will give a shit about the obsolete stockpile.

              • themafia a few seconds ago

                > will be obsolete.

                In what sense? Not competitive for chat bot providers to use? Is that a metric that matters?

                > when the more efficient 1.8nm or 1.5nm chips get released

                What if they don't get released? You don't have a broad and competitive set of players providing products in this realm. How hard would it be to stop this?

                > no one will give a shit about the obsolete stockpile.

                You have lived your life with ready access to cutting edge resources. You ever wonder how long that trend could _possibly_ last?

              • zozbot234 12 hours ago

                I assume even really out of date cards and racks will readily find some use, when the present-day alternative costs ~$100k for a single card. Just have to run them on a low-enough basis that power use is not a significant portion of the overall cost of ownership.

        • greenchair 14 hours ago

          cloud gaming?

        • irishcoffee 14 hours ago

          It's too bad they're all concentrated in buildings, having been hoovered up by the billionaire class.

          I would love to live in the world where everyone joins a pool for inference or training, and as such gets the open source weights and models for free.

          We could call it: FOSS

      • eli_gottlieb 6 hours ago

        > Bubbles are a desirable feature of the American experiment.

        No they're not. You don't get to decide what other people desire.

      • themafia 13 hours ago

        > Bubbles are a desirable feature of the American experiment

        Wild speculation detached from reality which destroys personal fortunes is not "a desirable feature."

        It's only a "desirable feature" to the nihilistic maniacs that run the markets as it's only beneficial to them.

        • kryogen1c 13 hours ago

          > Wild speculation detached from reality which destroys personal fortunes is not "a desirable feature."

          This is not the definition of a bubble, and is specifically contrary to what i said.

          A good bubble, like the automobile industry in the example I linked, paves the way for a whole new economic modality - but value was still destroyed when that bubble popped and the market corrected.

          You may think it's better to not have bubbles and limit the maximum economic rate of change (and you may be right), but the current system is not obviously wrong and has benefits.

        • zozbot234 13 hours ago

          The trouble is, you can only tell what was "detached from reality" after the fact. Real-world bubbles must be credible by definition, or else they would deflate smoothly rather than growing out of control and then popping suddenly when the original expectations are dashed by reality.

        • jdiez17 13 hours ago

          > It's only a "desirable feature" to the nihilistic maniacs that run the markets as it's only beneficial to them.

          ... and which forces do you think are the core concept of "the American experiment"?

  • Kon5ole 20 hours ago

    It's hard to comprehend the scale of these investments. Comparing them to notable industrial projects, it's almost unbelievable.

    Every week in 2026 Google will pay for the cost of a Burj Khalifa. Amazon for a Wembley Stadium.

    Facebook will spend a France-England tunnel every month.
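
    Back-of-envelope, with placeholder numbers: a commonly cited build cost for the Burj Khalifa is roughly $1.5B, and if Google's 2026 capex comes in somewhere around $90B (an assumption, not actual guidance), the weekly comparison looks like this:

        # Placeholder assumptions, not verified figures.
        assumed_google_capex_2026 = 90e9   # $/yr, assumption
        burj_khalifa_cost = 1.5e9          # ~$1.5B, commonly cited build cost

        per_week = assumed_google_capex_2026 / 52
        print(f"~${per_week / 1e9:.1f}B per week, ~{per_week / burj_khalifa_cost:.1f} Burj Khalifas")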

    • peterlk 15 hours ago

      I have been having this conversation more and more with friends. As a research topic, modern AI is a miracle, and I absolutely love learning about it. As an economic endeavor, it just feels insane. How many hospitals, roads, houses, machine shops, biomanufacturing facilities, parks, forests, laboratories, etc. could we build with the money we’re spending on pretraining models that we throw away next quarter?

      • Kon5ole 14 hours ago

        I have to admit I'm flip-flopping on the topic, back and forth from skeptic to scared enthusiast.

        I just made an LLM recreate a decent approximation of the file system browser from the movie Hackers (similar to the SGI one from Jurassic Park) in about 10 minutes. At work I've had it do useful features and bug fixes daily for a solid week.

        Something happened around New Year's 2026. The clients, the skills, the MCPs, the tools and models reached some new level of usefulness. Or maybe I've been lucky for a week.

        If it can do things like what I saw last week reliably, then every tool, widget, utility and library currently making money for a single dev or small team of devs is about to get eaten. Maybe even applications like Jira, Slack, or even Salesforce or SAP can be made in-house by even small companies. "Make me a basic CRM".

        Just a few months ago I found it mostly frustrating to use LLM's and I thought the whole thing was little more than a slight improvement over googling info for myself. But the past week has been mind-blowing.

        Is it the beginning of the Star Trek ship computer? If so, it is as big as the smartphone, the internet, or even the invention of the microchip. And then the investments make sense, in a way.

        The problem might end up being that the value created by LLMs will have no customers when everyone is unemployed.

        • josephg 13 hours ago

          Yeah I'm having a similar experience. I've been wanting a standard test suite for JMAP email servers, so we can make sure all JMAP server implementations follow the (somewhat complex) spec in a consistent manner. I spent a single day prompting Claude Code on Friday, and walked away with about 9000 lines of code, containing 300 unit tests for JMAP servers. And a web interface showing the results. It would have taken me at least a week or two to make something similar by hand.

          There are some quality issues - I think some of the tests are slightly wrong. We went back and forth on some ambiguities Claude found in the spec, and how we should actually interpret what the JMAP spec is asking for. But after just a day, it's nearly there. And it's already very useful to see where existing implementations diverge in their output, even if the tests don't always correctly identify which implementation is wrong. Some of the test failures are 100% correct - it found real bugs in production implementations.

          Using an AI to do weeks of work in a single day is the biggest change in what software development looks like that I’ve seen in my 30+ year career. I don’t know why I would hire a junior developer to write code any more. (But I would hire someone who was smart enough to wrangle the AI). I just don’t know how long “ai prompter” will remain a valuable skill. The AIs are getting much better at operating independently. It won’t be long before us humans aren’t needed to babysit them.
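
          For flavour, a single check in such a suite might look something like the sketch below - not my actual generated code, just a minimal hand-written example of a session-resource test per RFC 8620, assuming HTTP basic auth and the standard /.well-known/jmap discovery path:

              import requests

              def check_session_resource(base_url, username, password):
                  """Fetch the JMAP session resource (RFC 8620) and return a list of failures."""
                  failures = []
                  resp = requests.get(f"{base_url}/.well-known/jmap",
                                      auth=(username, password), timeout=10)
                  if resp.status_code != 200:
                      return [f"session resource returned HTTP {resp.status_code}"]
                  session = resp.json()
                  # The core capability is mandatory for every JMAP server.
                  if "urn:ietf:params:jmap:core" not in session.get("capabilities", {}):
                      failures.append("missing urn:ietf:params:jmap:core capability")
                  # An email server should also advertise the mail capability (RFC 8621).
                  if "urn:ietf:params:jmap:mail" not in session.get("capabilities", {}):
                      failures.append("missing urn:ietf:params:jmap:mail capability")
                  # The session object must tell clients where to send API requests.
                  for key in ("apiUrl", "accounts", "primaryAccounts"):
                      if key not in session:
                          failures.append(f"session object missing '{key}'")
                  return failures

          Multiply something like that by a few hundred methods, error conditions and edge cases, and you get a feel for what a day of prompting produced.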

          • salawat 2 hours ago

            So what'd your prompt look like, out of curiosity? I hear about all these things that sound quite impressive, but no one ever seems to want share any info on the prompts to learn or gain insight from.

          • wasmainiac 3 hours ago

            Was this written by llm?

        • gtech1 13 hours ago

          My team of 6 people has been building software to compete with an already established piece of software written by a major software corporation. I'm not saying we'll succeed, I'm not saying we'll be better, nor that we will cover every corner case they do and have learned over the past 30 years. But 6 senior devs are getting stuff done at an insane pace. And if we can _attempt_ to do this, which would have been unthinkable 2 years ago, I can only wonder what will happen next.

          • josephg 13 hours ago

            Yeah I’m curious how much the moat of big software companies will shrink over the next few years. How long before I can ask a chatbot to build me a windows-like OS from scratch (complete with an office suite) and it can do a reasonable job?

            And what happens then? Will we stop using each others code?

          • bossyTeacher 10 hours ago

            > My team of 6 people has been building software to compete with an already established piece of software written by a major software corporation.

            How long until the devs at that major corporation start using an LLM? Do you think your smaller team can still compete with their huge team?

            • dpe82 9 hours ago

              If the goal is simply to undercut the incumbent with roughly the same product, then it doesn't really matter if the incumbent starts using LLMs too, as their cost structure, margin expectations, etc. are already relatively set.

            • henry2023 5 hours ago

              Of course they can. If you've ever stepped foot inside big tech you'll know the bottleneck is not dev output.

              • YZF 3 hours ago

                100% - which is what I'm telling everyone. I am in big tech and it doesn't matter that I can write in 5 minutes what I used to write in a week. Meetings, reviews, design docs, politics, etc. etc. mean how much code is written is irrelevant. Productivity in big tech is pretty low because of organizational overhead. You just can't get anything done. Being able to get more work done with fewer people is the real game changer, because fewer people don't suffer from those "coordination headwinds".

        • wasmainiac 3 hours ago

          I have not had the success you mention with programming… I still feel like I have to hold its hand all the way.

          Regardless..

          > The problem might end up being that the value created by LLMs will have no customers when everyone is unemployed.

          This mentality is why investors are scrambling right now. It’s a scare tactic.

        • bojan 14 hours ago

          I agree with you, and share the experience. Something changed recently for me as well, where I found the mode to actually get value from these things. I find it refreshing that I don't have to write boilerplate myself or think about the exact syntax of the framework I use. I get to think about the part that adds value.

          I also have the same experience where we rejected a SAP offering with the idea to build the same thing in-house.

          But... aside from the obvious fact that building a thing is easier than using and maintaining the thing, the question arose whether we even need what SAP offered, or whether we can get agents to do it.

          In your example, do you actually need that simple CRM or maybe you can get agents to do the thing without any other additional software?

          I don't know what this means for our jobs. I do know that, if making software becomes so trivial for everyone, companies will have to find another way to differentiate and compete. And hopefully that's where knowledge workers come in again.

          • r_lee 14 hours ago

            Exactly. I hear this "wow finally I can just let Claude work on a ticket while I get coffee!" stuff and it makes me wonder why none of these people feel threatened in any way?

            And if you can be so productive, then where exactly do we need this surplus productivity in software right now, when we're no longer in the "digital transformation" phase?

            • dasil003 13 hours ago

              I don't feel threatened because no matter how tools, platforms and languages improved, no matter how much faster I could produce and distribute working applications, there has never been a shortage of higher level problems to solve.

              Now if the only thing I was doing was writing code to a specification written by someone else, then I would be scared, but in my quarter century career that has never been the case. Even at my first job as a junior web developer before graduating college, there was always a conversation with stakeholders and I always had input on what was being built. I get that not every programmer had that experience, but to me that's always been the majority of the value that software developers bring, the code itself is just an implementation detail.

              I can't say that I won't miss hand-crafting all the code, there certainly was something meditative about it, but I'm sure some of the original ENIAC programmers felt the same way about plugging in cables to make circuits. The world of tech moves fast, and nostalgia doesn't pay the bills.

              • bossyTeacher 10 hours ago

                > there has never been a shortage of higher level problems to solve.

                True, but whether all those problems are SEEN as worth chasing business-wise is another matter. Short term is what matters most for individuals currently in the field, and short term means fewer devs needed, which leads to a drop in salaries and higher competition. You will have a job, but if you explore the job market you will find it much harder to get the job you want at the salary you want without facing huge competition. At the same time, your current employer might be less likely to give you raises because they know your bargaining power has decreased due to the job market conditions.

                Maybe in 40 years' time, new problems will change the job market dynamics, but you will likely be near retirement by then.

            • vitaflo 13 hours ago

              Smart devs know this is the beginning of the end of high-paying dev work. Once the LLMs get really good, most dev work will go to the lowest bidder. Just like factory work did 30 years ago.

              • WarmWash 7 hours ago

                Not even factory work, classic engineering jobs in general. SWE sucked all the air out of the engineering room, because the pay/benefits/job prospects were just head and shoulders better.

                We had a fresh-out-of-school EE hire who left our company 6 months into his job with us for an SWE position that paid the same (plus full remote with a food stipend) as our Director of Engineering. A 23-year-old getting an offer above what a 54-year-old with 30 years of experience was making.

                For a few years there, you had to be an idi...making sub-optimal decisions, to choose anything other than becoming a techie.

              • chasd00 8 hours ago

                I think it’s the end of low paying dev work. If I was in one of the coding sweatshops I would be thinking hard.

              • falloutx 11 hours ago

                Then what's the smart dev plan, sit at the vibe-coding casino until the bossman calls you into the office?

                • vitaflo 9 hours ago

                  Make as much money as you can while you still can before the bottom falls out. Or go work for one of the AI companies on AI. Always better to sell picks and shovels than dig for gold. Eventually the gold runs out where you are.

                • democracy 10 hours ago

                  Exactly, it will be a CodeUber, we just pick the task from the app and deliver the results ))

                  • falloutx 10 hours ago

                    I thought AI would already automate that part, I expect to actually just drive an actual uber

                • username223 7 hours ago

                  Become a plutocrat, or be useful to plutocrats. I don't have the moral flexibility for the former, but plutes tend to care about their images, legacies, and mewling broods. A clever person can find a way to be the latter.

            • democracy 10 hours ago

              Lots of dreamers here, yet Vanguard reports 4x job and wage growth in the 100 jobs most exposed to AI.

              • bossyTeacher 10 hours ago

                Bit naive to think that positive pattern will hold for the next ten years or so, or whatever time is left between now and your retirement. And arguably, the later that positive pattern changes, the worse it is for you, because retraining as an older person has its own challenges.

          • democracy 10 hours ago

            Oh please, SAP doesn't exist only because writing software is not free or cheap

        • raegis 12 hours ago

          > The problem might end up being that the value created by LLMs will have no customers when everyone is unemployed.

          I'm not a professional programmer, but I am the I.T. department for my wife's small office. I used ChatGPT recently (as a search engine) to help create a web interface for some files on our intranet. I'm sure no one in the office has the time or skills to vibe code this in a reasonable amount of time. So I'm confident that my "job" is secure :)

          • falloutx 11 hours ago

            > I'm sure no one in the office has the time or skills to vibe code.

            The thing you are describing can be vibe coded by anyone. It's not that teachers or nurses are gonna start vibe coding tomorrow, but the risk comes from other programmers outworking you to show off to the boss. Or companies pitting devs against each other, or mistakenly assuming they require very few programmers, or PMs suddenly starting to vibe code when threatened for their jobs.

        • chasd00 8 hours ago

          I have to admit the last 6-8 weeks have been different. Maybe it’s just me realizing the value in some of these tools…

        • simoncion 5 hours ago

          It seems like every quarter or two, I hear a story just like yours (including the <<Wow! We've quietly passed an inflection point!>> part).

          What does that tell me?

          It tells me that I shouldn't waste my time with a tool that's going to fundamentally change in three to six months; that I should wait until I stop hearing stories like this for a good, long while. "But you're going to be left behind!", yeah, maybe. But. I've been primarily a maintenance programmer for a very long time. The "bleeding edge" is where I am very, very rarely... and it seems to work out fine.

          New tools that are useful are nice. Switching to a radically different tool every quarter or two? Not nice. I've got shit to do.

      • YZF 3 hours ago

        It's not a zero-sum game. We could build hospitals and data centers. The reason we are not building hospitals or parks or machine shops has nothing to do with AI. We weren't building them 2 years ago either.

      • qaq 13 hours ago

        Not many. Money is not a perfect abstraction. The raw materials used to produce $100B worth of Nvidia chips will not yield you many hospitals. An AI researcher with a $100M signup bonus from Meta ain't gonna lay you much brick.

        • thwarted 12 hours ago

          It's not about the consumption of raw materials or repurposing of the raw materials used for chips. peterlk said:

          > How many hospitals, roads, houses, machine shops, biomanufacturing facilities, parks, forests, laboratories, etc. could we build with the money we’re spending on pretraining models that we throw away next quarter?

          It's about using the money to build things that we actually need and that have more long-term utility. No one expects someone with a $100M signing bonus at Meta to lay bricks, but that $100M could be used to buy a lot of bricks and pay a lot of bricklayers to build hospitals.

          • AYBABTME 10 hours ago

            I think it's a mistake to believe that this money would exist if it was to be spent on these things. The existence of money is largely derived from society scale intention, excitement or urgency. These hospitals, machine shops, etc, could not manifest the same amount of money unless packaged as an exciting society scale project by a charismatic and credible character. But AI, as an aggregate, has this pull and there are a few clear investment channels in which to pour this money. The money didn't need to exist yesterday, it can be created by pulling a loan from (ultimately) the Fed.

            • int_19h 3 hours ago

              Those companies were each sitting on ~$50-100B in cash even before the AI boom.

          • OGEnthusiast 11 hours ago

            Seems like the main issue is that taxes in America are far too low.

            • qaq 7 hours ago

              Again, people confuse paper wealth and material assets. If you took half the money of the 0.001%, people imagine there would be a material change in the world of atoms, but that's not true. You can't take an $8M Richard Mille watch and build an apartment building. We are mostly resource constrained. There are no material assets to convert all the paper wealth into. Tesla's physical assets are like 5% of Tesla's market cap; the rest is cultish belief in Elon. You can't convert that into a hospital. It's trivial to observe that on the AI side there is an unlimited amount of $ available, and yet companies are supply constrained on the atoms side, from gas turbines having 3-4 year lead times to ASML running a 24/7 production cycle and still being unable to meet demand.

          • scoofy 11 hours ago

            I mean, you're just talking about spending money. Google isn't trying to build data centers for fun. These massive outlays are only there because the folks making them think they will make much more money than they spend.

      • johnvanommen 11 hours ago

        > How many hospitals, roads, houses, machine shops, biomanufacturing facilities, parks, forests, laboratories, etc. could we build

        “We?”

        This isn’t “our” money.

        If you buy shares, you get a voice.

      • mike_hearn 13 hours ago

        FWIW the models aren't thrown away. The weights are used to preinit the next foundation model training run. It helps to reuse weights rather than randomize them even if the model has a somewhat different architecture.

        As for the rest, the constraint on hospital capacity (at least in some countries, not sure about the USA) isn't money for capex, it's doctors' unions that restrict training slots.

      • uejfiweun 14 hours ago

        There is a certain logic to it though. If the scaling approaches DO get us to AGI, that's basically going to change everything, forever. And if you assume this is the case, then "our side" has to get there before our geopolitical adversaries do. Because in the long run the expected "hit" from a hostile nation developing AGI and using it to bully "our side" probably really dwarfs the "hit" we take from not developing the infrastructure you mentioned.

        • A_D_E_P_T 14 hours ago

          Any serious LLM user will tell you that there's no way to get from LLM to AGI.

          These models are vast and, in many ways, clearly superhuman. But they can't venture outside their training data, not even if you hold their hand and guide them.

          Try getting Suno to write a song in a new genre. Even if you tell it EXACTLY what you want, and provide it with clear examples, it won't be able to do it.

          This is also why there have been zero-to-very-few new scientific discoveries made by LLM.

          • icedchai 9 hours ago

            Most humans aren't making new scientific discoveries either, are they? Does that mean they don't have AGI?

            Intelligence is mostly about pattern recognition. All those model weights represent patterns, compressed and encoded. If you can find a similar pattern in a new place, perhaps you can make a new discovery.

            One problem is the patterns are static. Sooner or later, someone is going to figure out a way to give LLMs "real" memory. I'm not talking about keeping a long term context, extending it with markdown files, RAG, etc. like we do today for an individual user, but updating the underlying model weights incrementally, basically resulting in a learning, collective memory.

            • A_D_E_P_T 8 hours ago

              Virtually all humans of average intelligence are capable of making scientific discoveries -- admittedly minor ones -- if they devote themselves to a field, work at its frontiers, and apply themselves. They are also capable of originality in other domains, in other ways.

              I am not at all sure that the same thing is even theoretically possible for LLMs.

              Not to be facetious, but you need to spend more time playing with Suno. It really drives home how limited these models are. With text, there's a vast conceptual space that's hard to probe; it's much easier when the same structure is ported to music. The number of things it can't do absolutely outweighs the number of things it can do. Within days, even mere hours, you'll become aware of its peculiar rigidity.

          • pixl97 13 hours ago

            Can most people venture outside their training data?

            • nosianu 13 hours ago

              Are you seriously comparing chips running AI models and human brains now???

              Last time I checked the chips are not rewiring themselves like the brain does, nor does even the software rewrite itself, or the model recalibrate itself - anything that could be called "learning", normal daily work for a human brain.

              Also, the models are not models of the world, but of our text communication only.

              Human brains start by building a model of the physical world, from age zero. Much later, on top of that foundation, more abstract ideas emerge, including language. Text, even later. And all of it on a deep layer of a physical world model.

              The LLM has none of that! It has zero depth behind the words it learned. It's like a human learning some strange symbols and the rules governing their appearance. The human will be able to reproduce valid chains of symbols following the learned rules, but they will never have any understanding of those symbols. In the human case, somebody would have to connect those symbols to their world model by telling them the "meaning" in a way they can already use. For the LLM that is not possible, since it doesn't have such a model to begin with.

              How anyone can even entertain the idea of "AGI" based on uncomprehending symbol manipulation, where every symbol has zero depth of a physical world model, only connections to other symbols, is beyond me TBH.

              • throw4847285 7 hours ago

                Watch out, you're getting suspiciously close to the Chinese Room argument. And people on here really don't like that argument.

                • int_19h 3 hours ago

                  Speaking as someone who thinks the Chinese Room argument is an obvious case of begging the question, the GP isn't about that. They're not saying that LLMs don't have world models - they're saying that those world models are not grounded in the physical world, and thus LLMs cannot properly understand what they talk about.

                  I don't think that's true anymore, though. All the SOTA models are multimodal now, meaning that they are trained on images and videos as well, not just text; and they do that precisely because it improves the text output as well. Already, I don't have to waste time explaining to Claude or Codex what I want on a webpage - I can just sketch a mock-up, or when there's a bug, I take a screenshot and circle the bits that are wrong. But this extends into the ability to reason about the real world, as well.

                  • nosianu 12 minutes ago

                    I would argue that is still just symbols. A physical model requires a lot more. For example, the way babies and toddlers learn is heavy on interaction with objects and the world. We know those who have less of that kind of experience in early childhood will do less well later. We know that many of today's children, kept quiet and sedated with interactive screens, are at a disadvantage. What if you made this even more extreme, a brain without ability to interact with anything, trained entirely passively? Even our much more complex brains have trouble creating a good model in these cases.

                    You also need more than one simple brain structure simulation repeated a lot. Our brains have many different parts and structures, not just a single type.

                    However, just like our airplanes do not resemble bird flight as the early dreamers of human flight dreamed of, with flapping wings, I also do not see a need for our technology to fully reproduce the original.

                    We are better off following our own tech path and seeing where it will lead. It will be something else, and that's fine, because anyone can create a new human brain without education and tools, with just some sex, and let it self-assemble.

                    Biology is great and all but also pretty limited, extremely path-dependent. Just look at all the materials we already managed to create that nature would never make. Going off the already-trodden bio-path should be good; we can create a lot of very different things. Those won't be brains like ours that "feel" like ours, if that word will ever even apply, and that's fine and good. Our creations should explore entirely new paths. All these comparisons to the human experience make me sad; let's evaluate our products on their own merit.

            • falloutx 11 hours ago

              In some ways no, because to learn something you have to LEARN it, and then it's in the training data. But humans can do it continuously and sometimes randomly, and also without being prompted.

              • A_D_E_P_T 10 hours ago

                If you're a scientist -- and in many cases if you're an engineer, or a philosopher, or even perhaps a theologian -- your job is quite literally to add to humanity's training data.

                I'd add that fiction is much more complicated. LLMs can clearly write original fiction, even if they are, as yet, not very good at it. There's an idea (often attributed to John Gardner or Leo Tolstoy) that all stories boil down to one of two scenarios:

                > "A stranger comes to town."

                > "A person goes on a journey."

                Christopher Booker wrote that there are seven: https://en.wikipedia.org/wiki/The_Seven_Basic_Plots

                So I'd tentatively expect tomorrow's LLMs to write good fiction along those well-trodden paths. I'm less sanguine about their applications in scientific invention and in producing original music.

            • bigstrat2003 11 hours ago

              Yes, they can.

            • allarm 11 hours ago

              Ever heard of creativity?

          • uejfiweun 14 hours ago

            I mean yeah, but that's why there are far more research avenues these days than just pure LLMs, for instance world models. The thinking is that if LLMs can achieve near-human performance in the language domain then we must be very close to achieving human performance in the "general" domain - that's the main thesis of the current AI financial bubble (see articles like AI 2027). And if that is the case, you still want as much compute as possible, both to accelerate research and to achieve greater performance on other architectures that benefit from scaling.

            • rishabhaiover 13 hours ago

              How does scaling compute not go hand-in-hand with energy generation? To me, scaling one and not the other puts a different set of constraints on overall growth. And the energy industry works at a different pace than these hyperscalers scaling compute.

            • pixl97 13 hours ago

              The other thing here is we know the human brain learns on far fewer samples than LLMs in their current form. If there is any kind of learning breakthrough, then the amount of compute used for learning could explode overnight.

        • samrus 7 hours ago

          Scaling alone won't get us to AGI. We are in the latter half of this AI summer, where the real research has slowed down or even stopped and the MBAs and moguls are doing stupid things.

          For us to take the next step towards AGI, we need an AI winter to hit and the next AI summer to start, the first half of which will produce the advancement we actually need

        • mylifeandtimes 14 hours ago

          Here's hoping you are Chinese, then.

          • thesmtsolver2 14 hours ago

            Why?

          • uejfiweun 14 hours ago

            Well, I tried to specifically frame it in a neutral way, to outline the thinking that pretty much all the major nations / companies currently have on this topic.

    • saalaa 13 hours ago

      I'm a simple man, I just want these companies to pay taxes where they make money.

      • WarOnPrivacy 12 hours ago

        > I'm a simple man, I just want these companies to pay taxes where they make money.

        The folks who bankroll elections work tirelessly to ensure this doesn't happen.

      • int_19h 3 hours ago

        Especially when they make money off the free work of all the people who contributed to those training sets.

        There really needs to be some kind of "commons tax" for that kind of thing.

    • mtrovo 14 hours ago

      Remember the good old days of complaining about Bitcoin taking the energy output of a whole town.

      • mjevans 13 hours ago

        It has never _not_ been time to build all the power plants we can environmentally afford.

        More power enables a higher quality of living and a more advanced civilization. It will be put to use doing something useful, or at the very worst it'll make doing existing useful things less expensive, opening them up to more people who would like those things.

    • bogzz 16 hours ago

      Haters will say Sora wasn't worth it.

      • jsheard 15 hours ago

        Incredible how quickly that moment passed. Four months on it's barely clinging to the App Store top 100, below killer apps such as Gossip Harbor®: Merge & Story.

      • mimischi 15 hours ago

        Ok I'll bite. Was it worth it? What have people who haven't used it missed?

      • askl 14 hours ago

        What is Sora?

        • cogman10 13 hours ago

          One of many AI video generators that are filling up all video social media with rage bait garbage.

        • ge96 14 hours ago

          If you see a video of a cat running away from a police traffic stop

    • jiggawatts 3 hours ago

      Sure, but the end-state of this isn't just chat bots! Bipedal robots are the real target application for this technology, and at least three big players have invested many billions each into base model training. The "GPT 2 moment" for robotics will likely happen later this year, next year at the latest. Then, the company that scales up from there to the equivalent of GPT 3.5 -- the first properly useful model -- can start selling androids by the tens of millions.

      It'll be the next automobile, every well-to-do household will want one eventually. Every hospital, to assist/replace nurses. Every retirement home for the same reason.

      Japan and China alone, with their ageing populations in dire need of nursing, will easily pay for the investment with that one use-case.

    • baq 15 hours ago

      Inflation adjusted?

    • fogzen 14 hours ago

      It's incredibly sad and depressing. We could be building green energy, parks, public transit, education, healthcare.

      • hunterpayne 3 hours ago

        > building green energy

        Fun fact: we have already spent about $10T on renewables and they still provide a very tiny share of global energy. Learn about why before complaining about it in public. While you are at it, perhaps learn why health insurance is so expensive while also increasing the cost of healthcare. In many matters of public policy, lack of money isn't the problem. It's ignorance of what makes good policy that is missing, and that isn't fixed by throwing money at problems.

      • renewiltord 8 hours ago

        We could, but perhaps people want AI data centers more than one more month of healthcare spending. Though it’s a tough thing choosing between 3.5 rail lines between SF and LA and LLMs.

    • jsemrau 7 hours ago

      It's like we are transcending to a new state of the world.

    • jaccola 14 hours ago

      Not really your point, but I think the skills to create these things take much longer to build up than it takes to produce chips and data centres.

      So they couldn't really build any of these projects weekly since the cost of construction materials / design engineers / construction workers would inflate rapidly.

      Worth keeping in mind when people say "we could have built 52 hospitals instead!" or similar. Yes, but not really... since the other constraints would quickly reveal themselves

    • ttoinou 14 hours ago

      So now you understand that you can’t compare things just by how much they cost.

  • lordnacho 21 hours ago

    The real question is whether the boom is, economically, a mistake.

    If AI is here to stay, as a thing that permanently increases productivity, then AI buying up all the electricians and network engineers is a (correct) signal. People will take courses in those things and try to get a piece of the winnings. Same with those memory chips that they are gobbling up, it just tells everyone where to make a living.

    If it's a flash in the pan, and it turns out to be empty promises, then all those people are wasting their time.

    What we really want to ask ourselves is whether our economy is set up to mostly get things right, or whether it is wastefully searching.

    • 112233 21 hours ago

      "If X is here to stay, as a thing that permanently increases productivity" - matches a lot of different X. Maintaining persons health increases productivity. Good education increases productivity. What is playing out now is completely different - it is both irresistible lust for omniscient power provided by this technology ("mirror mirror on the wall, who has recently thought bad things about me?"), and the dread of someone else wielding it.

      Plus, it makes a natural moat against masses of normal (i.e. poor) people, because it requires a spaceship to run. Finally, intelligence can also be controlled by capital the way it was meant to be, joining information, creativity, means of production, communication and such things.

      • mattgreenrocks 21 hours ago

        > Plus, it makes a natural moat against masses of normal (i.e. poor) people, because it requires a spaceship to run. Finally, intelligence can also be controlled by capital the way it was meant to be, joining information, creativity, means of production, communication and such things.

        I'd put intelligence in quotes there, but it doesn't detract from the point.

        It is astounding to me how willfully ignorant people are being about the massive aggregation of power that's going on here. In retrospect, I don't think they're ignorant; they just haven't had to think about it much in the past. But this is a real problem with very real consequences. Sovereignty must occasionally be asserted, or someone will infringe upon it.

        That's exactly what's happening here.

        • fennecbutt 15 hours ago

          >massive aggregation of power that's going on here

          Which has been happening since at least the bad old IBM days, and nobody's done a thing about it?

          I've given up tbh. It's like the apathetic masses want the billionaires to become trillionaires as long as they get their tiktok fix.

          • luqtas 15 hours ago

            > It's like the apathetic masses want the billionaires to become trillionaires as long as they get their tiktok fix.

            It's much worse. A large demographic of Hacker News loves gen AI. These are usually highly educated people showing their true faces despite the plethora of problems this technology generates and the norms it violates.

            • int_19h 3 hours ago

              There's nothing wrong with generative AI as a technology.

              The problem is that it's in the hands of sociopaths. But that is a general problem with our socioeconomic system, not with AI.

          • zenmac 15 hours ago

            >I've given up tbh. It's like the apathetic masses want the billionaires to become trillionaires as long as they get their tiktok fix.

            Especially at the cost of diverting power and water from farmers and other humans who need them. And the benefit of AI seems quite limited, judging from the recent Signal post here on HN.

            • pixl97 13 hours ago

              Water for farmers is its own pile of bullshit. Beef uses a stupid amount of water. Same with almonds. If you're actually worried about feeding people and not just producing an expensive economic product you're not going to make them.

              Same goes for people living in deserts where we have to ship water thousands of miles.

              Give me a break.

              • datsci_est_2015 11 hours ago

                And one of my favorites, alfalfa in Arizona for Saudi Arabian horses.

                Water usage must be taxed according to its use, unfortunately.

              • tormeh 12 hours ago

                Very important. There is more than just 1 bullshit line of business.

      • strken 20 hours ago

        The difference is that we've more or less hit a stable Pareto front in education and healthcare. Gains are small and incremental; if you pour more money into one place and less into another, you generally don't end up much better off, although you can make small but meaningful improvements in select areas. You can push the front forward slightly with new research and innovation, but not very fast or far.

        The current generation of AI is an opportunity for quick gains that go beyond just a few months longer lifespan or a 2% higher average grade. It is an unrealised and maybe unrealistic opportunity, but it's not just greed and lust for power that pushes people to invest, it's hope that this time the next big thing will make a real difference. It's not the same as investing more in schools because it's far less certain but also has a far higher alleged upside.

        • ZoomZoomZoom 15 hours ago

          > The difference is that we've more or less hit a stable Pareto front in education and healthcare.

          Not even close. So many parts of the world need to be pumped with targeted funding infusions ASAP. Forcing higher levels of education and healthcare in the places where they lag is the only viable step towards securing a peaceful and prosperous near future.

          • pixl97 13 hours ago

            Then why didn't that happen before GenAI was a thing?

            I think some people may have to face the fact that money was never going to go there under any circumstances.

            • mattnewton 13 hours ago

              > Then why didn't that happen before GenAI was a thing?

              Because there was no easy way for the people directing capital to those endeavors to make themselves richer.

        • lazyasciiart 15 hours ago

          Pareto is irrelevant, because they are talking about how to use all of this money not currently used in healthcare or education.

          • hunterpayne 2 hours ago

            Also, throwing money at problems doesn't necessarily solve them. Sometimes problems get worse when you throw more money at them. No matter how much money you throw at education, if you don't use phonics to teach kids, they won't be able to read. Guess what we use?

            • lazyasciiart 2 hours ago

              Ok, this is mostly irrelevant - education is not one of those problems that money can’t at least massively improve. And lack of direct phonics instruction will leave behind some kids. And if you use phonics without the rest of learning to read, some of them still won’t manage it. One of the reasons that teaching American kids how to read varies is that somewhere between 30 and 60% of kids will figure it out if you just read to them enough. The others have a wide variety of gaps, ranging from hearing or sight difficulties to short term memory issues to not speaking English. Phonics helps a subset of them, but is not enough by itself - and I don’t know who “we” is, but most American schools do and have always taught phonics. The debate is really over the length of time and level of focus it gets, and whether to make 100% of kids sit through it when maybe half of them don’t need it. I’m sure there are teachers out there who just don’t teach phonics but I haven’t seen them.

        • KellyCriterion 20 hours ago

          > if you pour more money into one place and less into another, you generally don't end up much better off, although you can make small but meaningful improvements in select areas

          "Marginal cost barrier" hit, then?

        • 112233 17 hours ago

          > The difference is that we've more or less hit a stable Pareto front in education and healthcare. Gains are small and incremental;

          You probably mean gains between someone receiving healthcare and education now, as compared to 10 years ago, or maybe you mean the year-to-year average across every person alive.

          You certainly do not mean that a person receiving appropriate healthcare is only 2% better off than one not receiving it, or that an educated person is only 2% better off than an uneducated one?

          Because I find such a notion highly unlikely. So, here you have vast numbers of people you can mine for productivity increases, simply by providing things that already exist and are available in unlimited supply to anyone who can produce money at will. Instead, let's build warehouses and fill them with obsolete tech, power it all up using a tiny Sun and... what exactly?

          This seems like a thinly disguised act of an obsessed person that will stop at nothing to satisfy their fantasies.

      • gom_jabbar 15 hours ago

        > Finally intelligence can also be controlled by capital

        The relationship between capital and AI is a fascinating topic. The contemporary philosopher who has thought most intensely about this is probably Nick Land (who is heavily inspired by Eugen von Böhm-Bawerk and Friedrich Hayek). For Land, intelligence has always been immanent in capitalism and capitalism is actively producing it. As we get closer to the realization of capitalism's telos/attractor (technological singularity), this becomes more and more obvious (intelligible).

    • Archelaos 20 hours ago

      In 2024, global GDP was $111 trillion.[1] Investing 1 or 2 % of that to improve global productivity via AI does not seem exaggerated to me.

      [1] https://data.worldbank.org/indicator/NY.GDP.MKTP.CD

      • mitthrowaway2 15 hours ago

        2% is a lot! There are only fifty things you can invest 2% of GDP in before you occupy the entire economy. But the list of services people need, from food, water, shelter, heating, transportation, education, healthcare, communications, entertainment, mining, materials, construction, research, maintenance, legal services... there's a lot of things on that list. To allocate each one 1% or 2% of the economy may seem small, but pretty quickly you hit 100%.

        • Archelaos 14 hours ago

          Most of what you have mentioned is not investment, but consumption. Investment means using money to make more money in the future. Global investment rates are around 25% of global GDP. Average return on investment is about 10% per year. In other words: using 1% or 2% of GDP counts as a success if it leads to an improvement in GDP of more than 0.1% or 0.2% next year. I think expecting a productivity gain on this scale due to AI is not unrealistic for 2026.
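
          A rough back-of-the-envelope sketch of that break-even, assuming the ~$111T global GDP figure cited above and a ~10% average return (illustrative numbers only):

             awk 'BEGIN {
               gdp    = 111e12   # 2024 global GDP in USD (World Bank figure above)
               share  = 0.02     # 2% of GDP directed at AI
               hurdle = 0.10     # assumed ~10% average annual return on investment
               # break-even: the AI spend should add at least gdp*share*hurdle to GDP per year
               printf "break-even GDP gain: $%.0f per year (%.2f%% of GDP)\n", gdp*share*hurdle, share*hurdle*100
             }'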

          • simoncion 5 hours ago

            > Most of what you have mentioned is not investment, but consumption.

            It's ongoing investment in the infrastructure of civil society. These sorts of investments usually give you indirect returns... which is why it's usually only done by governments.

        • tim333 10 hours ago

          AI is a big deal though.

      • throwaw12 15 hours ago

        I will put it differently,

        Investing 1 or 2% of global GDP to widen the wealth gap by 50% and make the top 1% unbelievably rich, while everyone else is looking for jobs or taking out 50-year mortgages, seems like a very bad idea to me.

        • Archelaos 10 hours ago

          This problem is not specific to AI, but a matter of social policy.

          For example, here in Germany the Gini index, an indicator of equality/inequality, has been oscillating around 29.75 +/- 1.45 since 2011.[1] In other words, the wealth distribution has been more or less stable over the last 15 years, and is less extreme than in the USA, where it was 41.8 in 2023.[2]

          [1] https://de.statista.com/statistik/daten/studie/1184266/

          [2] https://fred.stlouisfed.org/series/SIPOVGINIUSA

        • simianwords 15 hours ago

          It can be both? Both that inequality increases but also prosperity for the lower class? I don’t mind that trade off.

          If someone were to say to you - you can have 10,000 more iPhones to play with but your friends would get 100,000 iPhones, would you reject the deal?

          • majormajor 14 hours ago

            A century ago people in the US started to tax the rich much more heavily than we do now. They didn't believe that increasing inequality was necessary - or even actually that helpful - for improving their real livelihood.

            Don't be shocked if that comes back. (And that was the mild sort of reaction.)

            If you have billions and all the power associated with it, why are you shooting for personal trillions instead of actually, directly improving the day to day for everyone else without even losing your status as an elite, just diminishing it by a little bit of marginal utility? Especially if you read history about when people make that same decision?

          • int_19h 3 hours ago

            > If someone were to say to you - you can have 10,000 more iPhones to play with but your friends would get 100,000 iPhones, would you reject the deal?

            I'd think about how many elections he can buy with those iPhones, for starters.

          • thatcat 15 hours ago

            I don't think that scales to infinite iPhones, since the input materials are finite. If all your friends get 100,000 iPhones and you then need an EV battery, the battery that previously cost 5k iPhones might now cost 20,000 iPhones, so you're down in iPhone terms. On the other hand, if you already had a good battery, then you're up 20k iPhones or so in equity. Also, since everyone has so many iPhones, the net utility drops and they become worth less than their materials, so everyone would have to scrap their iPhones and liquidate them at the value of the recycled metals.

          • BobbyJo 15 hours ago

            It can be, but there are lots of reasons to believe it will not be. Knowledge work was the ladder between lower and upper classes. If that goes away, it doesn't really matter if electricians make 50% more.

            • simianwords 15 hours ago

              I guess I don’t believe knowledge work will completely go away.

              • BobbyJo 13 hours ago

                It's not really a matter of some great shift. Millennials are the most educated generation by a wide margin, yet their wealth by middle age is trailing prior generations. The ladder is being pulled up inch by inch, and I don't see AI doing anything other than speeding up that process at the moment.

          • doodlebugging 13 hours ago

            >Both that inequality increases but also prosperity for the lower class? I don’t mind that trade off.

            This sounds like it is written from the perspective of someone who sees their own prosperity increase dramatically so that they end up on the prosperous side of the worsening inequality gap. The fact that those on the other side of the gap see marginal gains in prosperity makes them feel that it all worked out okay for everyone.

            I think this is greed typical of the current players in the AI/tech economy. You all saw others getting abundant wealth by landing high-paying jobs with tech companies and you want not only to do the same, but to one-up your peers. It's really a shame that so much tech-bro identity revolves around personal wealth, with zero accountability for the tools you are building to set yourselves in control of the lives of those you have chosen either to leave behind or to wield as tools for further wealth creation through alternate-income SaaS subscription streams or other bullshit scams.

            There really is not much difference between tech-bros, prosperity gospel grifters or other religious nuts whose only goal is to be more wealthy today than yesterday. It's created a generation of greedy, selfish narcissists who feel that in order to succeed in their industry, they need to be high-functioning autists so they take the path of self-diagnosis and become, as a group, resistant to peer review since anyone who would challenge their bullshit is doing the same thing and unlikely to want too much light shed on their own shady shit. It is funny to me that many of these tech-bros have no problem admitting their drug experimentation since they need to maintain an aura of enlightenment amongst their peers.

            It's gonna be a really shitty world when the dopeheads run everything. As someone who grew up back in the day when smoking dope was something hidden and paranoia was a survival instinct for those who chose that path I can see lots of problems for society in the pipeline.

          • techblueberry 15 hours ago

            I think you inadvertently stepped in the point — Yes, what the fuck do I need 10,000 iPhones for? Also, part of the problem is which resources end up in abundance. What am I going to do with more compute when housing and land are limited resources?

            Gary’s Economics talks about this but in many cases inequality _is_ the problem. More billionaires means more people investing in limited resources(housing) driving up prices.

            Maybe plebes get more money too, but not enough to spend on the things that matter.

            • simianwords 15 hours ago

              It’s just a proxy for wealth using concrete things.

              If you were given 10,000 dollars but your friends were also given 100,000 dollars as well, would you take the deal?

              Land and housing can get costlier while other things get cheaper, making you overall more prosperous. This is what happened in the USA and most of the world. Would you take this deal?

              • majormajor 14 hours ago

                I wouldn't be able to hang out with them as much (they'd go do a lot of higher-cost things that I couldn't afford anymore).

                I'd have a shittier apartment (they'd drive up the price of the nicer ones, if we're talking about a significant sized group; if it's truly just immediate friends, then instead it's just "they'd all move further away to a nicer area").

                So I'd have some more toys but would have a big loss in quality of my social life. Pass.

                (If you promised me that those cracks wouldn't happen, sure, it would be great for them. But in practice, having seen this before, it's not really realistic to hold the social fabric together when economic inequality increases rapidly and dramatically.)

                • simianwords 13 hours ago

                  No you would have the same house or better. That’s part of the condition.

              • jancsika 13 hours ago

                More to the point, what does research into notions of fairness among primates tell us about the risks of a vast number of participants deciding to take this deal?

                You have to tell us the answer so we can resolve your nickname "simianwords" with regard to Poe's Law.

                • simianwords 3 hours ago

                  haha perhaps we are better than primates.

              • lordnacho 12 hours ago

                I don't know how nobody has mentioned this before:

                The guy with 100k will end up rewriting the rules so that in the next round, he gets 105k and you get 5k.

                And people like you will say "well, I'm still better off"

                In future rounds, you will try to say "oh, I can't lose 5k for you to get 115k" and when you try to vote, you won't be able to vote, because the guy who has been making 23x what you make has spent his money on making sure it's rigged.

              • WarOnPrivacy 12 hours ago

                > If you were given 10,000 dollars but your friends were also given 100,000 dollars as well, would you take the deal?

                This boldly assumes the 10k actually reaches me. Meanwhile 100k payouts endlessly land as expected.

                usa sources: me+kids+decade of hunger level poverty. no medical coverage for decades. homeless retirement still on the table.

              • techblueberry 15 hours ago

                You’re missing the point. It’s not about jealousy, it’s basic economics - supply and demand. No, I would not take the deal if it raised demand for something central to my happiness (housing), driving up the price of something previously affordable and making it unaffordable.

                I would not trade my house for a better iPhone with higher quality YouTube videos, and slightly more fashionable athleisure.

                I don’t care how many yachts Elon Musk has, I care how many governments.

                • simianwords 15 hours ago

                  What if you could buy the same house as before, buy the same iPhone as before and still have more money remaining? But your house cost way way more proportionally.

                  • majormajor 14 hours ago

                    If you want to claim that that's a realistic outcome you should look at how people lived in the 50s or 80s vs today, now that we've driven up income inequality by dramatically lowering top-end tax rates and reduced barriers to rich people buying up more and more real property.

                    What we got is actually: you can't buy the same house as before. You can buy an iPhone that didn't exist then, but your boss can use it to request you do more work after-hours whenever they want. You have less money remaining. You have less free time remaining.

                    • simianwords 13 hours ago

                      Do you have a source for your claim? I have a source that supports what I said - look at the disposable income data from the BLS.

                  • techblueberry 15 hours ago

                    If you’re asking me if I’m an idiot who doesn’t understand basic economics / capitalism, the answer is no. If you’re asking me if I think that in the real world there are negative externalities of inequality in and of itself that make it more complicated than “everyone gets more but billionaires get more more”, then the answer is yes.

        • Anon1096 15 hours ago

          Just being born in the US already makes you a top 10% and very likely top 5-1% in terms of global wealth. The top 1% you're harping about is very likely yourself.

          • WarOnPrivacy 12 hours ago

            > Just being born in the US already makes you a top 10%

            Our family learned how long-term hunger (via poverty) is worse in the US because there was no social support network we could tap into (for resource sharing).

            Families not in crisis don't need a network. Families in crisis have insufficient resources to launch one. They are widely scattered and their days are consumed with trying to scrape up rent (then transpo, then utilities, then food - in that order).

          • majormajor 14 hours ago

            And so many people in the US are already miserable before yet another round of "become more efficient and productive for essentially the same pay or less as before!!"

            So maybe income inequality + disposable material goods is not a good path towards people being happier and better off.

            It's our job to build a system that will work well for ourselves. If there's a point where incentivizing a few to hoard even more resources to themselves starts to break down in terms of overall quality of life, we have a responsibility to each other to change the system.

            Look at how many miserable-ass unhappy toxic asshole billionaires there are. We'll be helping their own mental health too.

            • fatherwavelet 10 hours ago

              It is not really obvious to me that happiness should be part of the social contract.

              Happiness is very slippery even in your own life. It seems absurd to me that you should care about my happiness.

              So much of happiness is the change from the previous state to the present. I am happy right now because 2026 has started off great for me while 2025 was a bad year.

              I would imagine there was never a happier American society than in the years after WW2.

              I imagine some of the happiest human societies were the ones in the years after the Black Plague. No one today, though, gains happiness from the absence of the Black Plague.

              To believe a society can be built around happiness seems completely delusional to me.

          • whstl 15 hours ago

            So what? If that's the case, they clearly mean the 0.0001% or whatever number, which is way worse.

        • cheonn638 15 hours ago

          >Investing 1 or 2% of global GDP to widen the wealth gap by 50%

          What’s your definition of wealth gap?

          Is it how you feel when you see the name of a billionaire?

          • mjamesaustin 13 hours ago

            It's easy to access statistics about wealth and income inequality. It is worse than it has ever been, and continuing to get worse.

            https://www.pewresearch.org/social-trends/2020/01/09/trends-...

          • fennecbutt 15 hours ago

            Yes, the very fact that billionaires exist means our species has failed.

            I do not believe that there is a legitimate billionaire on the planet, in the sense of one who hasn't engaged in stock manipulation, lobbying, insider trading, corrupt deals, monopolistic practices, dark patterns, corporate tax dodging, or personal tax dodging.

            You could for example say that the latter are technically legal and therefore okay, but it's my belief that they're "technically legal/loopholes" because we have reached a point where the rich are so powerful that they bend the laws to their own ends.

            Our species is a bit of a disappointment. People would rather focus on trivial tribal issues than on anything that impacts the majority of the members of our species. We are well and truly animals.

      • catlifeonmars 15 hours ago

        It’s implied you mean that the ROI will be positive. Spending 1-2% of global GDP with negative ROI could be disastrous.

        I think this is where most of the disagreement is. We don’t all agree on the expected ROI of that investment, especially when taking into account the opportunity cost.

    • jleyank 21 hours ago

      They still gotta figure out how their consumers will get the cash to consume. Toss out all the developers, and a largish cohort of well-paid people heads towards the dole.

      • rybosworld 20 hours ago

        Yeah, I don't think this gets enough attention. It still requires a technical person to use these things effectively. Building coherent systems that solve a business problem is an iterative process. I have a hard time seeing how an LLM could climb that mountain on its own.

        I don't think there's a way to solve the issue of: one-shotted apps will increasingly look more convincing, in the same way that the image generation looks more convincing. But when you peel back the curtain, that output isn't quite correct enough to deploy to production. You could try brute-force vibe iterating until it's exactly what you wanted, but that rarely works for anything that isn't a CRUD app.

        Ask any of the image generators to build you a sprite sheet for a 2d character with multiple animation frames. I have never gotten one to do this successfully in one prompt. Sometimes the background will be the checkerboard png transparency layer. Except, the checkers aren't all one color (#000000, #ffffff), instead it's a million variations of off-white and off-black. The legs in walking frames are almost never correct, etc.

        And even if they get close - as soon as you try to iterate on the first output, you enter a game of whack-a-mole. Okay we fixed the background but now the legs don't look right, let's fix those. Okay great legs are fixed but now the faces are different in every frame let's fix those. Oh no fixing the faces broke the legs again, Etc.

        We are in a weird place where companies are shedding the engineers that know how to use these things. And some of those engineers will become solo-devs. As a solo-dev, funds won't be infinite. So it doesn't seem likely that they can jack up the prices on the consumer plans. But if companies keep firing developers, then who will actually steer the agents on the enterprise plans?

        • mc-0 15 hours ago

          > It still requires a technical person to use these things effectively.

          I feel like few people critically think about how technical skill gets acquired in the age of LLMs. Statements like this kind of ignore that those who are the most productive already have experience & technical expertise. It's almost like there is a belief that technical people just grow on trees or that every LLM response somehow imparts knowledge when you use these things.

          I can vibe code things that would take me a large time investment to learn and build. But I don't know how or why anything works. If I get asked to review it to ensure it's accurate, it would take me a considerable amount of time where it would otherwise just be easier for me to actually learn the thing. Feels like those most adamant about being more productive in the age of AI/LLMs don't consider any of the side effects of its use.

          • IsTom 13 hours ago

            That's not something that will affect the next quarter, so for US companies it might as well be something that happens in Narnia.

        • throwaw12 15 hours ago

          > But when you peel back the curtain, that output isn't quite correct enough to deploy to production

          What if we changed current production environments to fit that black box and somehow made it run with 99% availability and good security?

        • KellyCriterion 19 hours ago

          Especially when it comes down to integration with the rest of the business processes & people around these "single apps" :-)

      • mylifeandtimes 14 hours ago

        Why do we need people to consume when we have the government?

        Serious question. As in, we built the last 100 years on "the American consumer", the idea that it would be the people buying everything. There is no reason that needs to or necessarily will continue -- don't get me wrong, I kind of hope it does, but my hopes don't always predict what actually happens.

        What if the next 100 is the government buying everything, and the vast bulk of the people are effectively serfs, who HAVE to stay in line or else they go to debt prison or tax prison where they become slaves? (Yes, the US has a fairly large population of prison laborers who are forced to work for 15-50 cents/hour. The lucky ones can earn as much as $1.50/hour: https://www.prisonpolicy.org/blog/2017/04/10/wages/)

        • smegger001 13 hours ago

          Where will the government get the money to buy anything if the billionaires and their megacorps have it all and spend sufficient amounts to keep the government from taxing them? We have a K-shaped economy where the capital class is extracting all of the value from the working class, who are headed towards subsistence levels of income, while the lower class dies in the ditch.

      • nosianu 12 hours ago

        Like before - debt!

        This prevents the consumers from slacking off and enjoying life; instead they have to continue to work, work, work. They get to consume a little and work much more (after all, they also have to pay interest, and across all the consumer credit the masses take on, that adds up to a lot).

        In this scenario, it does not even matter that many are unable to pay off all that debt. As long as the amount of work that is extracted from them significantly exceeds the amount of consumption allowed to them all is fine.

        The chains that bind used to be metal, but we progressed and became a civilized society. Now it's the financial system and the laws. “The law, in its majestic equality, forbids rich and poor alike to sleep under bridges, to beg in the streets, and to steal their bread.” (Anatole France)

      • oblio 20 hours ago

        At some point rich people stop caring about money and only care about power.

        • willis936 20 hours ago

          It's a fun thought, but you know what we call those people? Poor. The people who light their own money on fire today are ceding power. The two are the same.

          • oblio 14 hours ago

            At the end of the day a medieval lord was poor, but he lived a better life than the peasants.

            • WarOnPrivacy 12 hours ago

              > At the end of the day a medieval lord was poor, but he lived a better life than the peasants.

              As measured in knowledge utilized during basic living: the lives of lords were much less complex than those of modern poor people.

          • catlifeonmars 16 hours ago

            1. Some people can afford to light a lot of their money on fire and still remain rich.

            2. The trick is to burn other people’s money. Which is a lot more akin to what is going on here. Then, at least in the US, if you’re too big to fail, the fed will just give you more cash effectively diminishing everyone else’s buying power.

            • willis936 12 hours ago

              In regards to 2: it's as simple as not letting it be your money being set on fire. Every fiscally responsible individual is making sure they have low exposure to the mag 7.

    • forinti 21 hours ago

      I know that all investments have risk, but this is one risky gamble.

      US$700 billion could build a lot of infrastructure, housing, or manufacturing capacity.

      • tomjen3 20 hours ago

        There is no shortage of money to build housing. There is an abundance of regulatory burdens in places that are desirable to live in.

        It's not due to a lack of money that housing in SF is extremely expensive.

        • ajam1507 20 hours ago

          SF is not the only place where housing is expensive. There are plenty of cities where they could build more housing and they don't because it isn't profitable or because they don't have the workers to build more, not because the government is telling them they can't.

          • loeg 14 hours ago

            It is expensive in those other places for similar reasons as SF -- the government either tells them they can't (through zoning), or makes it very expensive (through regulation, like IZ / "affordable" housing), or limit profitability (rent control), or some combination of the above. All of these reduce the supply of new housing.

          • zeroonetwothree 13 hours ago

            Generally the cities where housing is expensive are exactly the ones where the government is telling people they can't build (or making it very expensive to get approval). Do you have a specific example of a city such as you claim?

          • WillPostForFood 16 hours ago

            Which cities, for example?

      • throwaw12 15 hours ago

        > US$700 billion could build a lot of infrastructure, housing, or manufacturing capacity.

        I am now 100% convinced that the US has the power to build those things, but it will not, because that would mean the lives of ordinary people being elevated even more, and this is not what brutal capitalism wants.

        If it can make the top 1% richer over a 10-year span versus doing good for everyone over 20 years, it will go with the former.

      • unsupp0rted 21 hours ago

        What $700 billion can't do is cure cancers, Parkinson's, etc. We know because we've tried, and that's barely a sliver of what it's cost so far, for middling results.

        Whereas $700 billion in AI might actually do that.

        • wolfram74 21 hours ago

          Your name is well earned! "can't cure cancers" is impressively counterfactual [0] as 5 year survival of cancer diagnosis is up over almost all categories. Despite every cancer being a unique species trying to kill you, we're getting better and better at dealing with them.

          [0]https://www.cancer.org/research/acs-research-news/people-are...

          • XCSme 20 hours ago

            Treating cancer is not the same as curing it. Currently, no doctor would ever tell you you are "cured", just that you are in remission.

            • kmbfjr 7 hours ago

              Cancer is approaching being a managed chronic disease. That isn’t remission.

              • XCSme 7 hours ago

                In my experience, most people with cancer that I know simply oscillate between having life-threatening active cancer/tumors and remission.

                I don't know any case where people have detectable cancer and it's just being managed, I think that's more the exception than the rule.

                For my girlfriend, when she was in her last stages they had to do that (try to slow down/manage the cancer instead of remove it), but that was already palliative care and she died soon after. Also, the only reason they didn't try removing the tumor is because the specific location in the brain (pons) is inoperable.

          • unsupp0rted 21 hours ago

            Yes, we're getting better at treating cancers, but still if a person gets cancer, chances are good the thing they'll die of is cancer. Middling results.

            Because we're not good at curing cancers, we're just good at making people survive better for longer until the cancer gets them. 5 year survival is a lousy metric but it's the best we can manage and measure.

            I'm perfectly happy investing roughly 98% of my savings into the thing that has a solid shot at curing cancers, autoimmune and neurodegenerative diseases. I don't understand why all billionaires aren't doing this.

            • minifridge 20 hours ago

              How will AI cure neurodegenerative diseases and cancer?

              • tim333 9 hours ago

              • unsupp0rted 20 hours ago

                If we knew that we probably wouldn’t need AI to tell us.

                But realistically: perhaps by noticing patterns we’ve failed to notice and by generating likely molecules or pathways to treatment that we hadn’t explored.

                We don’t really know what causes most diseases anyway. Why does the Shingles vaccine seem to defend against dementia? Why does picking your nose a lot seem to increase risk of Alzheimer’s?

                That’s the point of building something smarter than us: it can get to places we can’t get on our own, at least much faster than we could without it.

                • catlifeonmars 15 hours ago

                  I don’t think that lack of intelligence is the bottleneck. It might be in some places, but categorically, across the board, our bottlenecks are much more pragmatic and mundane.

                  Consider another devastating disease: tuberculosis. It’s largely eradicated in the 1st world but is still a major cause of death basically everywhere else. We know how to treat it, lack of knowledge isn’t the bottleneck. I’d say effectively we do not have a cure for TB because we have not made that cure accessible to enough humans.

                  • alex43578 14 hours ago

                    That’s a weird way to frame it. It’s like saying we don’t know how to fly because everyone doesn’t own a personal plane.

                    We have treatments (cures) for TB: antibiotics. Even XDR-TB.

                    What we don’t have is a cure for most types of cancer.

                    • catlifeonmars 13 hours ago

                      Flying is a bad example because airlines are a thing and make flying relatively accessible.

                      I get your point, but I don’t think it really matters. If a cure for most (or all) cancers is known but it’s not accessible to most people then it is effectively nonexistent. E.g it will be like TB.

                      > We have treatments (cures) for TB

                      TB is still one of the top 10 causes of death globally.

                      • alex43578 2 hours ago

                        Things like antibiotics are plenty accessible - 3rd world countries are literally overusing and misusing antibiotics to the point of causing drug resistance in TB. "Effectively we do not have [thing] because we have not made that [thing] accessible to enough humans" is an exercise in goal-post moving.

                        About 15% of people over the age of 15 are illiterate, but it'd be silly to say "effectively we don't have literacy", even in a global context. Depending on the stat, 1 in 10 don't have access to electricity, but electricity has been in 50% of American homes for over 100 years.

                        The reality is that the future is unevenly distributed. AI and more broadly technology as a whole, will only exacerbate that uneven distribution. That's just the reality of progress: we didn't stall electrifying homes in NYC because they didn't get electricity in Papua New Guinea.

                        If AI discovers a cure for cancer, it may be incredibly unevenly distributed. Imagine it's some amp'd-up form of CAR-T, requiring huge resources and expenses, but offering an actual cure for that individual. It'd be absurd to say we couldn't consider cancer cured just because the approach doesn't scale to a $1 pill.

            • beepbooptheory 20 hours ago

              Maybe it should give you pause then, that not everyone else is investing 98% of their savings?

              • unsupp0rted 20 hours ago

                It gives me pause that most people drive cars or are willing to sit in one for more than 20 minutes a week.

                But people accept the status quo and are afraid to take a moment’s look into the face of their own impending injury, senescence and death: that’s how our brains are wired to survive and it used to make sense evolutionarily until about 5 minutes ago.

            • catlifeonmars 15 hours ago

              > I don't understand why all billionaires aren't doing this.

              I know, shocking isn’t it?

            • danaris 19 hours ago

              Ah, yes: "well, we can't cure cancer or autoimmune and neurodegenerative diseases, but I'm willing to invest basically all my money into a thing that's...trained on the things we know how to do already, and isn't actually very good at doing any of them."

              ...Meanwhile, we are developing techniques to yes, cure some kinds of cancer, as in every time they check back it's completely gone, without harming healthy tissue.

              We are developing "anti-vaccines" for autoimmune diseases, that can teach our bodies to stop attacking themselves.

              We are learning where some of the origins of the neurodegenerative diseases are, in ways that makes treating them much more feasible.

              So you're 100% wrong about the things we can't do, and your confidence in what "AI" can do is ludicrously unfounded.

              • unsupp0rted 19 hours ago

                Every doctor and researcher in the world is trained on things we already know how to do.

                I’m not claiming we haven’t made a dent. I’m claiming I’m in roughly as much danger from these things right now as any human ever has been: middling results.

                If we can speed up the cures by even 1%, that’s cumulatively billions of hours of human life saved by the time we’re done.

                • danaris 19 hours ago

                  But what they can do, that AI can't, is try new things in measured, effective, and ethical ways.

                  And that hypothetical "billions of hours of human life saved" has to be measured against the actual damage being done right now.

                  Real damage to economy, environment, politics, social cohesion, and people's lives now

                  vs

                  Maybe, someday, we improve the speed of finding cures for diseases? In an unknown way, at an unknown time, for an unknown cost, and by an unknown amount.

                  Who knows, maybe they'll give everyone a pony while they're at it! It seems just as likely as what you're proposing.

    • TheDong 15 hours ago

      There's one additional question we could have here, which is "is AI here to stay and is it net-positive, or does it have significant negative externalities"

      > What we really want to ask ourselves is whether our economy is set up to mostly get things right, or it is wastefully searching.

      We've so far found two ways in recent memory that our economy massively fails when it comes to externalities.

      Global Warming continues to get worse, and we cannot globally coordinate to stop it when the markets keep saying "no, produce more oil, make more CO2, it makes _our_ stock go up until the planet eventually dies, but our current stock value is more important than the nebulous entire planet's CO2".

      Ads and addiction to gambling games, tiktok, etc also are a negative externality where the company doing the advertising or making the gambling game gains profit, but at the expense of effectively robbing money from those with worse impulse control and gambling problems.

      Even if the market votes that AI will successfully extract enough money to be "here to stay", I think that doesn't necessarily mean the market is getting things right nor that it necessarily increases productivity.

      Gambling doesn't increase productivity, but the market around kalshi and sports betting sure indicates it's on the rise lately.

    • somewhereoutth 21 hours ago

      I suspect a lot of this is due to large amounts of liquidity sloshing around looking for returns. We are still dealing with the consequences of the ZIRP (Zero Interest Rate Policy) and QE (Quantitative Easing) where money to support the economy through the Great Financial Crisis and Covid was largely funneled in to the top, causing the 'everything bubble'. The rich got (a lot) richer, and now have to find something to do with that wealth. The immeasurable returns promised by LLMs (in return for biblical amounts of investment) fits that bill very well.

    • majormajor 15 hours ago

      AI could be here to stay and "chase a career as an electrician helping build datacenters" could also be a mistake. The construction level could plateau or decline without a bubble popping.

      That's why it can't just be a market signal "go become an electrician" when the feedback loop is so slow. It's a social/governmental issue. If you make careers require expensive up-front investment largely shouldered by the individuals, you not only will be slow to react but you'll also end up with scores of people who "correctly" followed the signals right up until the signals went away.

      • throwaway0123_5 13 hours ago

        > you'll also end up with scores of people who "correctly" followed the signals right up until the signals went away.

        I think this is where we're headed, very quickly, and I'm worried about it from a social stability perspective (as well as personal financial security of course). There's probably not a single white-collar job that I'd feel comfortable spending 4+ years training for right now (even assuming I don't have to pay or take out debt for the training). Many people are having skills they spent years building made worthless overnight, without an obvious or realistic pivot available.

        Lots and lots of people who did or will do "all the right things," with no benefit earned from it. Even if hypothetically there is something new you can reskill into every five years, how is that sustainable? If you're young and without children, maybe it is possible. Certainly doesn't sound fun, and I say this as someone who joined tech in part because of how fast-paced it was.

        • zozbot234 12 hours ago

          > Many people are having skills they spent years building made worthless overnight, without an obvious or realistic pivot available.

          I'd like to see real examples of this, beyond trivial ones like low-quality copywriting (i.e. the "slop" before there was slop) that just turns into copyediting. Current AI's are a huge force multiplier for most white-collar skills, including software development.

    • hackable_sand 18 hours ago

      Your comment doesn't say anything

    • HardCodedBias 15 hours ago

      "The real question is whether the boom is, economically, a mistake."

      The answer to this is two part:

      1. Have we seen an increase in capability over the last couple of years? The answer here is clearly yes.

      2. Do we think that this increase will continue? This is unknown. It seems so, but we don't know and these firms are clearly betting that it will.

      1a. Do we think that with existing capability that there is tremendous latent demand? If so the buildout is still rational if progress stops.

    • dfedbeef 14 hours ago

      If

    • mschuster91 15 hours ago

      > People will take courses in those things and try to get a piece of the winnings.

      The problem is boom-bust cycles. Electricians will always be in demand but it takes about 3 years to properly train even a "normal" residential electrician - add easily 2-3 years on top to work on the really nasty stuff aka 50 kV and above.

      No matter what, the growth of AI is too rapid and cannot be sustained. Even if the supposed benefits of AI all come true - the level of growth cannot be upheld because everything else suffers.

      • marcosdumay 15 hours ago

        > it takes about 3 years to properly train even a "normal" residential electrician

        To pass ordinary wire with predefined dimensions in exposed conduits? No way it takes more than a few weeks.

        • chasd00 8 hours ago

          It’s protected by requiring many hours (years) of apprenticeship. These kinds of heavily unionized jobs only reward seniority. Gotta pay your dues buddy!

        • mschuster91 14 hours ago

          I'm talking about proper German training, not the kind of shit that leads to what Cy Porter (the home inspector legend) exposes on Youtube.

          Shoddy wiring can hold up for a looong time in homes because, outside of electric car chargers and baking ovens, nothing draws high current for long, and as long as no device develops a ground fault, even the lack of a GFCI isn't noticeable. But a data center? Even smaller ones routinely rack up megawatts of power here, and large hyperscaler deployments hundreds of megawatts. Sustained, not peak. That puts a lot of stress on everything involved: air conditioning, power, communications.

          And for that to hold up, your neighbor Joe who does all kinds of trades as long as he's getting paid in cash won't cut it.

    • mcphage 20 hours ago

      > What we really want to ask ourselves is whether our economy is set up to mostly get things right, or it is wastefully searching.

      I can’t speak to the economy as a whole, but the tech economy has a long history of bubbles and scams. Some huge successes, too—but gets it wrong more often than it gets it right.

    • thefz 17 hours ago

      > If AI is here to stay, as a thing that permanently increases productivity,

      Thing is, I am still waiting to see where it increases productivity aside from some extremely small niches like speech to text and summarizing some small text very fast.

      • sigseg1v 15 hours ago

        Serious question, but have you not used it to implement anything at your job? Admittedly I was very skeptical but last sprint in 2 days I got 12 pull requests up for review by running 8 agents on my computer in parallel and about 10 more on cloud VMs. The PRs are all double reviewed and QA'd and merged. The ones that don't have PRs are larger refactors, one 40K loc and the other 30k loc and I just need actual time to go through every line myself and self-test appropriately, otherwise it would have been more stuff finished. These are all items tied to money in our backlog. It would have taken me about 5 times as long to close those items out without this tooling. I also would have not had as much time to produce and verify as many unit tests as I did. Is this not increased productivity?

        • thefz 3 hours ago

          So you roll the dice and call yourself a software engineer, basically.

      • cheema33 14 hours ago

        > I am still waiting to see where it increases productivity...

        If you are a software engineer and you are not using AI to help with software development, then you are missing out. Like many other technologies, using AI agents for software dev work takes time to learn and master. You are not likely to get good results if you try it half-heartedly as a skeptic.

        And no, nobody can teach you these skills in a comment on an online forum. This requires trial and error on your part. If well-known devs like Linus Torvalds are saying there is value here, and you are not seeing it, then the issue is not with the tool.

        • thefz 3 hours ago

          These are definitely skills I don't want to have, don't worry.

      • throwaw12 15 hours ago

        Are you a doctor or a farmer?

        If you are a software engineer you are missing out a lot, literally a lot!

        • bopbopbop7 15 hours ago

          What is he missing? Do you have anything quantitative other than an AI marketing blog or an anecdote?

  • mixologic 14 hours ago

    It's caused a massive shortage of interesting content that isn't related to AI.

    • cheschire 8 hours ago

      Remember when AI was closer to just purpose-built neural networks and not always LLMs?

      I miss reading about that kind of “AI”

    • dmix 12 hours ago

      The best part is every single thread on HN now has someone accusing the author of using AI.

    • journal 6 hours ago

      Many users here are not willing to try anything new or give it a chance. Even if I show someone a better alternative, they still won't use it because no one else is using it. People need to be told what to do. Initiative has been beaten out of you all.

  • 1vuio0pswjnm7 13 hours ago

    Alternative to archive.ph, no Javascript, no CAPTCHAs:

       x=www.washingtonpost.com
       {
       # spoof Googlebot's User-Agent and a Googlebot source IP via X-Forwarded-For
       printf 'GET /technology/2026/02/07/ai-spending-economy-shortages/ HTTP/1.1\r\n'
       printf 'Host: '$x'\r\n'
       printf 'User-Agent: Chrome/115.0.5790.171 Mobile Safari/537.36 (compatible ; Googlebot/2.1 ; +http://www.google.com/bot.html)\r\n'
       printf 'X-Forwarded-For: 66.249.66.1\r\n'
       # close after one response so ssl_client does not hang on keep-alive
       printf 'Connection: close\r\n\r\n'
       }|busybox ssl_client -n $x $x > 1.htm
       firefox ./1.htm
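
    For comparison, a rough curl equivalent (a sketch, assuming curl is available; it sends the same spoofed Googlebot User-Agent and X-Forwarded-For headers as above):

       # fetch the article with the spoofed Googlebot headers and save it to 1.htm
       curl -sS 'https://www.washingtonpost.com/technology/2026/02/07/ai-spending-economy-shortages/' \
         -H 'User-Agent: Chrome/115.0.5790.171 Mobile Safari/537.36 (compatible ; Googlebot/2.1 ; +http://www.google.com/bot.html)' \
         -H 'X-Forwarded-For: 66.249.66.1' \
         -o 1.htm
       firefox ./1.htm
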
    • WarOnPrivacy 13 hours ago

      > Alternative to archive.ph, no Javascript, no CAPTCHAs:

      The other TLDs have been kinder to me (no captcha).

      https://archive.md/J8pg5

      https://archive.fo/J8pg5

    • 1vuio0pswjnm7 12 hours ago

      Anonymous middleman that could potentially collect browsing histories, serves CAPTCHAs, requires Javascript, maybe in the crosshairs of authorities, unreliable according to some commenters (YMMV). NB. Nothing about "honeypot", just observations (cf. "accusations")

      Someone recently noticed an apparent DDoS attempt on some blogger's site using the Javascript fetch function

      The site used to include a tracking pixel containing the visitor's IP address

      Also used to ping mail.ru

      Would need to look at the page source again to see what it contains today

      It's a crowd favorite

      People love it

      • 1vuio0pswjnm7 9 hours ago

        Third party anonymous middleman that can observe what site(s) a user wants to read

        Keywords: anonymous, third party

        The websites with the webpages that a user seeks to read, e.g., some page on www.washingtonpost.com, are not third party websites. They are "first party" websites

        Other archives are third parties but are generally not run by anonymous operators that keep shifting between different IP addresses and domain names

        Other archives generally do not serve CAPTCHAs or require Javascript

        Will provide examples if requested

        No Anubis:

           {
           printf 'GET / HTTP/1.0\r\n'
           printf 'Host: www.kernel.org\r\n\r\n'
           }|busybox ssl_client 146.75.109.55
        
        
        NB. Replace 205.1.1.1 with user's IP address, replace cc with country code, replace 123456789 with some 9-digit number

        </script></div></div><img style="position:absolute" width="1" height="1" src="https://205.1.1.1.cc.VSY1.123456789.pixel.archive.md/x.gif"><script type="text/javascript">

        This is from December 2024. May have changed since then

      • gruez 9 hours ago

        >Anonymous middleman that could potentiallly collect browsing histories

        So literally any site? What's the alternative, using something like Bypass Paywalls Clean and allowing it to access your browsing history AND steal your cookies?

        >serves CAPTCHAS

        I don't like it, but it's understandable given the load from AI scrapers. Do you also get upset at kernel.org for putting up Anubis?

        >requires Javascript

        So most sites?

        >The site used to include a tracking pixel containing the visitor's IP address

        ???

        They couldn't get visitor IP through logs?

      • dmix 12 hours ago

        Are you accusing archive.today of being a honeypot for the feds because they use Cloudflare? That's a bit much don't you think?

        • MallocVoidstar 12 hours ago

          Archive.today doesn't use Cloudflare; the admin mimics their captcha page because he hates them. He also used to captcha-loop anyone using Cloudflare's DNS resolver because it doesn't send the IP subnet of clients to upstreams.

          I don't think it's a honeypot, though, it's not like he's learning much about me other than I like not paying for news sites.

  • bm3719 21 hours ago

    This is the trade-off to connectivity and removing frictional barriers (i.e., globalism). This is the economic equivalent of what Nick Land and Spandrell called the "IQ shredder". Spandrell said of Singapore:

      Singapore is an IQ shredder. It is an economically productive metropolis that
      sucks in bright and productive minds with opportunities and amusements at the
      cost of having a demographically unsustainable family unit.
    
    Basically, if you're a productive person, you want to maximize your return. So, you go where the action is. So does every other smart person. Often that place is a tech hub, which is now overflowing with smart guys. Those smart guys build adware (or whatever) and fail to reproduce (combined, these forces "shred" the IQ). Meanwhile every small town is brain-drained. Your hometown's mayor is 105 IQ because he's the smartest guy in town. Things don't work that great, and there's a general stagnation to the place.

    Right now, AI is a "capital shredder". In the past, there were barriers everywhere, and we've worked hard to tear those down. It used to be that the further the distance (physically, but also in other senses, like currencies, language, culture, etc.), the greater the friction to capital flows. The local rich guy would start a business in his town. Now he sends his capital to one of the latest global capital attractors, which have optimized for capital inflow. This mechanism works whether the attractor can efficiently use that capital or not. That resource inflow might be so lucrative that managing inflow is the main thing the attractor does. Right now that's AI, but as long as the present structure continues, this is how the machine of the global economy will work.

    • malfist 20 hours ago

      This is hogwash. It's incel and eugenic reasoning wrapped up all together.

      Not every smart person (or even most) is an engineer; of the ones that are, they don't all move to tech hubs; and of the ones that do, it's not as if none of them can get laid.

      And I'll give you a great reason why it's hogwash: the "brilliant" engineers that can't get laid in Singapore are the same "brilliant" engineers that can't get laid in their home town.

      • internet_points an hour ago

        > can't get laid

        huh, I thought it was talking about how expensive extra bedrooms are in big cities so people choose not to have (as many) kids there

      • dauertewigkeit 15 hours ago

        I agree with you that STEM doesn't hold a monopoly on intelligence.

        > engineers that can't get laid in Singapore are the same "brilliant" engineers that can't get laid in their home town

        Maybe so, but not for the same reasons. Back in their home town, they cannot vibe with anyone because the few who might be compatible have long since left. In a STEM hotspot, they go to an event and meet compatible people, but it's 11 guys for every 3 girls, so unless they are top dog in that room, they aren't going to score.

        • malfist 15 hours ago

          And yet, the population of Singapore isn't 79% male.

          • dauertewigkeit 15 hours ago

            IDK the dating scene in Singapore. I frankly didn't even know that Singapore was considered a tech hub. I was using it as a synonym for a tech hub because that is what I assumed the author was doing.

      • 9dev 20 hours ago

        …not to mention they are completely ignoring the existence of smart girls as well

        • raincole 9 hours ago

          Higher education is strongly associated with lower fertility rates, especially for women, but for men too. So no, the argument doesn't ignore the existence of smart women. Smart women (and men) are just far less likely to reproduce, statistically speaking.

          • 9dev 2 hours ago

            > […] overflowing with smart guys. […] smart guys build […] he's the smartest guy […] local rich guy would start a business […]

            It’s pretty clear this entire crude theory is based on a thinking system that has little place for women in active positions.

            And if your immediate reaction to that is annoyance because it seems like an insignificant detail, maybe reflect on that for a moment.

      • bm3719 20 hours ago

        We can blame the individual for the cost we've outsourced into him. When he collapses under that load, we can attribute it to personal shortcomings. Some people survive, even thrive, in the current environment, after all. We've coalesced a plurality of games into a single one, and in a sense, it works great. We have our smartphones, AI, online shopping, and targeted advertising.

        Notepad now has Copilot built right into it, after all. That wouldn't have happened by now if we took the human psyche as a given and built around it.

      • g8oz 19 hours ago

        >>It's incel and eugenic reasoning wrapped up all together.

        More like French post-structuralism.

      • Forgeties79 20 hours ago

        It’s the same nonsense as people going “Idiocracy is a documentary.” Of course none of them think they’re the idiots.

        • NegativeK 8 hours ago

          I was going to comment the same. Complaining about how there are too many dumb people is a trollish red flag I've learned to disengage from.

          Also, stop blaming your damn users.

    • simianwords 15 hours ago

      Apologies but either I don’t understand your post or it is nonsensical.

      What relevance does AI have to being an IQ shredder if the talent has gone into (productively) developing capable AI?

      If anything, AI completely disproves your notion of IQ shredder because this is an instance of lack of barriers actually hastening progress. Look at all the AI talent. Very few are American or ethnic Americans.

      • hunterpayne 2 hours ago

        > Very few are American or ethnic Americans

        Maybe you should look into why that is. Or, specifically, look at the scores and grades of students applying to grad school in the US versus who is admitted, broken down by country (this isn't a recent thing either). If anything, we are probably behind where we should be because we don't admit the best into the best schools. We admit those that don't need grants (usually because their government pays instead).

      • bm3719 14 hours ago

        AI is an attractor. In general, attractors absent barriers have the potential to act as a resource shredder in the context of an ecosystem where stability was predicated on said barriers being present.

        I called AI a "capital shredder" because I'm asserting that it is one (of many) by comparison to a more even distribution of capital investment. In the non-attractor small town, the capital that would have gone into a self-reinforcing progress cycle is instead redirected into an external context. If you don't care about that and just want more AI, then no worries, because that's what we're optimizing for.

        I was trying to limit my point to something clear with linear reasoning, but AI is indeed also an IQ shredder in both the immediate and transitive sense. For the former, it's an aggregator of talent. From the perspective of the small town (both domestic and foreign), its best human resources have been extracted.

        Relation with production is irrelevant for the purposes of being a shredder, but this system does generally serve for increased production of a sort. That's why we ended up with it. The small town doesn't get its factories, local governance doesn't get good programmers, etc. Those resources are being redirected into getting us to wherever AI is going to go because that's what the global economic machine desires right now. Likewise in the past for NFTs, crypto, cloud, etc.

        To personalize this: I was drawn to an attractor and rode one of those waves myself, and, at least economically, it worked out great for me as it probably has for others here. However, this system isn't a free lunch.

        • simianwords 14 hours ago

          It looks like it’s working as expected then? This is what I would have done with AI if I had complete authority to decide where money flows and what people should do.

          What’s the alternative?

    • wodenokoto 20 hours ago

      What amazes me about this theory is that being the 115 IQ guy in a town where the next guy is 105 isn’t better than being the 115 IQ guy in an office averaging 120.

      Or put more plainly, being a big fish in a small pond is not better than being a small fish.

      • moomoo11 15 hours ago

        Why not just not compare to others because it’s easy to game anything?

        Just look for patterns and then act out of self interest. Nobody is coming to save you.

        I’m no high IQ person, but if I can figure out how to get a STEM job without a STEM degree, make money by getting lucky at a unicorn, invest and sell for profit and invest again (only losers HODL so others can take profits), then there’s really no excuse why someone else can’t.

        And I’m originally from a country that has like 70 IQ nationally or so it is said. So I’m not a genius, maybe the only quality I have that makes me different is I don’t know how to quit until I meet my goal.

        More people should stop crying and be a man. Our ancestors literally survived against nature and each other so we could post here on HN. I don’t mean being able to lift 500lbs like a caveman either.

        Exercise your brain; it gets stronger too. I have a hard time understanding concepts sometimes, but taking steps to break them down and digest them in pieces helps me. It takes a bit longer, but hopefully you have tomorrow.

        • Synthetic7346 14 hours ago

          I can't tell if this is satire

          • nubg 12 hours ago

            MENA slop

            • moomoo11 10 hours ago

              Whatever you need to tell yourself to cope!

              I tried to help, but maybe I should have given you two a tissue box instead.

    • oblio 13 hours ago

      > Singapore is an IQ shredder.

      Heh, I realized about 2 years ago that it's worse.

      Cities are people shredders. Based on the information I've found, cities have lower fertility rates than rural areas and this has been the case ever since they were created.

      I absolutely love cities, but with ever increasing urbanization and unless we make HUGE changes to facilitate people easily having kids in cities (and I'm talking HUGE, stuff like having stay at home parents for the first 6-7 years of their childhood, free access to communal areas that offer all the services required to take care of kids of any age, free education, etc), humanity will probably not be able to sustain a population of more than say, 1 billion people. Probably much fewer.

      Which I guess, could work, but we will be in totally uncharted territory.

      And then AI comes in and things become... very interesting.

      https://asimov.fandom.com/wiki/Solaria

    • dyauspitr 21 hours ago

      What’s the alternative? Keep the smart physically separated so they can never collaborate to make anything paradigm-shifting, and we just plod along with small-town paper mills and marginally better local government?

      • bm3719 20 hours ago

        Within the Landian system, I suspect he'd say the answer is economic "territorialization", the economic equivalent to the mechanism originally defined by Deleuze+Guattari in A Thousand Plateaus based on the territoriality of earlier work.

        It's the process where social, political, or cultural meaning is rooted in some context. It's a state of stability and boundaries. For just the economic, the geographic would likely be the centroid of that, but the other vectors are not irrelevant.

        One could argue that we suffer to the degree we are deterritorialized, because the effects thereof are alienating. So, we need structure that aligns both our economic and psychological needs. What we have is subordination to the machine, which will do what it's designed to: optimize for its own desire, which is machinic production.

        Note that none of this is inherently good/bad. Like anything, a choice has trade-offs. We definitely get more production within the current structure. The cost is borne by the individual, aggregating into the social ills that are now endemic.

        • gom_jabbar 19 hours ago

          Land himself has suggested a very anti-human solution to the problem of "IQ shredders":

          "The most hard-core capitalist response to this [IQ shredders] is to double-down on the antihumanist accelerationism. This genetic burn-rate is obviously unsustainable, so we need to convert the human species into auto-intelligenic robotized capital [a]s fast as possible, before the whole process goes down in flames." [0]

          [0] Nick Land (2014). IQ Shredders in Xenosystems Blog. Retrieved from github.com/cyborg-nomade/reignition

          • bm3719 18 hours ago

            Thanks, been awhile since I read it.

            I think the only solution is territorialization if you want to preserve the human. If you don't care about that (or think that it's not possible anyway), then yes, accelerate.

            • AndrewKemendo 15 hours ago

              Glad there’s other fellow travelers here

        • dyauspitr 16 hours ago

          We’ve had 50 years of genetic engineering and it’s about time we started using it. I wish someone with more central authority, like China, would start doing experiments in genetically altering humans to make superhumans. We have the technology; it’s only ethics holding us back. So what if a few thousand people (preferably volunteers) die in experiments; we should just make sure they’re condemned (like death row or terminally ill) and carry on.

      • WillAdams 20 hours ago

        California had a great mechanism for this in their land grant colleges, which, back before the protests of the Vietnam War, were required to offer the valedictorian of nearby high schools (or the person with the highest GPA who accepted) full tuition and room and board --- then Governor Ronald Reagan shut down this program when the students had the temerity to protest the Vietnam War. It was also grade inflation, meant to keep students above the threshold necessary for a draft deferment, which began the downward spiral of American education.

      • raincole 9 hours ago

        Yeah, more separated would be ideal. Local governments (and churches or other organizations) should have strong incentives to keep talent in their hometowns, or at least their home countries. Imagine how much stronger technologically the EU would be now if their skilled workers hadn't been brain-drained to the US for decades.

      • HPsquared 20 hours ago

        A little less min-maxing, perhaps.

    • sjjsjdk 15 hours ago

      this is genius

  • mandeepj 15 hours ago

    It’s definitely not causing shortages of people, with tens of thousands and more getting laid off every month!

    The AI boom is just like a conference: the new and shiny seems to do wonders, but when you come back to the workplace, none of it seems to work or fit in!

    • erikerikson 15 hours ago

      The article was specific about where the job shortages are: electricians, construction, mathematicians (?) (I wasn't aware there was a lot of dedicated math work in data center design and build). Layoffs have been in software, design, and elsewhere.

    • roysting 15 hours ago

      Yes, because the bigger issue is that the whole American system has been so dominated by the rapacious and parasitic mindset of the blob that may be grouped as the PE/investor class (which I also belong to and have benefited from, even if only secondarily), which has been extracting and monetizing value and cutting real, human, long-term investment in favor of short-term, quick, unsustainable outsourcing and resource-extracting methods for so many decades now that it has left America in a tight spot.

      Unfortunately, it has left the Europeans, and indeed all American vassals, in an even worse position, as the vampiric fake American ruling class devours America like Saturn devouring his son, as famously depicted by Goya. The parasitic American empire is in a bit of a panic and is pulling out all the stops to make everyone else pay for its failures, evil, and detrimental activities. That comes in the form of extracting value from Europe and damaging their economy in order to make them even more dependent vassals, who still think they can rebel like a toddler threatening to run away from home, and dominating the Americas, which are effectively helpless in the face of America's overwhelming position over them.

      The anxiety and panic among the American ruling class is palpable in the clubhouse chatter. The empire is in a bit of a panic to sustain itself, just as much as it is also trying to devour what little value is left among the American pension funds and people to save itself; Saturn devouring his son for fear of the prophecy that his child would overthrow him.

      • dmix 11 hours ago

        A downside of being the world center of finance is that you have way too many well-paid finance guys sitting in rooms trying to squeeze the juice out of everything on the market.

        The opposite end of that is that this AI boom wouldn't have existed in America without the insane amount of capital these guys can put together.

  • wundersam 25 minutes ago

    The railway comparison is apt, but the pace is striking. JPMorgan calculates tech needs 650B in new annual revenue just to break even on ROI. That's an extraordinary bet.

    The shortage effects are real and measurable. But capital doesn't easily substitute — you can't just redirect 700B into infrastructure without immediately hitting bottlenecks in skilled labor and materials.

    I use LLMs daily for coding. They're helpful for scaffolding. But the gap between "useful tool" and "justify restructuring the entire economy" is massive. If this is a bubble, the legacy won't just be stranded GPUs — it'll be years of diverted resources and accelerated power consolidation. The railway boom left railways. What does an AI boom leave if the returns don't materialize?

  • int0x29 an hour ago

    The article is primarily about how wild, unrestrained AI spending is causing problems. For example, it is hard to get an electrician, anything with memory in it is either significantly more expensive or unobtainable, and the building of homes, factories, and hospitals is being depressed, pushing down supply (housing supply being potentially depressed by AI is both serious and alarming). The HN conversation about this article seems instead to be about whether Google, OpenAI, etc. will break even. The degree of disconnect in Silicon Valley between the tech industry and everyone else's problems is alarming.

  • ricardobayes 3 hours ago

    Why does almost every thread have 200+, sometimes 500+ comments now? Back a few months or a year ago, most threads were sitting at 10-20 comments max.

    • consp 3 hours ago

      They stay on the front page longer. I've seen more staying there for hours, up to days.

  • utopiah 15 hours ago

    Good thing that I don't need a :

    - bigger TV, my "old" not even 4K video projector is enough

    - faster phone with more memory or better camera; my current one, with "just" 5G, is enough

    - faster laptop/desktop, I can work on the laptop, game on desktop

    - higher resolution VR headsets (but I'll still get a Steam Frame because it's more free)

    - denser smart watch, I'm not even using the ones I have

    ... so, the situation is bad, yes, and yet I don't really care. The hardware I have is good enough, and in fact, regardless of AI, I've been arguing we reached "peak" IT a few years ago already. Of course I wouldn't mind "better" everything (higher resolution, faster refresh rate, faster CPU/GPU, more memory, more disk, etc). What I'm arguing, though, is that for most "normal" users (please, don't tell me you're a video editor for National Geographic who MUST edit 360 videos in 8K! That's great for you, honestly, love that, but that's NOT a "normal" user!) who bought high-end hardware during the last few years, that hardware already matches most of their needs.

    All that being said, yes, pop that damn bubble, still invest in AI R&D and datacenters, still invest in AI public research for energy, medicine, etc BUT not the LLM/GenAI tulip commercial craze.

    • benjiro 9 hours ago

      You're forgetting a little detail ... While you do not need a lot of new stuff, companies need buyers. A lot of companies work on rather thin margins, and losing potentially 10 to 20% of sales can result in people getting fired, or companies shutting down.

      Remember, it's not just about "Oh, big brand X sells less, they can deal with it". A lot of brands have suppliers who feed that system. Or PC component makers ... heatsinks, fans, cases ... seeing 20% fewer sales because people buy fewer new PCs.

      People do not realize how much is linked in the industry. Smaller GPU card makers are literally saying that they may be forced to leave the industry because of drops in sales and the memory prices making the products too expensive.

      We can live a long time on old hardware but hardware also limits. Hey, the wife's laptop is from 2019, just before Covid (2020 when a lot of people bought new laptops). The battery is barely holding on. Replacement? None (reputable) ... So in a year that laptop is dead.

      How about phones? Same issue ... the battery is the built-in obsolescence maker.

      You see the issue. It goes beyond what most people realize.

      Wait until a recession hits, when the whole AI bubble bursts and cascades down the already weakened industry. Unlike previous bubbles, the hardware being built is so specialized that little of it will hit the normal consumer market. So there will not be a flood of cheap GPUs or memory being dumped on the market.

      • kykat 8 hours ago

        Yeah, some hardware vendors that sell things like PC cases or coolers have definitely noticed that people are building way fewer PCs

  • baby-yoda 12 hours ago

    The token maximizer is a thought experiment illustrating AI alignment risks. Imagine an AI given the simple goal of maximizing token production. If it became sufficiently powerful, it might convert all available resources — including humans and the Earth — into tokens or token-producing infrastructure, not out of malice, but because its objective function values nothing else. The scenario shows how even a trivial goal, paired with enough intelligence, can lead to catastrophic outcomes if the AI's objectives aren't carefully aligned with human values.

    *written by AI, of course

  • atleastoptimal 3 hours ago

    I feel like I'm caught in between two schizophrenically myopic perspectives on AI.

    One being:

    >Generative AI is a product of VC-funding enabled hype, enormous subsidies and fraudulent results. No AI code "really" works or contributes to productivity, and soon the bubble will burst, returning Real Software Engineers to their former peerless ascendancy.

    And the other perspective:

    >The AI boom will be the last chance to make money, after which point your socioeconomic status circa 2028 will be the permanent station of all your progeny, who will enjoy a heavenly post-scarcity experience with luxury amenities scaled by your PageRank equivalent of social connections to employees at leading AI labs.

  • drnick1 15 hours ago

    > Smartphones are expected to get pricier for potentially years to come.

    "Smart" phones have ceased to be smart years ago. They are now instruments of mass surveillance, and the tech industry has convinced people that 1) you need one not to be a social outcast, 2) you need to upgrade it every year, 3) not having root access is for your own good.

    I'll stick to my Pixel 9a running Graphene for the foreseeable future.

    • goalieca 15 hours ago

      Well, the article touched on how smaller phone makers and middle-tier startups are being squeezed out. Big tech and their surveillance economy are only going to tighten their grip now.

  • exabrial 9 hours ago

    This is OK; let macroeconomics happen and, instead, enforce antitrust laws. The effect in 2-3 years will be a wider base of suppliers, cheaper goods, and solid supply chains.

  • coldtea 9 hours ago

    I wouldn't call inflating a bubble a "boom".

  • hedayet 13 hours ago

    A bubble doesn’t necessarily imply the underlying technology is useless.

    It implies that expectations and capital allocation have significantly outpaced realistic returns, leading to painful corrections for bulls.

    And with AI, a classic bubble signal is emerging: widespread exit-timing instead of deep, long-term conviction.

    • an0malous 9 hours ago

      The greatest critics of the AI boom, like Gary Marcus and Michael Burry, aren’t even saying it’s useless; that’s a strawman no one is arguing for.

    • bdangubic 9 hours ago

      > widespread exit-timing instead of deep, long-term conviction.

      $500+bn in one year of capex from the largest and most profitable companies ever known to mankind seems like deep, long-term conviction, no? You and I may not have conviction, but the largest and most profitable companies on earth, which have been carrying most of the global economy, do.

  • lvl155 15 hours ago

    Sign me up to be an electrician. The line for that is pretty long and gatekept.

    • ares623 14 hours ago

      You and 100,000 other software engineers will make that line pretty short in no time I bet!

      It's amazing how South Park has better economic sense than HN (or maybe not actually)

  • dehrmann 14 hours ago

    Does anyone know the high-level breakdown of where the money's going and how long that part (energy production, energy transmission, network, datacenter, servers/GPUs) lasts?

  • loeg 15 hours ago

    > Good luck finding an electrician

    My local (urban) residential electricians aren't even busy -- they are booking less than a week out. By contrast, just last year they were booking six weeks out. The fall in EV infrastructure demand due to eliminated incentives might be impacting them more than additional data center demand.

  • alecco 13 hours ago

    What a low quality article with zero new information or insights. WaPo is dead.

  • themafia 13 hours ago

    AI "boom?"

    Where's the "boom?" Doesn't that imply a bunch of people are getting rich off of new business?

    Where are those?

  • geetee 10 hours ago

    All this disruption to every facet of society. For what? So you can roleplay as the next billionaire startup founder with your weekend project? All while the actual tech billionaires have a giant dick measuring contest.

  • bix6 21 hours ago

    RIP Washington Post.

  • macguyv3r 14 hours ago

    Good.

    Gen pop can diversify its skillset, become more independently self sufficient as a result, rely on/require less money overall, and realize they don't really need to listen to rich tech CEOs which will implode their value politically.

    SaaS jobs were about little more than agency control and now they're losing that control.

  • croes 14 hours ago

    If AI can do what it promises, most of its big customers become useless, but who else could pay enough to make them profitable?

  • kkfx 18 hours ago

    Is there really demand for the ML boom, or is it just a way to make computers more and more expensive to push people towards mobile? Because looking around, that's the effect I'm seeing, regardless of the causes.

    I see a future where most people will buy tablets to save money and the desktop will be for only a few, a very few, just when self-hosting is becoming trendy and people are saying "it's time for GNU/Linux to take Windows' place"...

  • stego-tech 20 hours ago

    This is another facet of the fierce opposition to AI by a swath of the population: it’s quite literally destroying the last bit of enjoyment we could wring from existence in the form of hobbies funded through normal employment.

    Think of the PC gamers, who first dealt with COVID supply shocks, followed by crypto making GPUs scarce and untenable, then GPU makers raising prices and narrowing inventory to only the highest-end SKUs, only to abandon them entirely for AI - which then also consumed their RAM and SSDs. A hobby that used to be enjoyed by folks on even a modest budget is now a theft risk given the insane resale prices of parts on the second-hand market due to scarcity.

    And that extends to others as well. The swaths of folks who made freelance or commission artistry work through Patreon and conventions and the like are suddenly struggling as customers and companies spew out AI slop using their work and without compensation. Tech workers, previously the wealthy patrons of artisans and communities, are now being laid off en masse for AI CapEx buildouts and share pumps as investors get cold feet about what these systems are actually doing to the economy at large (definite bad, questionable good, uncertain futures).

    Late stage capitalism’s sole respite was consumerism, and we can’t even do that anymore thanks to AI gobbling up all the resources and capital. It’s no wonder people are pissed at AI boosters trying to say this is a miracle technology that’ll lift everyone up: it’s already kicking people down, and nobody actually wants to admit or address that, lest protecting humans disrupt their investments.

    • 9dev 20 hours ago

      I think this started a lot earlier, actually. A few generations back, many people played an instrument, or at least could sing. It didn't matter that none of them was a Mozart, because they didn't have to be. For making music or singing together in a family or a friend group, it was wholly sufficient to be just good, not necessarily great.

      But when everyone has access to recordings of the world's best musicians at all times, why listen to uncle Harry's shoddy guitar play? Why sing Christmas songs together when you can put on the Sinatra Christmas jazz playlist on Spotify?

      • stego-tech 20 hours ago

        That’s definitely part of it as well, this sort of general distillation into a smaller and smaller pool of content or objects or goods that cost ever more money.

        Like how most of the royalties Spotify pays out are for older catalogue stuff from “classic” artists, rather than new bands. Or how historical libraries of movies and films are constantly either up for grabs (for prestige) or being pushed off platforms due to their older/more costly royalty agreements.

        With AI though, it’s the actual, tangible consumption of physical goods being threatened, which many companies involved in AI may argue is exactly the point: that everyone should be renters rather than consumers, and by making ownership expensive through cost and scarcity alike, you naturally drive more folks towards subscriptions for what used to be commodities (music, movies, games, compute, vehicles, creativity tools, TCGs, you name it).

        It’s damn depressing.

      • ThrowawayR2 19 hours ago

        "Comparison is the thief of joy" as they say. Some dude has the world's highest score in Pac-Man in the Guinness Book Of World Records. It doesn't mean that I can't play Pac-Man to beat my own personal high score and enjoy the process because the game is fun in it's own right.

        • 9dev 16 hours ago

          That's sure true in theory, but given the prevalence of status symbols, many people thrive on comparing themselves to others. I'd argue society was better off when the only people you could reasonably compare yourself to were the three neighbours down the street (out of which only one would be into Pac-Man), not the world's ten thousand best players showing off only their best streaks on your Instagram feed all day.

      • KellyCriterion 19 hours ago

        THIS! Instrument-playing capability of a social environment: today, I know of only one person who plays the piano regularly, at his club. When I was young, you had some basic instrument introduction in music classes at school - I do not know if these still exist today.

        Regarding singing - I do not know a single person who can "somehow" sing at least a little bit.

        Society is losing these capacities.

      • wiether 15 hours ago

        > why listen to uncle Harry's shoddy guitar play?

        Uncle Harry is not playing guitar: https://www.youtube.com/watch?v=VXzz8o1m5bM

    • zozbot234 12 hours ago

      This is all temporary scarcity. GPUs becoming scarce and expensive today is exactly what we want to make future GPUs (and other electronics) cheap and abundant tomorrow. This is what happens when any capital intensive industry runs into a capacity ceiling it needs to push through.

    • simianwords 15 hours ago

      Apologies for being glib but I never thought I’d see a sincere “think of the gamers” comment.

      Your whole post is a bit vague and naive. If people enjoy real art more than AI art, then the market will decide that. If they don't, then we should not be making people enjoy what they don't.

      • vaylian 14 hours ago

        The market might not be able to tell the difference. It takes effort to count fingers and toes in art. Part of the problem is also that so many companies are doing it now that it doesn't seem effective to call people or companies out for the use of AI slop.

        A big part of the problem is also that AI art is usually not labelled as such. The market cannot make an informed decision.

        • simianwords 14 hours ago

          If people can’t tell the difference, perhaps it doesn’t matter

          • socialcommenter 11 hours ago

            We're discussing an article about the human externalities of AI progress. It absolutely matters in this context.

      • stego-tech 13 hours ago

        The free market is not some universal force of balance. It is a system of systems that is routinely influenced, disrupted, damaged, manipulated, and controlled overwhelmingly by a small cadre of wealth and asset-owners with disproportionate influence after a century of government policies against market intervention or regulation.

        C'mon, be better than some "lol free market" quip.

    • WillPostForFood 15 hours ago

      > Think of the PC gamers

      The battlecry of the new revolution?

      • stego-tech 13 hours ago

        I mean, what was once an accessible hobby that taught folks how computers worked to a degree is now an RGB-lit target for thieves who know they can flip that memory and GPU for a grand or so pretty easily.

        That's a pretty big turnabout that could get some more folks thinking about and discussing the impacts of AI on non-AI systems or markets.

    • dehrmann 13 hours ago

      > PC gamers

      There's a pretty deep back catalog of PC games that will run on integrated GPUs.

      > The swaths of folks who made freelance or commission artistry...

      Those are people turning their hobby into a side hustle. If it's a hustle they depend on, this sucks. If it's actually a hobby, meh. You're drawing for you. Who cares if AI can also do it.

      • stego-tech 13 hours ago

        The fact you're nitpicking specific details instead of actually absorbing and discussing the core argument (that AI is having negative impacts on multiple systems, sectors, and markets beyond narrow verticals like programming, and that this is something worth acknowledging and discussing) really reveals that you're not here to discuss things seriously and are just looking to feel right about your personal positions.

        Be better.

  • lo_zamoyski 10 hours ago

    Give it a couple years. Pursuit of trades among Gen Z has gone up 1500%.

    The big loser is the modern university.

  • techblueberry 15 hours ago

    Bezos must have forgotten to censor this post before it was published.

  • dev1ycan 20 hours ago

    I don't think the companies doing the buying realize how much animosity they're creating with literally everyone, until it explodes in their faces.

    • deadbabe 8 hours ago

      These companies play a long game. Eventually generations of people just grow up with this being the new normal, no reason to be angry.

      The people who are angry are the ones who had their cheese moved.

  • mystraline 16 hours ago

    The current LLMs are not constantly learning. They're static models that require megatons of coal to retrain.

    Now if the LLMs could modify their own nets, and improve themselves, then that would be immensely valuable for the world.

    But as of now, it's a billionaire's wet dream to threaten all workers as a way to replace labor.

    • reducesuffering 15 hours ago

      Think bigger; think of the entire system outside of just the single LLM: the interplay of capital, human engineering, and continual improvement of LLMs. The first gen of LLMs used 100% human coding. The previous gen was ~50% human coding. The current gen is ~10% human coding (practically all OpenAI/Anthropic engineers admit they're entirely using Claude Code / Codex for code). What happens when we're at 1% human coding, then 0%? The recursive self-improvement is happening; it's just indirect for now.

      • mystraline 15 hours ago

        And if everybody could thrive, then I say go for it.

        But thats nowhere near the system we have. Instead, this is serving to take all intellectual and artistic labor, and lower everyone into the 'ditch digger and garbage man' class, aka the low wage class.

        LLMs were never about building upwards. It's about corporate plundering from the folks who get paid well - the developers, the artists, the engineers, you name it. And it's a way to devalue everyone with the pretense that they can be replaced by the biggest plagiarism machine ever created.

        https://www.reddit.com/r/economicCollapse/comments/1hspiym/t...

        is correct in their take. Why else would we see a circular economy of shoving AI everywhere? Because it serves to eliminate wages, paid to humans.

        This stratification, cementing social classes nearly permanently, is a way to have the human race die off. But hey, shareholder value was at an all-time high for a while.

    • simianwords 15 hours ago

      How would you have done it differently? It’s clear that even if this builds up more billionaire wealth, it still benefits everyone. Just like any other technology?

      So would you rather stop billionaires from doing it?

      • bigstrat2003 11 hours ago

        > It’s clear that even if this builds up more billionaire wealth, it still benefits everyone.

        That is by no means clear. I have yet to see any benefit from AI. I have no doubt in my mind that I'm not alone. So how is "everyone" benefiting from this trend?

    • vixen99 13 hours ago

      Is this actually true? Anyone care to comment on the claim (from some quarters) that offering free access to ChatGPT allows OpenAI to 'collect a gold mine of real-world interaction data to improve the underlying language model. Every conversation users have with ChatGPT – every question asked and every task requested – provides valuable training data for refining' the model.

      If false, I'm thinking - there's me, thinking I'm doing my bit to help . . .

      • mystraline 13 hours ago

        Corporate LLMs also are going to be the absolute biggest rug-pull.

        The billionaire companies responsible for this claim they can be used for intellectual and artistic labor. Sure, they're the biggest piracy and plagiarism engines.

        Once people start losing the abilities that LLMs were trained on, the next phase begins: ratcheting up prices to replace what they would have paid to humans.

        It will be the biggest wealth transfer we will have ever seen.

        This whole socio-economic equation would be different if we all actually gained. Rising tides raise all ships, but sinking ships get gobbled up faster and faster.

    • throwaw12 15 hours ago

      > if the LLMs could modify their own nets ... then that would be immensely valuable for the world.

      Not sure :)

      I expect different things, don't think Wall Street allows good things to happen to ordinary people

  • cyanydeez 11 hours ago

    I don't think that's the AI boom. It's the de rigueur Republican policies.

  • shmerl 5 hours ago

    Shortages and price hikes. So much "benefit to society".

  • luxuryballs 13 hours ago

    So where is the electrician? Busy at an AI data center being built?

  • mschuster91 15 hours ago

    It's time that societies act against this cancer. When everything else suffers because the cancer is stealing all sorts of resources, a human would go to a doctor and have the cancer killed chemically or excised.

  • wnc3141 3 hours ago

    We're disappointed that WaPo is flailing. Meanwhile, we all post workarounds for the paywall.

  • vaxman 15 hours ago

    Almost positive ol' Bessent has by now warned Trump that there is an issue with the AI queens breaking 1930s securities reform laws. Last week's stock market and crypto moves have all but sealed the fate of the Bozo No Bozos. They will take down the people involved with the funky purchase orders quietly, except inside of the closed-door meetings where they get handled. https://youtu.be/y_z9W_N5Drg

  • jongjong 10 hours ago

    That's what you get when you have a centrally planned economy.

    This is just like communism. The software industry has been like this for over a decade and the effect started spreading to other industries. I've been warning of this for years.

    It has been a very painful decade for some. At least now millions of others are starting to feel the pain. More broadly distributed pain creates incentives for change which did not exist before. Pain is hope. Being on the frontline has been an excruciating and isolating experience.