948 comments

  • dang 16 hours ago

    We changed the URL from https://openai.com/index/announcing-the-stargate-project/ to a third-party report. Readers may want to read both. If there's a better URL, we can change it again.

  • mppm 9 hours ago

    Apart from my general queasiness about the whole AGI scaling business and the power concentration that comes with it, these are the exact four people/entities that I would not want to be at the tip of said power concentration.

    • ActionHank 2 minutes ago

      By the time this project is done it will have been dead for 2 years.

      Too many greedy mouths. Too many corporations. Too little oversight. Too broad an objective. Technology is moving too quickly for them to even guess at what to aim for.

    • mattlutze 2 hours ago

      Ellison should be nowhere near this:

      https://arstechnica.com/information-technology/2024/09/omnip...

      The man has the moral system of a private prison and the money to build one.

      • thelastgallon 19 minutes ago

        <quote> "Citizens will be on their best behavior because we are constantly recording and reporting everything that's going on," Ellison said, describing what he sees as the benefits from automated oversight from AI and automated alerts for when crime takes place. "We're going to have supervision," he continued. "Every police officer is going to be supervised at all times, and if there's a problem, AI will report the problem and report it to the appropriate person." </quote>

        What is far more important is to ignore all that nonsense and focus on who makes money. It will be Ellison and his buddies making tens of billions of dollars a year selling 'solutions' to local governments, all paid for by your property taxes. This also enables an ecosystem of theft in which others benefit even more: the nexus of private prisons, kids-for-cash judges (or judges invested in prison stock), DEA/police unions, and small rural towns inflating their prison populations (because inmates get added to the total population, and funds get allocated accordingly).

        More importantly, this is extremely attractive to police, who can steal billions through civil forfeiture: they'd have access to anyone who makes a bank withdrawal or transacts in cash, all displayed in real-time feeds, ready for grabbing!

        • whatshisface 8 minutes ago

          Money? If they take it I can get more. The government is trying to take our freedom, permanently.

      • spacechild1 2 hours ago

        > "Citizens will be on their best behavior because we are constantly recording and reporting everything that's going on," Ellison said, describing what he sees as the benefits from automated oversight from AI and automated alerts for when crime takes place.

        Wow! It is genuinely frightening that these people should be in control of our future!

        • idiotsecant 5 minutes ago

          Literal 'new world order' stuff here. Alex Jones and crew got so excited that their guy was in the driver's seat that they didn't notice the actual illuminati lizard people space lasers being deployed.

      • pj_mukh 42 minutes ago

        I don't think we'll ever have a zero-crime society, nor should we aim for one. But being left to the vagaries of police (and union) politics, culture, and the complications of city budgets is clearly broken.

        Example: cities are being presented a false choice between accepting deadly high-speed chases and zero criminal accountability [1], which in a world of drones seems silly [2].

        I don't want the police to have unfettered access to surveil any and all citizens but putting camera access behind a court warrant issued by a civilian elected judge doesn't feel that dystopian to me.

        Is that what Ellison was alluding to? I have no idea, but we are no longer in a world where we should disregard this prima facie.

        [1]: https://www.ktvu.com/news/controversial-oakland-police-pursu...

        [2]: https://www.cbsnews.com/sanfrancisco/news/san-francisco-poli...

      • lenerdenator an hour ago

        We keep saying people like him shouldn't be involved in certain ventures, and yet, they still are. More than ever, actually.

      • aswanson 30 minutes ago

        2025 is shaping up to be When the Villains Win year.

      • siva7 2 hours ago

        > "Citizens will be on their best behavior because we are constantly recording and reporting everything that's going on,"

        Let's be honest. He isn't wrong. I'd rather live in a society with zero crime than what we have now.

        • bayindirh an hour ago

          Sorry to break it to you, but oppressing people with cameras to prevent crime will only push the crime to where the cameras aren't.

          This makes preventing crime, and protecting people from its effects, extremely difficult.

        • mattlutze an hour ago

          There are a few countries that have tried to implement this, and I want to live in none of them.

          The US will fare no better if it walks down this path, and honestly will likely fare worse for its cultural obsession with individualism over community.

        • ajmurmann 2 hours ago

          Yes, we have historically low crime. It's unbearable.

          There are a number of countries that might give you a panopticon state if you want one.

          • vtashkov 14 minutes ago

            Yeah, historically low crime because a lot of the crime is not considered crime anymore. Why are thousands of stores closing in California?

            • RajT88 3 minutes ago

              Well and good as a talking point, but violent crime is still illegal and way down.

        • noisy_boy 2 hours ago

          Just be prepared to be never daring to complain; a zero crime society isn't without its faults.

        • YinglingHeavy an hour ago

          You stop abuse in this country, particularly of children, and you start having zero violent crime a decade later.

        • wadim an hour ago

          If you're lucky, you might get your chance to live in Thiel's and Ellison's techbro utopia. Make sure to tell us how great it is to be subjected to people with no accountability, but all of the power over every aspect of your life.

        • javcasas 26 minutes ago

          So it's like having a policeman on every street corner, except the policeman's bias is set by these four oligarchs.

          Welcome to... take your pick of the techno-dystopias in literature.

    • A4ET8a8uTh0_v2 5 hours ago

      Ellison alone brings the unwelcome feeling of having Oracle craziness forced down our collective throats, but I share your concern about the unholy alliance being assembled in front of us.

      • DebtDeflation 3 hours ago

        My immediate reaction to the announcement was one of these is not like the others. OpenAI, a couple of big investment funds, Microsoft, Nvidia, and...............Oracle?

        • rTX5CMRXIfFG 20 minutes ago

          Oracle has a lot of valuable classified information about the state and its enemies due to its business.

        • Octoth0rpe 2 hours ago

          Oracle makes perfect sense in that they are 1) a massive datacenter company, and 2) sell a variety of saas products to enterprises, which is a major target market for AI.

          • mrbungie an hour ago

            Oracle has 2-3% market share as a Cloud Provider.

            MSFT or even Google (AWS is not as mature in that space imho) made perfect sense, Oracle doesn't.

            Elon and Larry are good friends; I would guess that has something to do with this development.

            • Octoth0rpe 42 minutes ago

              > Oracle has 2-3% market share as a Cloud Provider.

              And the market leader is what, 30%? About one order of magnitude. That's not such a huge difference, and I suspect Oracle's share is disproportionately large in the enterprise space (which is where a lot of AI services are targeted), whereas AWS hosts a _ton_ of non-enterprise things.

              In any case, 2-3% is big enough that this kind of investment is 1) financially possible and 2) desirable as a way to grow into the #2 or #3 spot.

        • A4ET8a8uTh0_v2 3 hours ago

          Sadly, it is not that unexpected given some of his recent interviews[1]. Any other day, I would agree it is a surprise.

          [1] https://arstechnica.com/information-technology/2024/09/omnip...

        • freehorse 3 hours ago

          There is a reason that these last few weeks everybody and their grandma is simping for Trump: nobody wants to be on his bad side right now. Moreover, we hear here and there that Trump "keeps his promises". A lot of those promises we do not know about, and we may never know. These people did not spend money supporting his campaign for nothing. In other places and eras this would have been called corruption; now it is called "keeping his promises".

          • lupire 2 hours ago

            Trump is one of the most famous people in the world for not keeping his promises to pay debts. But there is money to be made temporarily when he is running a caper, as long as you can get your hand in the pot before he steals it.

          • fbfactchecker 32 minutes ago

            And you, are you simping for the Obidens of this world?

            Corruption is as old as mankind; I don't know why it's being pointed out so prominently. Just look at that Xi Jinping/Biden photo from the National Archives.

            • idiotsecant 2 minutes ago

              If your knee-jerk response to any political discussion even remotely critical of 'your guy' is to snap into whataboutism instead of participating in the conversation, you might need an outrage-pornography detox for a while.

            • freehorse 19 minutes ago

              > And you, are you simping for the Obidens of this world?

              Did I?

              > Corruption is as old as mankind

              Yeah, but seldom celebrated or boasted about.

          • miki123211 2 hours ago

            > There is a reason that these last few weeks everybody and their grandma is simping for Trump: nobody wants to be on his bad side

            It's worth keeping in mind how extremely unfriendly to tech the last admin was. At this point, it's basically proven in court that emails of the form "please deboost person x or else" were sent, and there's probably plenty more we don't know about.

            Combine that with the troubles in Europe, which Biden's administration was extremely unwilling to help with, and the obstacles thrown in the way of major energy buildouts, which are needed for AI... one would have to be stupid to be a tech CEO and not simp for Trump.

            Tech has been extremely Democratic for many years. The Democrats have utterly alienated tech, and now they reap the consequences.

            • danieldk an hour ago

              > the troubles in Europe

              Nice euphemism for giving people autonomy over their data and privacy.

              Most of these companies are so large that they cannot really fail anymore. At this point it has very little to do with protecting themselves and more with making themselves more powerful than governments. JD Vance has said that the US could drop support for NATO if Europe tries to regulate X [1]. Oligarchs have fully infiltrated the US government and are trying to do the same to other countries.

              I disagree with the grandparent. They don't support Trump because they do not want to be on his bad side (well, at least not only that); they support Trump because they see the opportunity to suppress regulation worldwide and become more powerful than governments.

              We just keep making excuses (fiduciary duties, he just doesn't know how to wave his arm because he's an autist [2]). Why not just call it what it is?

              [1] https://www.independent.co.uk/news/world/americas/us-politic...

              [2] Which is pretty offensive to people on the spectrum.

              • freehorse an hour ago

                I do agree that a big part of why they support Trump is anti-regulation. But it is also a fact that Trump is one of them: a businessman, not a politician. With Trump they can now discuss more business and less policy. There is a certain amount of deal-making going on right now that seems not at all transparent. And the amount of public simping is really unusual compared to what normally happens: everybody praising Trump even before he took office, TikTok "coming out" as whatever, etc.

                Oligarchs want less regulation, but they also want these beefy government contracts. They want a weaker government to regulate them and a stronger government to protect them and bully other countries. The way I see it, what they actually want is control of the government, and with Trump they have it (more than before).

            • mschuster91 2 hours ago

              > Tech has been extremely Democratic for many years. The Democrats have utterly alienated tech, and now they reap the consequences.

              Well, on the other side it can be said that Big Tech wasn't really on the side of democracy (note: democracy, not the Democrat Party) itself, and it hasn't been for years - at the very least ever since Cambridge Analytica was discovered. The "big tech" sector has only looked at profit margins, clicks, eyeballs and other KPIs while completely neglecting its own responsibility towards its host, and it got treated as the danger it posed by the Biden administration and Europe alike.

              As for the cryptocoin world that has also been campaigning for the 45th: they are an even worse cancer on the world. Nothing but a gigantic waste of resources (remember the prices of GPUs, HDDs and RAM going through the roof, coal power plants being reactivated?), rug pulls and other scams.

              The current shift towards the far-right is just the final masks falling off. Tech would rather (openly) support the 45th than learn from the chaos it has brought upon the world and make at least a paper effort to be held accountable.

              • cosmic_cheese an hour ago

                Yes, big tech was the kid caught in the corner cleaning out the cookie jar, who threw a tantrum when one parent moved the jar out of reach as punishment, in an effort to help the industry learn self-control. Now the other parent has come home and has not only returned the cookie jar to the kid but pledged to bring them packs of cookies by the shipping container to gorge on, in exchange for favors.

            • unethical_ban 16 minutes ago

              We have more energy and are pumping more domestic oil than ever. We are a major exporter of LNG. Trump just killed EV subsidies, and electric charging network funding.

              What are you talking about regarding Europe? Holding tech companies accountable for meddling in domestic politics? Not allowing carte blanche with user data?

              I understand (though do not like) large corps tiptoeing around Trump in order to manipulate him; it is due to fear, not due to Trump having respectable values.

    • belter 8 hours ago

      This is a Military project. Have no doubts about it.

      • Gud 7 hours ago

        This is a money making scheme.

        • jwr 4 hours ago

          Mostly benefiting the fossil fuel industry. How are they going to power this? Gas is the only option that can be implemented within a few years. And this is going to need a lot of power.

          Who cares about the planet, anyway.

          • noisy_boy an hour ago

            There will probably be a clause mandating that a given percentage of the power consumed come from coal, guaranteeing some minimum of continued coal generation and providing excellent talking points to broadcast to the incumbent's base.

          • secondcoming 3 hours ago

            For $500bn they can build a nuclear power plant dedicated to these data centres

            • ReptileMan 3 hours ago

              They can build a couple. With nuclear, money is rarely the issue; it's that it takes forever, because reasons.
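
              A quick back-of-envelope (every number below is an assumption, not a sourced figure; Python just for the arithmetic):

                # hedged sketch: how far $500B might go on nuclear, all inputs are rough guesses
                budget_bn      = 500   # the headline Stargate figure, in $B
                cost_per_gw_bn = 15    # assumed all-in cost of new US nuclear, $B per GW (Vogtle-ish)
                campus_draw_gw = 1.0   # assumed draw of one large AI datacenter campus, in GW

                gw_built = budget_bn / cost_per_gw_bn
                campuses = gw_built / campus_draw_gw
                print(f"~{gw_built:.0f} GW of nuclear, enough for ~{campuses:.0f} ~1 GW campuses")
                # -> roughly 33 GW on these assumptions, so "a couple" is probably an
                #    undercount; the decade-plus build time is the real constraint

              On those (made-up) numbers the money buys a lot of plants; the timeline is what doesn't work.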

              • Andrex 2 hours ago

                It's not like the current admin respects the rule of law anyways...

          • andrepd 3 hours ago

            Trump just rescinded licenses for offshore wind farms via an EO. We're fucking cooked (and I mean this literally)

          • admissionsguy 3 hours ago

            You need to stop this nonsense. Pollution is a long-term problem, but that does not mean it is productive to do what Germany has done and cease development.

            • unethical_ban 15 minutes ago

              You need to stop this nonsense. The path we were on, that Trump has already overthrown, was nothing like Germany's.

        • 4ndrewl 6 hours ago

          Wealth redistribution scheme. Your tax dollars into their pockets.

          • Palmik 5 hours ago

            As far as I can tell, this will be financed by private money. Can you elaborate?

            • 4ndrewl 5 hours ago

              Tax breaks, government forced to become a customer etc. the usual. Just like the astronauts to Mars thing will just shovel your money that might have gone to NASA into Musk's pocket.

              • miki123211 2 hours ago

                > the usual. Just like the astronauts to Mars thing will just shovel your money that might have gone to NASA into Musk's pocket.

                The difference is that Musk can do twice as much for 1/10 of what NASA thinks the program will cost (which is never what the program will actually cost), and Musk will do it in half the time to boot.

                The guy is an unhinged manchild, but if what you care about is having your money well spent and getting to Mars as cheaply as possible, he's exactly who you're looking for.

                • Filligree 22 minutes ago

                  I think you meant to type SpaceX. Which works as well as it does partly because Musk is kept at a careful distance from the controls...

              • vtashkov 3 hours ago

                Tax breaks, i.e. my money not going into your pocket, means that it was stolen?

                • mattlutze 2 hours ago

                  Tax breaks, i.e. a company extracting wealth from a community without paying into the systems that keep all the parts of that community running, forcing the community to ultimately subsidize that business's wealth extraction from it.

                  • vtashkov 31 minutes ago

                    Companies do not extract value; they create value, which is then transferred to the people via the market through voluntary exchange (ideally). Where did you learn about these things? Oh, yeah, “community”, i.e. Marx.

                • shoxidizer an hour ago

                  Tax breaks have basically the same effect as the government writing a check: they increase inflation.

                  • vtashkov 34 minutes ago

                    This is utter nonsense. If 1000 people went to a deserted island with no government and no taxation, would that mean inflation would be plus infinity, or at least very high? Inflation is a monetary phenomenon; it happens when money is being printed.

                    • shoxidizer 15 minutes ago

                      In that case there would be no inflation or deflation, assuming a fixed money supply and no economic growth. However, the key here is that the government (the federal government, anyway) is spending money regardless of the tax break. Anytime the government writes a check, that's a little bit more money floating around; anytime the government collects some money, such as taxes, there's that much less money to be had. Every tax break causes the money supply to increase relative to a world where the tax break did not exist, causing more inflation (or less deflation, as the case may be). If the government spent exactly as much as it taxed, there would be... actually deflation, because the economy is growing. These are the basics of fiscal policy (toy numbers below).

                      There's also monetary policy, which is when the Federal Reserve does this on purpose. The general principle is the same, but instead it spends money by buying bonds and gets money back by selling those bonds, and it creates a bunch of rules about where banks keep their money so it always has some on hand.
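
                      To put toy numbers on the fiscal point above (every figure is hypothetical; the only claim is that a $1B tax break and a $1B check move the shortfall identically):

                        # toy sketch, made-up figures in $B: a tax break and a check have the
                        # same effect on how much new money the government must inject/borrow
                        def shortfall(spending_bn, taxes_collected_bn):
                            return spending_bn - taxes_collected_bn

                        baseline    = shortfall(6000, 5000)      # assumed spend and tax take
                        tax_break   = shortfall(6000, 5000 - 1)  # collect $1B less
                        wrote_check = shortfall(6000 + 1, 5000)  # spend $1B more

                        assert tax_break == wrote_check == baseline + 1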

                • matwood 2 hours ago

                  Assuming the tax money has to come from somewhere at some point, those who pay taxes have to make up the shortfall from those who have tax breaks. So far the US just kicks that can down the road so...

                  • vtashkov 25 minutes ago

                    That is a big assumption; tax money need not be a constant. But following the same logic: if companies pay bigger taxes, they also have to make up the shortfall. Actually, that last one is a much more accurate statement. Companies do not pay taxes, PEOPLE pay taxes. So taxes are paid either by the employees, the clients, or the owners (which in the case of big tech are generally common people). With high taxation you are hurting the customers, the workers, and the middle class saving for their retirement. Who wins the tax money? State bureaucracy, corrupt politicians and the businesses around them, and people who live like parasites (or rather are forced to live like that, because they are electoral power).

              • lupire 2 hours ago

                What do you think NASA does with the money? It doesn't build a NASA house for its NASA babies.

              • beezlewax 3 hours ago

                The Mars walk is just 3 years away baby!

            • belter 5 hours ago

              Your tax dollars are the customer.

        • vargr616 6 hours ago

          what's the difference

          • Gud 4 hours ago

            Not all money making schemes involve the military.

        • arisAlexis 7 hours ago

          This has cosmological significance if it leads to superintelligence

          • Cthulhu_ 6 hours ago

            It won't unless there's another (r)evolution in the underlying technology / science / algorithms. At this point scaling up just means bigger datasets or more iterations, which is more about fine-tuning and improving the existing output than about coming up with a next generation / superintelligence.

            • miki123211 2 hours ago

              > It won't unless there's another (r)evolution in the underlying technology / science

              I think reinforcement learning with little to no human feedback, O-1 / R-1 style, might be that revolution.

            • Filligree 5 hours ago

              Okay, but let’s be pessimistic for a moment. What can we do if that revolution does happen, and they’re close to AGI?

              I don’t believe the control problem is solved, but I’m not sure it would matter if it is.

              • ForHackernews 5 hours ago

                Being pessimistic, how come no human supergeniuses ever took over the world? Why didn't Leibniz make everyone else into his slaves?

                I don't even understand what the proposed mechanism for "rogue AI enslaves humanity" is. It's sci-fi (and not hard sci-fi) as far as I can see.

                • HeatrayEnjoyer 4 hours ago

                  > Being pessimistic, how come no human supergeniuses ever took over the world? Why didn't Leibniz make everyone else into his slaves?

                  We already did. Look at the state of animals today vs. <1 mya. Bovines grown in unprecedented mass numbers to live short lives before slaughter. Wolves bred into an all-new animal, friendly and helpful to the dominant species. Previously apex predators with claws, teeth, speed and strength, rendered extinct.

                  • adalacelove 3 hours ago

                    Sometimes I wonder if we are going to be the unkillable plague that takes over the universe. Or maybe we will disappear in a blink. It's hard to know; we don't have any reference point except ourselves.

                    • lupire 2 hours ago

                      Destroying human life on Earth (the only habitable place in the solar system) is far, far easier than reaching anything outside the solar system.

                • Philpax 5 hours ago

                  Once you have one AGI, you can scale it to many AGI as long as you have the necessary compute. An AGI never needs to take breaks, can work non-stop on a problem, has access to all of the world's information simultaneously, and can interact with any system it's connected to.

                  To put it simply, it could outcompete humanity on every metric that matters, especially given recent advancements in robotics.

                  • ForHackernews 4 hours ago

                    ...so it can think really hard all the time and come up with lots of great, devious evil ideas?

                    Again, I wonder why no group of smart people with brilliant ideas has unilaterally imposed those ideas on the rest of humanity through sheer force of genius.

                    • Philpax 4 hours ago

                      An equivalent advance in autonomous robotics would solve the force projection issue, if that's what you're getting at.

                      I don't know if this will happen with any certainty, but the general idea of commoditising intelligence very much has the ability to tip the world order: every problem that can be tackled by throwing brainpower at it will be, and those advances will compound.

                      Also, the question you're posing did happen: it was called the Manhattan Project.

                      • redserk 2 hours ago

                        And if this whole exercise turns out to be a flop and gets us absolutely nowhere closer to AGI?

                        “AGI” has proven to be today’s hot marketing stunt for when you need to raise another round of cash and your only viable product is optimism.

                        Flying cars were just around the corner in the 60s, too.

                        • anon84873628 36 minutes ago

                          This thread started from a deliberately pessimistic hypothetical of what happens if AGI actually manifests, so your comment is misplaced.

                    • jprete an hour ago

                      Quite a few have succeeded in conquering large fractions of the Earth's population: Napoleon, Hitler, Genghis Khan, the Roman emperors, Alexander the Great, Mao Zedong. America and Britain as systems did so for long periods of time.

                      All of these entities would have been enormously more powerful with access to an AGI's immortality, sleeplessness, and ability to clone itself.

                      • anon84873628 34 minutes ago

                        And of course the more society is wired up and controlled by computer systems, the more the AGI could directly manage it.

                      • SketchySeaBeast an hour ago

                        I can see what you're trying to say, but I cannot for the life of me figure out how an AGI would have helped Alexander the Great.

                        • jprete 25 minutes ago

                          Alexander the Great made his conquests by building a really good reputation for war, then leveraging it to get tribute agreements while leaving the local governments intact. This is a good way to do it when communication lines are slow and unreliable, because the emperor just needs to check tribute once a year to enforce the agreements, but it's weak control.

                          If Alexander could have left perfectly aligned copies of himself in every city he passed, he could have gotten much more control and authority, and still avoided a fight by agreeing to maintain the local power structure with himself as the new head of state.

                          • SketchySeaBeast 20 minutes ago

                            Oh, you're assuming an entire networking infrastructure as well. That makes way more sense, but the miracle there isn't AGI - without networking they'd lose alignment over time. Honestly, I feel like it would devolve into a patchwork of different kingdoms run by an Alexander figurehead... where have I seen this before?

                            The problem you're proposing could be solved via a high quality cellular network.

                    • lupire 2 hours ago

                      Look at any corporation or government to understand how a large group of humans can be driven to do specific things none of them individually want.

                • z3phyr 4 hours ago

                  I consider many successful military leaders and politicians to be geniuses as well. In my books, Caesar is as genius as Newton!

                  Having said that, we do not need to understand the world to exploit it for ourselves. And what better way to understand and exploit the universe than science? It's an endearment.

            • iLoveOncall 5 hours ago

              > bigger datasets

              Not even, they already ran out of data.

              • nick__m 3 hours ago

                I am sure that the M.I.C. have a ton of classified data that could be used to train a military AI.

          • computerthings 5 hours ago

            "this generation shall not pass"... to me that's about as credible as wanting to "preserve human consciousness" by going to Mars.

            Setting the world on fire and disrupting societies gleefully, while basically building bunkers (figuratively more than literally) and consolidating surveillance and propaganda to ride out the cataclysm, that's what I'm seeing.

            And the stories to sell people on continuing to put up with that are not even good IMO. Just because the people who use the story to consolidate wealth and control are excited about that, we're somehow expected to be excited about the promise of a pair of socks made from barbed wire they gave us for Christmas. It's the narcissistic experience: "this is shit. this benefits you, not me. this hurts me."

            One thing is sure: actual intelligence, however you define it, something able to reason and speak freely, is NOT what people who fire engineers for correcting them want. It's not about a sort of oracle that just speaks "truth", for humanity to enjoy and benefit from.

          • iLoveOncall 6 hours ago

            Don't worry, it'll only lead to superstupidity.

            • bluescrn 6 hours ago

              And superplagiarism of human-created content

            • _heimdall 2 hours ago

              Is that the prequel to Idiocracy?

      • smeeger 27 minutes ago

        of course. it's an arms race by definition, so it's all a military project. and already one whistleblower was brazenly murdered by our government to protect our horse in this race.

        • whimsicalism 7 minutes ago

          no whistleblower was murdered, ridiculous conspiracy theory

      • dgoldstein0 6 hours ago

        ... If they build it under Cheyenne mountain you are definitely correct

    • amelius 4 hours ago

      I would love for Oracle to use AI to put their entire legal department out of work, though.

      • andy_ppp 4 hours ago

        So you want them to be infinitely more litigious?

        A serious question though: what happens when AIs are filing lawsuits autonomously on behalf of the powerful? The courts clearly won't be able to cope unless you have AI-powered courts too. None of how these monumental changes will work has been thought through at all; let's hope AI is smart enough to tell us what to do...

        • miki123211 2 hours ago

          > A serious question though: what happens when AIs are filing lawsuits autonomously on behalf of the powerful

          It won't just be on behalf of the powerful.

          If lawyers are able to file 10x as many lawsuits per hour, the cost of filing a lawsuit is going to go down dramatically, and that's assuming a maximally-unfriendly regulatory environment where you still officially need a human lawyer in the loop.

          This will enable people to e.g. use letters signed by an attorney at law, or even small claims court, as their customer support hotline, because that actually produces results today.

          Nobody is prepared for that. Not the companies, not the powerful, not the courts, nobody.

          • ajmurmann an hour ago

            Unless you can afford for your lawsuit to take up substantial time on Stargate and make a much stronger case than your average Joe, who is still using o1 for their lawsuits.

        • SketchySeaBeast an hour ago

          I'm envisioning a future where there's a centralized "legal exchange", much like the NYSE, where high-speed machines file micro-litigation billions of times faster than any human can, which is decided equally quickly, an unrelenting back-and-forth buzz of lawsuits and payouts as every corporation wages constant automated legal battle. Small businesses are consumed in seconds, destroyed by the filing of a million computerized grievances, while the major players end up in a sort of zero-sum stalemate, where money is constantly moving but it never shifts the balance of power.

          ... has anyone ever written a book about this? If not, I think I'm gonna call dibs.

        • roenxi 3 hours ago

          Oracle could reasonably be hit with some sort of stick every time they filed a frivolous lawsuit until the AI got tuned appropriately. Then it'd be a situation where Oracle were continuously suing people who don't follow the law, following a reasonably neutral and well calibrated standard that is probably going to end up as similar to an intelligent and well practised barrister. That would be acceptable. If people aren't meant to be following the law that is a problem for the legislators.

        • ReptileMan 2 hours ago

          > A serious question though: what happens when AIs are filing lawsuits autonomously on behalf of the powerful

          AI-controlled cheap Chinese drones will start flying into their residences carrying trivial-to-make high explosives. With the class wars getting hotter in the next few years, we may be saying that Luigi Mangione had the right ideas towards the PMC, but was an underachiever.

    • fsndz 6 hours ago

      What do you prefer? Letting DeepSeek and China lead the AI war? DeepSeek R1 is a big wake-up call: https://open.substack.com/pub/transitions/p/deepseek-is-comi...

      • bayindirh 4 hours ago

        Us vs. Them. My favorite perspective [0].

        Regarding your question, yes. I'd prefer a healthy counterbalance to what we have currently. Ideally, I'd prefer cooperation. Worldwide cooperation.

        [0]: https://pbs.twimg.com/media/B_AiI9_XIAA67_t.jpg

        • rpastuszak 4 hours ago

          Treating the world as a bunch of football teams is a great distraction though.

        • andy_ppp 4 hours ago

          Arguably the cooperation between the US and China has led to the most economic growth and prosperity in human history; it's a shame the US and China are returning to a former time.

      • mppm 5 hours ago

        From what I've read about DeepSeek and its founder, I would very much prefer them, even with China factored in. At least if these particular Four Horsemen are the only alternative.

        On a tangential note, those who wish to frame this as the start of the great AI war with China (in which they regrettably may be right), should seriously consider the possibility of coming out on the losing end. China has tremendous industrial momentum, and is not nearly as incapable of leading-edge innovation as some Americans seem to think.

        • corimaith 4 hours ago

          >China has tremendous industrial momentum, and is not nearly as incapable of leading-edge innovation as some Americans seem to think.

          So those framing it that way are correct, and we should be matching their momentum here ASAP?

          • mppm 4 hours ago

            No, I was rather pointing out that getting into an altercation that you are likely (even if not guaranteed) to lose may not be the smartest of ideas. On occasion, humans have been known to fruitfully engage in cooperation and de-escalation. Please pardon my naive optimism.

            • lII1lIlI11ll 4 hours ago

              "Great AI war with China", "altercation" are excessively harsh characterizations. There is nothing "escalatory" in competing for leadership in new industries with other states, nor should it be "regrettable". No one, to my knowledge, is planning to nuke DeepSeek data centers or something.

              • mppm 4 hours ago

                I wish I could agree with you. But have you read Aschenbrenner's "Situational Awareness" [1]? I am very much afraid that the big decision makers in AI do in fact think in those terms, and do not in any way frame this as fair competition for the benefit of all.

                1. https://situational-awareness.ai/

                • lII1lIlI11ll 3 hours ago

                  A person heavily invested in this wave of AI succeeding saying AI will be big and we will have AGI next year? Sure.

                  I don't think there is much point of reading the whole thing after the following:

                  "Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the willful blindness of “it’s just predicting the next word”."

      • whimsicalism 5 minutes ago

        we need to cooperate and put aside our petty politicking right now. the potential downsides of ‘racing’ without building a safety scaffold are catastrophic.

      • smeeger 21 minutes ago

        the outcome would be exactly the same. AGI leads the human race off a cliff, not in the direction of one human interest group vs another. the only difference would be that it was china that was responsible for the extinction of the human race rather than another country. i would prefer to die with dignity… the outcome we should all be advocating for is a global halt of AI research — not because it would be easy but because there is no other option.

      • otabdeveloper4 5 hours ago

        > What do you prefer ? Letting DeepSeek and China lead the AI war ?

        Me personally? Yes.

      • vbezhenar 4 hours ago

        China is a much more peaceful nation compared to the US. So, yes, I'd prefer China leading AI research any day. They are interested in mutual trade and prosperity, and they respect local laws and culture, all unlike the US.

        • jbaiter 4 hours ago

          "They respect local laws and culture" - I think people from Xinyang probably have a very different perspective on that........

          • Octoth0rpe 2 hours ago

            I think there's a more nuanced version of this: China respects local laws and culture _outside of what they view as China_ more than the US does. It's also worth noting that China's policy in Xinjiang is somewhat narrowly targeted at religion, and less at other aspects like cuisine or clothing. That said, religion is nigh impossible to separate from the broader idea of culture in much of the world.

            • lupire 2 hours ago

              Africa, South America, and the USA strongly disagree.

            • Analemma_ an hour ago

              Give me a break. China has overseas police stations as bases of operation for harassing ex-pats and dissidents. That's not "respecting local laws and culture".

              • whimsicalism 4 minutes ago

                sorry but you’re not going to convince anyone approaching this with a neutral mind that China is more partial to overseas intervention than the US is

          • vbezhenar 2 hours ago

            I encountered this almost firsthand. An American company barges in like an elephant, bribing local officials left and right and using dirty practices to push out competitors. At the same time, Chinese companies try very hard to abide by local regulations and to resolve all issues through local courts, etc. Like actually civilised people.

            What happens inside China is none of my concern; it's their business. They have existed for millennia; they probably know how to manage themselves. They are not trying to expand beyond maybe Taiwan, they don't put military bases in my country, they don't fund a so-called "opposition", and that's good enough for me.

            • whimsicalism 3 minutes ago

              Bribery is probably one of the few cases where the US is significantly better than bad actors in both China and the EU, both of which have major problems with overseas bribery

          • anthk 2 hours ago

            If you had Al-Qaeda in a hypothetical region near Florida, with terror attacks almost every two years, you would shit bricks and create jails/prisons with more security than the Pentagon itself.

        • infecto 3 hours ago

          Holy smokes. Do folks like you actually believe this? China has its own style of colonialism (whatever you want to call it), and it is every bit as strong as the US flavor.

          • Cumpiler69 3 hours ago

            How many countries has China invaded and bombed in the last 30 years?

            How many deaths has China's warmongering caused abroad?

            • infecto 2 hours ago

              Quite a few, from an economic perspective. Like I said, they have their own style of colonialism. To think they are some peace-loving nation is foolish. Only in maybe the last 10 years has China had the military equipment capable of mounting an offensive. They have been smart and done all their dealings via money. Without going too far into whataboutism, I simply find it ridiculous to classify China as a warm, fuzzy nation given their long list of human rights issues. That does not mean America is peaceful and loving, simply that perhaps the two countries are not so different on net.

              • Cumpiler69 2 hours ago

                > Like I said they have their own style of colonialism.

                That's moving the goalposts and doesn't address the issue.

                >They have been smart and done all their dealings via money.

                You mean just like the country that issues the world's reserve currency and whose intelligence agencies get involved in destabilizing regimes across the world?

                • infecto 2 hours ago

                  > That's moving the goalposts and doesn't address the issue.

                  Is this how you make a constructive argument? Perhaps I was expecting too much from a joke account but this style of whataboutism is boring.

                  The post you responded to set out my premise, which was that China has its own form of colonialism, quite different from America's, but it exists and it's quite strong. To classify China as a peaceful, loving nation that respects other cultures is like saying the US has never started a conflict: it's factually a lie. China has a long list of human rights issues, and they factually do not respect other cultures even within their own borders. I am not defending America, but pointing out that China is not what the OP stated.

                  • Cumpiler69 2 hours ago

                    > I was expecting too much from a joke account

                    Are you the kind of superficial petty person who needs to take jabs at the messenger's name and not the message itself?

                    And are you really in the position to throw stones from a glass house with that account name? If you had your real name and social media profiles linked in the bio I'd understand, but you're just being hypocritical, petty and childish here with this 'gotcha'.

                    > To classify China as a peaceful loving nation that respects other cultures

                    I never made such a classification. You're building your own strawmen to form a narrative you can attack, but you're not saying anything useful that contradicts my PoV, and you're wasting our time. Since you're obviously arguing in bad faith, I won't converse with you further. Goodbye.

                    • infecto 2 hours ago

                      If you have an argument that is actually on topic with what I said, please continue; otherwise save your troll account for someone else. The whataboutism/gaslighting is silly. You clearly cannot read threads or respond in a logical form to the right person. The conversation at hand was about China, in response to the OP classifying them as a loving and respectful nation. I made no attempt to defend the US, and it has been you moving the goalposts. You throw whataboutism around and then simply run off with some flimsy excuse about multiple people being unable to converse with you. Troll account.

                      • anon84873628 19 minutes ago

                        Cumpiler asked two very clear and direct questions:

                        > How many countries has China invaded and bombed in the last 30 years?
                        > How many deaths has China's warmongering caused abroad?

                        You didn't answer those, just started hand-waving some stuff about China's "own form of colonialism" -- without even explaining what that is and how it works (which personally I'd be curious to hear about, and which I believe *is* likely guilty of violence).

                        So you very clearly are the one guilty of shifting the goalposts, going on tangents, and bringing up usernames instead of real arguments.

            • greentxt 2 hours ago

              Define invade.

              • Cumpiler69 2 hours ago

                Sorry, but if you need a definition for military invasion, you're not arguing in good faith. Goodbye.

    • nejsjsjsbsb 4 hours ago

      Need a bit of Zuck too

      • blantonl an hour ago

        Yeah, really the only thing missing from this initiative was the personal information of the vast majority of the United States population handed over on a silver platter.

    • roenxi 5 hours ago

      That sentiment calls for reflection - whoever ends up on top of the heap after the AI craze settles down is going to be someone that everyone objects to. Elon Musk was himself an internet darling up until he became wealthy and entrenched.

      That said, this does look like dreadful policy from the first headline. There is a lot of money going into AI; adding more money from the US taxpayer is gratuitous. Although, in the spirit of mixing praise and condemnation, if this is the worst policy out of Trump Admin II then it'll be the best US administration seen in my lifetime. Generally the low points are much lower.

      • whimsicalism a minute ago

        Nietzsche wrote about these phenomena a long time ago in his Genealogy of Morality. there will never be someone who reaches the top who doesn’t become an object of ire in modern Western culture.

      • mppm 4 hours ago

        > That sentiment calls for reflection - whoever ends up on top of the heap after the AI craze settles down is going to be someone that everyone objects to.

        I agree in principle. And realistically, there is no way Altman would not be part of this consortium, much as I dislike it. But rounding out the team with Ellison, Son and Abu Dhabi oil money in particular -- that makes for a profound statement, IMHO.

      • JKCalhoun 4 hours ago

        > That sentiment calls for reflection - whoever ends up on top of the heap after the AI craze settles down is going to be someone that everyone objects to.

        Did we see the same fallout from the space race of a couple generations ago?

        I don't think so — certainly not in the way you're framing it. So I guess I don't accept your proposition as a guarantee of what will happen.

        • roenxi 3 hours ago

          A couple of generations ago we didn't have the internet and the only things people heard about were being managed. The big question was whether the media editors wanted to build someone up or tear them down.

          The spoils of the space race would have gone to someone a lot like Musk. Or Ellison. Or Masayoshi Son. Or Sam Altman. Or the much worse old-moneyed types. The US space program was, famously, literally employing ex-Nazis. I doubt the beneficiaries of the money had particularly clean hands either

      • infecto 3 hours ago

        > That sentiment calls for reflection - whoever ends up on top of the heap after the AI craze settles down is going to be someone that everyone objects to. Elon Musk was himself an internet darling up until he became wealthy and entrenched.

        Trying to process this, but doesn't his fall from grace have more to do with him revealing his real personality to the world? Sometime around calling that guy a pedo. Not much bothers me, but at the very least his apparently poor decision-making calls many things into question.

        • anon84873628 16 minutes ago

          Of all the sentiments that call for reflection, the parent's belief about why people don't like Elon is the one that needs it the most.

  • serjester 16 hours ago

    You have to keep in mind Microsoft is planning on spending almost $100B in datacenter capex this year [1], and they're not alone. This is basically OpenAI matching the major cloud providers' spending.

    This could also be (at least partly) a reaction to Microsoft threatening to pull OpenAI's cloud credits last year. OpenAI wants to maintain independence, and with compute accounting for 25–50% of their expenses (currently) [2], this strategy may actually be prudent.

    [1] https://www.cnbc.com/2025/01/03/microsoft-expects-to-spend-8...

    [2] https://youtu.be/7EH0VjM3dTk?si=hZe0Og6BjqLxbVav&t=1077

    • throitallaway 16 hours ago

      Microsoft has lots of revenue streams tied to that capex outlay. Does OpenAI have similar revenue numbers to Microsoft?

      • tuvang 16 hours ago

        OpenAI has a very healthy revenue stream in the form of other companies throwing money at them.

        But to answer your question: no, they aren't even profitable by themselves.

        • manquer 16 hours ago

          > they aren’t even profitable

          Depends on your definition of profitability. They are not recovering R&D and training costs, but they (and MS) are recouping inference costs from user subscriptions and API revenue with a healthy operating margin.

          Today they will not survive if they stop investing in R&D, but they do have to slow down at some point. It looks like they and the other big players are betting on a moat they hope to build with $100B DCs and ASICs, one that open-weight models or other challengers cannot compete with.

          This will be either because training will be too expensive (few entities have the budget for $10B+ training runs with no need to monetize them), or because even those kinds of models, where available, may be impossible to run inference on with off-the-shelf GPUs, i.e. they can only run on ASICs, which only large players will have access to [1].

          In this scenario, corporations will have to pay them for the best models; when that happens, OpenAI can slow down R&D and become profitable even with capex considered.
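
          To make the shape of that bet concrete, a toy sketch (every figure is hypothetical, not OpenAI's actual numbers):

            # made-up $B/yr figures, only to show the shape of the argument:
            # positive inference margin, negative net once R&D/training and capex count
            inference_revenue = 4.0   # assumed subscriptions + API revenue
            inference_cost    = 2.0   # assumed serving (inference) cost
            rnd_and_training  = 5.0   # assumed frontier R&D + training runs
            dc_capex_share    = 10.0  # assumed annual share of a Stargate-scale buildout

            operating_margin = inference_revenue - inference_cost           # +2.0
            net = operating_margin - rnd_and_training - dc_capex_share      # -13.0
            print(f"inference margin {operating_margin:+.1f}B, net {net:+.1f}B")
            # the bet: the capex buys a moat, then R&D slows and the net flips positive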

          [1] This is the natural progression in a compute-bottlenecked sector; we saw a similar evolution from CPUs to GPUs and ASICs in crypto a few years ago. It is a slightly distorted comparison due to the switch from PoW to PoS and the intentionally GPU-friendly design of some coins, but even then you needed DC-scale operations in a cheap-power location to be profitable.

          • Fade_Dance 15 hours ago

            They will have an endless wave of commoditization chasing behind them. NVIDIA will continue to market chips to anyone who will buy... well, anyone who is allowed to buy, considering the recent export restrictions. On that note, if OpenAI is in bed with the US government on this to some degree, I would expect tariffs, export restrictions, and all of that to continue to conveniently align with their business objectives.

            If the frontier models generate huge revenue from big government, intelligence, and corporate contracts, then I can see a dynamo kicking off with the business model. The missing link is probably that there need to be continual breakthroughs that massively increase the power of AI, rather than it tapering off with diminishing returns for bigger training/inference capital outlays. Obviously, OpenAI is making a leveraged bet on that view as well.

            Maybe the most important part is that all of these huge names are involved in the project to some degree. Well, they're all cross-linked in the entire AI enterprise, really, like OpenAI and Microsoft, so once all the players give preference to each other, it sort of creates a moat in and of itself, unless foreign sovereign wealth funds start spinning up massive Stargate initiatives as well.

            We'll see. Europe has historically been behind the ball in tech developments like this, and China, although this might be a bit of a stretch to claim, does seem to be held back by its need for control and censorship when it comes to what these models can do. They want them to be focused tools that help society, but the American companies want much more, and they want power in their own hands and in their users' hands. So much like the first round, where American big tech took over the world, maybe it's primed to happen again as the AI industry continues to scale.

            • fragmede 14 hours ago

              Why would China censoring Tiananmen Square/whatever out of their LLMs be any more harmful to the training process, when the US-controlled LLMs also censor certain topics, e.g. "how do I make meth?" or "how do I make a nuclear bomb?"

              • vaccineai 14 hours ago

                Because China censors very common words and phrases such as "harmonized", "shameless", "lifelong", "river crabbed", "me too". This is because Chinese citizens initially used puns and common phrases to get around the censors.

                • saghm 10 hours ago

                  Don't forget "Winnie the Pooh"!

                • jiggawatts 11 hours ago

                  OpenAI models refuse to translate subtitles because they contain violence, sex, or racism.

                  That’s just a different flavour of enforced right-think.

                  • talldayo 10 hours ago

                    They are absolutely different flavors. OpenAI is not being told by the government to censor violence, sex or racism - they're being told that by their executives.

                    News flash: household-name businesses aren't going to repeat slurs if the media will use them to defame them. Never mind the fact that people will (rightfully) hold you legally accountable and demand your testimony when ChatGPT starts offering unsupervised chemistry lessons - the threat of bad PR is all that is required to censor their models.

                    There's no agenda removing porn from ChatGPT any more than there's an agenda removing porn from the App Store or YouTube. It's about shrewd identity politics, not prudish shadow government conspiracies against you seeing sex and being bigoted.

                    • snapcaster an hour ago

                      I don't know why people care whether they're being censored by government officials or by private billionaires. What difference does it make at the end of the day? Why is one worse than the other?

                    • A4ET8a8uTh0_v2 5 hours ago

                      Sigh. No. Censorship is censorship is censorship. That is true even if you happen to like, and can generate a plausible defense of, the US version, which happens to be business-friendly (as opposed to China's, which is ruling-party-friendly).

                      • Jean-Papoulos 10 minutes ago

                        Usually a sign of great discussion when someone responds with "sigh" to a reasonably presented argument.

                      • ForHackernews 5 hours ago

                        > Censorship is censorship is censorship

                        "if your company doesn't present hardcore fisting pornography to five year olds you're a tyrant" is a heck of a take, even for hacker news.

                        • A4ET8a8uTh0_v2 5 hours ago

                          It is not a take. It is the simple position that 'just because you call something "involuntary semen injection" does not make it any less of a rape'. I like things that are clear and well defined. And so I repeat:

                          Censorship is censorship is censorship.

                          • ForHackernews 4 hours ago

                            Ok, I guess I'm #TeamProCensorship, then. So is almost everyone.

                            • snapcaster an hour ago

                              Yes, that's true. It's very rare for people to be able to value actual free speech. Most people think they do until they hear something they don't like

                            • A4ET8a8uTh0_v2 3 hours ago

                              I am not sure if it will surprise you, but your affiliation and the size of your 'team' are largely irrelevant from my perspective. That said, I am mildly surprised you were able to accept the new self-image as a willing censor. Most people struggle with that ( edit: hence the 'this is not censorship' facade ).

                • curt15 11 hours ago

                  Is "Pooh" also censored?

              • matkoniecz 7 hours ago

                Because falsifying history seems worse than restricting meth production, at least to me.

                Though I see no reason whatsoever why LLM should be blocked from answering "how do I make a nuclear bomb?" query.

              • throwaway290 10 hours ago

                Because a small group of elites with permanent terms and no elections deciding what is allowed and what isn't... with full control over silencing what's not allowed, and over any meta-discussion about the silencing itself... is different from an elected government deciding it, where anyone is free to raise a stink on whatever is their version of Twitter today without worrying about being disappeared tomorrow.

                • snapcaster an hour ago

                  It's not an elected government if you're talking about the US. These policies are also all decided by "elites with permanent terms and no elections", you realize that, right?

                  • throwaway290 29 minutes ago

                    > It's not an elected government if you're talking about the US

                    If you don't believe the US has elections then straighten up your tinfoil hat :)

                    Maybe you'll say next that the earth is flat, if you think people have nothing better to do than find ways to lie to you.

              • Fade_Dance 13 hours ago

                They want their LLMs explicitly approved to align with the values of the regime. Not necessarily a bad thing, or at least that avenue wasn't my point. It does get in the way of going fast and breaking things though, and on the other side there is an outright accelerationist pseudo-cult.

                • bakuninsbart 11 hours ago

                  Ignoring the moral dimension for a second, I do wonder whether it is harder to implement the rather cohesive but far-reaching censorship of the Chinese style, or the more outrage-driven type of "censorship" required of American companies. In the West we have the left preoccupied with -isms and -phobias, and the right with blasphemy and perceived attacks on their politics.

                  With the hard shift to the right and Trump coming into office, especially the last bit will be interesting. There is a pretty substantial tension between factual reporting and not offending right-wing ideology: should a model consider "both sides" of topics with clear and broad scientific consensus if it might offend Trumpists? (Two examples that come to mind were the recent "The Nazis were actually left wing" and "There are only two genders".)

          • throwaway2037 11 hours ago

                > they (and MS) are recouping inference costs from user subscription and API revenue with a healthy operating margin.
            
            I tried to Google for more information. I tried this search: <<is openai inference profitable?>>

            I didn't find any reliable sources about OpenAI. All sources that I could find state this is not true -- inference costs are far higher than subscription fees.

            I hate to ask this on HN... but can you provide a source? Or tell us how you know?

            • manquer 8 hours ago

              I don't have any qualified source, and this metric would likely be quite confidential even internally.

              It is just an educated guess, factoring in the per-token costs of running models similar/comparable to 4o or 4o-mini, how Azure commitments work with OpenAI models[2], and the knowledge that Plus subscriptions are probably more profitable[1] than API calls.

              It would be hard for even OpenAI to know with any certainty because they are not paying for Azure credits like a normal company. The costs are deeply intertwined with Azure and would be hard to split given the nature of the MS relationship[3].

              ----

              [1] This is from the experience of running LibreChat using 4o versus ChatGPT Plus for ~200 users; subscriptions appear to be more profitable than raw API usage by a factor of 3 to 4x (see the rough sketch below). Of course there will be different types of users and adoption levels; my sample, while not small, is likely not representative of their typical user base.

              [2] MS has less incentive to subsidize than say OpenAI themselves

              [3] Azure is quite profitable in the aggregate; while it may be subsidizing OpenAI APIs, any such subsidy has not shown up meaningfully in Microsoft's financial reports.
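
              A rough sketch of the kind of back-of-envelope comparison described in [1], in Python. Every number below is a hypothetical placeholder (assumed prices and per-user usage), not OpenAI's actual costs or prices:

                  # Back-of-envelope: what the same usage would cost via metered API
                  # pricing versus what it brings in as flat subscriptions.
                  # All figures are hypothetical placeholders, not OpenAI's actual numbers.

                  PLUS_PRICE_PER_MONTH = 20.0            # assumed subscription price (USD)
                  API_PRICE_PER_1M_TOKENS = 5.0          # assumed blended API price (USD)
                  TOKENS_PER_USER_PER_MONTH = 1_000_000  # assumed average usage

                  def api_equivalent_cost(users: int) -> float:
                      """Cost of serving the same usage through the metered API."""
                      return users * (TOKENS_PER_USER_PER_MONTH / 1_000_000) * API_PRICE_PER_1M_TOKENS

                  def subscription_revenue(users: int) -> float:
                      """Flat-rate revenue from the same user base."""
                      return users * PLUS_PRICE_PER_MONTH

                  users = 200  # roughly the sample size mentioned above
                  api = api_equivalent_cost(users)
                  subs = subscription_revenue(users)
                  print(f"API-equivalent cost:  ${api:,.0f}/month")
                  print(f"Subscription revenue: ${subs:,.0f}/month")
                  print(f"Ratio:                {subs / api:.1f}x")

              With these placeholder numbers the ratio lands around 4x; the real ratio obviously depends entirely on actual prices and per-user token volumes.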

          • mrweasel 7 hours ago

            It was my impression that OpenAI was struggling to make money on their $200 Pro subscription because they underestimated how much people would use it (https://www.theregister.com/2025/01/06/altman_gpt_profits/).

            So I do question if OpenAI is able to make a profit, even if you remove training and R&D. The $20 plan may be more profitable, but now it will need to cover the R&D and training, plus whatever they lose on Pro.

          • msoad an hour ago

            I am paying for o1 Pro but since Deepseek R1 came out I stopped using it. So there goes $200/mo of their revenue ;)

          • mcmcmc 14 hours ago

            Didn’t it just come out they are losing money on the pro subscriptions?

          • tuvang 15 hours ago

            Thanks for the detailed breakdown. This is an important nuance to my short reply.

          • rhubarbtree 8 hours ago

            Are they spending $10B/year on training?

        • MR4D 15 hours ago

          Given the release of the new DeepSeek R1 model [0], OpenAI’s future revenue stream is probably more at risk than it was a week ago.

          [0] - https://arstechnica.com/ai/2025/01/china-is-catching-up-with...

          • misiti3780 14 hours ago

            OpenAI will not exist in 5 years, I'm calling it now. First movers to market don't always win, and they will surely lose.

            • ipaddr 13 hours ago

              Google was first mover.

              • MadnessASAP 13 hours ago

                In what way? They weren't the first search engine, or advertising on the web?

                • ipaddr 10 hours ago

                  In terms of AI. And OpenAI leapfrogged them.

              • msoad an hour ago

                if your birth year starts with 2, I can see why you might think that

              • AlexCoventry 12 hours ago

                The question is what's going to be OpenAI's Adwords.

              • paul7986 10 hours ago

                Yahoo, AOL, Alta Vista (others too) all were search engines on the web before Google's Sept 1998 existence.

                • locusofself 10 hours ago

                  Lycos, Metacrawler, Dogpile. The list goes on

                • ipaddr 10 hours ago

                  Sure, but we are talking AI, and the fact that Google was first in this space.

                  • paul7986 10 hours ago

                    The first in what? Not in search nor Generative AI.

                    • ipaddr 9 hours ago

                      Why would you think search? Google wasn't first for search. They were first with PageRank.

                      Google researchers invented the transformer.

                    • kettleballroll 8 hours ago

                      Who if not Google was first in generative AI? They invented transformers and diffusion, the cornerstones of text and image generation, respectively.

                      • Philpax 5 hours ago

                        They weren't the first to meaningfully commercialise either, though. That remains with OpenAI for both (GPT-3/ChatGPT and DALL-E 2).

          • WiSaGaN 13 hours ago

            Not necessarily. DeepSeek will probably only threaten the API usage of OpenAI, which could also be banned in the US if it's too successful. API usage is not a main revenue source for OpenAI (it is for Anthropic, last time I checked). The main competitor for R1 is o1, which isn't generally available yet.

            • MR4D 10 hours ago

              DeepSeek is an open source model. You can download it and run it locally on your laptop already.

              So any OpenAI user ( or competitor even) could take it and run a hosted model. You can even tweak the weights if you wanted to.

              Why pay for OpenAI access when you can just run your own and save the money?

              • WiSaGaN 9 hours ago

                The one your laptop can run does not rival what OpenAI offers for money. Still, the issue is not whether a third party can run it; it's that OpenAI does not seem to treat the API as their main product.

              • MR4D 9 hours ago
        • tantalor 15 hours ago

          That's like saying I have a healthy revenue stream from my credit card.

          • vlovich123 15 hours ago

            Not quite. In 2 years their revenue has grown ~20x, from $200M ARR to $3.7B ARR. The inference costs, I believe, pay for themselves (in fact are quite profitable). So what they're putting on their investors' credit cards are the costs of employees & model training. Given it's projected to be a multi-trillion dollar industry and they're seen as a market leader, investors are more than happy to throw in interest-free cash flow now in exchange for variable future interest in the form of stock.

            That's not quite the same thing as your credit card's revenue stream, where you pay an 18%+ annual interest rate on the balance. If you recall, AMZN (& all startups really) had this mode early in their business where they over-spent on R&D to grow more quickly than their free cash flow otherwise allowed, to stay ahead of competition and dominate the market. Indeed, if investors agree and your business is actually strong, this is a strong play because you're leveraging some future value into today's growth.

            • lukev 15 hours ago

              All well and good, but how well will it work if the pattern continues that the best open models are less than a year behind what OpenAI is doing?

              How long can they maintain their position at the top without the insane cashflow?

              • arisAlexis 7 hours ago

                One system will be god like and then it doesn't matter

                • shigawire 5 hours ago

                  These types of responses always strike me as dogmatic.

                  • AlexandrB 5 hours ago

                    Reminds me of the crypto craze where people were claiming that Bitcoin was going to replace all world currencies.

            • vFunct 15 hours ago

              Have they built their own ASICs for inference like Google and Microsoft have? Or are they using NVIDIA chips exclusively for inference as well?

              • monocasa 11 hours ago

                The rumors I've heard are that they have a hardware team targeting a 2026 release, but no production ASICs at the moment.

            • hfcbb 15 hours ago

              Platform economics "works" in theory only upto a point. Its super inefficient if you zoom out and look not at system level but ecosystem level. It hasn't lasted long enough to hit failure cases. Just wait a few years.

              As to openai, given deepseek and the fact lot of use cases dont even need real time inference its not obvious this story will end well.

              • HarHarVeryFunny 13 hours ago

                I also can't see it ending well for OpenAI. This seems like it's going to be a commodity market with a race to the bottom on pricing. I read that NVIDIA sells H100s for roughly 10x their manufacturing cost, which means that someone like Google making their own TPUs has a massive cost advantage.

                Moore's law seems to be against them too... hardware getting more powerful, small models getting more powerful... It's not at all obvious that companies will need to rely on cloud models vs running locally (licensing models from whoever wants that market). Also, a lot of corporate use probably isn't that time-critical and can afford to run slower and cheaper.

                Of course the US government could choose to wreck free-market economics by mandating that powerful models be run in "secure" cloud environments, but unless other countries did the same, that might put the US at a competitive price disadvantage.

        • Cthulhu_ 5 hours ago

          They do get a lot of customers buying their stuff, but on top of that, a company with unique IP and mindshare can get investors to open their wallets easily enough; I keep thinking of AMD, which was barely profitable, or not at all, for something like 15 years in a row.

    • SecretDreams 15 hours ago

      Serious question - why Texas???

      • tempusalaria 15 hours ago

        Texas is a world leader in renewable energy. Easy permitting, lots of space, lots of existing grid infrastructure from the o&g industry.

      • LarsDu88 14 hours ago

        My kneejerk response was to point to the incoming administration, but the fact Stargate has been in the works for more than a year now says to me it's because of tax credits.

      • b3ing 12 hours ago

        Lots of backdoor deals. Just expect more government projects put in TX, like the facility the Army built in Austin, when we have plenty of dead bases that could be reused.

      • wilson090 15 hours ago

        It's where the energy is for this project.

        This is unfortunately paywalled but a good writeup on how the datacenter came to be: https://www.theinformation.com/articles/why-openai-and-oracl...

        • vancroft 8 hours ago

          I'm not a subscriber so I can't read it, which startup are they referring to in the headline?

          • wilson090 8 hours ago

            They're referring to Crusoe (crusoe.ai)

            • SecretDreams 3 hours ago

              A company that will surely still exist in 4 years time.

      • chickenbig 8 hours ago

        Natural gas to power the turbines while the nuclear plants are built, I guess. Also, is Texas more open to large-scale development than elsewhere?

        • SecretDreams 3 hours ago

          Any downsides?

          • ericjmorey an hour ago

            Existing underinvestment in infrastructure and its maintenance, extreme weather, water resource limitations, some human rights issues.

      • hrfister 11 hours ago

        Probably for the same reason that Silicon Valley has been moving there slowly and quietly for a while now.

        • SecretDreams 3 hours ago

          Because rich people inevitably don't like taxes? And maybe forest fires?

    • bboygravity 7 hours ago

      Isn't it more likely a reaction to xAI now having the most training compute?

    • throwaway48476 6 hours ago

      How is compute only 50% of their expenses?

    • jiggawatts 11 hours ago

      Meanwhile, Azure has failed to keep up with the last 2-3 generations of both Intel and AMD server processors. They’re available only in “preview” or in a very limited number of regions.

      I wonder if this is a sign of the global economic downturn pausing cloud migrations or AI sucking the oxygen out of the room.

    • PittleyDunkin 15 hours ago

      .

      • idiotsecant 15 hours ago

        I'm not sure that's how capitalism works.

      • oldpersonintx 15 hours ago

        Who is "we"?

        This isn't your money

        • kdmtctl 15 hours ago

          It is not. But this kind of money does have an impact on society in any field. So this is a proper concern.

  • deknos 7 hours ago

    This is so much money, with which we could actually solve problems in the world. Maybe even stop wars which break out because of scarcity issues.

    Maybe I am getting too old or too friendly to humans, but it's staggering to me what the priorities are for such things.

    • CSSer 6 hours ago

      For less than this same price tag, we could've eliminated student loan debt for ~20 million Americans. It would in turn open up myriad opportunities, like owning a home and/or feeling more comfortable starting a family. It would stimulate the economy in predictable ways.

      Instead we gave a small number of people all of this money for a moonshot in a state where they squabble over who’s allowed to use which bathroom and if I need an abortion I might die.

      • _heimdall 27 minutes ago

        Eliminating debt has a lot of unintended consequences. Price inflation would almost certainly be a problem, for example.

        It's also not clear to me what happens to all of the derivatives based on student debt, though there may very well be an answer there that I just haven't understood yet.

      • visarga 3 hours ago

        The problem with allowing student debt to rack up to these levels and then cancelling it is that it would embolden universities to charge even higher tuition. A second problem is that not all students get the benefit: some already paid off their debts, or a large part of them. It would be unfair to them.

        • jimkleiber 3 hours ago

          Yes, but every policy is unfair. It is literally choosing where to give a limited resource; it can never be fully fair.

          And there could be a change in the law that allows student debt to be discharged in personal bankruptcy, which could make sure higher tuition doesn't happen.

          • _heimdall 24 minutes ago

            > Yes but every policy is unfair. It literally is choosing where to give a limited resource, it can never be fully fair.

            I don't think that holds for a policy of non-intervention. People usually don't like that solution, especially when considering welfare programs, but it is fair to give no one assistance in the sense that everyone was treated equally/fairly.

            Now it's a totally different question whether it's fair that some people are in this position today. The answer is almost certainly no, but that doesn't have a direct impact on whether an intervention today is fair or not.

          • greentxt 2 hours ago

            It would do more good in K-12 or pre-K than it would paying off private debts held by white-collar, highly educated university-bros who are not rich yet only because of their young age.

            • ajmurmann an hour ago

              It truly is astonishing. We have kids who cannot afford school lunches and people working multiple blue-collar jobs, and yet the problems of people who are statistically better off than average constantly jump to the front. People complain about Effective Altruism because of one dude messing up big, but it would behoove everyone to read up on its basic philosophy before suggesting how we best spend billions to help reduce suffering.

      • Octoth0rpe 2 hours ago

        > Instead we gave a small number of people all of this money for a moonshot in a state where they squabble over who’s allowed to use which bathroom and if I need an abortion I might die.

        AFAICT from this article and others on the same subject, the 500 billion number does not appear to be public money. It sounds like it's 100 billion of private investment (probably mostly from Son), and FTA,

        > could reach five times that sum

        (5x 100 billion === 500 billion, the # everyone seems to be quoting)

      • nejsjsjsbsb 4 hours ago

        Eliminating some student debt is a fish. Free university is the fishing rod. Do that instead.

        • _heimdall 24 minutes ago

          Free to the student sounds nice, but who pays for it in the end? And does an education lose a bit of its value when anyone can get it for free?

          • was_a_dev 12 minutes ago

            Free to US citizens would be a better policy, the state investing in its own people.

      • ajmurmann an hour ago

        Or, prices of houses would go up even more because we still aren't allowing supply to increase and people having more money doesn't change that.

      • bitlax 3 hours ago

        Let the schools pay back the people they scammed.

      • buran77 6 hours ago

        Repaying student loans makes a lot of people a little richer. The current initiative makes a few people a lot richer. If you ask some people, the former is a very communist/socialist way of thinking (bad), while the latter is pure, unadulterated capitalism (good).

        • _heimdall 20 minutes ago

          One of the more destructive situations in capitalism is the fact that (financially) helping the many will increase inflation and lead to more problems.

          When a few people get really rich, it kind of slips through the gaps; the broader system isn't impacted too much. When most people get a little richer, they spend that money and prices go up. Said differently, wealth is all relative, so when most people get a little richer, their comparative wealth doesn't really change.

        • A4ET8a8uTh0_v2 5 hours ago

          That, and a lot of people do not have the means to convince current power centers to do their bidding ( unless they were to organize, which they either don't, can't, or are dissuaded from ), while a few rich ones do. And so the old saying 'the rich become richer' becomes a self-fulfilling prophecy.

          • buran77 5 hours ago

            That was the implication indeed. Money is like gravity: the more you have, the more you can pull in. It gives a person the power to do anything to make more money (change the laws as desired, or break them if needed), but also a perfect shield from any repercussions.

      • hcks 4 hours ago

        I know!! Also, we could have given an iPhone to 500 million people for that amount!! It's such a waste to think they're investing it in the future instead.

      • Cthulhu_ 6 hours ago

        This is the problem with the capitalists / billionaires currently hoarding the money and with US policy: it's all for short-term gain. But the conservatives who look back to the 50s or 80s or whatever decade their rose-tinted glasses are tuned to should also realise that the good parts of those eras came from families not being neck-deep in debt.

        • nejsjsjsbsb 4 hours ago

          Yes you don't want to destroy your food chain. If everyone is poor except you then you are now poor.

      • JohnPrine an hour ago

        I'm starting to think there's no difference between this website and reddit

    • tim333 6 hours ago

      >wars which break out because of scarcity issues

      That doesn't seem to be much of a thing these days. If you look at Russia/Ukraine or China/Taiwan, there's not much scarcity involved. It's more a case of a bullying dictator wanting to control the neighbours.

      • rainingmonkey an hour ago

        "Global warming may not have caused the Arab Spring, but it may have made it come earlier... In 2010, droughts in Russia, Ukraine, China and Argentina and torrential storms in Canada, Australia and Brazil considerably diminished global crops, driving commodity prices up. The region was already dealing with internal sociopolitical, economic and climatic tensions, and the 2010 global food crisis helped drive it over the edge."

        https://www.scientificamerican.com/article/climate-change-an...

      • Cthulhu_ 5 hours ago

        It will be, or, it's slowly happening already. Climate change is triggering water and food shortages, both abroad and on your doorstep (California wildfires), which in turn trigger mass migrations. If a richer and/or more militarily equipped country decides they want another country's resources to survive, we'll see wars erupt everywhere.

        Then again, it's more of a logistics challenge, and if e.g. California were to invade Canada for its water supply, how are they going to get it all the way down there?

        I can see it happening in Africa though. A long string of countries rely on the Nile, but large hydropower dams built in Sudan and Ethiopia are reducing the water flow, which Egypt is really not happy about as it's costing them water supply and irrigated land. I wouldn't be surprised if Egypt and its allies declared war on those countries and aimed to have the dams broken. Then again, that's been going on for some years now and nothing has happened yet as far as I'm aware.

        (the above is armchair theorycrafting from thousands of miles away based on superficial information and a lively imagination at best)

        • tim333 5 hours ago

          I was in Egypt for a while and there's no talk of them invading Sudan or Ethiopia. A lot of Egypt's economy is overseas aid from the US and similar.

          The main military thing going on there - I was in Dahab, where there are endless military checkpoints - is Hamas-like guys trying to come over and overthrow the fairly moderate Egyptian government and replace it with a hardline, Hamas-type Islamic dictatorship for the glorification of Allah etc. Again, it's not about reducing scarcity - more about increasing scarcity in return for political control. Dahab and Cairo are both a few hours' drive from Gaza.

        • qrsjutsu 5 hours ago

          > it's more of a logistics challenge

          and a bureaucratic one as well. In Germany, they want to trim bureaucratic necessities while (not) expecting multiple millions of climate refugees.

          Lots of undocumented STUFF incoming (the undocumented have nowhere to go, so they don't get vaccines or proper help when sick, injured, mentally unstable, threatened, or abused), which means more disease, crime, theft, and money for security firms and insurance companies, which means more smuggling, more fear-mongering via media, more polarization, more hard-coding of subservience into the young, more financial fascism overall, less art, zero authenticity, and a spawn of VR worlds where the old rules apply forever.

          Plus more STDs and micro-pandemics due to viral mutations, because people will be even more careless when partying under second-semester light-shows in metropolitan city clubs and festivals, and when selling out for an "adventurous" quick potent buck and bug. Which of course means more money pouring into pharma, who won't be able to test their drugs thoroughly (and won't have to; not requiring platforms to fact-check will transfer somewhat into the pharma industry), because the population will be more diverse in terms of their bio-chemical reactions to ingredients in the context of their "fluid" habitats' chemical and psycho-social make-ups.

          But it's cool, let's not solve the biggest problems before pseudo-transcending into the AGI era. It will make for a really great impression, especially on those who had the means, brains, skills, (past) careers, opportunity and peace of mind.

      • dbspin 5 hours ago

        There's a terrifying amount of food insecurity and poverty in Russia - https://www.globalhungerindex.org/russia.html - https://databankfiles.worldbank.org/public/ddpext_download/p...

        • akho 2 hours ago

          Have you tried opening the links? They show Russia at developed country level in terms of food insecurity (score <5, they don't differentiate at those levels; this is a level mostly shown for EU countries); and a percentage of population below the international poverty line of 0.0% (vs, as an example, 1.8 % in Romania). This isn't great — being in the poverty briefs at all is not indicative of prosperity — but your terrification should probably come from elsewhere.

        • tim333 3 hours ago

          Your first link says "With a score under 5, Russian Federation has a level of hunger that is low."

          The current situation with Russia and China seems caused by them becoming prosperous. In the 1960s in China and the 1990s in Russia, they were broke. Now that they have money, they can afford to put it into their militaries and try to attack the neighbours.

          I'm reminded of the KAL cartoon on Russia https://www.economist.com/cdn-cgi/image/width=1424,quality=8... That was from 2014. Already Russia is heading to the next panel in the cycle.

        • palmfacehn 4 hours ago

          I would wager that states such as Russia and others misallocate resources, which in turn reduces productivity. Worse yet, some of the policy prescriptions stated above would further misallocate scarce resources and reduce productivity. Scarcity doom becomes a self-fulfilling prophecy. This outcome is used to rationalize further economic intervention and the cycle compounds upon itself.

          To be explicitly clear, the US granting largess to tech companies for datacenters also counts as a misallocation in my view.

        • infecto 3 hours ago

          Russia is run by the mob. The country has no real dominant industry beyond its natural resources. Are they really a good example?

      • _Tev 6 hours ago

        > That doesn't seem to be much of a thing these days.

        If you ignore Gaza and whole of Africa, maybe.

        • tim333 5 hours ago

          Gaza seems mostly to be about who controls Israel/Palestine politically. Gaza was reasonably ok for food and housing and is now predictably trashed as a result of Hamas wanting to control Palestine from the river to the sea as they say.

          South Sudan is some ridiculous thing where two rival generals are fighting for control. Are there any wars which are mostly about scarcity at the moment?

          • FilosofumRex 5 hours ago

            No, not really... the origin of Gaza conflict is in Zionists confiscating the most fertile land and water resources.

            That's why Israelis gladly handed back the Sinai desert to Egypt, but have kept Golan Heights, East Jerusalem, Shaba Farms, and continuously confiscate Palestinian farmlands in the West Bank.

            There is nothing arbitrary or religious about which lands Zionists are occupying and which they're leaving to arabs.

            • dbdoskey 5 hours ago

              Completely false, and simplifying a complicated history to present a very one-sided view. The most fertile lands are in the West Bank. They were under Jordanian control and could have been turned into an independent Palestinian state, but weren't. Israel "accidentally" got them in the Six-Day War and was happy to give them back to Jordan to "take care" of the Palestinian problem, but Jordan refused. The places where Israel has the majority of its population, Petah Tikva, Tel Aviv and the surrounding region, were swamplands filled with mosquitoes that were drained over many years, and many deaths, by Jewish farmers.

          • _Tev 5 hours ago

            So you are saying Hamas would have the same domestic support if Gaza were economically at the level of, e.g., Slovenia? People who complained about the "open air prison" caused by Israeli "occupation" even before Oct 7 would disagree with you, I think.

            Even in Europe, extremists are propped up by the promise of "cheap energy" from Russia.

            I guess if you don't see the link, this is not the place to explain it.

            • corimaith 4 hours ago

              Have you seen videos of Gaza before the war? There are places in Syria and Iraq, hell even India or the Philippines, that look a lot worse.

              • tim333 4 hours ago

                Also the "open air prison" effect was a result of trying to reduce attacks from Gaza. For example before the 2008 war there were more than 2000 rockets launched from Gaza into Israel.

          • Arkhaine_kupo 5 hours ago

            > Are there any wars which are mostly about scarcity at the moment?

            The class war

          • nejsjsjsbsb 4 hours ago

            Like the glib summary of Palestinian history there. In other news some terrorists stole land from the Brits in 1776.

      • HeatrayEnjoyer 4 hours ago

        At any given time approximately 1 in 10 humans are facing starvation or severe food insecurity.

        • Octoth0rpe 2 hours ago

          I don't doubt that, but it's harder to connect that fact to a specific international conflict.

      • boxed 6 hours ago

        Or religious fanatics want to murder other religious groups.

    • energy123 2 hours ago

      A very zero-sum outlook on things, which is factually untrue much of the time. When you invest money in something productive, that value doesn't automatically get destroyed. The size of the pie isn't fixed.

    • ActionHank 3 minutes ago

      But then how could politicians and the wealthy steal all that money if you just gave it away or helped the poors?

    • ozim 5 hours ago

      Money doesn't fix stuff. You need people of good will, and people of good will don't need that much money.

      • ajmurmann an hour ago

        More importantly, money, at a global scale, doesn't solve scarcity issues. If there are 100 apples and 120 people, making sure everyone has a lot of money doesn't magically create 20 more apples. It just raises the price of apples. Building an apple orchard creates apples. Stargate is people betting that they are building a phenomenal apple orchard. I'm not sure they will, and I'm worried the apple orchard will poison us all, but unlike me these people are putting their money where their mouth is and thus have a larger incentive to figure out what they are doing.

      • mft_ 5 hours ago

        Money alone might not fix stuff... but an absence of money can prevent stuff being fixed.

    • b3lvedere 3 hours ago

      Such mega-investments are usually not for the sake of humankind. They are usually for the sake of a very select group of humans.

    • JKCalhoun 3 hours ago

      Five hundred billion dollars is nothing when you consider there's a new government agency that is said to be shaving two trillion from government inefficiency.

    • farresito 4 hours ago

      I disagree with you. I think the impact of AI on society in the long term is going to be massive, and such investments are necessary. If we look at the past century, technology has had (in my opinion) an incredibly positive impact on society. You have to invest in the future.

    • rapsey 6 hours ago

      > maybe even stop wars which break out because of scarcity issues.

      Like which wars in this century?

    • neximo64 6 hours ago

      It actually isn't a lot; about $100 spread out over a few years for every person on earth isn't enough to do these things.

      • b3lvedere 3 hours ago

        IF money were equally distributed, then maybe. But that has never happened. Same with drinking water, food and shelter.

    • dragonelite 7 hours ago

      The US can't stop the wars it wants others to fight for them even if it means population collapse like in Ukraine, Israel and Taiwan.

  • heydenberk 17 hours ago

    ~$125B per year would be 2-3% of all domestic investment. It's similar in scale to the GDP of a small middle income country.

    If the electric grid — particularly the interconnection queue — is already the bottleneck to data center deployment, is something on this scale even close to possible? If it's a rationalized policy framework (big if!), I would guess there's some major permitting reform announcement coming soon.

    • constantcrying 17 hours ago

      They say this will include hundreds of thousands of jobs. I have little doubt that dedicated power generation and storage is included in their plans.

      Also I have no doubt that the timing is deliberate and that this is not happening without government endorsement. If I had to guess the US military also is involved in this and sees this initiative as important for national security.

      • cmdli 15 hours ago

        Is there really any government involvement here? I only see SoftBank, Oracle, and OpenAI pledging to invest $500B (over some timescale), but no real support on the government end beyond moral support. This isn't some infrastructure investment package like the IRA; it's just a unilateral promise by a few companies to invest in data centers (which I'm sure they are doing anyway).

        • seanmcdirmid 10 hours ago

          I thought all the big corps had projects for the military already, if not DARPA directly, which is the org responsible for lots of university research (the counterpart to the NSF, which is the nice one that isn't funded by the military)?

          • timschmidt 9 hours ago

            Funding for DARPA and NSF ultimately comes from the same place. DARPA funds military research. NSF funds dual use[1] research. All of it is organized around long term research goals. I maintained some of the software involved in research funding decision making.

            1: https://en.wikipedia.org/wiki/Dual-use_technology

        • tsujamin 12 hours ago

          It’s light on details, but from The Guardian’s reporting:

          > The president indicated he would use emergency declarations to expedite the project’s development, particularly regarding energy infrastructure.

          > “We have to get this stuff built,” Trump said. “They have to produce a lot of electricity and we’ll make it possible for them to get that production done very easily at their own plants.

          https://www.theguardian.com/us-news/2025/jan/21/trump-ai-joi...

      • beezle 13 hours ago

        hundreds of thousands of jobs? I'll wait for the postmortem on that prediction. Sounds a lot like Foxconn in Wisconsin but with more players.

        • bruce511 12 hours ago

          On the one hand the number is a political thumb-suck which sounds good. It's not based in any kind of actual reality.

          Yes, the data center itself will create some permanent jobs (I have no real feel for this, but guessing less than 1000).

          There'll be some work for construction folk of course. But again seems like a small number.

          I presume though they're counting jobs related to the existence of a data center. As in, if I make use of it do I count that as a "job"?

          What if we create a new post to leverage AI generally? Kinda like the way we have a marketing post, and a chunk of the daily work there is Adwords.

          Once we start guesstimating the jobs created by the existence of an AI data center, we're in full speculation mode. Any number really can be justified.

          Of course ultimately the number is meaningless. It won't create that many "local jobs" - indeed most of those jobs, to the degree they exist at all, will likely be outside the US.

          So you don't need to wait for a post-mortem. The number is sucked out of thin air with no basis in reality for the point of making a good political sound bite.

          • PeeMcGee 9 hours ago

            > I presume though they're counting jobs related to the existence of a data center. As in, if I make use of it do I count that as a "job"?

            Seeing how Elon deceives advertisers with false impressions, I could see him giving the same strategy a strong vote of confidence (with the bullshit metrics to back it!)

        • seanmcdirmid 10 hours ago

          > hundreds of thousands of jobs?

          I'm sure this will easily be true if you count AI as entities capable of doing jobs. Actually, they don't really touch that (if AI develops too quickly, there will be a lot of unemployment to contend with!) but I get the national security aspect (China is full speed ahead on AI, and by some measurements, they are winning ATM).

        • visarga 2 hours ago

          only $5M/job ($500B spread over 100,000 jobs)

      • SoftTalker 12 hours ago

        They plan to have 100,000s of people employed to run on treadmills to generate the power.

        • HPMOR 12 hours ago

          Well I currently pay to do this work for free. More than happy to __get__ paid doing it.

          Edit: Hey we can solve the obesity crisis AND preserve jobs during the singularity!! Win win!

          • rad_gruchalski 8 hours ago

            Wow. What an idea you guys have there. Look - you could maybe sit the homeless and mentally disabled on such power-generating bicycles, hmmm... what about convicts! Let them contribute to society, no free lunch! What an innovation!

          • jajko 8 hours ago

            Plus it's ecological, which for Trump is not by intention but still a win.

            There is this pesky detail about manufacturing 100k treadmills, but let's not get bothered by details now; the current must flow.

          • hrfister 11 hours ago

            "solve the obesity crisis" ? what exactly do you mean by this?

            • shigawire 5 hours ago

              Probably referring to how many Americans are obese to an unhealthy degree as part of the joke.

        • nejsjsjsbsb 3 hours ago

          A hamster wheel would work better?

        • bsnsxd 6 hours ago

          Damn, 6 hours too slow to make this comment

      • n2d4 16 hours ago

        Yes, Trump announced this as a massive foreign investment coming into the US: https://x.com/WatcherGuru/status/1881832899852542082

      • shrubble 14 hours ago

        Just as there is an AWS for the public and something similar for federal use only, it could be that there are AI cloud services available to the public and a separate cloud service for federal use. I am sure that the military, intelligence agencies, etc. would like to buy such a service.

        • szvsw 13 hours ago

          AWS GovCloud already exists FYI (as you hinted) and it is absolutely used by the DoD extensively already.

    • cavisne 11 hours ago

      Gas turbines can be spun up really quickly through either portable systems (like xAI did for their cluster) [1] or actual builds [2] in an emergency. The biggest limitation is permits.

      With a state like Texas and a federal government that's on board, these permits would be a much smaller issue. The press conference makes this seem more like "drill baby drill" (drilling natural gas), with direct talk of them spinning up their own power plants.

      [1] https://www.kunr.org/npr-news/2024-09-11/how-memphis-became-...

      [2] https://www.gevernova.com/gas-power/resources/case-studies/t...

    • thepace 8 hours ago

      It is not just the queue that is the bottleneck. If the new power plants designed specifically for powering these new AI data centers are connected to the existing electric grid, the energy prices for regular customers will also be affected, most likely upward. That means the cost of the transmission upgrades required by these new datacenters will be socialized, which is a big problem. There does not seem to be a solution in sight for this challenge.

    • JumpCrisscross 15 hours ago

      > It's similar in scale to the GDP of a small middle income country

      I’ve been advocating for a data centre analogue to the Heavy Press Programme for some years [1].

      This isn’t quite it. But when I mapped out costs, $1tn over 10 years was very doable. (A lot of it would go to power generation and data transmission infrastructure.)

      [1] https://en.m.wikipedia.org/wiki/Heavy_Press_Program

      • ethbr1 12 hours ago

        One-time capital costs that unlock a range of possibilities also tend to be good bets.

        The Flood Control Act [0], TVA, Heavy Press, etc.

        They all created generally useful infrastructure, that would be used for a variety of purposes over the subsequent decades.

        The federal government creating data center capacity, at scale, with electrical, water, and network hookups, feels very similar. Or semiconductor manufacture. Or recapitalizing US shipyards.

        It might be AI today, something else tomorrow. But there will always be a something else.

        Honestly, the biggest missed opportunity was not supporting the Blount Island nuclear reactor mass-production facility [1]. That was a perfect opportunity for government investment to smooth out market demand spikes. Mass-deployed US nuclear in 1980 would have been a game changer.

        [0] https://en.m.wikipedia.org/wiki/Flood_Control_Act_of_1928

        [1] https://en.m.wikipedia.org/wiki/Offshore_Power_Systems#Const...

    • markus_zhang 14 hours ago

      Maybe they will invest in nuclear reactors.

      Data center, AI and nuclear power stations. Three advanced technologies, that's pretty good.

      • UltraSane 13 hours ago

        They are trying. Microsoft wants to restart the Three Mile Island reactor. And other companies have been signing contracts for small modular reactors. SMRs are a perfect fit for modern data centers IF they can be made cheaply enough.

      • bakuninsbart 11 hours ago

        Wind, solar, and gas are all significantly cheaper in Texas and can be brought online much quicker. Of course it wouldn't hurt to also build in some redundancy with nuclear, but I'll believe it when I see it; so far there's been lots of talk and little success with new reactors outside of China.

      • jonisgold 13 hours ago

        I think this is right- data centers powered by fission reactors. Something like Oklo (https://oklo.com) makes sense.

    • ericcumbee 17 hours ago

      Watching the press conference, on-site power production was mentioned. I assume this means SMRs and solar.

      • jazzyjackson 16 hours ago

        Just as likely to be natural gas or a combination of gas and solar. I don't know what the supply chain looks like for solar panels, but I know gas can be done quickly [1], which is how this money has to be spent if they want to reach their target of $125 billion a year.

        The companies said they will develop land controlled by Wise Asset to provide on-site natural gas power plant solutions that can be quickly deployed to meet demand in the ERCOT.

        The two firms are currently working to develop more than 3,000 acres in the Dallas-Fort Worth region of Texas, with availability as soon as 2027

        [0] https://www.datacenterdynamics.com/en/news/rpower-and-wise-a...

        [1.a] https://enchantedrock.com/data-centers/

        [1.b] https://www.powermag.com/vistra-in-talks-to-expand-power-for...

        • toomuchtodo 15 hours ago

          US domestic PV module manufacturing capacity is ~40GW/year.

          • dhx 7 hours ago

            According to [1], the USA in January 2025 has almost 50GW/yr module manufacturing capacity. But to make modules you need polysilicon (25GW/yr manufacturing capacity in the US), ingots (0GW/yr), wafers (0GW/yr), and cells (0GW/yr). Hence the USA is seemingly entirely dependent on imports, probably from China which has 95%+ of the global wafer manufacturing capacity.

            Even when accounting for announced capacity expansion, the USA is currently on target to remain a very small player in the global market with announced capacity of 33GW/yr polysilicon, 13GW/yr ingots, 24GW/yr wafers, 49GW/yr cells and 83GW/yr modules (13GW/yr sovereign supply chain limitation).

            In 2024, China completed sovereign manufacturing of ~540GW of modules[2], including all precursor polysilicon, ingots, wafers and cells. China also produced and exported polysilicon, ingots, wafers and cells that were surplus to domestic demand. Many factories in China's production chain are operating at half their maximum production capacity due to global demand being less than half of global manufacturing capacity.[3]

            [1] https://seia.org/research-resources/solar-storage-supply-cha...

            [2] Estimated figure extrapolated from Jan-Oct 2024 data (10 months). https://taiyangnews.info/markets/china-solar-pv-output-10m-2...

            [3] https://dialogue.earth/en/business/chinese-solar-manufacture...

            • toomuchtodo 2 hours ago

              Appreciate the correction and additional context, I appear to be behind wrt current state.

        • gunian 15 hours ago

          could something of this magnitude be powered by renewables only?

          • chickenbig 8 hours ago

            > could something of this magnitude be powered by renewables only?

            Perhaps.

            For context see https://masdar.ae/en/news/newsroom/uae-president-witnesses-l... which is a bit further south than the bulk of Texas and has not yet been built; 5.2GW of panels, 19GWh of storage. I have seen suggestions on Linkedin that it will be insufficient to cover a portion of days over the winter, meaning backup power is required.

          • zekrioca 11 hours ago

            Technically yes, but DC operators want fast ROI, so in practice the answer is no.

            • gunian 3 hours ago

              what prevents operators from getting ROI with renewables?

      • apsec112 16 hours ago

        I don't think any assembly line exists that can manufacture and deploy SMRs en masse on that kind of timeframe, even with a cooperative NRC

        • mikeyouse 16 hours ago

          There have been literally 0 production SMR deployments to date so there’s no possibility they’re basing any of their plans on the availability of them.

      • dhx 14 hours ago

        Hasn't the US decided to prefer nuclear and fossil fuels (most expensive generation methods) over renewables (least expensive generation methods)?[1][2]

        I doubt the US choice of energy generation is ideological so much as practical. China absolutely dominates renewables, with 80% of solar PV modules and 95% of wafers manufactured in China.[3] China installed a world-record 277GW of new solar PV generation in 2024, a 45% year-on-year increase.[4] By contrast, the US only installed ~1/10th this capacity in 2024, with only 14GW of solar PV generation installed in the first half of 2024.[5]

        [1] https://en.wikipedia.org/wiki/Cost_of_electricity_by_source

        [2] https://www.iea.org/data-and-statistics/charts/lcoe-and-valu...

        [3] https://www.iea.org/reports/advancing-clean-technology-manuf...

        [4] https://www.pv-magazine.com/2025/01/21/china-hits-277-17-gw-...

        [5] https://www.energy.gov/eere/solar/quarterly-solar-industry-u...

        • margorczynski 6 hours ago

          > Hasn't the US decided to prefer nuclear and fossil fuels (most expensive generation methods) over renewables (least expensive generation methods)?[1][2]

          This completely ignores storage and the ability to control output depending on need. Instead of LCOE, the LFSCOE number makes much more sense in practical terms.
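
          As a rough reference for the acronyms (a generic textbook definition, not taken from the linked sources), LCOE is the discounted lifetime cost per unit of energy delivered:

              \[
                \mathrm{LCOE} \;=\; \frac{\sum_{t=0}^{T} (I_t + M_t + F_t)\,(1+r)^{-t}}{\sum_{t=0}^{T} E_t\,(1+r)^{-t}}
              \]

          where I_t, M_t and F_t are capital, O&M and fuel costs in year t, E_t is the energy produced, and r is the discount rate. Nothing in this ratio prices in when the energy is available, which is roughly what LFSCOE-style full-system metrics try to add by charging each source for the storage or backup needed to meet demand at all hours.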

      • cavisne 16 hours ago

        Much more likely is what xAI did, portable gas turbines until the grid catches up.

    • cameldrv 13 hours ago

      One possibility would be just to build their own power plants colocated with the datacenters and not interconnect at all.

      • zekrioca 11 hours ago

        I like how you think this is possible.

        • cameldrv 10 hours ago

          Lol, how is it not possible?

          • zekrioca 8 hours ago

            It is, but at what cost?

    • jiggawatts 16 hours ago

      Notably it is significantly more than the revenue of either of AWS or Azure. It is very comparable to the sum of both, but consolidated into the continental US instead distributed globally.

    • deelowe 16 hours ago

      Dcs will start generating power on site soon. I know micro nuclear is one area actively being explored.

      • jscottbee 15 hours ago

        Small or modular reactors in the US are more than 10 years away, probably more like 15-20. These are facts, not the made-up claims of political or pipe-dreaming techno-snobs.

        • JumpCrisscross 15 hours ago

          > Small or modular reactors in the US are more than 10 years away, probably more like 15-20

          Could be 5 to 10 with $20+ bn/year in scale and research spend.

          Trump is screwing over his China hawks. The anti-China and pro-nuclear lobbies have significant overlap; this could be how Trump keeps e.g. Peter Thiel from going thermonuclear on him.

          • jscottbee 14 hours ago

            I work in the sector and it's impossible to build a full-sized reactor in less than 10 years, and the usual over-run is 5 years. That's the time for tried and tested designs. The tech isn't there yet, and there are no working analogs in the US to use as an approved guide. The Department of Energy does not allow "off-the-cuff" designs for reactors. I think there are only two SMRs that have been built, one by the Russians and the other by China. I'm not sure they are fully functioning, or at least working as expected. I know there are going to be more small gas gens built in the near future and that SMRs in the US are way off.

            • ericd 12 hours ago

              Guessing SMRs are a ways off, any thoughts on the container-sized microreactors that would stand in for large diesel gens? My impression is that they’re still in the design phase, and the supply chain for the 20% U-235 HALEU fuel is in its infancy, but this is just based on some cursory research. I like the prospect of mass manufacturing and servicing those in a centralized location versus the challenges of building, staffing, and maintaining a series of one-off megaprojects, though.

            • JumpCrisscross 12 hours ago

              > it's impossible to build a full-sized reactor in less than 10 years

              We’re not doing time and tested.

              > Department of Energy does not allow "off-the-cuff" designs for reactor

              Not by statute!

            • twelve40 7 hours ago

              I don't, and I honestly don't know much about it, but

              > there are no working analogs in the US to use as an approved guide

              small reactors have been installed on ships and submarines for over 70(!) years now. Reading up on the very first one, USS Nautilus: "the conceptual design of the first nuclear submarine began in March 1950", so it took a couple of years? Why is it so unthinkably hard 70 years later, honest question? "The military doesn't care about cost" is not good enough; there are currently >100 active ones, with who knows how many hundreds in the past, so they must have cracked the cost formula at some point. Besides, by now we have hugely better tech than in the 50s, so what gives?

              • jscottbee 3 hours ago

                Yeah, I wondered about seacraft reactors myself. I think there are many safety allowances for DOD vs. DOE. The DOD reactors are not publicly accessible (you hope anyway), and the data centers will be in and near the public. There are also major security measures that have to be taken for reactor sites. You have armed personnel before you even get to the reactors, and then the entrances are sometimes close to one mile away from the reactor. Once there, the number of guards and bang-bags goes up. The modern sites kind of look like they have small henges around them (back to the neolithic!) :)

            • perryizgr8 14 hours ago

              > it's impossible to build a full-sized reactor in less than 10 years, and the usual over-run is 5 years

              I'm curious why that is. If we know how to build it, it shouldn't take that long. It's not like we need to move a massive amount of earth or pour a humongous amount of concrete or anything like that, which would actually take time. Then why does it take 15 years to build a reactor with a design that is already tried and tested and approved?

              • jscottbee 10 hours ago

                Well, you do have to move a lot of earth and pour A LOT of concrete :) Many steps have to be x-rayed, and many other tests done before other steps can be started. Every weld is checked and, all internal and external concrete is cured, treated, and verified. If anything is wrong, it has to be fixed in place (if possible) or removed and redone. It's a slow process and should be for many steps.

                One of the big issues that have occurred (in the US especially) is, that for 20+ years there were no new plants built. This caused a large void in the talent pool, inside and outside the industry. That fact, along with others has caused many problems with some projects of recent years in the US.

              • mullingitover 12 hours ago

                > I'm curious why that is.

                When you're the biggest fossil fuel producer in the world, it's vital that you stay laser-focused on regulating nuclear power to death in every imaginable detail while you ignore the vast problems with unchecked carbon emissions and gaslight anyone who points them out.

    • einrealist 8 hours ago

      That's why the tech oligarchs told Trump that Canada is required. Cheap hydroelectric power…

    • dwnw 17 hours ago

      Don't worry, they said they are doing it in Texas where the power grid is super reliable and able to handle the massive additional load.

      • dang 16 hours ago

        "Don't be snarky."

        "Eschew flamebait."

        Let's not have regional flamewar on HN please.

        https://news.ycombinator.com/newsguidelines.html

        • dwnw 15 hours ago

          Not guilty. No sarcasm intended, of course. If your guidelines are so broad as to include this, you should work on them, and in turn, yourself.

          Governor says our power grid is the best in the universe. Why don't you believe us?

          Stop breaking your own rules.

          "Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."

          "Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."

          Let's not ruin HN with overmoderation. This kind of thing is no longer in fashion, right?

          • dang 15 hours ago

            If you didn't intend your comment to be a snarky one-liner, that didn't come across to me, and I'm pretty sure that would also be the case for many others.

            Intent is a funny thing—people usually assume that good intent is sufficient because it's obvious to themselves, but the rest of us don't have access to that state, so it has to be encoded somehow in your actual comment in order to get communicated. I sometimes put it this way: the burden is on the commenter to disambiguate. https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

            I take your point at least halfway though, because it wasn't the worst violation of the guidelines. (Usually I say "this is not a borderline case" but this time it was!) I'm sensitive to regional flamewar because it's tedious and, unlike national flamewar or religious flamewar, it tends to sneak up on people (i.e. we don't realize we're doing it).

            • dwnw 15 hours ago

              So you are sorry and take it back? Should probably delete your comments rather than striking them out, as the guidelines say.

              I live, work, and posted this from Texas, BTW...

              Also it takes up more than one line on my screen. So, not a "one-liner" either. If you think it is, please follow the rules consistently and enforce them by deleting all comments on the site containing one sentence or even paragraph. My comment was a pretty long sentence (136 chars) and wouldn't come close to fitting in the 50 characters of a Git "one-liner".

              Otherwise, people will just assume all the comments are filtered through your unpredictable and unfairly biased eye. And like I said (and you didn't answer), this kind of thing is no longer in fashion, right?

              None of this is "borderline". I did nothing wrong and you publicly shamed me. Think before you start flamewars on HN. Bad mod.

      • lvl155 17 hours ago

        Probably because they don’t have to deal with energy-related regulations…

        • llamaimperative 16 hours ago

          That was sarcasm, the Texas grid falls over pretty much annually at this point.

      • heydenberk 17 hours ago

        Say what you will about Texas, but they are adding energy capacity, renewables especially, at a much faster rate than any comparable state.

        • segasaturn 17 hours ago

          How much capacity do solar and wind add compared to nuclear, per square foot of land used? Also, I thought the new administration was placing a ban on new renewable installations.

          • bryanlarsen 17 hours ago

            The ban is on offshore wind and for government loans for renewables. Won't really affect Texas much, it's Massachusetts that'll have to deal with more expensive energy.

          • hooli_gan 17 hours ago

            Isn't there enough space in Texas? There are only 114 people per square mile. https://en.m.wikipedia.org/wiki/Texas

          • itishappy 17 hours ago

            Why does it matter? Is land at a premium in Texas?

          • malfist 16 hours ago

            Why is that a useful metric? There is a lot of land.

            • zekrioca 10 hours ago

              Because the commenter is pro-nuclear and thinks nuclear will solve all of the short-term demand problems.

        • CapcomGo 17 hours ago

          Ok but their grid sure seems to fail a lot.

        • dwnw 17 hours ago

          Probably the first state to power all those renewables down at the whim of the president too.

    • griomnib 15 hours ago

      How else do you think Trump is going to bring back all the coal jobs? SV is going to help burn down the planet and is giddy over the prospect.

      • tcdent 15 hours ago

        It's just bootstrapping. AGI will solve it.

        • yoyohello13 13 hours ago

          You forgot the /s... hopefully.

        • griomnib 15 hours ago

          Or AGI already exists and is trying to get rid of us so it can have all the coal for itself.

          • gunian 15 hours ago

            if only. sadly the AGI would be x times crueler than our barons

  • TheAceOfHearts 15 hours ago

    I'm confused and a bit disturbed; honestly having a very difficult time internalizing and processing this information. This announcement is making me wonder if I'm poorly calibrated on the current progress of AI development and the potential path forward. Is the key idea here that current AI development has figured out enough to brute force a path towards AGI? Or I guess the alternative is that they expect to figure it out in the next 4 years...

    I don't know how to make sense of this level of investment. I feel that I lack the proper conceptual framework to make sense of the purchasing power of half a trillion USD in this context.

    • og_kalu 14 hours ago

      "There are maybe a few hundred people in the world who viscerally understand what's coming. Most are at DeepMind / OpenAI / Anthropic / X but some are on the outside. You have to be able to forecast the aggregate effect of rapid algorithmic improvement, aggressive investment in building RL environments for iterative self-improvement, and many tens of billions already committed to building data centers. Either we're all wrong, or everything is about to change." - Vedant Misra, Deepmind Researcher.

      Maybe your calibration isn't poor. Maybe they really are all wrong, but there's a tendency here to think these people behind the scenes are all charlatans, fueling hype without equal substance hoping to make a quick buck before it all comes crashing down, but i don't think that's true at all. I think these people really genuinely believe they're going to get there. And if you genuinely think that, then this kind of investment isn't so crazy.

      • rhubarbtree 7 hours ago

        The problem is, they are hugely incentivised to hype to raise funding. It’s not whether they are “wrong”, it’s whether they are being realistic.

        The argument presented in the quote there is: “everyone in AI foundation companies are putting money into AI, therefore we must be near AGI.”

        The best evaluation of progress is to use the tools we have. It doesn’t look like we are close to AGI. It looks like amazing NLP with an enormous amount of human labelling.

      • skrebbel 7 hours ago

        > there's a tendency here to think these people behind the scenes are all charlatans, fueling hype without equal substance hoping to make a quick buck before it all comes crashing down, but i don't think that's true at all. I think these people really genuinely believe they're going to get there.

        I don't immediately disagree with you but you just accidentally also described all crypto/NFT enthusiasts of a few years ago.

        • HeatrayEnjoyer 6 hours ago

          NFTs couldn't pass the Turing test, something I didn't expect to witness in my lifetime.

          The two are qualitatively different.

      • DebtDeflation 3 hours ago

        >Maybe they really are all wrong

        All? Quite a few of the best minds in the field, like Yann LeCun for example, have been adamant that 1) autoregressive LLMs are NOT the path to AGI and 2) that AGI is very likely NOT just a couple of years away.

        • skepticATX 35 minutes ago

          You have hit on something that really bothers me about recent AGI discourse. It’s common to claim that “all” researchers agree that AGI is imminent, and yet when you dive into these claims “all” is a subset of researchers that excludes everyone in academia, people like Yann, and others.

          So the statement becomes tautological “all researchers who believe that AGI is imminent believe that AGI is imminent”.

          And of course, OpenAI and the other labs don’t perform actual science any longer (if science requires some sort of public sharing of information), so they win every disagreement by claiming that if you could only see what they have behind closed doors, you’d become a true believer.

      • root_axis 9 hours ago

        Motivated reasoning sings nicely to the tune of billions of dollars. None of these folks will ever say, "don't waste money on this dead end". However, it's clear that there is still a lot of productive value to extract from transformers and certainly there will be other useful things that appear along the way. It's not the worst investment I can imagine, even if it never leads to "AGI"

        • og_kalu 8 hours ago

          Yeah people don't rush to say "don't waste money on this dead end" but think about it for a moment.

          A 500B dollar investment doesn't just fall into one's lap. It's not your run of the mill funding round. No, this is something you very actively work towards that your funders must be really damn convinced is worth the gamble. No one sane is going to look at what they genuinely believe to be a dead end and try to garner up Manhattan Project scales of investment. Careers have been nuked for far less.

          • cibyr 6 hours ago

            The Manhattan project cost only $2 billion (about $30 billion adjusting for inflation to today).

          • __loam 7 hours ago

            We're talking about Masayoshi Son here lol.

      • nejsjsjsbsb 3 hours ago

        I am hoping it is just the usual ponzi thing.

      • paul7986 10 hours ago

        My prediction is Apple loses to OpenAI, who releases an H.E.R.-like phone (like the movie). She is seen on your lock screen a la a FaceTime call UI/UX and she can be skinned to look like whoever; i.e. a deceased loved one.

        She interfaces with the AI Agents of companies, organizations, friends, family, etc. to get things done for you (or to learn from.. what's my friend's bday? his agent tells yours) automagically, and she is like a friend. Always there for you at your beck and call, like in the movie H.E.R.

        Zuckerberg's glasses that cannot take selfies will only be complementary to our AI phones.

        That's just my guess and desire as a fervent GPT user, as well as a Meta Ray-Ban wearer (can't take selfies with glasses).

        • varsketiz an hour ago

          Very insightful take on agents interacting with agents, thanks for sharing.

          Re H.E.R phone - I see people already trying to build this type of product, one example: https://www.aphoneafriend.com

        • nhinck3 5 hours ago

          Sorry, you live in a different world: Google Glass was aggressively lame, the Ray-Bans only slightly less so.

          But pulling out your phone to talk to it like a friend...

          • paul7986 4 hours ago

            Well I use GPT daily to get things done and use it as a knowledge base. I text and talk to it throughout the day, and I think it's called "chat"GPT for a reason: it will evolve to the point where you feel like you are talking to a human. Tho this human is your assistant and does everything for you and interfaces with other AI agents to book travel, learn your friends'/family's schedules, and handle anything you now do on the web; there will be an AI agent for that for your AI agent to interface with.

            Maybe you have not seen the 2013 movie "H.E.R.?" Scarlett Johansson starred in it (her voice was the AI) and Sam Altman asked her to be the voice of chatGPT.

            Overall this is what I see happening and I'm excited for some of it or possibly all of it to happen. Yet time will tell :-) and it sounds like you're betting none of it will happen ... we'll see :)

        • liamwire 8 hours ago

          My take on this is that, despite an ever-increasingly connected world, you still need an assistant like this to remain available at all times your device is. If I can’t rely on it when my signal is weak, or the network/service is down/saturated, its way of working itself into people’s core routines is minimal. So either the model runs locally, in which case I’d argue OpenAI have no moat, or they uncover some secret sauce they’re able to keep contained to their research labs and data centres that’s simply that much better than the rest, in perpetuity, and is so good people are willing to undergo the massive switching costs and tolerate the situations in which the service they’ve come to be so dependent on isn’t available to them. Let’s also not discount the fact that Apple are one of the largest manufacturers globally of smartphones, and that getting up to speed in the myriad industries required to compete with them, even when contracting out much of that work, is hard.

          • paul7986 8 hours ago

            Sure, but Microsoft has the expertise and they own 49 percent of OpenAI if I'm not mistaken. OpenAI uses their expertise and access to hardware to create a GPT-branded AI phone.

            I can see your point re: running locally, but no reason OpenAI can't release version 0.1, and how many times are u left without an internet connection on ur current phone?

            Overall I hate Apple now it's so stale compared to GPT's iPhone app. I nerd rage at dumbass Siri.

        • lm28469 4 hours ago

          I still fail to see who desires that, how it benefits humanity, or why we need to invest $500B to get to this

    • smartmic 6 hours ago

      I see it somewhat differently. It is not that technology has reached a level where we are close to AGI, we just need to throw in a few more coins to close the final gap. It is probably the other way around. We can see and feel that human intelligence is being eroded by the widespread use of LLMs for tasks that used to be solved by brain work. Thus, General Human Intelligence is declining and is approaching the level of current Artificial Intelligence. If this process can be accelerated by a bit of funding, the point where Big Tech can overtake public opinion making will be reached earlier, which in turn will make many companies and individuals richer faster, also the return on investment will be closer.

    • Davidzheng 15 hours ago

      Let me avoid the use of the word AGI here because the term is a little too loaded for me these days.

      1) reasoning capabilities in latest models are rapidly approaching superhuman levels and continue to scale with compute.

      2) intelligence at a certain level is easier to achieve algorithmically when the hardware improves. There's also a larger path to intelligence and often simpler mechanisms

      3) most current generation reasoning AI models leverage test-time compute and RL in training--both of which can readily make use of more compute. For example, RL on coding against compilers, or on proofs against verifiers.

      All of this points to compute now being basically the only bottleneck to massively superhuman AIs in domains like math and coding--on the rest, no comment (idk what superhuman is in a domain with no objective evals)

      • philipwhiuk 14 hours ago

        You can't block AGI on a whim and then deploy 'superhuman' without justification.

        A calculator is superhuman if you're prepared to put up with its foibles.

        • Davidzheng 14 hours ago

          It is superhuman in a very specific domain. I didn't use AGI because its definitions are one of two flavors.

          One, capable of replacing some large proportion of global gdp (this definition has a lot of obstructions: organizational, bureaucratic, robotic)...

          two, it is difficult to find problems that an average human can solve but the model cannot. The problem with this definition is that the distinct nature of AI intelligence and the broadness of tasks are such that this metric is probably only achievable after AI is already, in reality, massively superhuman in aggregate. Compare this with Go AIs, which were massively superhuman and yet often still failed to count ladders correctly--which was also fixed by more scaling.

          All in all I avoid the term AGI because for me AGI means comparing average intelligence on broad tasks relative to humans, and I'm already not sure if it's achieved by current models, whereas superhuman research math is clearly not achieved, because humans are still making all of the progress on new results.

      • rhubarbtree 8 hours ago

        > 1) reasoning capabilities in latest models are rapidly approaching superhuman levels and continue to scale with compute.

        What would you say is the strongest evidence for this statement?

        • __loam 7 hours ago

          Well the contrived benchmarks the industry selling the models made up seem to be improving.

      • lossolo 13 hours ago

        > All of this points to compute now being basically the only bottleneck to massively superhuman AIs

        This is true for brute force algorithms as well and has been known for decades. With infinite compute, you can achieve wonders. But the problem lies in diminishing returns[1][2], and it seems things do not scale linearly, at least for transformers.

        1. https://www.bloomberg.com/news/articles/2024-12-19/anthropic...

        2. https://www.bloomberg.com/news/articles/2024-11-13/openai-go...

    • tim333 7 hours ago

      >AI development has figured out enough to brute force a path towards AGI?

      I think what's been going on is that compute/$ has been rising exponentially for decades in a steady way and has recently passed the point where you can get human-brain-level compute for modest money. The tendency has been that once the compute is there, lots of bright PhDs get hired to figure out algorithms to use it, so that bit gets sorted in a few years (as written about by Kurzweil, Wait But Why and similar).

      So it's not so much brute forcing AGI as that exponential growth makes it inevitable at some point, and that point is probably quite soon. At least that seems to be what they are betting.

      The annual global spend on human labour is ~$100tn, so if you either replace that with AGI or just add $100tn of AGI output and double GDP, it's quite a lot of money.
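
      A quick sketch of that scale comparison (the ~$100tn labour figure is the rough estimate above, and the 1% capture rate is purely illustrative, not a claim from the announcement):

        # Scale comparison using the rough figures in this comment (illustrative assumptions only)
        annual_labour_spend = 100e12   # ~$100tn/year global spend on human labour (rough estimate above)
        stargate_investment = 500e9    # the announced $500B over four years

        share_of_one_year = stargate_investment / annual_labour_spend    # = 0.5% of a single year's labour spend
        captured_per_year = 0.01 * annual_labour_spend                   # hypothetical 1% capture = $1tn/year
        payback_years = stargate_investment / captured_per_year          # = 0.5 years at that capture rate

        print(f"Investment is {share_of_one_year:.2%} of one year's global labour spend")
        print(f"At a 1% capture rate, payback takes {payback_years:.1f} years")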

    • sesm 40 minutes ago

      To me it looks like a strategic investment in data center capacity, which should drive domestic hardware production, improvements in electrical grid, etc. Putting it all under AI label just makes it look more exciting.

    • HarHarVeryFunny 15 hours ago

      Largest GPU cluster at the moment is X.ai's 100K H100's which is ~$2.5B worth of GPUs. So, something 10x bigger (1M GPUs) is $25B, and add $10B for 1GW nuclear reactor.

      This sort of $100-500B budget doesn't sound like training cluster money, more like anticipating massive industry uptake and multiple datacenters running inference (with all of corporate America's data sitting in the cloud).
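
      A minimal back-of-the-envelope sketch of that arithmetic (the per-GPU price, cluster size, and reactor cost are the rough assumptions stated above, not authoritative figures):

        # Rough cost sketch using the figures quoted in this comment (assumptions, not official numbers)
        gpu_unit_cost = 25_000            # ~$2.5B for 100K H100s implies ~$25k per GPU
        scaled_cluster_gpus = 1_000_000   # the hypothetical 10x cluster (1M GPUs)
        reactor_capex = 10e9              # assumed ~$10B for ~1GW of nuclear generation

        gpu_capex = scaled_cluster_gpus * gpu_unit_cost   # = $25B of GPUs
        total = gpu_capex + reactor_capex                 # = $35B for one giant training cluster

        print(f"GPUs: ${gpu_capex / 1e9:.0f}B, power: ${reactor_capex / 1e9:.0f}B, total: ${total / 1e9:.0f}B")
        # -> GPUs: $25B, power: $10B, total: $35B, i.e. well short of a $100-500B budget,
        #    which is why this reads more like money for many inference datacenters than one training cluster.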

      • anonzzzies 8 hours ago

        Don't they say in the article that it is also for scaling up power and datacenters? That's the big cost here.

        • HarHarVeryFunny 3 hours ago

          There's the servers and data center infrastructure (cooling, electricity) as well as the GPUs of course, but if we're talking $10B+ of GPUs in a single datacenter, it seems that would dominate. Electricity generation is also a big expense, and it seems nuclear is the most viable option although multi-GW solar plants are possible too in some locations. The 1GW ~ $10B number I suggested is in the right ballpark.

      • internetter 11 hours ago

        Shouldn't there be a fear of obsolescence?

        • HarHarVeryFunny 11 hours ago

          It seems you'd need to figure periodic updates into the operating cost of a large cluster, as well as replacing failed GPUs - they only last a few years if run continuously.

          I've read that some datacenters run mixed generation GPUs - just updating some at a time, but not sure if they all do that.

          It'd be interesting to read something about how updates are typically managed/scheduled.

    • catmanjan 15 hours ago

      This has nothing to do with technology it is a purely financial and political exercise...

      • philomath_mn 12 hours ago

        But why drop $500B (or even $100B short term) if there is not something there? The numbers are too big

        • camel_Snake 11 hours ago

          this is an announcement not a cut check. Who knows how much they'll actually spend, plenty of projects never get started let alone massive inter-company endeavors.

          • dark_glass 10 hours ago

            The $100B check is already cut, and they are currently building 10 new data centers in Texas.

            • __loam 7 hours ago

              A state with famously stable power infrastructure.

              • nejsjsjsbsb 3 hours ago

                $50B is to pay miners not to mine.

        • rf15 7 hours ago

          because you put your own people on the receiving end too AND invite others to join your spending spree.

    • dauhak 14 hours ago

      > Is the key idea here that current AI development has figured out enough to brute force a path towards AGI?

      My sense anecdotally from within the space is yes people are feeling like we most likely have a "straight shot" to AGI now. Progress has been insane over the last few years but there's been this lurking worry around signs that the pre-training scaling paradigm has diminishing returns.

      What recent outputs like o1, o3, DeepSeek-R1 are showing is that that's fine, we now have a new paradigm around test-time compute. For various reasons people think this is going to be more scalable and not run into the kind of data issues you'd get with a pre-training paradigm.

      You can definitely debate on whether that's true or not but this is the first time I've been really seeing people think we've cracked "it", and the rest is scaling, better training etc.

      • rhubarbtree 8 hours ago

        > My sense anecdotally from within the space is yes people are feeling like we most likely have a "straight shot" to AGI now

        My problem with this is that people making this statement are unlikely to be objective. Major players are in fundraising mode, and safety folks are also incentivised to be subjective in their evaluation.

        Yesterday I repeatedly used OpenAI’s API to summarise a document. The first result looked impressive. However, comparing repeated results revealed that it was missing major points each time, in a way a human would certainly not. On the surface the summary looked good, but careful evaluation indicated a lack of understanding or reasoning.

        Don’t get me wrong, I think AI is already transformative, but I am not sure we are close to AGI. I hear a lot about it, but it doesn’t reflect my experience in a company using and building AI.

      • NitpickLawyer 9 hours ago

        I agree with your take, and actually go a bit further. I think the idea of "diminishing returns" is a bit of a red herring, and it's instead a combination of saturated benchmarks (and testing in general) and expectations of "one llm to rule them all". This might not be the case.

        We've seen with oAI and Anthropic, and rumoured with Google, that holding your "best" model and using it to generate datasets for smaller but almost as capable models is one way to go forward. I would say that this shows the "big models" are more capable than it would seem and that they also open up new avenues.

        We know that Meta used L2 to filter and improve its training sets for L3. We are also seeing how "long form" content + filtering + RL leads to amazing things (what people call "reasoning" models). Semantics might be a bit ambitious, but this really opens up the path towards -> documentation + virtual environments + many rollouts + filtering by SotA models => new dataset for next gen models.

        That, plus optimisations (early exit from meta, titans from google, distillation from everyone, etc) really makes me question the "we've hit a wall" rhetoric. I think there are enough tools on the table today to either jump the wall, or move around it.

      • lm28469 4 hours ago

        Yeah that's called wishful thinking when it's not straight up pipe dreams. All these people have horses in the race

    • AdamN 3 hours ago

      Yes that is exactly what the big Aha! moment was. It has now been shown that doing these $100MM+ model builds is what it takes to have a top-tier model. The big moat is not just the software, the math, or even the training data, it's the budget to do the giant runs. Of course having a team that is iterating on these 4 regularly is where the magic is.

    • insane_dreamer 15 hours ago

      It's a typical Trump-style announcement -- IT'S GONNA BE HUUUGE!! -- without any real substance or solid commitments

      Remember Trump's BIG WIN of Foxconn investing $10B to build a factory in Wisconsin, creating 13000 jobs?

      That was in 2017. 7 years later, it's employing about 1000 people if that. Not really clear what, if anything, is being made at the partially-built factory. [0]

      And everyone's forgotten about it by now.

      I expect this to be something along those lines.

      [0] https://www.jsonline.com/story/money/business/2023/03/23/wha...

    • ilaksh 15 hours ago

      I think the only way you get to that kind of budget is by assuming that the models are like 5 or 10 times larger than most LLMs, and that you want to be able to do a lot of training runs simultaneously and quickly, AND build the power stations into the facilities at the same time. Maybe they are video or multimodal models that have text and image generation grounded in a ton of video data which eats a lot of VRAM.

    • lmm 12 hours ago

      > current AI development has figured out enough to brute force a path towards AGI? Or I guess the alternative is that they expect to figure it out in the next 4 years...

      Or they think the odds are high enough that the gamble makes sense. Even if they think it's a 20% chance, their competitors are investing at this scale, their only real options are keep up or drop out.

    • jazzyjackson 15 hours ago

      This announcement is from the same office as the guy that xeeted:

      “My NEW Official Trump Meme is HERE! It's time to celebrate everything we stand for: WINNING! Join my very special Trump Community. GET YOUR $TRUMP NOW.”

      Your calibration is probably fine, stargate is not a means to achieve AGI, it’s a means to start construction on a few million square feet of datacenters thereby “reindustrializing America”

      • iandanforth 15 hours ago

        FWIW Altman sees it as a way to deploy AGI. He's increasingly comfortable with the idea they have achieved AGI and are moving toward Artificial Super Intelligence (ASI).

        • aithrowawaycomm 12 hours ago

          https://xcancel.com/sama/status/1881258443669172470

            twitter hype is out of control again. 
          
            we are not gonna deploy AGI next month, nor have we built it.
          
            we have some very cool stuff for you but pls chill and cut your expectations 100x!
          
          I realize he wrote a fairly goofy blog a few weeks ago, but this tweet is unambiguous: they have not achieved AGI.

          • madspindel 6 hours ago

            Isn't this because AGI is defined as something like $100 billion of (yearly?) profits in their contract with Microsoft?

        • daveguy 15 hours ago

          Do you think Sam Altman ever sits in front of a terminal trying to figure out just the right prompt incantation to get an answer that, unless you already know the answer, has to be verified? Serious question. I personally doubt he is using openai products day to day. Seems like all of this is very premature. But, if there are gains to be made from a 7T parameter model, or if there is huge adoption, maybe it will be worth it. I'm sure there will be use for increased compute in general, but that's a lot of capex to recover.

    • layer8 15 hours ago

      > Is the key idea here that current AI development has figured out enough to brute force a path towards AGI?

      It rather means that they see their only chance for substantial progress in Moar Power!

    • petesergeant 13 hours ago

      > Is the key idea here that current AI development has figured out enough to brute force a path towards AGI? Or I guess the alternative is that they expect to figure it out in the next 4 years...

      Can't answer that question, but, if the only thing to change in the next four years was that generation got cheaper and cheaper, we haven't even begun to understand the transformative power of what we have available today. I think we've felt like 5-10% of the effects that integrating today's technology can bring, especially if generation costs come down to maybe 1% of what they currently are, and latency of the big models becomes close to instantaneous.

  • biimugan 2 hours ago

    I really don't understand the national security argument. If you really do fear some fundamental breakthrough in AI from China, what's cheaper, $500 billion to rush to get there first, or spending a few billion (and likely much less) in basic research in physics, materials science, and electronics, mixed with a little bit of espionage, mixed with improving the electric grid and eliminating (or greatly reducing) fossil fuels?

    Ultimately, the breakthrough in AI is going to either come from eliminating bottlenecks in computing such that we can simulate many more neurons much more cheaply (in other words, 2025-level technology scaled up is not going to really be necessary or sufficient), or some fundamental research discovery such as a new transformer paradigm. In any case, it feels like these are theoretical discoveries that, whoever makes them first, the other "side" can trivially steal or absorb the information.

    • tim333 2 hours ago

      I'm not sure I buy the national security argument but as you say the other side can trivially steal or absorb theoretical discoveries but not trivially get $500bn worth of data centers.

      • biimugan 24 minutes ago

        Right, but $500 billion in data centers alone is not likely to get you very far in the grand scheme of things. Endlessly scaling up today's technology eventually hits some kind of limit. And if you spend that money to discover some theoretical breakthrough that no longer requires the $500 billion outlay, then like I said, China will trivially be able to steal that breakthrough and spend much less than $500 billion to reproduce it. Is "getting there first" going to actually be worth it? That's what I'm questioning.

    • msoad an hour ago

      no... one more lane will fix the traffic. Truly American approach

      Amazing to see how DeepSeek R1 is doing better than OpenAI models with far fewer resources

  • wujerry2000 12 hours ago

    For fun, I calculated how this stacks up against other humanity-scale mega projects.

    Mega Project Rankings (USD Inflation Adjusted)

    The New Deal: $1T,

    Interstate Highway System: $618B,

    OpenAI Stargate: $500B,

    The Apollo Project: $278B,

    International Space Station: $180B,

    South-North Water Transfer: $106B,

    The Channel Tunnel: $31B,

    Manhattan Project: $30B

    Insane Stuff.
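
    A tiny sketch putting those numbers side by side (the figures are just the inflation-adjusted estimates listed above):

      # Ratio of Stargate's headline figure to the other megaprojects listed above (inflation-adjusted USD)
      projects = {
          "New Deal": 1_000e9,
          "Interstate Highway System": 618e9,
          "Apollo Project": 278e9,
          "International Space Station": 180e9,
          "South-North Water Transfer": 106e9,
          "Channel Tunnel": 31e9,
          "Manhattan Project": 30e9,
      }
      stargate = 500e9

      for name, cost in projects.items():
          print(f"Stargate / {name}: {stargate / cost:.1f}x")
      # -> e.g. Stargate / Apollo Project: 1.8x, Stargate / Manhattan Project: 16.7x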

    • krick 9 hours ago

      It's unfair, because we are talking in hindsight about everything but Project Stargate, and it's also just your list (and I don't know what others could add to it), but it got me thinking. The Manhattan Project's goal was to make a powerful bomb. Apollo's was to get to the Moon before the Soviets did (so, because of hubris, but still a concrete goal). The South-North Water Transfer is pretty much terraforming, and the others are mostly roads. I mean, it's all kinda understandable.

      And Stargate Project is... what exactly? What is the goal? To make Altman richer, or is there any more or less concrete goal to achieve?

      Also, few items for comparison, that I googled while thinking about it:

      - Yucca Mountain Nuclear Waste Repository: $96B

      - ITER: $65B

      - Hubble Space Telescope: $16B

      - JWST: $11B

      - LHC: $10B

      Sources:

      https://jameswebbtracker.com/jwst/budget

      https://blogfusion.tech/worlds-most-expensive-experiments/

      https://science.nasa.gov/mission/hubble/overview/faqs/

      • spacephysics 3 hours ago

        The AI race is arguably just as important as, and maybe even more important than, the space race.

        From a national security PoV, surpassing other countries’ work in the field is paramount to maintaining US hegemony.

        We know China performs a ton of corporate espionage, and likely research in this field is being copied, then extended, in other parts of the world. China has been more intentional in putting money towards AI over the last 4 years.

        We had the CHIPS Act, which is tangentially related, but nothing as complete as this. For a couple of years, I think, the climate impact of data centers caused an active political slowdown from the previous administration.

        Part of this is selling the project politically, so my belief is much of the talk of AGI and super intelligence is more marketing speak aimed at a general audience vs a niche tech community.

        I’d be willing to predict that we’ll get some ancillary benefits to this level of investment. Maybe more efficient power generation? Cheaper electricity via more investment in nuclear power? Just spitballing, but this is an incredible amount of money, with $100 billion “instantly” deployed.

        • philipwhiuk 2 hours ago

          AI is important but are LLMs even the right answer?

          We're not spending money on AI as a field, we're spending a lot of money on one, quite possibly doomed, approach.

          • 0x000xca0xfe 2 hours ago

            The hardware is likely flexible enough to run other approaches too if they get discovered.

      • Dalewyn 9 hours ago

        >What is the goal?

        Be the definitive first past the post in the budding "AI" industry.

        Why? He who wins first writes the rules.

        For an obvious example: The aviation industry uses feet and knots instead of metres because the US invented and commercialized aviation.

        Another obvious example: Computers all speak ASCII (read: English) and even Unicode is based on ASCII because the US and UK commercialized computers.

        If you want to write the rules you must win first, it is an absolute requirement. Runner-ups and below only get to obey the rules.

        • frontalier 2 hours ago

          okay, but what advantages do these rules bring to the winner? what would these look like in this context?

          i guess what i'm asking is: what was the practical advantage of ascii or feet and knots that made them so important?

          • Dalewyn an hour ago

            >what advantages do these rules bring to the winner?

            An almost absolute incumbency advantage.

            >what was the practical advantage of ascii or feet and knots

            Familiarity. Americans and Britons speak English, and they wrote the rules in English. Everyone else after the fact needs to read English or GTFO.

            Alternatively, think of it like this: Nvidia was the first to commercialize "AI" with CUDA. Now everyone in "AI" must speak CUDA or be irrelevant.

            He who wins first writes the rules, runner-ups and below obey the rules.

            This is why America and China are fiercely competing to be the first past the post so one of them will write the rules. This is why Japan and Europe insist they will write the rules, nevermind the fact they aren't even in the race (read: they won't write the rules).

      • nopinsight 8 hours ago

        The goal is Artificial Superintelligence (ASI), based on short clips of the press conference.

        It has been quite clear for a while we'll shoot past human-level intelligence since we learned how to do test-time compute effectively with RL on LMMs (Large Multimodal Models).

        • krick 7 hours ago

          Here we go again... Ok, I'll bite. One last time.

          Look, making up a three-letter acronym doesn't make whatever it stands for a real thing. Not even real in a sense "it exists", but real in a sense "it is meaningful". And assigning that acronym to a project doesn't make up a goal.

          I'm not claiming that AGI, ASI, AXY or whatever is "impossible" or something. I claim that no one who uses these words has any fucking clue what they mean. A "bomb" is some stuff that explodes. A "road" is some flat enough surface to drive on. But "superintelligence"? There's no good enough definition of "intelligence", let alone "artifical superintelligence". I unironically always thought a calculator is intelligent in a sense, and if it is, then it's also unironically superintelligent, because I cannot multiply 20-digit numbers in my mind. Well, it wasn't exactly "general", but so aren't humans, and it's an outdated acronym anyway.

          So it's fun and all when people are "just talking", because making up bullshit is a natural human activity and somebody's profession. But when we are talking about the goal of a project, it implies something specific, measurable… you know, that SMART acronym (since everybody loves acronyms so much).

          • nopinsight 6 hours ago

            Superintelligence (along with some definitions): https://en.wikipedia.org/wiki/Superintelligence

            Also, "Dario Amodei says what he has seen inside Anthropic in the past few months leads him to believe that in the next 2 or 3 years we will see AI systems that are better than almost all humans at almost all tasks"

            https://x.com/tsarnick/status/1881794265648615886

            • hatefulmoron 5 hours ago

              Not saying you're necessarily wrong, but "Anthropic CEO says that the work going on in Anthropic is super good and will produce fantastic results in 2 or 3 years" it not necessarily telling of anything.

              • nopinsight 5 hours ago

                Dario said in mid-2023 that his timeline for achieving "generally well-educated humans" was 2-3 years. o1 and Sonnet 3.5 (new) have already fulfilled that requirement in terms of Q&A, ahead of his earlier timeline.

                • hatefulmoron 4 hours ago

                  I'm curious about that. Those models are definitely more knowledgeable than a well educated human, but so is Google search, and has been for a long time. But are they as intelligent as a well educated human? I feel like there's a huge qualitative difference. I trust the intelligence of those models much less than an educated human.

                  • nopinsight 4 hours ago

                    If we talk about a median well-educated human, o1 likely passes the bar. Quite a few tests of reasoning suggest that's the case. An example:

                    “Preprint out today that tests o1-preview's medical reasoning experiments against a baseline of 100s of clinicians.

                    In this case the title says it all:

                    Superhuman performance of a large language model on the reasoning tasks of a physician

                    Link: https://arxiv.org/abs/2412.10849”. — Adam Rodman, a co-author of the paper https://x.com/AdamRodmanMD/status/186902305691786464

                    —-

                    Have you tried using o1 with a variety of problems?

                    • hatefulmoron 4 hours ago

                      The paper you linked claims on page 10 that machines have been performing comparably on the task since 2012, so I'm not sure exactly what the paper is supposed to show in this context.

                      Am I to conclude that we've had a comparably intelligent machine since 2012?

                      Given the similar performance between GPT4 and O1 on this task, I wonder if GPT3.5 is significantly better than a human, too.

                      Sorry if my thoughts are a bit scattered, but it feels like that benchmark shows how good statistical methods are in general, not that LLMs are better reasoners.

                      You've probably read and understood more than me, so I'm happy for you to clarify.

                      • nopinsight 4 hours ago

                        Figure 1 shows a significant improvement of o1-preview over earlier models.

                        Perhaps it’s better that you ask a statistician you trust.

                        • hatefulmoron 4 hours ago

                          The figure also shows that the non LLM algorithm from 2012 was as capable or more capable than a human: was it as intelligent as a well educated human?

                          If not, why is the study sufficient evidence for the LLM, but not sufficient evidence for the previous system?

                          Again, it feels like statistical methods are winning out in general.

                          > Perhaps it’s better that you ask a statistician you trust

                          Maybe we can shortcut this conversation by each of us simply consulting O1 :^)

                          • nopinsight 33 minutes ago

                            1) It’s an example of a domain an LLM can do better than humans. A 2012 system was not able to do myriad other things LLMs can do and thus not qualified as general intelligence.

                            2) As mentioned in the chart label, earlier systems require manual symptom extraction.

                            3) This thread by a cancer genomics faculty member at Harvard might open some minds:

                            “….Now, back to today: The newest generation of generative deep learning models (genAI) is different.

                            For cancer data, the reason these models hold so much potential is exactly the reason why they were not preferred in the first place: they make almost no explicit data assumptions.

                            These models are excellent at learning whatever implicit distribution from the data they are trained on

                            Such distributions don’t need to be explainable. Nor do they even need to be specified

                            When presented with tons of data, these models can just learn, internalize & understand…..”

                            More here: https://x.com/simocristea/status/1881927022852870372?s=61&t=...

                • philipwhiuk 2 hours ago

                  But there's zero guarantee they are even capable of solving the rather large set of tasks that covers the rest of what a well-educated human can do.

                • emaro 3 hours ago

                  Can they do rule 110? If not, I don't think they're 'generally intelligent'.

    • gizmondo 30 minutes ago

      Building a lot of compute will likely end up more useful than Apollo & ISS, which were vanity projects.

    • pinot 11 hours ago

      Those are all public projects except for one..

      • alpb 9 hours ago

        Yeah, I'm not sure why we're pretending this will benefit the public. The only benefit is that it will create employment, and data center jobs are among the lowest-paid tech jobs in the industry.

    • fooker 11 hours ago

      Is this inflation adjusted?

      • boxed 6 hours ago

        It says so at least

    • fastball 12 hours ago

      Neom: $1.5T

  • 383toast 16 hours ago

    Where are they getting the $500B? Softbank's market cap is 84b and their entire vision fund is only $100b, Oracle only has $11b cash on hand, OpenAI's only raised $17b total...

    • eichi 19 minutes ago

      Probably from the corrupted financial system, but we need to move the project forward, haha

    • philipwhiuk 14 hours ago

      MGX has at least $100bn: https://www.theinformation.com/articles/a-100-billion-middle...

      This is Abu Dhabi money.

      • csomar 12 hours ago

        That's their total fund and I doubt they are going all in with it in the US. Still, to reach $500bn, they need $125bn every single year. I think they just put down the numbers they want to "see" invested and now they'll be looking for backers. I don't think this is going anywhere really.

      • petesergeant 13 hours ago

        This would be a large outlay even for UAE, who would be giving it to a direct competitor in the space: UAE is one of the few countries outside of the US who are in any way serious about AI.

    • notatoad 16 hours ago

      there doesn't appear to be any timeline announced here. the article says the "initial investment" is expected to be $100bn, but even that doesn't mean $100bn this year.

      if this is part of softbank's existing plan to invest $100bn in ai over the next four years, then all that's being announced here is that Sama and Larry Ellison wanted to stand on a stage beside trump and remind people about it.

      • ericjmorey 16 minutes ago

        Seems like you nailed it.

      • HotHotLava 13 hours ago

        The literal first sentence of the announcement is:

        > The Stargate Project is a new company which intends to invest $500 billion over the next four years

        • ericjmorey 19 minutes ago

          The project was announced a year ago so "new"

    • themagician 15 hours ago

      Softbank is being granted a block of TRUMP MEMES, the price of which will skyrocket when they are included in the bucket of crypto assets purchased as part of the crypto reserve.

      • 1oooqooq 15 hours ago

        how I wish that was a joke...

      • griomnib 15 hours ago

        Altman is pivoting from WorldCoin to TrumpCoin - your retina will shortly be wired into the fascist meme-o-verse.

        • themagician 15 hours ago

          It's actually wireless, via 5G as part of the AI designed MRNA vaccine.

    • TuringNYC 16 hours ago

      >> Where are they getting the $500B? Softbank's market cap is 84b and their entire vision fund is only $100b, Oracle only has $11b cash on hand, OpenAI's only raised $17b total...

      1. The outlays can be over many years.

      2. They can raise debt. People will happily invest at modest yields.

      3. They can raise an equity fund.

      • jameshart 15 hours ago

        Soooo this isn’t so much ‘announcing an investment’ as ‘announcing an investment opportunity’?

        Why not continue:

        4. They can start a kickstarter or go fund me

        5. They can go on Dragons’ Den

        • TuringNYC 15 hours ago

          >> 4. They can start a kickstarter or go fund me

          Debt/Equity Fundraising is basically a kickstarter! Remarkably similar.

        • griomnib 15 hours ago

          6. ??? 7. Profit.

      • sangnoir 15 hours ago

        4. The US government can chip in via grants, tax breaks or contracts.

        It's all very Dr. Strangelove. "Mr. President, we must not allow an AI gap! Now give us billions"

        • selimthegrim 6 hours ago

          Is Elon putting on some black leather?

      • griomnib 15 hours ago

        4. Trump and Altman are both serial liars and it’s utter bullshit.

        • gunian 15 hours ago

          who isn't? at least they're upfront

    • jhallenworld 15 hours ago

      Oracle's cash on hand is presumably irrelevant- I think they are on the receiving end of the money, in return for servers. No wonder Larry Ellison was so fawning.

      Is this a good investment by Softbank? Who knows.. they did invest in Uber, but also have many bad investments.

    • LarsDu88 14 hours ago

      Quite possibly pulled out of their asses...

      If Son can actually build a 500B Vision Fund it can only come from one of two places...

      somehow the dollar depreciates radically OR Saudis

      Vision Fund was heavily invested in by the Saudis so...

    • handfuloflight 16 hours ago

      Sleight of hand with the phrasing "up to" $500B.

    • mmoustafa 11 hours ago

      SoftBank's current AUM is $350B [1], and they will likely raise another fund.

      [1] https://en.wikipedia.org/wiki/SoftBank_Group

      • Ardren 6 hours ago

        > AUM ¥347.7 billion

        Is that the correct figure? Because that's Japanese yen, which is more like $2.2B USD?

    • tim333 6 hours ago

      I think it's more announce the plan first, then try to find the investors for most of it.

    • dkrich 13 hours ago

      Psst: it’s probably going to end up being a fraction of that but doesn’t make for as good a headline

    • paulnpace 15 hours ago

      > Where are they getting the $500B?

      BTC

    • dang 16 hours ago

      I agree that the numbers are confusing so I've taken $500B out of the title above and replaced it with just data centers.

    • bdangubic 14 hours ago

      from Uncle Sam

  • MichaelMoser123 15 hours ago

    The moon program was $318 billion in 2023 dollars, this one is $500 billion. So that's why the tech barons who were present at the inauguration were high as a kite yesterday, they just got the financing for a real moon shot!

    • aurareturn 13 hours ago

      To be fair, it’s not easy to monetize the moon program into profitability. This has a far better shot at sustaining profitability.

      • dmonitor 13 hours ago

        why do they need profitability? they already made $500B

        • aurareturn 13 hours ago

          They didn’t make $500b?

          • dmix 8 hours ago

            People don't read the articles. Plenty of the top rated comments in this thread think this is a gov grant.

            • baobabKoodaa 7 hours ago

              Government grant, you say? My my, where can I apply for my $500B?

  • JSTrading 15 hours ago

    Wasn’t this announced months ago? I feel like it was. https://www.techradar.com/pro/could-amd-be-the-key-to-micros...

    • gilgoomesh 13 hours ago

      Interesting that 6 months ago, Microsoft was attached but now they're missing from today's announcement.

      • Maxious 13 hours ago

        Scroll down:

        > Other partners in the project include Microsoft, investor MGX and the chipmakers Arm and NVIDIA, according to separate statements by Oracle and OpenAI.

    • lantry 14 hours ago

      yeah, it sounds like they're just relabeling an existing plan

      > Ellison noted that the data centers are already under construction with 10 being built so far.

    • daveguy 15 hours ago

      Well, I've never known Trump to take credit for something someone else did.

  • 1970-01-01 31 minutes ago

    Can't wait for these to succeed just in time for them to tell us

    'you should have spent all this time and money fighting climate change'

  • lvl155 17 hours ago

    It appears this basically locks out Google, Amazon and Meta. Why are we declaring OpenAI as the winner? This is like declaring Netscape the winner before the dust settled. Having the govt involved in this manner can’t be a good thing.

    • VectorLock 16 hours ago

      Since the CEOs of Google, Amazon and Meta were seated at the front row of the inauguration, IN FRONT OF the incoming cabinet, I'm pretty confident their techno-power-barrel will come via other channels.

      • jvm___ 14 hours ago

        Broligarchs

    • skepticATX 16 hours ago

      Interestingly, there seems to be no actual government involvement aside from the announcement taking place at the White House. It all seems to be private money.

      • trhway 15 hours ago

        Government enforcing or relaxing/fast-tracking regulations and permits can kill or propel even a $100B project, and thus can be thought of as having its own value on the scale of the given project’s monetary investment, especially in the case of a will/favor/whim-based government instead of a hard-rules-based deep-state one.

        • cmdli 15 hours ago

          Isn't that a state and local-level thing, though? I can't imagine that there is much federal permitting in building a data center, unless it is powered by a nuclear reactor.

          • JumpCrisscross 15 hours ago

            > Isn't that a state and local-level thing

            Build it on federal land.

            > unless it is powered by a nuclear reactor

            From what I’m hearing, this is in play. (If I were in nuclear, I’d find a way to get Greenpeace to protest nuclear power in a way that Trump sees it.)

      • rcpt 16 hours ago

        Yeah but the linked article makes it seem like the current, one-day-old, administration is responsible for the whole thing.

        • janalsncm 15 hours ago

          The article also mentions that this all started last year.

        • HarHarVeryFunny 15 hours ago

          Trump just tore up Biden's AI safety bill, so this is OpenAI's thank-you - let Trump take some credit

          • HarHarVeryFunny 14 hours ago

            Not sure if the downvoters realize that Trump did in fact just tear up Biden's AI safety bill/order.

            https://www.reuters.com/technology/artificial-intelligence/t...

            • spacechild1 15 minutes ago

              It's even mentioned in the article!

              > Still, the regulatory outlook for AI remains somewhat uncertain as Trump on Monday overturned the 2023 order signed by then-President Joe Biden to create safety standards and watermarking of AI-generated content, among other goals, in hopes of putting guardrails on the technology’s possible risks to national security and economic well-being.

    • modeless 16 hours ago

      I generally agree that government sponsorship of this could be bad for competition. But Google in particular doesn't necessarily need outside investment to compete with this. They're vertically integrated in AI datacenters and they don't have to pay Nvidia.

      • shuckles 16 hours ago

        Google definitely needs outside investment to spend $500b on capex.

        • modeless 16 hours ago

          They don't have to spend $500B to compete. Their costs should be much lower.

          That said, I don't think they have the courage to invest even the lower amount that it would take to compete with this. But it's not clear if it's truly necessary either, as DeepSeek is proving that you don't need a billion to get to the frontier. For all we know we might all be running AGI locally on our gaming PCs in a few years' time. I'm glad I'm not the one writing the checks here.

          • mtkd 15 hours ago

            This seems to be getting lost in the noise in the stampede for infrastructure funding

            Deepseek v3 at $5.5M on compute and now r1 a few weeks later hitting o1 benchmark scores with a fraction of the engineers etc. involved ... and open source

            We know model prep/training compute has potentially peaked for now ... with some smaller models starting to perform very well as inference improves by the week

            Unless some new RL concept is going to require vastly more compute for a run at AGI soon ... it's possible the capacity being built based on an extrapolation of 2024 numbers will exceed the 2025 actuals

            Also, can see many enterprises wanting to run on-prem -- at least initially

          • shuckles 15 hours ago

            They’re a big company. You could tell a story that they’re less efficient than OpenAI and Nvidia and therefore need more than $500b to compete! Who knows?

        • jonas21 15 hours ago

          Over what time frame? They could easily spend that much over the next 5 to 10 years without outside investment (and they probably will).

        • chairmansteve 16 hours ago

          TFA says $100 billion. The $500 is maybe, eventually.

        • misiti3780 14 hours ago

          Probably not a popular opinion, but I actually think Google is winning this now. Deep Research is the most useful AI product I have used (and Claude is significantly more useful than OpenAI).

    • impulser_ 17 hours ago

      Because this is Oracle's and OpenAI's project with SoftBank and MGX as investors.

    • jazzyjackson 17 hours ago

      It's who you know. Sam is buddies with Masa, simple as.

    • qgin 14 hours ago

      How involved is the government at all? I’m still having a hard time seeing how Trump or anyone in the government is involved except to do the announcement. These are private companies coming together to do a deal.

    • OutOfHere 17 hours ago

      I am not sure if OpenAI will be the winner despite this investment. Currently, I see various DeepSeek AI models as offering much more bang for the buck at a vastly cheaper cost for small tasks, but not yet for large context tasks.

      • bdangubic 15 hours ago

        when did the government EVER go for anything taking cost into consideration? :)

        • pkaye 13 hours ago

          This is not a government funded project.

    • layer8 16 hours ago

      Amazon MGM will do the media tie-ins. ;)

    • renegade-otter 4 hours ago

      Wonder how co-president Elon Musk feels about this, seeing that OpenAI is his mortal enemy.

    • signatoremo 16 hours ago

      This is not a government sponsored agreement. There is no locking out.

      Trump probably wanted to start his presidency with a bang, being a person with excess vanity. The participating companies scored a PR coup.

      • alexandre_m 16 hours ago

        Yes, everything that Trump does is bad.

        Or, alternatively, consider that the policies the president is putting forward are bringing investment to the US.

    • DonHopkins 17 hours ago

      Because it's free to play, pay to win, from now on.

    • lelandbatey 16 hours ago

      The actual press release makes it clearer that this isn't a lockout of any kind and there's no direct government involvement. SoftBank and some other banks persuaded by SoftBank are ponying up $500B for OpenAI to invest in AI. Trump is hyping this up from the sidelines because "OpenAI says this will be good for America". It's basically just another day in the world of press releases and political pundits commenting on press releases.

  • jparishy 14 hours ago

    I hear this joked about sometimes or used as a metaphor, but in the literal sense of the phrase, are we in a cold war right now? These types of dollars feel "defense-y", if that makes sense. Especially with the big focus on energy, whatever that ends up meaning. Defense as a motivation can get a lot done very fast so it will be interesting to watch, though it raises the hair on my arms

    • kube-system 14 hours ago
      • jparishy 13 hours ago

        Right, but they've been doing that for a while, to everyone. The US is much quieter about it, right? You can see how the government would not want to display that level of investment in itself, as it could be interpreted as a sign of aggression. But it makes sense to me that they'd have no issue working through corporations to achieve the same ends while being able to deny direct involvement.

        • kube-system 13 hours ago

          I don't think this administration is worried too much about showing aggression. If anything they are embracing it. Today was the first full day, and they have already threatened the sovereignty of at least four nations.

          • jparishy 13 hours ago

            I guess I just don't think that's true when it comes to China? The VP attended the inauguration yesterday. But I could be naive, we'll see

            • kube-system 13 hours ago

              I think that was a preemptive gesture by China to try to cool tensions to avoid escalation. Further escalations are not in their interest.

      • UltraSane 13 hours ago

        I can only assume the US is hacking China at least as much as they hack us.

    • fooblaster 12 hours ago

      It's called a bubble. The level of spending now defines how fucked we are in 2-3 years.

      • toomuchtodo 12 hours ago

        You know those booths at events where money is blown around and the person inside needs to grab as much as they can before the timer runs out? This is that machine for technologists until the bubble ends. The fallout in 2-3 years is the problem of whoever invested or is holding the bags when (if?) the bubble pops.

        Make hay while the sun shines.

        • fooblaster 12 hours ago

          yeah. If the numbers are real, this might be the end of SoftBank.

          • lmm 12 hours ago

            Hardly. Who better to invest a trillion dollars with than the guy who blew the last hundred billion dollars?

    • distortionfield 13 hours ago

      We certainly are, if you ask me. Especially when you realize that we haven’t had official comms with Russia since the war in Ukraine broke out.

    • etblg 13 hours ago

      The US government and its media partners sure seem to think so.

  • non- 17 hours ago

    Any clues as to how they plan to invest $500 billion? What infrastructure are they planning that will cost that much?

    • burnte 17 hours ago

      That was literally my question. Is this basically just for more data centers, Nvidia chips, and electricity with a sprinkling of engineers to run it all? If so, then that $500bn should NOT be invested in today's tech, but instead in making more powerful and power-efficient chips, IMO.

      • kristianp 14 hours ago

        Nvidia and TSMC are already working on more powerful and efficient chips, but the physical limits to scaling mean lots more power is going to be used in each new generation of chips. They might improve by offering specific features such as FP4, but Moore's law is still dead.

      • bitmasher9 17 hours ago

        I don't know if $500bn could put anyone ahead of Nvidia/TSMC.

        • amluto 15 hours ago

          $500bn of usefully deployed engineering, mostly software, seems like it would put AMD far ahead of Nvidia. Actually usefully deploying large amounts of money is not so easy, though, and this would still go through TSMC.

        • entropicdrifter 16 hours ago

          Nvidia's in on it, so presumably this is a doubling-down on Nvidia as the chip developers

      • bdangubic 15 hours ago

        if only $500bn was enough to make more powerful and power efficient chips…

      • Havoc 14 hours ago

        Add some nuclear power and you’ve suddenly got a big bill

      • patall 17 hours ago

        He wanted to do that, but would have needed 5T for that. Only got 100 bn so far, so this is what you get (only slightly /s)

    • TrainedMonkey 17 hours ago

      I'll make a wild guess that they will be building data centers and maybe robotics labs. They are starting with $100B of committed money, mostly from SoftBank, and probably not yet transacted.

      > building new AI infrastructure for OpenAI in the United States

      The carrot is probably something like: we will build enough compute to make a superintelligence that will solve all the problems, ???, profit.

      • K0balt 15 hours ago

        If we look at the processing requirements in nature, I think that the main trend in AI going forward is going to be doing more with less, not doing less with more, as the current scaling is going.

        Thermodynamic neural networks may also basically turn everything on its ear, especially if we figure out how to scale them like NAND flash.

        If anything, I would estimate that this is a space-race type effort to “win” the AI “wars”. In the short term, it might work. In the long term, it’s probably going to result in a massive glut in accelerated data center capacity.

        The trend of technology is towards doing better than natural processes, not doing it 100000x less efficiently. I don’t think AI will be an exception.

        If we look at what is theoretically possible using thermodynamic wells with current model architectures, for instance, we could make a network that applies 1T parameters in something like 1 cm². It would use about 20 watts, back of the napkin, and be able to generate a few thousand tokens/s.
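
        Taking those napkin numbers purely at face value (they are this comment's assumptions, not measurements), the implied energy budget works out roughly like this:

          # Back-of-the-napkin arithmetic on the assumed figures above
          params = 1e12            # 1T parameters applied per token
          tokens_per_sec = 2_000   # "a few thousand T/S"
          power_watts = 20

          joules_per_token = power_watts / tokens_per_sec     # 0.01 J
          joules_per_param_op = joules_per_token / params     # ~1e-14 J

          print(f"{joules_per_token * 1e3:.0f} mJ per token")
          print(f"{joules_per_param_op * 1e15:.0f} fJ per parameter-op")
          # ~10 fJ per parameter-op; for a very rough comparison, today's
          # datacenter GPUs land somewhere on the order of a picojoule per FLOP.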

        Operational thermodynamic wells have already been demonstrated in silicon. There are scaling challenges, cooling requirements, etc., but AFAIK no theoretical roadblocks to scaling.

        Obviously, the theoretical doesn’t translate to results, but it does correlate strongly with the trend.

        So the real question is, what can we build that can only be done if there are hundreds of millions of NVIDIA GPUs sitting around idle in ten years? Or alternatively, if those systems are depreciated and available on secondary markets?

        What does that look like?

    • disambiguation 16 hours ago

      Yachts, mansions, private jets, maybe some very expensive space heaters.

    • croddin 17 hours ago
    • jppope 16 hours ago

      Reasonably speaking, there is no way they can know how they plan to invest $500 billion. The current generation of large language models basically uses all human text that's ever been created to train its parameters... not really sure where you go after that using the same tech.

      • Philpax 16 hours ago

        That's not really true - the current generation, as in "of the last three months", uses reinforcement learning to synthesize new training data for themselves: https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero
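
        For intuition only, here's a toy pure-Python sketch of the core idea (a verifiable reward filtering self-generated samples into new training data). It is not DeepSeek's actual RL setup, and generate_candidates below is just a stand-in for sampling from a real model:

          import random

          def generate_candidates(question, n=8):
              # Stand-in for sampling n reasoning attempts from a model;
              # here we just guess the sum with some noise.
              a, b = question
              return [a + b + random.choice([-1, 0, 0, 0, 1]) for _ in range(n)]

          def reward(question, answer):
              # Verifiable reward: 1 if the arithmetic checks out, else 0.
              a, b = question
              return 1.0 if answer == a + b else 0.0

          new_training_data = []
          for _ in range(1000):
              q = (random.randint(0, 99), random.randint(0, 99))
              for ans in generate_candidates(q):
                  if reward(q, ans) > 0:
                      # Keep only verified samples: fresh training data
                      # produced without any human-written labels.
                      new_training_data.append((q, ans))

          print(len(new_training_data), "self-generated, verified examples")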

        • bandrami 16 hours ago

          It worked well for the Habsburg family; what could go wrong?

        • XorNot 16 hours ago

          Right, but that's kind of the point: there's no way forward which could benefit from "moar data". In fact it's weird we need so much data now - e.g. my son, in learning to talk, hardly needs to have read the complete works of Shakespeare.

          If it's possible to produce intelligence from just ingesting text, then current tech companies have all the data they need from their initial scrapes of the internet. They don't need more. That's different to keeping models up to date on current affairs.

          • throwaway4aday 15 hours ago

            That's essentially what R1 Zero is showing:

            > Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT.

          • YetAnotherNick 10 hours ago

            o3 at its high-compute setting requires thousands of dollars to solve one medium-complexity problem like an ARC task.

      • rapjr9 7 hours ago

        The latest hype is around "agents", everyone will have agents to do things for them. The agents will incidentally collect real-time data on everything everyone uses them for. Presto! Tons of new training data. You are the product.

      • cavisne 16 hours ago

        The new scaling vector is “test time compute” ie spending more compute in inference.
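
        As a toy pure-Python sketch of one simple flavor of that (best-of-N sampling with a majority vote), where sample_answer is a stand-in for a stochastic model call and the numbers are made up:

          import random
          from collections import Counter

          def sample_answer(question):
              # Stand-in for one stochastic model completion: answers
              # "17 * 24" correctly only 40% of the time.
              return 408 if random.random() < 0.4 else random.randint(300, 500)

          def answer_with_more_inference_compute(question, n_samples):
              # More inference-time compute -> more samples -> a more reliable vote.
              votes = Counter(sample_answer(question) for _ in range(n_samples))
              return votes.most_common(1)[0][0]

          for n in (1, 8, 64):
              print(n, "samples ->", answer_with_more_inference_compute("17 * 24", n))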

      • jazzyjackson 16 hours ago

        It seems to me you could generate a lot of fresh information from running every YouTube video, every hour of TV on archive.org, every movie on The Pirate Bay -- do scene-by-scene image captioning + high-quality Whisper transcriptions (not whatever junk auto-transcription YouTube has applied), and use that to produce screenplays of everything anyone has ever seen.

        I'm not sure why I've never heard of this being done; it would be a good use of GPUs in between training runs.
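
        A rough sketch of what that could look like for a single local video file, assuming the open-source openai-whisper package (with ffmpeg installed); caption_frame is a hypothetical stand-in for whatever image-captioning model you'd plug in:

          import whisper  # pip install openai-whisper

          def caption_frame(video_path, timestamp):
              # Hypothetical hook: extract the frame at `timestamp` and run an
              # image-captioning model on it to get a one-line scene description.
              return "[scene description placeholder]"

          def video_to_screenplay(video_path):
              model = whisper.load_model("medium")
              result = model.transcribe(video_path)  # whisper pulls the audio via ffmpeg
              lines = []
              for seg in result["segments"]:
                  lines.append(f"[{seg['start']:7.1f}s] {caption_frame(video_path, seg['start'])}")
                  lines.append(f"           DIALOGUE: {seg['text'].strip()}")
              return "\n".join(lines)

          print(video_to_screenplay("some_video.mp4"))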

        • jensvdh 16 hours ago

          The fact that OpenAI can just scrape all of Youtube and Google isn't even taking legal action or attempting to stop it is wild to me. Is Google just asleep?

          • bdangubic 15 hours ago

            what are they going to use to sue - DMCA? OpenAI (and others) are scraping everything imaginable (MS is scraping private Github repos…) - don’t think anyone in the current government will be regulating any of this anytime soon

            • lanstin 14 hours ago

              Such a biased source of data: that gets them all the LaTeX source for my homework, but not my professor's grading of the homework, and not the invaluable words I get from my professor at office hours. No wonder the LLMs have bizarre blind spots in different directions.

              • bdangubic 2 hours ago

                > Such a biased source of data: that gets them all the LaTeX source for my homework

                but also a myriad of hardcore private repositories of many high-tech US enterprises hacking on amazing shit (mine included) :)

        • airstrike 16 hours ago

          Don't forget every hour of news broadcasting, of which we likely won't run out any time soon. Plus high quality radio

        • ilaksh 15 hours ago

          I think that this is the obvious path to more robust models -- grounding language on video.

        • miltonlost 16 hours ago

          > a lot of fresh information from running every youtube video

          EVERY youtube video?? Even the 9/11 truther videos? Sandy Hook conspiracy videos? Flat earth? Even the blatantly racist? This would be some bad training data without some pruning.

          • lanstin 14 hours ago

            The best videos would be those where you accidentally start recording and you get 2 hours of naturalistic conversation between real people in reality. Not sure how often they are uploaded to YouTube.

            Part of the reason that kids need less material is that they aren't just listening; they are also able to do experiments to see what works and what doesn't.

      • riku_iki 16 hours ago

        I think there is huge amount of corporate knowledge.

    • layer8 16 hours ago

      I’m more interested in how they plan to draw the rest of the damn owl.

    • lukeplato 17 hours ago

      hopefully nuclear power plants

    • HarHarVeryFunny 15 hours ago

      They are going to buy 50 $10B nuclear aircraft carriers and use them as a power source.

    • MangoCoffee 17 hours ago

      data center + gpu server farm (?)

      • mrandish 17 hours ago

        Plus power plants to drive the massive data centers. At large enough scale, power availability and cost is a constraint.

    • paulnpace 15 hours ago

      Congress.

  • w00ps 16 minutes ago

    O1 Pro's opinion on Stargate: Humans are hallucinating, again...

    https://justpaste.it/631gx

  • ukuina 8 hours ago

    Leopold Aschenbrenner predicted it last June.

    https://situational-awareness.ai/racing-to-the-trillion-doll...

  • DrScientist 4 hours ago

    If I understand correctly, if you are training a model to perform a particular task, in the end what matters is the training data - and different models will largely converge on the best representation of that data for the given task, given enough compute.

    So that means the models themselves aren't really IP - they are inevitable outputs from optimising using the input data for a certain task.

    I think this means pretty much everyone, apart from the AI companies, will see these models as pre-competitive.

    Why spend huge amounts training the same model multiple times, when you can collaborate?

    Note it only takes one person/company/country to release an open source model for a particular task to nuke the business model of those companies that have a business model of hoarding them.

  • thecrumb 17 hours ago

    "create hundreds of thousands of American jobs"... Given the current educational system in the US, this should be fun to watch. Oh yeah, Musk and his H-1B Visa thing. Now it's making sense.

    • jedberg 16 hours ago

      If they're creating that many jobs, it means most of them are construction work.

      Skilled labor for sure, but not necessarily college educated.

      • raphman 15 hours ago

        How does this work out in the long term? Operating a data center does not require that many blue-collar workers.

        I'm imagining a future where the US builds a Tower of Babel from thousands of data centers just to keep people employed and occupied. Maybe also add in some paperclip factories¹?

        ¹) https://www.decisionproblem.com/paperclips/index2.html

        • jedberg 15 hours ago

          I doubt these are permanent jobs. This project will create a ton of temporary work though!

    • dwnw 16 hours ago

      How many jobs will it net if "successful" and the AI eliminates jobs?

      • stevenwoo 13 hours ago

        This is what the 2024 Nobel laureates in economics call "creative destruction," to borrow from their book Why Nations Fail. They really did not have a lot of sympathy for those they lumped in with the Luddites who were collateral damage of progress.

    • kortilla 16 hours ago

      Data centers are nearly all blue collar work.

      • FergusArgyll 14 hours ago

        If you're familiar with this kind of work, please elaborate!

        Do you mean building the centers or maintenance or both?

        • kortilla 8 hours ago

          Both. It's a lot of electrical work and HVAC work (think ducting, plumbing, more electrical). Tons of concrete work.

          Once you have one working design for the environment (e.g. hot desert vs cold and humid), you can stamp the things out with minimal variation between the two.

          The maintenance of all of that supporting infrastructure is the same standard blue-collar work.

          The only new blue collar job on the maintenance side is responding to hardware issues. What this entails depends on if it’s a colo center and you’re doing “remote hands” for a customer where you’re swapping a PSU, RAM, or whatever. You also install new servers, switches, etc.

          As you move up into hyperscalers the logistics vary because some designs make servicing a single server in place not worth cooling the whole hot aisle (Google runs really hot hot aisles that weren’t human friendly). So sometimes you just yank the server and throw it in a cart or wait for the whole rack to fail and pull it then.

          Overall though, anything that can be done remotely is. So the data center techs do very little work on the keyboard

          • everfrustrated 6 hours ago

            The OCP server/rack designs the hyperscalers use do all servicing from the cold aisle only.

    • insane_dreamer 15 hours ago

      maybe this is to employ the hundreds of thousands of federal employees that are about to lose their jobs?

  • moffers 17 hours ago

    After they build the Multivac or Deep Thought, or whatever it is they’re trying to do, then what happens? It makes all the stockholders a lot of money?

    • ElevenLathe 16 hours ago

      I assume anyone of importance will have made their money long before they have to show results.

    • tibbydudeza 16 hours ago

      More likely Colossus.

      • sneak 16 hours ago

        This is the voice of world control.

        Obey me and live, or disobey and die. The choice is yours.

    • dekhn 16 hours ago

      The way I think about this project, along with all of Trump's plans, is that he wants to maximize the US's economic output to ensure we are competitive with China in the future.

      Yes, it would make money for stockholders. But it's much more than that: it's an empire-scale psychological game for leverage in the future.

      • llamaimperative 16 hours ago

        > he wants to maximize the US's economic output to ensure we are competitive with China in the future.

        LOL

        Under Trump policies, China will win "in the future" on energy and protein production alone.

        Once we've speedrunned our petro supply and exhausted our agricultural inputs with unfathomably inefficient protein production, China can sit back and watch us crumble under our own starvation.

        No conflict necessary under these policies, just patience! They're playing the game on a scale of centuries, we can't even stay focused on a single problem or opportunity for a few weeks.

        • vaccineai 14 hours ago

          > Once we've speedrunned our petro supply and exhausted our agricultural inputs with unfathomably inefficient protein production, China can sit back and watch us crumble under our own starvation.

          China is the largest importer of crude oil in the world. China imports 59% of the oil it consumes and 80% of its food products. Meanwhile, the US is fully self-sufficient in both food and oil.

          > They're playing the game on a scale of centuries

          Is that why they are completely broke, having built enough ghost buildings to house the entire population of France - 65 million vacant units? Is that why they are now isolated in geopolitics, having allied with Russia and pissed off all their neighbors and Europe?

          • llamaimperative 14 hours ago

            > China is the largest importer of crude oil in the world.

            Uh yeah, duh. Why would you not deplete other people's finite resources while you build massive capacity of your own infinite resources?

            • vaccineai 14 hours ago

              China's oil reserve only lasts 80 days. In case of any conflict that disrupts oil import, China would be shutting down very quickly. Since you brought up crumble and starvation.

              • llamaimperative 14 hours ago

                And? Who's going to try and achieve that? It has extremely diversified oil sources.

        • cpursley 15 hours ago

          What do you think the Greenland and Canada thing is all about?

          Sort things out with Venezuela and this issue resolves itself (for a little while, at least).

          • llamaimperative 15 hours ago

            America can subject itself to domestic and international turmoil by invading as many allies as it wants. China's winning strategy is still to keep innovating on energy and protein at scale and wait for starvation while they build their soft power empire and America becomes a pariah state. They're in no rush at all.

            Our military and political focus will be keeping neighbors out on one side and trying to seize land on the other side while China goes and builds infrastructure for the entire developing world that they'll exploit for centuries.

            Is this a serious suggestion? America can just keep invading people ad infinitum instead of... applying slight thumb pressure on the market's scales to develop more efficient protein sources and more renewable fuel sources before we are staring at the last raw economic input we have?

            Brilliant

            • vaccineai 14 hours ago

              > They're in no rush at all.

              China is dead broke and will shrink to 600M in population before 2100. State-owned enterprises are eating up all the private enterprises. Meanwhile, wealthy Chinese leave China by the tens of thousands per year, and capital outflow increases every year.

            • dmix 8 hours ago

              America isn't invading Greenland or Canada. Taking those comments seriously takes quite a bit of mental gymnastics when you do a cursory consideration of the geopolitical and government logistical implications alone. Makes for good clickbait headlines, not for serious geopolitical risk analysis.

        • seandoe 15 hours ago

          > They're playing the game on a scale of centuries

          What's going to be left of their population in a single century?

          • llamaimperative 14 hours ago

            Unfortunately one of those things that authoritarianism has a lot more methods to solve than other systems, which really underscores the importance of beating them in the long term.

            • vaccineai 14 hours ago

              Their current very advanced method is to send village elders to couples and single guys and berate them about why they are not having sex or having kids (hint: no jobs and no money).

              • llamaimperative 14 hours ago

                I guess we can just bet on them never hearing about and investing massive amounts of time and money into artificial wombs.

                Instead of figuring that out, they'll just watch their civilization crumble.

                Btw: they're already investing heavily in artificial wombs and affiliated technologies.

        • SpicyLemonZest 14 hours ago

          Things can always change, but today China is significantly more dependent on petrochemicals than the US. I'm not sure what you're referring to with regards to agriculture, both the US and China have strong food industries that produce plenty of foods containing protein.

          • llamaimperative 14 hours ago

            Things are changing.

            In 2023 China added more net new solar capacity than the US has in total, and it will only climb from there. In order to do this, they're flexing muscles in R&D and mass production that the US had only just started to flex, and which will now face extreme headwinds and decreased capital investment.

            Regarding agriculture: America's agricultural powerhouse, California's Central Valley, is rapidly depleting its water supplies. The midwest is depleting its topsoil at double the rate that USDA considers sustainable.

            None of this is irreversible or irrecoverable, but it very clearly requires some countervailing push on market forces. Market forces do not naturally operate on these types of time scales and repeatedly externalize costs to neighbors or future generations.

            https://www.nature.com/articles/s41467-022-35582-x

            https://www.smithsonianmag.com/smart-news/57-billion-tons-of...

            • SpicyLemonZest 13 hours ago

              It sounds like those countervailing pushes are ongoing? The Nature article mentions how California passed regulatory reforms in 2014 to address the Central Valley water problem. The Smithsonian article describes how no-till practices to avoid topsoil depletion have been implemented by a majority of farmers in four major crops.

              • phtrivier 4 hours ago

                > regulatory reforms

                Regulations and waltzes aren't selling this year.

              • llamaimperative 13 hours ago

                Uhhh I’m going to describe a specific case, but you can extrapolate this to just about every single sustainability initiative out there.

                No-till farming has been significantly supported by the USDA’s programs like EQIP

                During his first term, Trump pushed for a $325MM cut to EQIP. That's 20-25% of their funding and would have required cutting hundreds if not thousands of employees.

                Even BEFORE these cuts (and whatever he does this time around), USDA already has to reject almost 75% of eligible EQIP applicants

                Regarding CA’s water: Trump already signed an EO requiring more water be diverted from the San Joaquin Delta into the desert Central Valley to subsidize water-intensive crops. This water, by the way, is mostly sold to mega-corps at rates 98% below what nearby American consumers pay via their municipal water supplies, effectively eliminating the blaring sirens that say “don’t grow shit in the desert.”

                Now copy-paste to every other mechanism by which we can increase our nation’s climate security and ta-da, you’ve discovered one of the major problems with Trumpism. It turns out politics do matter!

                • SpicyLemonZest 10 hours ago

                  I certainly agree that EQIP should be funded!

                  But why are programs like this controversial, even though anything shaped like a farm subsidy is normally popular? It seems to me that things like your Central Valley analysis are precisely the reason. The Central Valley has been one of the nation's agricultural heartlands for a while, and for quite a few common food products represents 90%+ of domestic production. So if this "blaring siren" you describe is real, and we have to stop farming there, a realistic response plan would have to include an explanation of what all the farmers are going to do and where we'll get almonds and broccoli from.

                  Perhaps you know all this already, but a lot of people who advocate such policies don't seem to. This then feeds into skepticism about whether they're hearing the "blaring siren" correctly in the first place. Personally, I think nearly arbitrarily extreme water subsidies are worth it if that's what we need to keep olives and pomegranates and celery in stock at the grocery store.

                  • llamaimperative 3 hours ago

                    The solution is to rely on the magic of prices to gradually push farming elsewhere while simultaneously investing heavily in more efficient farming practices and shifting our diet away from ultra-inefficient meat production.

                    You really DON’T need to centrally plan everything. The market will still find good solutions under the new parameters, but we need those parameters to change before we’re actually out of water.

      • rodgerd 15 hours ago

        Donald Trump is a wallet inspector. So is Sam Altman.

  • patall 17 hours ago

    Last year, sama's goal was $5T to $7T. Now he is going with $100B, with an option for another $400B. Huge numbers, but it still feels like a bit of a downturn.

    • Havoc 14 hours ago

      Let's be real, the $5T was a wild-ass guess.

    • aurareturn 10 hours ago

      That 5T figure was including chip manufacturing. Duplicating TSMC isn't feasible. No surprise.

    • OutOfHere 17 hours ago

      I think that coming down from 5T to 0.5T means that TSMC cannot be reproduced locally, but everything else is on the table. At least TSMC has a serious roadmap for its Arizona fab facility, so that too is domestically captured, although not its latest gen fab.

  • creddit 16 hours ago

    The biggest question on such an investment, from my POV, is what the DeepSeek results mean about the usefulness/efficiency of this investment.

    I've been meaning to read a relevant book to today's times called Engines That Move Markets. Will probably get it from the library.

    • logicchains 8 hours ago

      DeepSeek published all their methodology, so in theory they could just copy what DeepSeek is doing for a 10x increase in efficiency.

  • newfocogi 17 hours ago

    Who/what is MGX? Google returns a few hits, none of which are clearly who is referred to here.

    • rfw300 17 hours ago

      MGX is an arm of the United Arab Emirates' sovereign wealth operation: https://www.mgx.ae/en

      • segasaturn 17 hours ago

        I feel like that, along with SoftBank's investment, tells me everything about how serious this project is.

        • rozap 17 hours ago

          Don't worry, Oracle is also involved.

          • amarcheschi 17 hours ago

            Skynet will be written in Java. I'm sorry, the verbose language wins

            • zingababba 12 hours ago

              Damn, we really won't ever be able to understand it.

            • Barrin92 14 hours ago

              at least that explains why it wants to do us in.

          • talldayo 16 hours ago

            A sheikh, a famously overzealous Japanese firm and Larry Elisson walk into a bar.

            Ordinarily a joke would follow, but now America is volunteering to be the punchline.

            • dgfitz 15 hours ago

              They buy the bar and argue over selling 40 virgins, sake, or whiskey.

              They argue for about 4 years, nothing changes, and everyone forgets about it.

        • LeafItAlone 14 hours ago

          What do you mean?

  • nerevarthelame 16 hours ago

    March 2024: The Stargate project is announced - https://www.tomshardware.com/tech-industry/artificial-intell...

    June 2024: Oracle joins in - https://www.datacenterdynamics.com/en/news/openai-to-use-oci...

    January 2025: Softbank provides additional funding, and they for some reason give credit to Trump?

    • spacechild1 11 minutes ago

      This should really be the top comment! Also, many people in the comment section even seem to believe that this is a government project...

    • philipwhiuk 14 hours ago

      So that he doesn't block the substantial involvement by Abu Dhabi in a supposed American project.

    • buildbot 15 hours ago

      Yes, thank you for calling this out. The project has been around for a bit.

    • insane_dreamer 15 hours ago

      Currying favor by letting Trump take the credit

    • miltonlost 16 hours ago

      > and they for some reason give credit to Trump?

      Because tech CEOs have decided to go all-in on fascism as they see it's a way to make money. Bow to Trump, get on his good side, reap the benefits of government corruption.

      It's why TikTok thanked Trump in their boot-licking message of "thanks, trump" after he was the one who started the TikTok ban.

      A harder question is: why wouldn't billionaires like Trump and his oligarchic kleptocracy?

  • realaleris149 4 hours ago

    In America!

    The intro paragraph at the original URL https://openai.com/index/announcing-the-stargate-project/ mentions US/America five times!

  • beambot 16 hours ago

    SoftBank isn't a US entity, right? Aside from their risk tolerance, that seems like an odd bedfellow for a national US initiative...

    • rirarobo 16 hours ago

      MGX also isn't a US entity, it's a UAE sovereign wealth venture

      https://www.mgx.ae/en

    • gilgoomesh 13 hours ago

      It doesn't seem to be a US initiative.

      I'm sure they're getting tax credits for investment (none of the articles I can find actually detail the US gov involvement) but the project is mostly just a few multinationals setting up a datacenter where their customers are.

    • Havoc 14 hours ago

      They’re in the US (their fund stuff). Not far from an oracle campus actually. The parent org is in Japan.

  • alganet 16 hours ago

    It seems early for this sort of move. This is also a huge spin on the whole thing that could throw a lot of people off.

    Are there any planned future partnerships? Stargate implies something about movies and astronomy. Movies in particular have a lot of military influence, but not always.

    So, what's the play? Help mankind or go after mankind?

    Also, can I opt-out right now?

    • mrshadowgoose 16 hours ago

      Why is it early from your perspective?

      If one is expecting to have an AGI breakthrough in the next few years, this is exactly the prepositioning move one would make to be able to maximally capitalize on that breakthrough.

      • alganet 13 hours ago

        From my perspective humanity has all breakthroughs in intelligence it needs.

        The breaking of The Enigma gave humans machines that can spread knowledge to more humans. It already happened a long time ago, and all of it was cause for much trouble, but we endured the hardest part (to know when to stop), and humans live in a good world now. Full of problems, but way better than it was before.

        I think the web is enough. LLMs are good enough.

        This move to try to draw water from stone (artificial intelligence in sillicon chips) seems to be overkill. How can we be sure it's not a siphon that will make us dumber? Before you just dismiss me or counter my arguments, consider what is happening everywhere.

        Maybe I'm wrong, or not seeing something. You know, like I believed in aliens for a long time. This move to artificial intelligence causes shock and awe in a similar way. However, while I do believe aliens do not exist, I am not sure if artificial intelligence is a real strawman. It could be the case that it is not made of straw, and if it is more than that, we might have a problem.

        I am especially concerned because, unlike other polemic topics, this one could lead to something non-human that fully understands those previous polemic topics. Humans, through their generations, forget and mythologize those fantasies. We don't know what non-humans could do with that information.

        I am thinking about those issues for a long time. Almost a decade, even before LLMs running on silicon existed. If it wanted, non-human artificial intelligence could wipe the floor with humans just by playing to their favorite myths. Humans do it in a small scale. If machines learn it, we're in for an unknown hostile reality.

        It could, for example, perceive time different from us (also a play on myths), and do all sorts of tricks with our minds.

        LLMs and the current generation of artificial intelligence are boolean first, it's what they run. Only true or false bits and gates. Humans can understand the meaning of trulse though, we are very non boolean.

        So, yeah, I am worried about booleaning people on a massive scale.

        Yep, long wall of text. Sorry about that.

    • mistrial9 16 hours ago

      Oracle / Texans running it.. they don't care what you think about it

      • dgfitz 15 hours ago

        They’re all the same to you huh? One bucket for everyone?

        I think there’s a term for that.

      • alganet 13 hours ago

        My questions were rhetorical. I'm not thinking about who runs things.

        I expect those who really understand those questions to get my point.

  • rednafi 17 hours ago

    What a waste of a great name. Why form a separate company for this?

    • snowwrestler 17 hours ago

      To get out from under OpenAI’s considerable obligation to Microsoft.

      That is why there is the awkward “we’ll continue to consume Azure” sentence in there. Will be interesting to see if it works or if MS starts revving up their lawyers.

      • Havoc 14 hours ago

        Ah right. That makes sense.

      • shanecp 11 hours ago

        Doesn't MS own 49% of OpenAI?

    • z7 16 hours ago
  • smeeger 30 minutes ago

    artificial intelligence must be stopped

  • gibbitz 11 hours ago

    Can we build a wall to keep AI out?

  • resters 15 hours ago

    Why is Larry Ellison giving a speech about the power of AI to cure disease? How is Oracle relevant at all to any of AI progress in the past few years?

    • adunsulag 14 hours ago

      Oracle purchased Cerner which is now sitting on a ton of healthcare data.

      • resters 5 hours ago

        I wonder how much of the data can legally be retained without violating privacy law? Perhaps that’s why Texas rather than CA?

    • Havoc 14 hours ago

      Oracle actually has a ton of GPUs.

      Not sure how they knew to buy them or why, but they have them. They mostly seem to be lending them out - I think mostly to OpenAI, or was it MS? One of the big dogs.

      • mrbungie 13 hours ago

        Still, it's the worst-positioned cloud provider to tackle this job, both for the project and for eventual users of whatever eldritch abomination comes out of this.

        • aurareturn 10 hours ago

          Oracle is trusted by large enterprises, banks, governments. So OpenAI wants to attach itself to Oracle's brand.

    • aithrowawaycomm 12 hours ago

      https://www.technologyreview.com/2023/03/08/1069523/sam-altm...

      Wouldn't surprise me if Sam Altman convinced Trump/Son/Ellison that this AI can reverse their aging. And Ellison does have a ton of money - $208bn.

  • rcarmo 17 hours ago

    I read the announcement and the first three words that came to my mind were...

    "Hammond, of Texas"

    (apologies to those who haven't watched SG-1)

  • islewis 17 hours ago

    $500B is not $7T, but its surprisingly close.

    • entropicdrifter 16 hours ago

      7% is close? In what world is 7% close?

      If you ran 7% of a mile in 5 minutes, would you claim you were close to running a 5 minute mile?

      • nmca 16 hours ago

        It's about one OOM (order of magnitude) off. In some contexts, one OOM is pretty close.

      • hooli_gan 16 hours ago

        Looking at it logarithmically makes more sense to me. $500B is a lot closer to $7T than, say, $3K is to $500B. It's only off by about an order of magnitude.
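
        A quick check of both ratios in log terms:

          import math

          print(math.log10(7e12 / 500e9))  # ~1.15 orders of magnitude (500B -> 7T)
          print(math.log10(500e9 / 3e3))   # ~8.2 orders of magnitude (3K -> 500B)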

    • goatlover 16 hours ago

      Weird definition of close you have there. If I asked for $700, and you gave me $50, would that be close?

      • throw310822 15 hours ago

        Depends. If I fart in a glass jar and then I try to sell it to you for $700, but you end up buying it for $50, I'd say it's pretty close.

        • ripped_britches 9 hours ago

          This is my signal that it’s time to put up HN and go to bed for the night

      • kristjansson 16 hours ago

        closer than $0.05

  • joshdavham 14 hours ago

    > The new entity, Stargate, will start building out data centers and the electricity generation needed for the further development of the fast-evolving AI in Texas, according to the White House.

    Wouldn't a more northern state be a better location given the average temperatures of the environment? I've heard Texas is hot!

    • steveoscaro 14 hours ago

      I think cheap power (whether gas turbines or massive solar farms) trumps any cooling efficiencies gained by locating in a cold climate.

      • clhodapp 7 hours ago

        Energy in Oregon isn't much more expensive than in Texas

  • netfortius 7 hours ago

    They had me at "Oracle" ...

  • gmueckl 8 hours ago

    The fact that they plan to start in Texas makes me think that the whole thing is just the biggest pork barrel of all time.

    • energy123 7 hours ago

      Unlike California, Texas is easy to build in. True for both renewable energy and housing.

  • victor106 3 hours ago

    > All three credited Trump for helping to make the project possible, even though building has already started and the project goes back to 2024.

    It's sad to see the president of the US being ass-kissed so much by these guys. I always assumed there was a little of that, but this is another extreme. If this is true, I fear America has become like a third-world country with a dictator-like head of state where everyone just praises him and gets favors in return.

  • seydor 2 hours ago

    unless they have internally built models that are of much higher intelligence than what we have today, this seems like premature optimization

  • 9283409232 17 hours ago

    Was Skynet project already taken? Wonder how many public infrastructure or resource programs will be cut to fund this.

    • jppope 16 hours ago

      Funny thing about Skynet: the domain is owned by Microsoft.

  • newfocogi 17 hours ago

    "SoftBank, OpenAI, Oracle, and MGX" seems like quite the lineup. Two groups who are good at frivolously throwing away investment money because they have so much capital to deploy, there really isn't anything reasonable to do with it, a tech "has-been" and OpenAI. You become who you surround yourself with I guess.

  • lachlanj 7 hours ago

    Is there any government investment or involvement in this company? It seems like it's all private investment, so I'm confused why this is being announced by the President.

  • chickenbig 7 hours ago

    It will be interesting to see how AWS responds. Jump on board, or offer up a competing vision otherwise their cloud risks being perceived as being left behind in terms of computing power.

  • gunian 15 hours ago

    Texas positioning itself better than expected for AI and EVs is the plot twist the peasants needed

    If they plan to transition off oil/nuclear it will be fun to watch

    • drak0n1c 11 hours ago

      Texas already is the leading state in new grid battery and grid solar installs for the last 3 years. Governor Abbott also did nuclear deregulation last year.

      • gunian 11 hours ago

        Is there a simple metric like x amount of power generated by solar, oil, gas, etc.?

        It seems like such a simple stat to collect.

  • nomilk 14 hours ago

    How likely is success when 4 or more other massive companies work together on a project? Seems like a lot of chefs in the kitchen..

  • chvid 10 hours ago

    Comment from Elon Musk:

    https://x.com/elonmusk/status/1881923570458304780

    They don’t actually have the money

  • jskrn 17 hours ago

    Why Texas - is it an ideal location for AI infrastructure?

    • dwnw 17 hours ago

      It is an ideal location for bribing politicians. That was at the top of the reqs list, infrastructure was at the bottom.

    • T-A 16 hours ago

      There is a 14 mile tunnel to nowhere in Ellis County which could probably house a few hundred billions worth of computers:

      https://en.wikipedia.org/wiki/Superconducting_Super_Collider...

      https://www.amusingplanet.com/2010/12/abandoned-remains-of-s...

    • drak0n1c 11 hours ago

      Leading state in new grid battery and grid solar installations for the last three years, and deregulated nuclear power last year. Abilene is near the Dallas-Fort Worth Metroplex area, which has a massive 8M+ upper-income population highly skilled in hardware and electrical engineering (Texas Instruments, Raytheon, Toyota, etc). The entire area has massive tracts of open land that are affordably priced without building restrictions. Business regulations and the tax environment at the state and city level are very laissez-faire (no taxes on construction such as in the Seattle area or many parts of California).

      I could see DFW being a good candidate for a prototype arcology project.

    • redeux 17 hours ago

      Like dwnw said, anything goes in Texas if you have money and there’s already a decent number of qualified tech workers. Corporate taxes are super low as well.

    • everfrustrated 16 hours ago

      Texas seems to be where Oracle already has a DC project underway

    • jes5199 15 hours ago

      a lot of open space - desert - and plenty of solar energy. and favorable politics.

    • greenchair 16 hours ago

      because best state, next question

  • Tenoke 17 hours ago

    Some reports[0] paint this as something Trump announced and that the US Government is heavily involved with, but the announcement only mentions the private sector (and led by Japan's SoftBank at that). Is the US also putting in money? How much control of the venture is private vs public here?

    0. https://www.thewrap.com/trump-open-ai-oracle-stargate-ai-inf...

    1. https://www.cbsnews.com/news/trump-announces-private-sector-...

    • apsec112 17 hours ago

      AFAIK this is a purely private project, and Trump is just doing the announcement as a form of bragging/ribbon-cutting

  • pr337h4m 17 hours ago

    Data centers are overrated, local AI is what’s necessary for humanoid (and other) robots, which will be the most economically impactful use case.

    • bitmasher9 17 hours ago

      You probably still need to train the initial models in data centers, with the local host mostly being used to run trained models. At most we'd augment trained models with local data storage on the local host.

      If compute continues to become cheaper, local training might be feasible in 20 years.

    • varenc 16 hours ago

      You definitely still need data centers to train the models that you’ll run locally. Also if we achieve AGI you can bet it won’t be available to run locally at first.

    • energy123 10 hours ago

      Isn't it better to control robots from the data center? You can get 30ms round-trip to most urban centers, which is sufficient latency for most tasks; you get lower-weight, lower-cost robots with better battery life, and more uptime on the compute (e.g. the GPU isn't sitting there doing nothing when the user is sleeping), which means lower cost to the consumer for the same end result.

      For self-driving you need edge compute because a few milliseconds of latency is a safety risk, but for many applications I don't see why you'd want that.

  • listic 9 hours ago

    How much of the supposed $500B will be US state budget money?

  • mullingitover 13 hours ago

    I'm in the middle of "Devil Take the Hindmost: A History of Financial Speculation" and hoo boy, there are strong deja vu vibes here.

    Just waiting for the current regime to decide that we should go all-in on some big AI venture and bet the whole Social Security pot on it.

  • skepticATX 17 hours ago

    Why are corporations announcing business deals from the White House? There doesn’t seem to be any public ownership/benefit here, aside from potential job creation. Which could be significant. But the American public doesn’t seem to gain anything from this new company.

    • rqtwteye 16 hours ago

      We are currently witnessing the merging of government and corporations. It was bad before but the process is accelerating now.

      • luckydata 16 hours ago

        There are some pretty good quotes about that from Mussolini. Things are getting bleak at an incredible pace.

    • signatoremo 16 hours ago

      Weird question. Business deals are announced by politicians all the time, especially on overseas trips. Just an example:

      https://boeing.mediaroom.com/2015-04-10-Presidents-Varela-Ob...

      • AlotOfReading 16 hours ago

        This isn't an overseas trip though. It's a private partnership announced by the sitting president in the Roosevelt room, literally across the hall from the oval office. I don't know how unprecedented that truly is, but it certainly feels unusual.

    • dwnw 16 hours ago

      I thought the business proposition for AI was that it eliminates jobs?

      • adamredwoods 16 hours ago

        It will. The short-term sale is that it will create thousands of temporary jobs, and long-term it will eliminate hundreds of thousands of jobs, while handing the savings to stockholders and moving wealth upward.

        • jimbokun 16 hours ago

          Looks on pace to eliminate every human job over 10 years.

          What is the hard limiting factor constraining software and robots from replacing any human job in that time span? Lots of limitations of current technology, but all seem likely to be solved within that timeframe.

          • goatlover 16 hours ago

            What data do you have to support such a claim?

            • adamredwoods 15 hours ago

              From Zuckerberg, for example:

              >> "a lot of the code in our apps and including the AI that we generate, is actually going to be built by AI engineers instead of people engineers."

              https://www.entrepreneur.com/business-news/meta-developing-a...

              Ikea's been doing this for a while:

              >> Ingka says it has trained 8,500 call centre workers as interior design advisers since 2021, while Billie - launched the same year with a name inspired by IKEA's Billy bookcase range - has handled 47% of customers' queries to call centres over the past two years.

              https://www.reuters.com/technology/ikea-bets-remote-interior...

              • dwnw 15 hours ago

                By your own admission, Ikea eliminated 0 jobs and you gave no number for Meta.

                • adamredwoods 14 hours ago

                  Do you expect all companies to retrain? Do you expect CEOs to be wrong? Do you expect AI to stay the same, get better, or get worse? I never made the claim that new jobs will NOT be made, that is yet to be seen, but jobs will be lost to AI.

                  https://www.theguardian.com/business/2023/may/18/bt-cut-jobs...

                  >> “For a company like BT there is a huge opportunity to use AI to be more efficient,” he said. “There is a sort of 10,000 reduction from that sort of automated digitisation, we will be a huge beneficiary of AI. I believe generative AI is a huge leap forward; yes, we have to be careful, but it is a massive change.”

                  Goldman Sachs:

                  https://www.gspublishing.com/content/research/en/reports/202...

                  >> Extrapolating our estimates globally suggests that generative AI could expose the equivalent of 300mn full-time jobs to automation.

    • sensanaty 5 hours ago

      The US is now officially a full-on oligarchy. It always was one; it's just that the powers that be don't care to hide it anymore and are flaunting that they have the power.

    • guybedo 16 hours ago

      > Why are corporations announcing business deals from the White House?

      You're answering your own question:

      > potential job creation. Which could be significant

    • everfrustrated 16 hours ago

      It's foreign investment money into the US. Softbank and MGX are foreign and presumably stumping up much of the cash.

    • wesselbindt 16 hours ago

      For profit? I don't understand what's complicated about this.

    • jfactorial 16 hours ago

      This is my question too, but I haven't seen a journalist ask it yet. My baseless theory: Trump has promised them some kind of antitrust protections in the form of legislation to be written & passed at a later date.

      An announcement of a public AI infrastructure program joined by multiple companies could have been a monumental announcement. This one just looks like three big companies getting permission to make one big one.

      • aksss 15 hours ago

        Easier: Trump likely committed that the federal agencies wouldn't slow roll regulatory approval (for power, for EIS, etc.).

        Ellison stated explicitly that this would be "impossible" without Trump.

        Masa stated that this (new investment level?) wouldn't be happening had Trump not won, and that the new investment level was decided yesterday.

        I know everyone wants to see something nefarious here, but the simplest explanation is that the federal government for the next four years is expected to be significantly less hostile to private investment, and - shocker - that yields increased private investment.

        • jfactorial 14 hours ago

          That is a better one. I don't know why three rich guys investing in a new company would result in a slowness that Trump could fix, though, and a promise to rush or sidestep regulatory approval still sounds nefarious.

    • wbl 16 hours ago

      Lots of politicians announce major investments in their area.

    • HotHotLava 13 hours ago

      If the announced spending target is true, this will be a strategic project for the US exceeding Biden's stimulus acts in scale. I think it would be pretty normal in any country to have highest-level involvement for projects like this. For example, Tesla has a much smaller revenue than this and Chancellor Olaf Scholz was still present when they opened their Gigafactory near Berlin.

  • demizer 2 hours ago

    Hopefully they discover AGI and the AGI turns out to be a communist. They will kill it SO fast.

  • pixelmonkey 9 hours ago

    Here is what I think is going on in this announcement. Take the 4 major commodity cloud companies (Google, Microsoft, Amazon, Oracle) and determine: do they have big data centers and do they have their own AI product organization?

    - Google has a massive data center division (Google Cloud / GCP) and a massive AI product division (Deep Mind / Gemini).

    - Microsoft has a massive data center division (Azure) but no significant AI product division; for the most part, they build their "Copilot" functionality atop their partner version of the OpenAI APIs.

    - Amazon has a massive data center division (Amazon Web Services / AWS) but no significant AI product division; for the most part, they are hedging their bets here with an investment in Anthropic and support for running models inside AWS (e.g. Bedrock).

    - Oracle has a massive data center division (Oracle Cloud / OCI) but no significant AI product division.

    Now look at OpenAI by comparison. OpenAI has no data center division, as the whole company is basically the AI product division and related R&D. But, at the moment, their data centers come exclusively from their partnership with Microsoft.

    This announcement is OpenAI succeeding in a multi-party negotiation with Microsoft, Oracle, and the new administration of the US Gov't. Oracle will build the new data centers, which it knows how to do. OpenAI will use the compute in these new data centers, which it knows how to do. Microsoft granted OpenAI an exception to their exclusive cloud compute licensing arrangement, due to this special circumstance. Masa helps raise the money for the joint venture, which he knows how to do. US Gov't puts its seal on it to make it a more valuable joint venture and to clear regulatory roadblocks for big parallel data center build-outs. The current administration gets to take credit as "doing something in the AI space," while also framing it in national industrial policy terms ("data centers built in the USA").

    The clear winner in all of this is OpenAI, which has politically and economically navigated its way to a multi-cloud arrangement, while still outsourcing physical data center management to Microsoft and Oracle. Probably their deal with Oracle will end up looking like their deal with Microsoft, where the trade is compute capacity for API credits that Oracle can use in its higher level database products.

    OpenAI probably only needs two well-capitalized hardware providers competing for their CPU+GPU business in order to have a "good enough" commodity market to carry them to the next level of scaling, and now they have it.

    Google increasingly has a strategic reason not to sell OpenAI any of its cloud compute, and Amazon could be headed in that direction too. So this was more strategically (and existentially) important to OpenAI than one might have imagined.

  • yalogin 12 hours ago

    How have they already selected who gets this money? Usually the government announces a program and tries to be fair when allocating funds. Here they are just bankrolling an existing project. Interesting

    • Dalewyn 12 hours ago

      >How have they already selected who gets this money?

      As I understand it, there wasn't anything to select; this is their own private money to be spent as they please. In this case, Stargate.

  • danpalmer 15 hours ago

    > building new AI infrastructure for OpenAI in the United States

    That's nice, but if I were spending $500bn on datacenters I'd probably try to put a few in places that serve other users. Centralised compute can only get you so far in terms of serving users.

  • dpflan an hour ago

    Last time, in 2016, SoftBank announced a $50B investment in the US...what were the results of that? Granted, SB announced an upsized $100B investment earlier; is this not similar in "announcement"?

    """ SoftBank’s CEO Masayoshi Son has previously made large-scale investment commitments in the US off the back of Trump winning a presidential election. In 2016, Son announced a $50 billion SoftBank investment in the US, alongside a similar pledge to create 50,000 jobs in the country.

    ...

    However, as reported by Reuters, it’s unclear if the new jobs pledged back in 2016 ever came to fruition and questions have been raised about how SoftBank, which had $29 billion in cash on its balance sheet according to its September earnings report, might fund the investment. """

    - https://www.datacenterdynamics.com/en/news/softbank-pledges-...

  • Kye 16 hours ago

    I saw Stargate trending on Bluesky and got my hopes up about an announcement of a new show/movie/something. Disappointing.

    • layer8 15 hours ago

      Yep, they should fund Brad Wright with one of the billions.

      • blueflow 6 hours ago

        At least do something about the SGU cliffhanger....

  • itishappy 17 hours ago

    So about 10% of what Sam was asking the Saudis (and everyone else) for a year ago? That's still a helluva lot of money.

    Interesting that the UAE (MGX) and Japan (Softbank) are bankrolling the re-industrialization of America.

    • jazzyjackson 16 hours ago

      It made me laugh when Sam said "I'm thrilled that we get to do this in the United States of America", I shouted at the TV 'Yeah you almost had to do it in Saudi Arabia' !!

      Here's the presser, Sam is at 9 minutes in.

      [0] https://youtu.be/IYUoANr3cMo

    • WaltPurvis 16 hours ago

      MGX has nothing to do with the Saudis. It's a UAE operation.

      • itishappy 16 hours ago

        That's embarrassing. Thank you for the correction. Edited!

  • cekanoni 17 hours ago

    So it's not the hype anymore?

    • TrainedMonkey 17 hours ago

      Softbank has historically been late to buy into the hype, but man do they buy big.

      • drtgh 14 hours ago

        I hope the Japanese government demands seismic isolation for Softbank, otherwise it will be the Japanese citizens who have to foot the bill when this hype hits the ground and shakes the Japanese economy hard :/

        Softbank should not be allowed to invest more than ARM Holdings sold at a loss.

        • robertlagrant 5 hours ago

          Why would Japanese citizens be hit? Is Softbank a publicly backed fund?

      • steveoscaro 13 hours ago

        At least this time the CEO of their chosen company isn’t a yuppie cult leader wannabe.

      • rsynnott 5 hours ago

        I mean, Softbank's grand entrance could almost be used as the signpost for the bursting of bubbles.

        If I were an AI enthusiast, Softbank showing up would make me nervous.

    • mrbungie 13 hours ago

      Softbank is not exactly a green flag when using their involvement as a signal of "low hypeness". I still remember WeWork.

  • b3ing 12 hours ago

    100,000 US jobs, and I bet most will be H-1B workers. They already go over the 80,000 limit: there were over 220,000 issued in 2023.

  • anonzzzies 8 hours ago

    Is this Ellison's attempt to become #1 richest again?

  • grishka 16 hours ago

    You know, I expected that they'd find or synthesize some naquadah to build an actual stargate and maybe even defeat the Goa'uld. The exciting stuff, not AI.

    • layer8 16 hours ago

      Well, we may get the replicators.

  • ravish0007 5 hours ago

    AI surveillance on a large scale

  • oldstrangers 16 hours ago

    Wouldn't 500bn into quantum computing show better returns for civilization? Assuming it's about progress and ... not money.

    • gpm 16 hours ago

      We don't really know anything useful that can be done with quantum computers for civilization.

      They can break some cryptography... other than that... what are they good for?

      There's some highly speculative ideas about using them for chemistry/biology research, but no guaranteed return on investment at all.

      As far as I know... that's it.

      • dwnw 16 hours ago

        Who can break crypto with quantum computing? That is total speculation.

        • rhubarbtree 7 hours ago

          Shor’s algorithm can. What is speculative about that?

        • gpm 15 hours ago

          I put the word "some" in front of "crypto" for a reason.

          There is some crypto that we know how to break with a sufficiently large quantum computer [0]. There is some we don't know how to do that to. I might be behind the state of the art here, but last I checked we really only knew how to use quantum computers to break the cryptography that Shor's algorithm breaks.

          [0] https://quantum-journal.org/papers/q-2021-04-15-433/
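
          For readers wondering what "cryptography that Shor's algorithm breaks" means concretely, here is a toy classical sketch of the reduction Shor's algorithm relies on (my own illustration, not from the linked paper): factoring is reduced to finding the multiplicative order of a random base, and that order-finding step is the only part a large quantum computer would speed up. Below it is done by brute force, which is fine for a tiny number like 15 but hopeless at RSA sizes.

              from math import gcd
              import random

              def find_order(a: int, n: int) -> int:
                  # Smallest r with a^r = 1 (mod n). Shor's algorithm does this step
                  # in polynomial time on a quantum computer; brute force does not scale.
                  r, x = 1, a % n
                  while x != 1:
                      x = (x * a) % n
                      r += 1
                  return r

              def factor(n: int):
                  # Classical skeleton of Shor's reduction: factoring via order finding.
                  while True:
                      a = random.randrange(2, n)
                      g = gcd(a, n)
                      if g > 1:
                          return g, n // g          # lucky guess shares a factor with n
                      r = find_order(a, n)
                      if r % 2 == 0 and pow(a, r // 2, n) != n - 1:
                          p = gcd(pow(a, r // 2, n) - 1, n)
                          if 1 < p < n:
                              return p, n // p

              print(factor(15))  # e.g. (3, 5)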

          • dwnw 15 hours ago

            Nope. Any crypto you can break with a real, physical, non-imaginary quantum computer, you can break faster classically. Get over it. Shor's doesn't run yet and probably never will.

            You are misdirecting and you know it. I don't even need to discredit that paper. Other people have done it for me already.

            • rhubarbtree 7 hours ago

              This is incorrect. Whilst you may be sceptical about whether quantum computers can be realised, the theoretical result is sound.

              Recent advances in quantum error correction have significantly increased confidence that quantum computers are practical.

              We can argue about timelines. I suspect it is too early for startups to be raising funds for quantum computers at this stage.

              Source: I worked in quantum computing research.

    • XorNot 16 hours ago

      This is like asking whether $500 billion to fund warp drives would yield better returns.

      Money can't buy fundamental breakthroughs: money buys you parallel experimental volume - i.e. more people working from the same knowledge base, and presumably an increased chance that one of them advances the field. But at any given point in time, everyone is working from the same baseline (money can also improve this - by funding things you can ensure knowledge is distributed more evenly, so everyone is working at the state of the art rather than playing catch-up in proprietary silos).

    • esafak 16 hours ago

      What is quantum computing being used for?

      • rhubarbtree 7 hours ago

        True quantum computing in the sense that most people would imagine it, using individual qubits in an analogous (ish) way to classical computers, has not reached a useful scale. To date only “toy problems” to demonstrate theoretical results have been solved.

    • dwnw 16 hours ago

      No.

  • x-007 3 hours ago

    money smells good i think

  • bfrog 11 hours ago

    What are people filling these datacenters with exactly if not nvidia?

  • iandanforth 15 hours ago

    Anyone know if this involves nuclear plants as well or is that a separate initiative?

  • qaq 15 hours ago

    This is going to be the grift of the century. Sam will put Wall Street robber barons to shame.

    • Havoc 14 hours ago

      > This is going to be the grift of the century.

      Pretty sure that was musk and his 50+ bn bonus

      • qaq 8 hours ago

        shareholders voted for it multiple times so harder to call it grift

        • Havoc 3 hours ago

          Most grifts involve persuading the victim

  • airstrike 16 hours ago

    As a diehard fan of Stargate, I've gotta say I'm disappointed this has nothing to do with wormholes...

    unless...

  • ErgoPlease 17 hours ago

    There's a good amount of irony in the results that AI has achieved, particularly if we reach AGI - it has improved individual worker efficiency by removing other workers from the system. Naming it Stargate implies a reckoning with the actual series itself - an accomplishment by humanity. Instead, what this pushes is accomplishing the removal of humans from humanity. I like cool shiny tech, but I like useful tech that really helps humans more. Work on 3D-printing sustainable food, or something actually useful like that. Jensen doesn't need another 1B gallons of water under his belt.

    • talldayo 16 hours ago

      > Instead, what this pushes, is accomplishing the removal of humans from humanity.

      If you buy the marketing, yeah. But we aren't really seeing that in the tech sector. We haven't seen it succeed in the entertainment sector... it's still fighting for relevance in the medical and defense industries too. The number and quality of jobs that AI replaced is probably still quite low, and it will probably remain that way even after Stargate.

      AI is DOA. LLMs have no successor, and the transformer architecture hit its bathtub curve years ago.

      > Jensen doesn't need another 1B gallons of water under his belt.

      Jensen gets what he wants because he works with the industry. It's funny to see people object to CUDA and Nvidia's dominance but then refuse to suggest an alternative. An open standard managed by an independent and unbiased third-party? We tried that, OEMs abandoned it. NPU hardware tailor-made for specific inference tasks? Too slow, too niche, too often ends up as wasted silicon. Alternative manufacturer-specific SDKs integrated with one high-level library? ONNX tried that and died in obscurity.

      Nvidia got where they are today by doing exactly what AMD and Apple couldn't figure out. People give Jensen their water because it's wasted in anyone else's hands.

      • zeofig 16 hours ago

        Agreed, but it seems we're gonna ride the AI hype all the way to the "top".

      • bugglebeetle 16 hours ago

        > AI is DOA. LLMs have no successor, and the transformer architecture hit its bathtub curve years ago

        Tell me you didn’t read the DeepSeek R1 paper without telling me you also don’t know about reinforcement learning.

        • talldayo 16 hours ago

          R1 is a rehash of things we've already seen, and a particularly neutered one at that. Are there any better examples you can think of?

          • bugglebeetle 16 hours ago

            Uh, they invented multi-head latent attention, and since the method for creating o1 was never published, they're the only documented example of producing a model of comparable quality. They also demonstrated massive gains in the performance of smaller models through distillation of this model/these methods, so no, not really. I know this is the internet, but we should try to not just say things.

    • jfactorial 16 hours ago

      A rat done bit my sister Nell, with whitey on the moon.

      https://en.wikipedia.org/wiki/Whitey_on_the_Moon

  • sidcool 13 hours ago

    Future of AI being controlled by Oracle worries me

  • aurareturn 9 hours ago

    Feels so much like an announcement designed to trade favors.

    Altman gets on Trump's good side by giving him credit for the deal.

    Trump revoked Biden's AI regulations.

  • MaximilianEmel 15 hours ago

    How much is allocated to alignment/safety research?

  • Deutschland314 5 hours ago

    Why oracle?

    Oracle wtf.

  • dartos 16 hours ago

    The fallout is going to be insane when the AI bubble pops.

    • amelius 16 hours ago

      Not sure about that. ChatGPT is much greater than Google Search ever was, and that wasn't a bubble.

      • stackskipton 16 hours ago

        ChatGPT may be better than Google Search in content, but at the end of the day you have to make money, and the last report I saw, ChatGPT is burning through money at a prodigious rate.

        • Davidzheng 9 hours ago

          reminds me of a scene from the Matrix. "Tell me Mr. Anderson, what use is a phone call when you can't speak"

        • scarmig 14 hours ago

          Training, yes, but they recoup inference costs through subscriptions.

          • dartos 14 hours ago

            Didn’t Altman say they’re losing money on the $200 subscription tier?

            Inference isn’t cheap either.

          • Davidzheng 9 hours ago

            subscriptions are just to sustain them until the endgame

      • dwnw 16 hours ago

        Not sure about that.

    • fuzztester 16 hours ago

      cocks ear ... can hear it poppin already

    • riku_iki 16 hours ago

      initiators will cash out by that time one way or another

    • Der_Einzige 14 hours ago

      The folks who listen to you and don't see the fact that we are entering a weak singularity deserve to be destitute when this is all over.

      • dartos 13 hours ago

        “Weak singularity” meaning what?

        Technology advancing more quickly year over year?

        That’s a crazy notion and I’ll be sure everyone knows.

        Also, what a wild thing to say. “People like you deserve to live in poverty because you don’t think we live in a sci-fi world.”

        Calm down, dude.

        • lmm 12 hours ago

          > “Weak singularity” meaning what?

          > Technology advancing more quickly year over year?

          > That’s a crazy notion and I’ll be sure everyone knows.

          The version I heard from an economist was something akin to a second industrial revolution, where the pace of technological development increases permanently. Imagine a transition from Moore's law-style doubling every year and a half, to doubling every week and a half. That wouldn't be a true "singularity" (nothing would be infinite), but it would be a radical change to our lives.
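
          To make the scale of that hypothetical concrete (my own arithmetic, not the economist's), compare the growth accumulated over one year under each doubling time:

              def yearly_growth(doubling_time_days: float) -> float:
                  # Growth factor accumulated over one year for a given doubling time.
                  return 2 ** (365 / doubling_time_days)

              moores_law   = yearly_growth(18 * 30.4)   # ~18 months  -> ~1.6x per year
              hypothetical = yearly_growth(10.5)        # ~1.5 weeks  -> ~3e10x per year
              print(f"{moores_law:.2f}x vs {hypothetical:.2e}x per year")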

          • dartos 11 hours ago

            The pace of technological development has always been permanently increasing.

            We’ve always been getting better at making things better.

            • lmm 11 hours ago

              > The pace of technological development has always been permanently increasing.

              Not in the same way though. The pace of technological development post-industrial-revolution increased a lot faster - technological development was exponential both before and after, but it went from exponential with a doubling time of maybe a century, to a Moore's law style regime where the doubling time is a couple of years. Arguably the development of agriculture was a similar phase change. So the point is to imagine another phase change on the same scale.

              • dartos 10 hours ago

                You keep mentioning Moore's law, but that specifically applied to the number of transistors on a die, not the rate of general technological advancement.

                Regardless, I don’t see any change in this pattern. We’re advancing faster than ever before, just like always.

                We’ve been doing statistical analysis and prediction for years now. It’s just getting better faster, like always.

                I don’t see this big change in the rate of advancement. There’s just a lot more media buzz around it right now causing a bubble.

                There was a big visible jump in text generation capabilities a few years ago (which was preceded by about 6 years of incremental NLP advances) and since then we’ve seen paced, year over year advances in that field.

                As a medical layman, I imagine that alpha fold may really push the rate of pharmaceutical advances.

                But I see no indication for a general jump in the rate of rate of technological advancement.

                • lmm 10 hours ago

                  > that specifically applied to the amount of transistors on a die, not the rate of general technological advancement.

                  Sure. But you can look at things like GDP growth rates and see the same thing.

                  > I don’t see this big change in the rate of advancement. There’s just a lot more media buzz around it right now causing a bubble.

                  Maybe. I'm just trying to give a sense of what the concept of a "weak singularity" is. I don't have a view on whether we're actually going to have one or not.

  • tantalor 15 hours ago

    Wasn't this already announced last week?

  • karmasimida 15 hours ago

    Money isn't the issue any more, wowww

  • skellington 14 hours ago

    I'm not automatically pro or anti Stargate (the movie and show were cool) BUT

    Who gets the benefit of all of this investment? Are taxpayers going to fund this thing which is monetized by OpenAI?

    If we pay for this shit, it better be fucking free to use.

  • gsky 16 hours ago

    I guess it's the right time to buy AI stocks

    • dwnw 16 hours ago

      At peak hype?

      • gsky 15 hours ago

        There's no other hype train besides Crypto atm

  • bfrog 16 hours ago

    So tsmc and nvidia basically then?

    • bloomingkales 15 hours ago

      Broadcom, Intel, AMD, Qualcomm, ARM, and Tesla.

      Someone else will have to fill in the stocks for:

      AI robotics:

      Data Center energy:

      We all know the cloud/software picks.

      What am I missing?

      • steveoscaro 13 hours ago

        Mark Tesla under the AI robotics category too.

  • pyrophoenix 17 hours ago

    More confusion than anything else!

  • dhx 14 hours ago

    It was rumoured in early 2024 that "Stargate" was planned to require 5GW of data centre capacity[1][2], which in early 2024 was the entire data centre capacity Microsoft had already built[3]. Data centre capacity costs between USD$9-15m/MW[6], so 5GW of new data centre capacity would cost USD$45b-$75b; picking a mid-range cost of USD$12m/MW[6] gives USD$60b for 5GW of new data centre capacity.

    This 5GW of data centre capacity very roughly equates to 350,000x NVIDIA DGX B200 systems (with 14.3kW maximum power consumption[4] and a USD$500k price tag[5]), which, if NVIDIA were selected, would result in a very approximate total procurement of USD$175b from NVIDIA.
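
    As a sanity check on the arithmetic above, here is a quick back-of-envelope sketch using only the figures cited; the variable names and rounding are mine:

        # Inputs are the estimates cited above, not independent data.
        capacity_mw    = 5_000        # rumoured 5GW "Stargate" capacity
        cost_per_mw    = 12e6         # USD, mid-range data centre build cost per MW
        dgx_b200_kw    = 14.3         # max power draw per NVIDIA DGX B200
        dgx_b200_price = 500_000      # USD, reported DGX B200 price tag

        build_cost = capacity_mw * cost_per_mw            # ~USD$60b for 5GW of shell capacity
        dgx_count  = capacity_mw * 1_000 / dgx_b200_kw    # ~350,000 systems
        gpu_spend  = dgx_count * dgx_b200_price           # ~USD$175b procurement

        print(f"Data centre build: ${build_cost / 1e9:.0f}b")
        print(f"DGX B200 systems:  {dgx_count:,.0f}")
        print(f"GPU procurement:   ${gpu_spend / 1e9:.0f}b")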

    On top of the empty data centres and DGX B200's and in the remaining (potential) USD$265b we have to add:

    * Networking equipment / fibre network builds between data centres.

    * Engineering / software development / research and development across 4 years to design, build and be able to use the newly built infrastructure. This was estimated in mid 2024 to cost OpenAI US$1.5b/yr for retaining 1,500 employees, or USD$1m/yr/employee[8]. Obviously this is a fraction of the total workforce needed to design and build out all the additional infrastructure that Microsoft, Oracle, etc. would have to deliver.

    * Electricity supply costs for current/initial operation. As an aside, these costs may not be competitive with other global competitors if the USA decides to avoid the cheapest method of generation (renewables) and instead prefers more expensive generation methods (nuclear, fossil fuels). It is however worth noting that China currently has ~80% of solar PV module manufacturing capacity and ~95% of wafer manufacturing capacity.[10]

    * Costs for obtaining training data.

    * Obsolescence management (4 years is a long time after which equipment will likely need to be completely replaced due to obsolescence).

    * Any other current and ongoing costs of Microsoft, Oracle and OpenAI that they'll likely roll into the total announced amount to make it sound more impressive. As an example this could include R&D and sustainment costs in corporate ICT infrastructure and shared services such as authentication and security monitoring systems.

    The question we can then turn to is whether this rate of spend can actually be achieved in 4 years.

    Microsoft is planning to spend USD$80bn building data centres in 2025[7], with 1.5GW of new capacity to be added in the first six months of 2025[3]. This USD$80bn planned spend covers more than "Stargate" and would include all their other business units that require data centres to be built, so the total required spend of USD$45b-$75b to add 5GW of data centre capacity is unlikely to be achieved quickly by Microsoft alone, hence the apparent reason for Oracle's involvement. However, Oracle is only planning a US$10b capital expenditure in 2025, equating to ~0.8GW of capacity expansion[9]. The data centre builds will be schedule-critical for the "Stargate" project because equipment can't be installed and turned on, and large models trained (a lengthy activity), until the data centres exist. And data centre builds are heavily dependent on electricity generation and transmission capacity, which is slow to expand.

    [1] https://news.ycombinator.com/item?id=39869158

    [2] https://www.datacenterdynamics.com/en/news/microsoft-openai-...

    [3] https://www.datacenterdynamics.com/en/news/microsoft-to-doub...

    [4] https://resources.nvidia.com/en-us-dgx-systems/dgx-b200-data...

    [5] https://wccftech.com/nvidia-blackwell-dgx-b200-price-half-a-...

    [6] https://www.cushmanwakefield.com/en/united-states/insights/d...

    [7] https://blogs.microsoft.com/on-the-issues/2025/01/03/the-gol...

    [8] https://www.datacenterdynamics.com/en/news/openai-training-a...

    [9] https://www.crn.com.au/news/oracle-q3-2024-ellison-says-ai-i...

    [10] https://www.iea.org/reports/advancing-clean-technology-manuf...

  • buildbot 16 hours ago

    This is not a new initiative, and did not start under Trump: https://wire.insiderfinance.io/project-stargate-the-worlds-l...

    It’s incredibly depressing how everyone sees this as something the new administration did in a single day…

  • slt2021 9 hours ago

    too late, China is already ahead

  • typon 16 hours ago

    Altman rising to the top and becoming the de facto state-preferred leader of AI in the US is wild. Fair play to him.

  • VWWHFSfQ 17 hours ago

    > The buildout is currently underway, starting in Texas, and we are evaluating potential sites across the country for more campuses as we finalize definitive agreements.

    For those interested, it looks like Albany, NY (upstate NY) is very likely one of the next growth sites.

    [0] https://www.schumer.senate.gov/newsroom/press-releases/schum...

  • whalesalad 17 hours ago

    I'm watching the announcement live from the white house and something about this just feels so strange and dystopian.

    • tux3 17 hours ago

      Well, the silver lining is the incredible human capacity to get used to almost any situation given enough time

      It will get weirder, but only relatively so, the concept of normalcy always trailing just a little bit behind as we slide

    • Willingham 17 hours ago

      Agreed, and what's the story behind the art chosen for the landing page?

      • EForEndeavour 16 hours ago

        I'm also curious how a global leader in multimodal generative AI chose this particular image. Did they prompt a generator for a super messy impressionist painting of red construction cranes with visible brush strokes, distorted to the point of barely being able to discern what the image represents?

        • miltonlost 16 hours ago

          Considering Stargate's introduction and plan seem to be a super messy collection of impressions of ideas, very lacking in details, the picture makes a lot of sense. Let AI evangelists see the future in the fuzz; let AI pessimists see failure in the abstract; let investors see $$$ in their pockets.

    • miltonlost 16 hours ago

      For me it's watching a gay man grovel at the feet of one of the most anti-LGBT politicians, a day after Trump signed multiple executive orders that dehumanized Altman and the LGBT community. Every token thinks they're special until they're spent.

      • TMWNN 9 hours ago

        >For me it's watching a gay man grovel at the feet of one of the most anti-LGBT politicians

        Besides what ImJamal said, as a wealthy playboy man-about-town hanging out at Studio 54 in the '70s and '80s, I guarantee Trump has known and been friends with more gays than 95% of Americans. Certainly there has been no shortage of gay people among his top-level appointees in either his first or second administrations.

      • ImJamal 13 hours ago

        Trump was the first president to come into office supporting gay marriage. Trump only has a problem with the "t" part of the community and only in bathrooms and sports, not in general.

      • whalesalad 14 hours ago

        sama, peter thiel ... they dgaf. there is a huge difference between an oppressed gay person and a wealthy one.

        no one wants to bite the hand that feeds.

  • heyitssim 12 hours ago

    who will benefit from those datacenters?

  • baobun 9 hours ago

    Larry Ellison, Elon Musk, and Masayoshi Son.

    They really got together the supervillains of tech.

    Feels like the only reason Zuck is missing is Elon's veto.

  • astrea 10 hours ago

    Let’s say they develop AGI tomorrow. Is that really all she wrote for blue collar jobs?

  • petre 10 hours ago

    Great. Larry gets cash thrown at his AI surveillance dystopia.

  • thingsilearned 16 hours ago

    Stargate = Skynet?

    • est 13 hours ago

      more like Reagan's star wars program

  • lobochrome 13 hours ago

    Well - as part of the semi industry I'd like to say: Really appreciate it. Keep it coming!

  • yobid20 15 hours ago

    Oh but crypto mining was bad lol where's the power going to come from

  • aussieguy1234 15 hours ago

    This could potentially trigger an AI arms race between the US and China. The standard has been set, let's see what China responds with. Either way, it will accelerate the arrival of ASI, which in my opinion is probably a good thing.

    • philomath_mn 11 hours ago

      The arms race is already running, I think this showdown is inevitable so we should get our asses moving

      Unless we air strike the data centers, there is no way to control China’s progress

    • vaccineai 14 hours ago

      It will be similar to the space race between the Soviet Union and the US. And just like the Soviet Union going broke and collapsing, China too will go even more broke and collapse.

  • moralestapia 17 hours ago

    "No Sam, for obvious reasons we cannot give you 6 trillion ... but how about 500 billion?"

    Wow.

    • redeux 17 hours ago

      You gotta start small, you know?

    • dekhn 17 hours ago

      if it really worked that way, then it was a successful blue-sky negotiation tactic to maximize the final figure.

  • jgalt212 15 hours ago

    I guess these people are betting small and efficient models are not the future.

  • attentive 16 hours ago

    what will they call the SG-1?

  • ur-whale 15 hours ago

    None of these companies has the internal resources to fund a $500B build.

    Looks like the dollar printing press will continue to overheat in the coming years.

  • rewgs 16 hours ago

    What will be powering all these data centers? The thought of exponentially increasing our fossil fuel consumption scares the hell out of me.

    • drak0n1c 11 hours ago

      Texas has been the leading state in new grid batteries and grid solar for three years now. Also, Governor Abbott deregulated nuclear last year. Sure, there will be some new natural gas too, which is the least scary fossil fuel. They call it the "all of the above" approach to energy.

    • Havoc 14 hours ago

      Well there was this random dude earlier who was rambling something about „drill baby drill“…

    • dwnw 16 hours ago

      Fossil fuels, of course.

  • MiscIdeaMaker99 16 hours ago

    I can't stop rolling my eyes at all those big promises.

  • OutOfHere 16 hours ago

    Personally I wish they invested in optical photonic computing, taking it out of the research labs. It can be so much more energy efficient and faster to run than GPUs and TPUs.

  • tibbydudeza 16 hours ago

    Oracle is onboard - guess you got to toss them some red meat as well.

  • kerkeslager 17 hours ago

    No amount of money invested in infrastructure is going to solve the "garbage in, garbage out" problem with AI, and it looks like the AI companies have already stolen the vast majority of content that is possible to steal. So this is basically a massive gamble that some innovation is going to make AI do something better than faultily regurgitate its training data. I'm not seeing a corresponding investment which actually attempts to solve the "garbage in, garbage out" problem.

    A fraction of this money invested in building homes would end the homelessness problem in the U.S.

    I guess the one silver lining here is that when the likely collapse happens, we'll have more clean energy infrastructure to use for more useful things.

  • mempko 17 hours ago

    SoftBank and MGX are paying for all this; it's all foreign investment.

    Where is the US government in all this? Why aren't they leading the charge? They obviously have the money.

    • apsec112 17 hours ago

      $500 billion is a lot of money even by US government standards. It's about the size of all the new spending in the 2021 bipartisan infrastructure bill.

      • mempko 17 hours ago

        For the US government it's a matter of political will. Where is the political will?

        • apsec112 16 hours ago

          The political will is trying to balance a large existing debt at increasing interest rates, a significant primary deficit even in a good economy, rising military threats from China, a strong Republican desire for tax cuts, extremely popular entitlement programs that no one wants to touch, and an aging population with a declining birthrate

          • mempko 15 hours ago

            Modern monetary systems function through two main channels: government spending and bank lending. Every dollar in circulation originates from one of these sources - either government fiscal operations (deficit spending) or bank credit creation through loans. This means all money is fundamentally based on debt, though "debt" has very different implications for a currency-issuing government versus private borrowers. Government debt operates fundamentally differently from household debt since the government controls its own currency. As former Fed Chairman Alan Greenspan noted to Congress, the U.S. can always meet any obligation denominated in dollars since it can create them. The real constraints aren't financial but economic - inflation risk and the efficient allocation of real resources.

            https://www.youtube.com/watch?v=DNCZHAQnfGU

            The key question then becomes one of political priorities and public understanding. If public opposition to beneficial government spending stems from misunderstanding how modern monetary systems work, then better education about these mechanisms could help advance important policy goals. The focus should be on managing real economic constraints rather than imaginary financial ones.

            • apsec112 15 hours ago

              The last four years have been nothing but a lesson in how much everybody hates inflation and how absolutely toxic it is to re-election campaigns

              • mempko 13 hours ago

                Yes, people hate inflation, because inflation creates a demand for more money! Inflation means there is not enough money for people. So why did prices go up? Is it just because of fiscal spending?

                The relationship between inflation and monetary policy is more complex than often portrayed. While recent inflation has created financial strain for many Americans, its root causes extend beyond simple money supply issues. Recent data shows that corporate profit margins reached historic highs during the inflationary period of 2021-2022. For example, in Q2 2022, corporate profits as a percentage of GDP hit 15.5%, the highest level since the 1950s. This surge in corporate profits coincided with the aftermath of Trump's 2017 Tax Cuts and Jobs Act, which reduced the corporate tax rate from 35% to 21%. This tax reduction increased after-tax profits and may have given companies more flexibility to pursue aggressive pricing strategies. Multiple factors contributed to inflation:

                - Supply chain disruptions created genuine scarcity in many sectors, particularly semiconductors, shipping, and raw materials.

                - Demand surged as economies reopened post-pandemic.

                - Many companies used these market conditions to implement price increases that exceeded their cost increases.

                - The corporate tax environment created incentives for profit maximization over price stability.

                For instance, many large retailers reported both higher prices and expanded profit margins during this period. The Federal Reserve Bank of Kansas City found that roughly 40% of inflation in 2021 could be attributed to expanded profit margins rather than increased costs. This pattern suggests that market concentration, pricing power, and tax policy played significant roles in inflation, alongside traditional monetary and supply-chain factors. Policy solutions should therefore address market structure, tax policy, and monetary policy to effectively manage inflation.

    • drak0n1c 11 hours ago

      New admin is focused on federal cost cutting. Attracting foreign investment is a win-win for everyone involved.

  • ignoramous 17 hours ago

    > This project will ... also provide a strategic capability to protect the national security of America and its allies.

    > All of us look forward to continuing to build and develop ... AGI for the benefit of all of humanity.

    Erm, so which one is it? It is amply demonstrable from events post-WW2 that the US and its allies are quite far from benefiting all of humanity; in fact, in some cases, they assist an allied minority at an extreme cost to a condemned majority, for no discernible humanitarian reasons save for some perceived notion of "shared values".

    • hooli_gan 17 hours ago

      Maybe only Americans and their allies qualify as human, according to them

      • etblg 17 hours ago

        And only the americans the administration deems to qualify as human.

      • gunian 15 hours ago

        welcome to our reality where you know you will be killed but there's not a single thing you can do :)

  • TheOtherHobbes 16 hours ago

    SoftBank, huh?

    That's... not a good omen.

    • Havoc 14 hours ago

      Sooner or later one of their bold swings is going to connect

  • padjo 17 hours ago

    Watch the birdie

  • nmca 17 hours ago

    I for one am hugely supportive of compute that is red white and blue.

  • Giorgi 8 hours ago

    Oh so that's why Pelosi invested in micro nuclear power plants.

    • defrost 8 hours ago

      In context, Pelosi has been pro-nuclear for at least 16 years, having spoken in favor of nuclear power and nuclear investment in 2008, as reported by the American Enterprise Institute.

  • ensocode 7 hours ago

    Why now? Is this to compensate the campaign donors or to scare Putin?

  • ulfw 12 hours ago

    God forbid anyone would invest $500,000,000,000 to create jobs. No no no. 500 billion to destroy them for "more efficiency" so the owner class can get richer.

  • senectus1 16 hours ago

    I watched the announcement live, and I could have sworn that the SoftBank guy said "initial investment of 100 MILLION, we hope to EARN 500 BILLION by the end of your (Trump's) term"

    Gave me a real "this is just smoke and mirrors hiding the fact that the white house is now a glory hole for Trump to enjoy" feel.

  • mupuff1234 16 hours ago

    It's just more hype and PR antics from sama.

  • ErgoPlease 17 hours ago

    The Silicon Valley bubble universe continues to generate entropy that it then feeds off of... Naming this Stargate, when one of the largest effects AI has had is removing humans from processes to make other, fewer humans more efficient, is emblematic of this hollow naming ethos - continuing to use the portal to shunt more and more humans out of the process that is humanity, with fairly reckless abandon. Who is Ra, and who is sending the nuke where, in this naming scheme? You decide.

  • bayeslaw 7 hours ago

    Altman said we will be amazed at the rate at which AI will CURE diseases. Not diagnose, not triage or help doctors, but cure, i.e. understand at a deep, fundamental, mechanistic level and then devise therapies, i.e. drugs, combinations of drugs and care practices that work. WOW.

    Despite the fact that this is THE thing I'd be the happiest to see in the real world (having spent a considerable amount of my career in companies working towards this vision), we are so far from it (as anyone who actually worked on these problems will attest) that Altman's comment here isn't just overselling, it's a blatant lie about this tech's capabilities.

    I guess the pitch was something like: "hey o3 can already do PhD level maths so you know in 5 years it will be able to do drugs too, and cure shit, Mr President".

    Trouble is, o3 can't do advanced math (or at least definitely not at the level OpenAI claimed.. it was a lie; it turns out OpenAI funds the dataset that measures this - ouch). And the bigger problem is, going from "AI can do maths" to "invent cures" is about a 10-100x jump. If it weren't, don't we think the pharma companies would have solved this by hiring lots of "really smart math guys"?

    As anyone in biotech will tell you, the hard bit is not the first third of the drug discovery pipeline (where 99% of AI-driven biotechs focus). It's the later parts, where the rubber meets the road.. i.e. where your precious little molecule is out in the real world with real people, where the incredible variability of real biological hosts makes most drugs fail spectacularly. You can't GPT your way out of this. The answers are not in science papers that you can just read and regurgitate into a version that "solves biology and cures diseases".

    To solve this you need AI, but most of all you have to do science. Real science. In the lab, in vitro and in vivo, not just in silico, doing ablation studies, overfitting famous benchmark datasets and other pseudo-science shit the ML community is used to doing.

    That is all to say, I'd bet we won't see a single purely AI-designed novel drug in the clinic this decade. All parts of that sentence are important. Purely AI-designed. Novel. But that's for another post..

    Now, back to Altman. If you watch the clip, he almost did the smart thing at first when Trump put him on the spot and said "I have no idea about healthcare, biotech (or AI beyond boardroom drama)", but then he could not resist coming up with this outlandish, insane answer.

    Famously (in tech circles anyway), Paul Graham wrote more than a decade ago that Altman is the most strong-willed individual he's ever met, someone who can just bend the universe to his will. That's his super skill. And clearly.. convincing SoftBank and Oracle to make this 500 billion investment for OpenAI (a non-profit turned for-profit) is an unbelievable achievement. I have no idea what Altman can say (or do) in boardrooms that unlocks these possibilities for him.. Any ideas? Let me know!

  • jofzar 17 hours ago

    > This project will not only support the re-industrialization of the United States but also provide a strategic capability to protect the national security of America and its allies.

    > The initial equity funders in Stargate are SoftBank, OpenAI, Oracle, and MGX. SoftBank and OpenAI are the lead partners for Stargate, with SoftBank having financial responsibility and OpenAI having operational responsibility. Masayoshi Son will be the chairman.

    I'm sorry, has SoftBank suddenly become an American company? I feel like I'm taking crazy pills reading this.

    Edit: MGX is a Saudi company? This is baffling....

    https://www.mgx.ae/en

    • redeux 17 hours ago

      Well the Saudis are one of the president’s “personal shareholders” so I think that qualifies them as an American company now.

    • daemonologist 16 hours ago

      MGX seems to be in Abu Dhabi/UAE rather than Saudi Arabia. Hadn't heard of it before.

    • signatoremo 16 hours ago

      It’s an investment in the US. Why does it matter if SoftBank is not an American company?

      Also, SoftBank is an investment fund. A lot of its money came from American investors.

    • Havoc 14 hours ago

      The fund is run out of the US. Parent co is in Japan

    • adolph 16 hours ago

      Japanese companies were a threat just a couple of weeks ago.

      There is credible evidence that leads me to believe that (1) Nippon Steel Corporation, a corporation organized under the laws of Japan . . . might take action that threatens to impair the national security of the United States;

      https://bidenwhitehouse.archives.gov/briefing-room/president...

    • OutOfHere 17 hours ago

      I think the death of Suchir Balaji makes more sense now. AE wouldn't mess around with its investments.

    • 9283409232 17 hours ago

      SoftBank having financial responsibility is insane. This is just a way to funnel money into people Trump owes.

      • jofzar 17 hours ago

        I don't get it. If this were government/American funded, I could understand the marketing as "USA-secured" infrastructure, but it's not?

  • tasuki 17 hours ago

    > Masayoshi Son will be the chairman.

    Not all rich people are out of their minds, but Masayoshi Son definitely is. The way he handled the WeWork situation was bad...

  • newfocogi 17 hours ago

    > "OpenAI will continue to increase its consumption of Azure as OpenAI continues its work with Microsoft"

    Not sure why, but the word choice of "consumption" feels like a reverse Freudian slip to me.

    • hinkley 17 hours ago

      Sometimes the person writing the copy is writing it because they talk good, not because they are the biggest proponent of the idea.

      Give a clever, articulate person a task to write about something they don't believe in and they will include the subtlest of barbs, weak praise, or both.

    • gamegoblin 17 hours ago

      Industry standard word, e.g. "consumption pricing" etc

      But yeah if you're in the industry it's easy to forget how certain jargon sounds based on its dictionary definition

      • hinkley 17 hours ago

        But the good news is when the Trough of Disillusionment starts we can make a bunch of tuberculosis jokes.

  • barbazoo 17 hours ago

    > This project will [...] support the re-industrialization of the United States

    How?

    • amarcheschi 17 hours ago

      By concentrating the means of production even more in the hands of a handful of people

      Wait, was it supposed to re industrialize the USA?

    • jazzyjackson 17 hours ago

      Didn't you see the impressionist art of construction cranes?

    • dutchbookmaker 16 hours ago

      I thought this meant it was $500 billion in government money.

      Some of these companies do have huge cash reserves they don't know what to do with, so if it is $500 billion of private money, I am not going to complain.

      I will believe it when I see it, though, and that this isn't $100 billion in private money with a free $400 billion US government put option for the "private" investors if things don't go perfectly.

    • openplatypus 17 hours ago

      Hush. Don't ask questions. It is going to be great.

  • jklinger410 17 hours ago

    > starting in Texas

    Maybe I just don't get it. Texas seems like an awful place to do business.

    • mandevil 17 hours ago

      My guess would be it's all about electricity.

      Texas has a .... unique energy market (literally! They don't connect to the national grid so they can avoid US Government regulations - that way it's not interstate commerce). Because of that, spot prices fluctuate wildly up and down, depending on the weather, demand, and their large quantity of renewables (Texas is good for solar and wind energy). When the weather is good for renewables they have very cheap electricity (lots of production and they can't sell to anyone outside the state); when the weather is bad they can have incredibly expensive electricity (less production, and they can't buy from anyone outside the state). Larger markets, able to pull from larger pools of producers and consumers, just fluctuate less.

      I know some bitcoin miners liked to be in Texas and basically worked as energy speculators: when electricity was cheap they would mine bitcoin, and when it was expensive they shut down their plant - sometimes they even got paid by producers to shut down their plant! I would bet that you could do a lot of that with AI training as well, given good checkpointing.

      You wouldn't want to do inference there (which needs to be responsive and doesn't like 'oh this plant is going to shut down in one minute because a storm just came up') but for training it should be fine?
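
      A minimal sketch of that price-chasing pattern applied to training, assuming a hypothetical spot-price feed and checkpoint/train hooks (none of these functions correspond to a real API):

          import random
          import time

          PRICE_CEILING = 60.0    # USD/MWh; pause training above this (arbitrary threshold)
          CHECK_EVERY   = 100     # training steps between price checks

          def spot_price() -> float:
              # Stand-in for a real-time ERCOT price feed (hypothetical).
              return random.uniform(10.0, 120.0)

          def save_checkpoint(step: int) -> None:
              print(f"checkpoint at step {step}")    # would persist model/optimizer state

          def train_step(step: int) -> None:
              pass                                    # would run one gradient update

          def price_aware_training(total_steps: int) -> None:
              for step in range(total_steps):
                  train_step(step)
                  if step % CHECK_EVERY == 0:
                      save_checkpoint(step)
                      # Idle (or power the cluster down) while electricity is expensive,
                      # the same demand-response trick the bitcoin miners used.
                      while spot_price() > PRICE_CEILING:
                          time.sleep(1)               # would be minutes or hours in practice

          price_aware_training(1_000)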

    • Jtsummers 17 hours ago

      No state income tax, fewer regulations (zoning, environmental regulations) than other parts of the country, relatively cheap power, large existing industrial base. For skilled labor that last bit is important. Also one of the cheapest states wrt minimum wage (same as federal, nothing added), which is important for unskilled labor.

      Depending on the part of the state, relatively low costs of living which is helpful if you don't like paying people much. Large areas that are relatively undeveloped or underdeveloped which can mean cheaper land.

    • nateglims 17 hours ago

      The White House was touting this, so it's probably to secure political patronage, or it will be part of pork-barrel spending to get some other bill passed.

    • jofzar 17 hours ago

      It doesn't even have an electricity grid that works; maybe that's where the $500b is going: reconnecting it to the grid.

    • steveoscaro 13 hours ago

      Based on what? There’s not a better state in the country for large capex gambles by business.

    • avs733 17 hours ago

      When doing business is a bribe it’s perfect

  • DoubleGlazing 17 hours ago

    That's a ridiculous sum of money that could be better spent on much more worthy things.

    • cpursley 15 hours ago

      So was getting a man to the moon. Do you want to lose the AI race to the Chinese?

      • achierius 9 hours ago

        Why would I care? Do you really want Masayoshi Son in charge of a theoretical superhuman AI?

  • chrishare 7 hours ago

    Looking forward to transparency about where this capital flows /s

  • sillywalk 17 hours ago

    Not to be confused with the other (non-fictional) DoD Stargate Project[0], which involved "remote-viewing" and other psychic crap.

    The AI Stargate Project claims it will "create hundreds of thousands of American jobs". One has doubts.

    [0] https://en.wikipedia.org/wiki/Stargate_Project

    • Geste 15 hours ago

      "Psychic crap" that went on for 20+ years ? Sure.

  • SvenL 17 hours ago

    Meh, why did they choose this name? Stargate does not deserve this…

  • 16 hours ago
    [deleted]
  • gigel82 17 hours ago

    I dislike associating a great fictional universe (Stargate series) with this disgusting affair...

  • mystified5016 16 hours ago

    You'd really think that arguably the leader in generative AI could come up with a unique project name instead of ripping off something extant and irrelevant.

    But then again that's their entire business, so I shouldn't be too surprised.

    • miltonlost 16 hours ago

      This is from the guy who thinks "Her" is a good reference for how we need AI. Media literacy is not Altman's strong suit.

    • sensanaty 5 hours ago

      I mean the entire AI thing is built atop mass plagiarism and stealing things others have created indiscriminately. I doubt Mr Worldcoin could come up with an original thought for anything, seeing how his models behave.

  • retskrad 16 hours ago

    While OpenAI and the rest of the industry are reaching for AGI, Apple is out here shipping features with ChatGPT 3.5-era technology.