Apart from my general queasiness about the whole AGI scaling business and the power concentration that comes with it, these are the exact four people/entities that I would not want to be at the tip of said power concentration.
By the time this project is done it will have been dead for 2 years.
Too many greedy mouths.
Too many corporations.
Too little oversight.
Too broad an objective.
Technology is moving too quickly for them to even guess at what to aim for.
> "Citizens will be on their best behavior because we are constantly recording and reporting everything that's going on," Ellison said, describing what he sees as the benefits from automated oversight from AI and automated alerts for when crime takes place. "We're going to have supervision," he continued. "Every police officer is going to be supervised at all times, and if there's a problem, AI will report the problem and report it to the appropriate person."
Far more important than all that nonsense is to ask: who makes the money? It will be Ellison and his buddies making tens of billions of dollars a year selling 'solutions' to local governments, all paid for by your property taxes. This also enables an ecosystem of theft in which others benefit even more: the nexus of private prisons, kids-for-cash judges (or judges investing in prison stock), police and DEA unions, and small rural towns inflating their prison populations (because inmates get added to the total population, and funds get allocated accordingly).
More importantly, this is extremely attractive to police, who already seize billions through civil forfeiture. They would have visibility into anyone who makes a bank withdrawal or transacts in cash, all displayed in real-time feeds, ready for grabbing!
> "Citizens will be on their best behavior because we are constantly recording and reporting everything that's going on," Ellison said, describing what he sees as the benefits from automated oversight from AI and automated alerts for when crime takes place.
Wow! It is genuinely frightening that these people should be in control of our future!
Literal 'new world order' stuff here. Alex Jones and crew got so excited that their guy was in the driver's seat that they didn't notice the actual illuminati lizard people space lasers being deployed.
I don't think we'll ever have a zero-crime society, nor should we aim for one. But being left to the vagaries of police (and union) politics, culture and the complications of city budgets is clearly broken.
Example: cities are being presented a false choice between accepting deadly high-speed chases and zero criminal accountability [1], which in the world of drones seems silly [2].
I don't want the police to have unfettered access to surveil any and all citizens, but putting camera access behind a court warrant issued by an elected civilian judge doesn't feel that dystopian to me.
Is that what Ellison was alluding to? I have no idea, but we are no longer in a world where we should disregard this prima facie.
There are a few places that have tried to implement this, and I want to live in none of them.
The US will fare no better if it walks down this path, and will likely fare worse for its cultural obsession with individualism over community.
If you're lucky, you might get your chance to live in Thiel's and Ellison's techbro utopia. Make sure to tell us how great it is to be subjected to people with no accountability, but all of the power over every aspect of your life.
Ellison alone brings the unwelcome feeling of Oracle craziness being forced down our collective throats, but I share your concern about the unholy alliance forming in front of us.
My immediate reaction to the announcement was that one of these is not like the others. OpenAI, a couple of big investment funds, Microsoft, Nvidia, and... Oracle?
Oracle makes perfect sense in that they 1) are a massive datacenter company and 2) sell a variety of SaaS products to enterprises, a major target market for AI.
> Oracle has 2-3% market share as a Cloud Provider.
And the market leader is at what, 30%? That's about one order of magnitude, not such a huge difference. I suspect Oracle's share is disproportionately large in the enterprise space (which is where a lot of AI services are targeted), whereas AWS hosts a _ton_ of non-enterprise things.
In any case, 2-3% is big enough that this kind of investment is 1) financially possible and 2) desirable as a way to grow into the #2 or #3 spot.
There is a reason that in recent weeks everybody and their grandma has been simping for Trump: nobody wants to be on his bad side right now. Moreover, we hear here and there that Trump "keeps his promises". A lot of those promises we do not know about, and we may never know. These people did not spend money supporting his campaign for nothing. In other places and eras this would have been called corruption; now it is called "keeping his promises".
Trump is one of the most famous people in the world for not keeping his promises to pay his debts. But there is money to be made temporarily when he is running a caper, as long as you can get your hand into the pot before he steals it.
If your knee-jerk response to any political discussion even remotely critical of 'your guy' is to snap into whataboutism instead of participating in the conversation, you might need an outrage-pornography detox for a while.
> There is a certain reason that last weeks everybody and their grandma is simping for Trump. Nobody would want to be on his bad side
It's worth keeping in mind how extremely unfriendly to tech the last admin was. At this point it's basically proven in court that emails of the form "please deboost person x or else" were sent, and there's probably plenty more we don't know about.
Combine that with the troubles in Europe, which Biden's administration was extremely unwilling to help with, and the obstacles thrown in the way of the major energy buildouts needed for AI... one would have to be stupid to be a tech CEO and not simp for Trump.
Tech has been extremely Democratic for many years. The Democrats have utterly alienated tech, and now they reap the consequences.
Nice euphemism for giving people autonomy in their data and privacy.
Most of these companies are so large that they cannot really fail anymore. At this point it has very little to do with protecting themselves and more with making them more powerful than governments. JD Vance has said that the US could drop support for NATO if Europe tries to regulate X [1]. Oligarchs have fully infiltrated the US government and are trying to do the same to other countries.
I disagree with the grandparent. They don't support Trump because they do not want to be on his bad side (well, at least not only that), they support Trump because they see the opportunity to suppress regulation worldwide and become more powerful than governments.
We just keep making excuses (fiduciary duties, he just doesn't know how to wave his arm because he's an autist [2]). Why not just call it what it is?
I do agree that a big part of why they support Trump is anti-regulation. But it is also a fact that Trump is one of them, a businessman, not a politician. With Trump they can now discuss more business and less policy. There is a certain dealing of business right now that seems not at all transparent. And given that, the amount of public simping is really weird compared to what usually happens: everybody praising Trump even before he took office, even TikTok "coming out" as whatever, etc.
Oligarchs want less regulation, but they also want these beefy government contracts. They want weaker government to regulate them and stronger government to protect them and bully other countries. Way I see it, what they actually want is control of the government, and with Trump they have it (more than before).
> Tech has been extremely Democratic for many years. The Democrats have utterly alienated tech, and now they reap the consequences.
Well, on the other side it can be said that Big Tech wasn't really on the side of democracy (note: democracy, not the Democratic Party) itself, and hasn't been for years, at the very least ever since Cambridge Analytica was discovered. The "big tech" sector has only looked at profit margins, clicks, eyeballs and other KPIs while completely neglecting its responsibility towards its host, and it got treated by the Biden administration and Europe alike as the danger it posed.
As for the cryptocoin world that has also been campaigning for the 45th: they are an even worse cancer on the world. Nothing but a gigantic waste of resources (remember the prices of GPUs, HDDs and RAM going through the roof, coal power plants being reactivated?), rug pulls and other scams.
The current shift towards the far right is just the final masks falling off. Tech would rather (openly) support the 45th than learn from the chaos it has brought upon the world and make at least a paper effort to be held accountable.
Yes, big tech was the kid caught in the corner cleaning out the cookie jar and threw a tantrum when one parent moved the jar out of reach as punishment in effort to help the industry learn self-control. Now the other parent has come home and has not only returned the cookie jar to the kid but pledged to bring them packs of cookies by the shipping container to gorge on in exchange for favors.
We have more energy and are pumping more domestic oil than ever. We are a major exporter of LNG. Trump just killed EV subsidies, and electric charging network funding.
What are you talking about vis-a-vis Europe? Holding tech companies accountable for meddling in domestic politics? Not giving them carte blanche with user data?
I understand (though do not like) large corps tiptoeing around Trump in order to manipulate him, it is due to fear. Not due to Trump having respectable values.
Mostly benefiting the fossil fuel industry. How are they going to power this? Gas is the only option that can be implemented within a few years, and this is going to need a lot of power.
There will probably be a clause mandating that a given percentage of the power consumed come from coal, ensuring a minimum of continued coal generation and providing excellent talking points to broadcast to the incumbent's base.
You need to stop this nonsense. Pollution is a long term problem, but it does not mean it is productive to do what Germany has done and cease development.
Tax breaks, government forced to become a customer etc. the usual. Just like the astronauts to Mars thing will just shovel your money that might have gone to NASA into Musk's pocket.
> the usual. Just like the astronauts to Mars thing will just shovel your money that might have gone to NASA into Musk's pocket.
The difference is that Musk can do twice as much for 1/10 of what NASA thinks the program will cost (which is never what the program will actually cost), and Musk will do it in half the time to boot.
The guy is an unhinged manchild, but if what you care about is having your money well spent and getting to Mars as cheaply as possible, he's exactly who you're looking for.
Tax breaks, i.e. a company extracting wealth from a community without paying into the systems that keep all the parts of that community running, forcing the community to ultimately subsidize that business's wealth extraction from them.
Companies do not extract value; they create value, which is then transferred to the people via the market through voluntary exchange (ideally). Where have you learned about those things? Oh, yeah, "community", i.e. Marx.
This is utter nonsense. If 1000 people go to a deserted island with no government and no taxation, would that mean inflation will be plus infinity, or at least very high??? Inflation is a monetary phenomenon; it happens when money is being printed.
In that case there would be no inflation or deflation, assuming a fixed money supply and no economic growth. However, the key here is that the government (the federal government, anyway) is spending money regardless of the tax break. Anytime the government writes a check, that's a little more money floating around; anytime the government collects money, such as taxes, there's that much less money to be had. Every tax break causes the money supply to increase relative to a world where the tax break did not exist, causing more inflation (or less deflation, as the case may be). If the government spent exactly as much as it taxed, there would be... actually deflation, because the economy is growing. These are the basics of fiscal policy.
There's also monetary policy, which is when the Federal Reserve does this on purpose. The general principle is the same, but instead it spends money buying bonds and gets money back by selling those bonds, and it creates a bunch of rules about where banks keep their money so it always has some on hand.
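To make that accounting concrete, here is a toy sketch of the mechanism described above; every figure is invented for illustration, and the model deliberately ignores velocity, credit creation, expectations, and everything else a real macro model would need.

```python
# Toy accounting of the fiscal/monetary mechanics described above.
# All numbers are made up for illustration.

def money_supply_delta(gov_spending, taxes_collected,
                       fed_bond_purchases=0.0, fed_bond_sales=0.0):
    """Net change in circulating money, per the comment's framing:
    spending injects money, taxation removes it (fiscal policy);
    the Fed buying bonds injects money, selling them removes it
    (monetary policy)."""
    fiscal = gov_spending - taxes_collected
    monetary = fed_bond_purchases - fed_bond_sales
    return fiscal + monetary

# A tax break means the same spending with less collected, hence a bigger delta:
balanced = money_supply_delta(gov_spending=6.0e12, taxes_collected=6.0e12)
tax_break = money_supply_delta(gov_spending=6.0e12, taxes_collected=5.9e12)
print(balanced, tax_break)  # 0.0 vs 1e11: more money floating around,
                            # hence more inflation (or less deflation)
```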
Assuming the tax money has to come from somewhere at some point, those who pay taxes have to make up the shortfall from those who have tax breaks. So far the US just kicks that can down the road so...
That is a big assumption; tax revenue need not be constant. But following the same logic: if companies pay bigger taxes, someone also has to make up that shortfall, and this is actually the more accurate statement. Companies do not pay taxes, PEOPLE pay taxes. So taxes are paid by the employees, the clients, or the owners (who, in the case of big tech, are generally common people). With high taxation you are hurting the customers, the workers and the middle class saving for retirement. Who wins the tax money? State bureaucracy, corrupt politicians and the businesses around them, and people who live like parasites (or rather are forced to live like that, because they are electoral power).
It won't, unless there's another (r)evolution in the underlying technology / science / algorithms. At this point scaling up just means bigger datasets or more iterations, which is more about fine-tuning and improving the existing output than coming up with a next generation / superintelligence.
> Being pessimistic, how come no human supergeniuses ever took over the world? Why didn't Leibniz make everyone else into his slaves?
We already did. Look at the state of animals today vs <1 mya: bovines grown in unprecedented mass numbers to live short lives before slaughter; wolves bred into an all-new animal, friendly and helpful to the dominant species; previously apex predators, with claws, teeth, speed and strength, rendered extinct.
Sometimes I wonder if we are going to be the unkillable plague that takes over the universe. Or maybe we will disappear in a blink. It's hard to know; we don't have any reference point except ourselves.
Once you have one AGI, you can scale it to many AGI as long as you have the necessary compute. An AGI never needs to take breaks, can work non-stop on a problem, has access to all of the world's information simultaneously, and can interact with any system it's connected to.
To put it simply, it could outcompete humanity on every metric that matters, especially given recent advancements in robotics.
...so it can think really hard all the time and come up with lots of great, devious evil ideas?
Again, I wonder why no group of smart people with brilliant ideas has unilaterally imposed those ideas on the rest of humanity through sheer force of genius.
An equivalent advance in autonomous robotics would solve the force projection issue, if that's what you're getting at.
I don't know if this will happen with any certainty, but the general idea of commoditising intelligence very much has the ability to tip the world order: every problem that can be tackled by throwing brainpower at it will be, and those advances will compound.
Also, the question you're posing did happen: it was called the Manhattan Project.
Quite a few have succeeded in conquering large fractions of the Earth's population: Napoleon, Hitler, Genghis Khan, the Roman emperors, Alexander the Great, Mao Zedong. America and Britain as systems did so for long periods of time.
All of these entities would have been enormously more powerful with access to an AGI's immortality, sleeplessness, and ability to clone itself.
Alexander the Great made his conquests by building a really good reputation for war, then leveraging it to get tribute agreements while leaving the local governments intact. This is a good way to do it when communication lines are slow and unreliable, because the emperor just needs to check tribute once a year to enforce the agreements, but it's weak control.
If Alexander could have left perfectly aligned copies of himself in every city he passed, he could have gotten much more control and authority, and still avoided a fight by agreeing to maintain the local power structure with himself as the new head of state.
Oh, you're assuming an entire networking infrastructure as well. That makes way more sense, but then the miracle isn't AGI: without networking they'd lose alignment over time. Honestly, I feel like it would devolve into a patchwork of different kingdoms run by Alexander figureheads... where have I seen this before?
The problem you're proposing could be solved via a high quality cellular network.
I consider many successful military leaders and politicians to be geniuses as well. In my book, Caesar is as much a genius as Newton!
Having said that, we do not need to understand the world to exploit it for ourselves. And what better way to understand and exploit the universe than science? It's quite an endeavour.
"this generation shall not pass"... to me that's about as credible as wanting to "preserve human consciousness" by going to Mars.
Setting the world on fire and disrupting societies gleefully, while basically building bunkers (figuratively more than literally) and consolidating surveillance and propaganda to ride out the cataclysm, that's what I'm seeing.
And the stories to sell people on continuing to put up with that are not even good IMO. Just because the people who use the story to consolidate wealth and control are excited about that, we're somehow expected to be excited about the promise of a pair of socks made from barbed wire they gave us for Christmas. It's the narcissistic experience: "this is shit. this benefits you, not me. this hurts me."
One thing is sure, actual intelligence, regardless of how you may define it, something that is able to reason and speak freely, is NOT what people who fire engineers for correcting them want. It's not about a sort of oracle for humanity to enjoy and benefit from, that just speaks "truth".
Of course. It's an arms race by definition, so it's all a military project. And already one whistleblower was brazenly murdered by our government to protect our horse in this race.
A serious question though: what happens when AIs are filing lawsuits autonomously on behalf of the powerful? The courts clearly won't be able to cope unless you have AI-powered courts too. None of how these monumental changes will work has been thought through at all; let's hope AI is smart enough to tell us what to do...
> A serious question though: what happens when AIs are filing lawsuits autonomously on behalf of the powerful
It won't just be on behalf of the powerful.
If lawyers are able to file 10x as many lawsuits per hour, the cost of filing a lawsuit is going to go down dramatically, and that's assuming a maximally-unfriendly regulatory environment where you still officially need a human lawyer in the loop.
This will enable people to, e.g., use letters signed by an attorney at law, or even small claims court, as their customer support hotline, because that actually produces results today.
Nobody is prepared for that. Not the companies, not the powerful, not the courts, nobody.
Unless you can afford for your lawsuit to take up substantial time on Stargate and make a much stronger case than the average Joe who is still using o1 for their lawsuits.
I'm envisioning a future where there's a centralized "legal exchange", much like the NYSE, where high-speed machines file micro-litigation billions of times faster than any human can, decided equally quickly: an unrelenting back-and-forth buzz of lawsuits and payouts as every corporation wages constant automated legal battle. Small businesses are consumed in seconds, destroyed by the filing of a million computerized grievances, while the major players end up in a sort of zero-sum stalemate where money is constantly moving but the balance of power never shifts.
... has anyone ever written a book about this? If not, I think I'm gonna call dibs.
Oracle could reasonably be hit with some sort of stick every time they filed a frivolous lawsuit until the AI got tuned appropriately. Then it'd be a situation where Oracle were continuously suing people who don't follow the law, following a reasonably neutral and well calibrated standard that is probably going to end up as similar to an intelligent and well practised barrister. That would be acceptable. If people aren't meant to be following the law that is a problem for the legislators.
>A serious question though: what happens when AIs are filing lawsuits autonomously on behalf of the powerful
AI-controlled cheap Chinese drones will start flying into their residences carrying some trivial-to-make high explosives. With the class wars getting hotter in the next few years, we may be saying that Luigi Mangione had the right ideas towards the PMC, but was an underachiever.
Regarding your question: yes, I'd prefer a healthy counterbalance to what we have currently. Ideally, I'd prefer cooperation. A worldwide cooperation.
Arguably the cooperation between the US and China has led to the most economic growth and prosperity in human history; it's a shame the two countries are returning to a former time.
From what I've read about DeepSeek and its founder, I would very much prefer them, even with China factored in. At least if these particular Four Horsemen are the only alternative.
On a tangential note, those who wish to frame this as the start of the great AI war with China (in which they regrettably may be right), should seriously consider the possibility of coming out on the losing end. China has tremendous industrial momentum, and is not nearly as incapable of leading-edge innovation as some Americans seem to think.
No, I was rather pointing out that getting into an altercation that you are likely (even if not guaranteed) to lose may not be the smartest of ideas. On occasion, humans have been known to fruitfully engage in cooperation and de-escalation. Please pardon my naive optimism.
"Great AI war with China", "altercation" are excessively harsh characterizations. There is nothing "escalatory" in competing for leadership in new industries with other states, nor should it be "regrettable". No one, to my knowledge, is planning to nuke DeepSeek data centers or something.
I wish I could agree with you. But have you read Aschenbrenner's "Situational Awareness" [1]? I am very much afraid that the big decision makers in AI do in fact think in those terms, and do not in any way frame this as fair competition for the benefit of all.
A person heavily invested in this wave of AI succeeding saying AI will be big and we will have AGI next year? Sure.
I don't think there is much point in reading the whole thing after the following:
"Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the willful blindness of “it’s just predicting the next word”."
We need to cooperate and put aside our petty politicking right now. The potential downsides of 'racing' without building a safety scaffold are catastrophic.
The outcome would be exactly the same: AGI leads the human race off a cliff, not in the direction of one human interest group vs another. The only difference would be that it was China responsible for the extinction of the human race rather than another country. I would prefer to die with dignity... the outcome we should all be advocating for is a global halt of AI research, not because it would be easy but because there is no other option.
China is a much more peaceful nation compared to the US. So yes, I'd prefer China leading AI research any day. They are interested in mutual trade and prosperity, and they respect local laws and culture, all unlike the US.
I think there's a more nuanced version of this: China respects local laws and culture _outside of what they view as China_ more than the US does. It's also worth noting that China's policy in Xinjiang is somewhat narrowly targeted at religion, and less at other aspects like cuisine or clothing. That said, religion is nigh impossible to separate from the broader idea of culture in much of the world.
Give me a break. China has overseas police stations as bases of operation for harassing ex-pats and dissidents. That's not "respecting local laws and culture".
I encountered this almost first-hand. An American company goes in like an elephant, bribing local officials left and right and using dirty practices to push out competitors. At the same time, Chinese companies try very hard to abide by local regulations and to resolve all issues through local courts, etc. Like actually civilised people.
What happens inside China is of no interest to me; it's their business. They have existed for millennia; they probably know how to manage themselves. They are not trying to expand outside of, maybe, Taiwan; they don't put their military bases in my country; they don't fund so-called "opposition"; and that's good enough for me.
Bribery is probably one of the few areas where the US is significantly better than the bad actors in both China and the EU, both of which have major problems with overseas bribery.
If you had Al-Qaeda in a hypothetical region near Florida with terror attacks almost every two years, you would shit bricks and create jails/prisons with more security than the Pentagon itself.
Holy smokes. Do folks like you actually believe this? China has its own style of colonialism (whatever you want to call it) but it certainly exists as strong as the US flavor.
Quite a few, from an economic perspective. Like I said, they have their own style of colonialism. To think they are some peace-loving nation is foolish. Maybe only in the last 10 years has China had the military equipment capable of mounting an offensive; they have been smart and done all their dealings via money. Without going too far into whataboutism, I simply find it ridiculous to classify China as a warm, fuzzy nation given their long list of human rights issues. That does not mean America is peaceful and loving, simply that perhaps the two countries are not so different in net.
> Like I said they have their own style of colonialism.
That's moving the goalposts and doesn't address the issue.
>They have been smart and done all their dealings via money.
You mean just like the country that issues the world reserve currency and whose intelligence agencies get involved in destabilizing regimes across the world?
> That's moving the goalposts and doesn't address the issue.
Is this how you make a constructive argument? Perhaps I was expecting too much from a joke account but this style of whataboutism is boring.
My post that you responded to set my premise, which was that China has its own form of colonialism that is quite different from America's, but it exists and it's quite strong. To classify China as a peaceful, loving nation that respects other cultures is like saying the US has never started a conflict. It's factually a lie. China has a long list of human rights issues; they factually do not respect other cultures, even within their own borders. I am not defending America but pointing out that China is not what the OP stated.
Are you the kind of superficial petty person who needs to take jabs at the messenger's name and not the message itself?
And are you really in the position to throw stones from a glass house with that account name? If you had your real name and social media profiles linked in the bio I'd understand, but you're just being hypocritical, petty and childish here with this 'gotcha'.
> To classify China as a peaceful loving nation that respects other cultures
I never made such a classification. You're building your own strawmen to form a narrative you can attack, but you're not saying anything useful that contradicts my PoV, and you're wasting our time. Since you're obviously arguing in bad faith I won't converse with you further. Goodbye.
If you have an argument that is actually on topic with what I said, please continue; otherwise save your troll account for someone else. The whataboutism/gaslighting is silly. You clearly cannot read threads or respond in a logical form to the right person. The conversation at hand was about China, in response to the OP classifying them as a loving and respectful nation. I made no attempt to defend the US, and it has been you moving the goalposts. You throw "whataboutism" around and then simply run off with some flimsy excuse about multiple people being unable to converse with you. Troll account.
Cumpiler asked two very clear and direct questions:
>How many countries has China invaded and bombed in the last 30 years?
>How many deaths did China's warmongering cause abroad?
You didn't answer those; you just started hand-waving about China's "own form of colonialism", without even explaining what that is and how it works (which personally I'd be curious to hear about, and which I believe *is* likely guilty of violence).
So you very clearly are the one guilty of shifting the goalposts, going on tangents, and bringing up usernames instead of real arguments.
Yeah, really the only thing missing from this initiative was the personal information of the vast majority of the United States population handed over on a silver platter.
That sentiment calls for reflection - whoever ends up on top of the heap after the AI craze settles down is going to be someone that everyone objects to. Elon Musk was himself an internet darling up until he became wealthy and entrenched.
That said, this does look like dreadful policy at the first headline. There is a lot of money going in to AI, adding more money from the US taxpayer is gratuitous. Although in the spirit of mixing praise and condemnation, if this is the worst policy out of Trump Admin II then it'll be the best US administration seen in my lifetime. Generally the low points are much lower.
Nietzsche wrote about these phenomena a long time ago in his Genealogy of Morality. There will never be someone who reaches the top without becoming an object of ire in modern Western culture.
> That sentiment calls for reflection - whoever ends up on top of the heap after the AI craze settles down is going to be someone that everyone objects to.
I agree in principle. And realistically, there is no way Altman would not be part of this consortium, much as I dislike it. But rounding out the team with Ellison, Son and Abu Dhabi oil money in particular -- that makes for a profound statement, IMHO.
> That sentiment calls for reflection - whoever ends up on top of the heap after the AI craze settles down is going to be someone that everyone objects to.
Did we see the same fallout from the space-race from a couple generations ago?
I don't think so — certainly not in the way you're framing it. So I guess I don't accept your proposition as a guarantee of what will happen.
A couple of generations ago we didn't have the internet and the only things people heard about were being managed. The big question was whether the media editors wanted to build someone up or tear them down.
The spoils of the space race would have gone to someone a lot like Musk. Or Ellison. Or Masayoshi Son. Or Sam Altman. Or the much worse old-moneyed types. The US space program was, famously, literally employing ex-Nazis. I doubt the beneficiaries of the money had particularly clean hands either.
> That sentiment calls for reflection - whoever ends up on top of the heap after the AI craze settles down is going to be someone that everyone objects to. Elon Musk was himself an internet darling up until he became wealthy and entrenched.
Trying to process this, but doesn't his fall from grace have more to do with him revealing his real personality to the world? Sometime around calling that guy a pedo. Not much bothers me, but at the very least his apparent lack of judgment calls many things into question.
You have to keep in mind Microsoft is planning on spending almost 100B in datacenter capex this year and they're not alone. This is basically OpenAI matching the major cloud provider's spending.
This could also be (at least partly) a reaction to Microsoft threatening to pull OpenAI's cloud credits last year. OpenAI wants to maintain independence and with compute accounting for 25–50% of their expenses (currently) [2], this strategy may actually be prudent.
Depends on your definition of profitability. They are not recovering R&D and training costs, but they (and MS) are recouping inference costs from user subscriptions and API revenue with a healthy operating margin.
Today they will not survive if they stop investing in R&D, but they do have to slow down at some point. It looks like they and other big players are betting on a moat they hope to build with the $100B DCs and ASICs that open weight models or others cannot compete with.
This will be either because training is too expensive (few entities have the budget for $10B+ of training with no need to monetize it), or because those kinds of models, even where available, may be impossible to run inference on with off-the-shelf GPUs, i.e. they can only run on ASICs, which only large players will have access to [1].
In this scenario corporations will have to pay them for the best models; when that happens, OpenAI can slow down R&D and become profitable even with capex considered.
[1] This is the natural progression in a compute-bottlenecked sector; we saw a similar evolution from CPUs to GPUs and ASICs in crypto a few years ago. It is a slightly distorted comparison due to the switch from PoW to PoS and the intentionally GPU-friendly design of some coins, but even then you needed DC-scale operations in a cheap-power location to be profitable.
They will have an endless wave of commoditization chasing behind them. NVIDIA will continue to market chips to anyone who will buy... well, anyone who is allowed to buy, considering the recent export restrictions. On that note, if OpenAI is in bed with the US government to some degree here, I would expect tariffs, export restrictions, and the rest to continue to conveniently align with their business objectives.
If the frontier models generate huge revenue from big government and intelligence and corporate contracts, then I can see a dynamo kicking off with the business model. The missing link is probably that there need to be continual breakthroughs that massively increase the power of AI rather than it tapering off with diminishing returns for bigger training/inference capital outlay. Obviously, openAI is leveraging against that view as well.
Maybe the most important part is that all of these huge names are involved in the project to some degree. They're all cross-linked in the entire AI enterprise anyway, like OpenAI and Microsoft, so once all the players give preference to each other, it sort of creates a moat in and of itself, unless foreign sovereign wealth funds start spinning up massive Stargate initiatives as well.
We'll see. Europe has historically been behind the ball on tech developments like this, and China, although this might be a bit of a stretch to claim, does seem to be held back by its need for control and censorship of what these models can do. They want them to be focused tools that help society, but the American companies want much more: power in their own hands and power in their users' hands. So, much like the first round where American big tech took over the world, maybe it's primed to happen again as the AI industry continues to scale.
Why would China censoring Tiananmen Square/whatever out of their LLMs be any more harmful to the training process, when US-controlled LLMs also censor certain topics, e.g. "how do I make meth?" or "how do I make a nuclear bomb?"
Because China censors very common words and phrases such as "harmonized", "shameless", "lifelong", "river crabbed", "me too". This is because Chinese citizens initially used puns and common phrases to get around the censors.
They are absolutely different flavors. OpenAI is not being told by the government to censor violence, sex or racism - they're being told that by their executives.
News flash: household-name businesses aren't going to repeat slurs if the media will use it to defame them. Nevermind the fact that people will (rightfully) hold you legally accountable and demand your testimony when ChatGPT starts offering unsupervised chemistry lessons - the threat of bad PR is all that is required to censor their models.
There's no agenda removing porn from ChatGPT any more than there's an agenda removing porn from the App Store or YouTube. It's about shrewd identity politics, not prudish shadow government conspiracies against you seeing sex and being bigoted.
I don't know why people care whether they're being censored by government officials or by private billionaires. What difference does it make at the end of the day? Why is one worse than the other?
Sigh. No. Censorship is censorship is censorship. That is true even if you happen to like and can generate a plausible defense of US version that happens to be business friendly ( as opposed to China's ruling party friendly ).
It is not a take. It is the simple position that calling something "involuntary semen injection" does not make it any less of a rape. I like things that are clear and well defined. And so I repeat: censorship is censorship is censorship.
Yes, that's true. It's very rare for people to be able to value actual free speech. Most people think they do until they hear something they don't like
I am not sure if it will surprise you, but your affiliation or the size of your 'team' is largely irrelevant from my perspective. That said, I am mildly surprised you were able to accept the new self-image as willing censor though. Most people struggle with that ( edit: hence the 'this is not censorship' facade ).
Because when a small group of elites with permanent terms and no elections decides what is allowed and what isn't, and has full control over silencing what's not allowed and any meta-discussion about the silencing itself, that is different from when an elected government decides it: then anyone is free to raise a stink on whatever their version of Twitter is today without worrying about being disappeared tomorrow.
It's not an elected government if you're talking about the US. These policies are also all decided by "elites with permanent term and no elections" you realize right?
They want their LLMs explicitly approved to align with the values of the regime. Not necessarily a bad thing, or at least that avenue wasn't my point. It does get in the way of going fast and breaking things though, and on the other side there is an outright accelerationist pseudo-cult.
Ignoring the moral dimension for a second, I do wonder if it is harder to implement a rather cohesive, but far-reaching censorship in the chinese style, or the more outrage-driven type of "censorship" required of American companies. In the West we have the left pre-occupied with -isms and -phobias, and the right with blasphemy and perceived attacks on their politics.
With the hard shift to the right and Trump coming into office, especially that last bit will be interesting. There is a pretty substantial tension between factual reporting and not offending right-wing ideology: should a model consider "both sides" of topics with clear and broad scientific consensus if it might offend Trumpists? (Two examples that come to mind: the recent "The Nazis were actually left wing" and "There are only two genders".)
> they (and MS) are recouping inference costs from user subscription and API revenue with a healthy operating margin.
I tried to Google for more information. I tried this search: <<is openai inference profitable?>>
I didn't find any reliable sources about OpenAI. All sources that I could find state this is not true -- inference costs are far higher than subscription fees.
I hate to ask this on HN... but, can you provide a source? Or tell us how do you know?
I don't have any qualified source, and this metric would likely be quite confidential even internally.
It is just an educated guess, factoring in the per-token cost of running models similar/comparable to 4o or 4o-mini, how Azure commitments work with OpenAI models [2], and the observation that Plus subscriptions are probably more profitable [1] than API calls.
It would be hard for even OpenAI to know with any certainty, because they are not paying for Azure credits like a normal company. The costs are deeply intertwined with Azure and would be hard to split, given the nature of the MS relationship [3].
----
[1] This is from experience running LibreChat with 4o versus ChatGPT Plus for ~200 users; subscriptions should be more profitable than raw API usage by a factor of 3 to 4x. Of course, there will be different types of users and adoption levels; my sample, while not small, is likely not representative of their typical user base.
[2] MS has less incentive to subsidize than say OpenAI themselves
[3] Azure is quite profitable in the aggregate, while possibly subsidizing OpenAI APIs, any such subsidy has not shown up meaningfully in Microsoft financial reports.
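For what it's worth, here is the kind of back-of-envelope arithmetic behind a guess like that; every number below is an assumption of mine for illustration, not OpenAI data.

```python
# Rough check of the "subscriptions ~3-4x more profitable than raw API"
# guess in [1]. All inputs are illustrative assumptions.

API_PRICE_PER_1M_TOKENS = 10.0       # assumed blended 4o-class API price, USD
TOKENS_PER_USER_PER_MONTH = 500_000  # assumed usage of a typical Plus user
SUBSCRIPTION_PRICE = 20.0            # ChatGPT Plus price, USD/month

# What the same usage would cost if bought through the API instead:
api_equivalent = TOKENS_PER_USER_PER_MONTH / 1e6 * API_PRICE_PER_1M_TOKENS

print(f"API-equivalent cost:  ${api_equivalent:.2f}/user/month")
print(f"Subscription revenue: ${SUBSCRIPTION_PRICE:.2f}/user/month")
print(f"Multiple: {SUBSCRIPTION_PRICE / api_equivalent:.1f}x")
# With these assumptions the flat fee brings in ~4x the API-equivalent
# revenue for the same tokens; the result is extremely sensitive to how
# heavily the median subscriber actually uses the service.
```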
So I do question if OpenAI is able to make a profit, even if you remove training and R&D. The $20 plan may be more profitable, but now it will need to cover the R&D and training, plus whatever they lose on Pro.
Not necessarily. DeepSeek will probably only threaten the API usage of OpenAI, which could also be banned in the US if it's too successful. API usage is not a main revenue source for OpenAI (it is for Anthropic, last time I checked). The main competitor for R1 is o1, which isn't generally available yet.
The one your laptop can run does not rival what OpenAI offers for money. Still, the issue is not whether a third party can run it; it's that OpenAI does not seem to position the API as their main product.
Not quite. In 2 years their revenue has grown ~20x, from $200M ARR to $3.7B ARR. The inference costs, I believe, pay for themselves (in fact are quite profitable). So what they're putting on their investors' credit cards are the costs of employees and model training. Given that it's projected to be a multi-trillion-dollar industry and they're seen as a market leader, investors are more than happy to throw in interest-free cash flow now in exchange for variable future interest in the form of stock.
That's not quite the same thing as your credit card's revenue stream, since you pay an ~18%+ annual interest rate on that balance. If you recall, AMZN (and all startups, really) had this mode early in the business where they over-spent on R&D to grow faster than free cash flow would otherwise allow, to stay ahead of the competition and dominate the market. Indeed, if investors agree and your business is actually strong, this is a strong play, because you're leveraging some future value into today's growth.
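A quick sketch of the arithmetic behind both points; the ARR figures are the ones quoted above, the APR is the commenter's figure (read as annual), and the $1B burn is an illustrative assumption.

```python
# Revenue multiple from the quoted ARR figures, and what carrying the
# burn on an actual credit card would cost. Burn amount is assumed.

ARR_START, ARR_NOW = 200e6, 3.7e9
print(f"Revenue growth: {ARR_NOW / ARR_START:.1f}x")  # ~18.5x, i.e. "~20x"

principal, apr, years = 1e9, 0.18, 2
card_debt = principal * (1 + apr) ** years
print(f"${principal / 1e9:.0f}B on a card for {years} years "
      f"-> ${card_debt / 1e9:.2f}B owed")             # ~$1.39B
# Equity financing carries no cash interest; investors are paid in
# future upside instead, which is why the "investor credit card"
# analogy only goes so far.
```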
Platform economics "works" in theory only up to a point. It's super inefficient if you zoom out and look not at the system level but at the ecosystem level. It hasn't lasted long enough to hit the failure cases. Just wait a few years.
As to OpenAI: given DeepSeek, and the fact that a lot of use cases don't even need real-time inference, it's not obvious this story will end well.
I also can't see it ending well for OpenAI. This seems like it's going to be a commodity market with a race to the bottom on pricing. I read that NVIDIA has a roughly 1000% (10x) profit margin on H100s, which means that someone like Google, making their own TPUs, has a massive cost advantage.
Moore's law seems to be against them too: hardware getting more powerful, small models getting more powerful... It's not at all obvious that companies will need to rely on cloud models vs running locally (licensing models from whoever wants that market). Also, a lot of corporate use probably isn't that time-critical and can afford to run slower and cheaper.
Of course the US government could choose to wreck free-market economics by mandating powerful models to be run in "secure" cloud environments, but unless other countries did same that might put US at competitive price disadvantage.
They do get a lot of customers buying their stuff, but on top of that, a company with unique IP and mindshare can get investors to open their wallets easily enough; I keep thinking of AMD, which was unprofitable or barely profitable for something like 15 years in a row.
My kneejerk response was to point to the incoming administration, but the fact Stargate has been in the works for more than a year now says to me it's because of tax credits.
Lots of back-door deals. Just expect more government projects put in TX, just like the Army built that place in Austin, when we have plenty of dead bases that could be reused.
Meanwhile, Azure has failed to keep up with the last 2-3 generations of both Intel and AMD server processors. They’re available only in “preview” or in a very limited number of regions.
I wonder if this is a sign of the global economic downturn pausing cloud migrations or AI sucking the oxygen out of the room.
For less than this same price tag, we could've eliminated student loan debt for ~20 million Americans. That would in turn open up a myriad of opportunities, like owning a home and/or feeling more comfortable starting a family. It would stimulate the economy in predictable ways.
Instead we gave a small number of people all of this money for a moonshot in a state where they squabble over who’s allowed to use which bathroom and if I need an abortion I might die.
Eliminating debt has a lot of unintended consequences. Price inflation would almost certainly be a problem, for example.
It's also not clear to me what happens to all of the derivatives based on student debt, though there may very well be an answer there that I just haven't understood yet.
The problem with allowing student debt to rack up to these levels and then cancelling it is that it would embolden universities to charge even higher tuition. A second problem is that not all students get the benefit; some already paid off their debts, or a large part of them. It would be unfair to those people.
Yes but every policy is unfair. It literally is choosing where to give a limited resource, it can never be fully fair.
And there could be a change in the law that allows people to discharge student debt in personal bankruptcy, and that could make sure higher tuition doesn't happen.
> Yes but every policy is unfair. It literally is choosing where to give a limited resource, it can never be fully fair.
I don't think that holds for a policy of non-intervention. People usually don't like that solution, especially when considering welfare programs, but it is fair to give no one assistance in the sense that everyone was treated equally/fairly.
Now it's a totally different question whether it's fair that some people are in this position today. The answer is almost certainly no, but that doesn't have a direct impact on whether an intervention today is fair or not.
It would do more good in K-12 or pre-K than it would paying off the private debts of white-collar, highly educated university bros who are not rich yet only because of their young age.
It truly is astonishing. We have kids who cannot afford school lunches and people working multiple blue-collar jobs, and yet the problems of people who are statistically better off than average constantly jump to the front. People complain about Effective Altruism because of one dude messing up big, but it would behoove everyone to read up on its basic philosophy before suggesting how we best spend billions to reduce suffering.
> Instead we gave a small number of people all of this money for a moonshot in a state where they squabble over who’s allowed to use which bathroom and if I need an abortion I might die.
AFAICT from this article and others on the same subject, the 500 billion number does not appear to be public money. It sounds like it's 100 billion of private investment (probably mostly from Son), and FTA,
> could reach five times that sum
(5x 100 billion === 500 billion, the # everyone seems to be quoting)
Repaying student loans makes a lot of people a little richer. The current initiative makes a few people a lot richer. If you ask some people, the former is a very communist/socialist way of thinking (bad), while the latter is pure, unadulterated capitalism (good).
One of the more destructive situations in capitalism is the fact that (financially) helping the many will increase inflation and lead to more problems.
When a few people get really rich, it kind of slips through the gaps; the broader system isn't impacted too much. When most people get a little richer, they spend that money and prices go up. Said differently, wealth is all relative, so when most people get a little richer, their comparative wealth doesn't really change.
That and a lot of people do not have the means to convince current power centers ( unless they were to organize, which they either don't, can't or are dissuaded from ) to do their bidding, while few rich ones do. And so the old saying 'rich become richer' becomes a self-fulfilling prophecy.
That was the implication indeed. Money is like gravity, the more you have the more you can pull in. This will give a person the power to do anything to make more money (change the laws as desired, or break them if needed) but also the perfect shield from any repercussions.
I know!! Also, we could have given an iPhone to 500 million people for that amount!! It's such a waste to think they're investing it in the future instead.
This is the problem with capitalists / the billionaires currently hoarding the money, and with US policy: it's all for short-term gain. But the conservatives who look back to the 50s or 80s or whatever decade their rose-tinted glasses are tuned to should also realise that the good parts of those eras came from families not being neck-deep in debt.
That doesn't seem to be much of a thing these days. If you look at Russia/Ukraine or China/Taiwan there's not much scarcity. It's more bullying dictator wants to control the neighbours issues.
"Global warming may not have caused the Arab Spring, but it may have made it come earlier... In 2010, droughts in Russia, Ukraine, China and Argentina and torrential storms in Canada, Australia and Brazil considerably diminished global crops, driving commodity prices up. The region was already dealing with internal sociopolitical, economic and climatic tensions, and the 2010 global food crisis helped drive it over the edge."
It will be, or, it's slowly happening already. Climate change is triggering water and food shortages, both abroad and on your doorstep (California wildfires), which in turn trigger mass migrations. If a richer and/or more militarily equipped country decides they want another country's resources to survive, we'll see wars erupt everywhere.
Then again, it's more of a logistics challenge, and if e.g. California were to invade Canada for its water supply, how are they going to get it all the way down there?
I can see it happening in Africa, though: a long string of countries rely on the Nile, but large hydropower dams built in Sudan and Ethiopia are reducing the water flow, which Egypt is really not happy about, as it's costing them water supply and irrigated land. I wouldn't be surprised if Egypt and its allies declared war on those countries and aimed to have the dams broken. Then again, that's been going on for some years now and nothing has happened yet as far as I'm aware.
(the above is armchair theorycrafting from thousands of miles away based on superficial information and a lively imagination at best)
I was in Egypt a while and there's no talk of them invading Sudan or Ethiopia. A lot of Egypt's economy is overseas aid from the US and similar.
The main military thing going on there (I was in Dahab, where there are endless military checkpoints) is Hamas-like guys trying to come over and overthrow the fairly moderate Egyptian government and replace it with a hardline, Hamas-style Islamic dictatorship for the glorification of Allah, etc. Again, it's not about reducing scarcity; it's more about increasing scarcity in return for political control. Dahab and Cairo are both a few hours' drive from Gaza.
And a bureaucratic one as well. In Germany, they want to trim bureaucracy while (not) expecting multiple millions of climate refugees.
Lots of undocumented STUFF incoming (the undocumented have nowhere to go, so they don't get vaccines or proper help when sick, injured, mentally unstable, threatened, or abused), which means more disease, crime and theft, more money for security firms and insurance companies, which means more smuggling, more fear-mongering via media, more polarization, more hard-coding of subservience into the young, more financial fascism overall, less art, zero authenticity, and a spawn of VR worlds where the old rules apply forever.
Plus more STDs and micro-pandemics due to viral mutations, because people will be even more careless when partying under second-semester light shows in metropolitan clubs and festivals, and when selling out for an "adventurous" quick potent buck and bug. Which of course means more money pouring into pharma, who won't be able to test their drugs thoroughly (and won't have to; not requiring platforms to fact-check will transfer somewhat into the pharma industry) because the population will be more diverse in its biochemical reactions to ingredients, in the context of the chemical and psycho-social make-up of their "fluid" habitats.
But it's cool, let's not solve the biggest problems before pseudo-transcending into the AGI era. It will make for a really great impression, especially on those who had the means, brains, skills, (past) careers, opportunity and peace of mind.
Have you tried opening the links? They show Russia at developed-country level in terms of food insecurity (score <5; they don't differentiate at those levels; this is a level mostly shown for EU countries), and a percentage of population below the international poverty line of 0.0% (vs, as an example, 1.8% in Romania). This isn't great; being in the poverty briefs at all is not indicative of prosperity. But your terror should probably come from elsewhere.
Your first link says "With a score under 5, Russian Federation has a level of hunger that is low."
The current situation with Russia and China seems caused by them becoming prosperous. In the 1960s China was broke, as was Russia in the 1990s. Now that they have money, they can afford to put it into their militaries and try to attack their neighbours.
I would wager that states such as Russia and others misallocate resources, which in turn reduces productivity. Worse yet, some of the policy prescriptions stated above would further misallocate scarce resources and reduce productivity. Scarcity doom becomes a self-fulfilling prophesy. This outcome is used to rationalize further economic intervention and the cycle compounds upon itself.
To be explicitly clear, the US granting largess to tech companies for datacenters also counts as a misallocation in my view.
Gaza seems mostly to be about who controls Israel/Palestine politically. Gaza was reasonably ok for food and housing and is now predictably trashed as a result of Hamas wanting to control Palestine from the river to the sea as they say.
South Sudan is some ridiculous thing where two rival generals are fighting for control. Are there any wars which are mostly about scarcity at the moment?
No, not really... the origin of the Gaza conflict is in Zionists confiscating the most fertile land and water resources.
That's why Israelis gladly handed back the Sinai desert to Egypt, but have kept Golan Heights, East Jerusalem, Shaba Farms, and continuously confiscate Palestinian farmlands in the West Bank.
There is nothing arbitrary or religious about which lands Zionists are occupying and which they're leaving to arabs.
Completely false, and it simplifies a complicated history to present a very one-sided view.
The most fertile lands are in the West Bank. They were under Jordanian control and could have been turned into an independent Palestinian state, but weren't. Israel "accidentally" got them in the Six-Day War and was happy to give them back to Jordan to "take care of" the Palestinian problem, but Jordan refused.
The places that Israel have the majority of the population in Petah Tiqwah, Tel Aviv and the region were swamp lands, filled with mosquitos, that were dried over many years and many deaths by Jewish farmers.
So you are saying Hamas would have the same domestic support if Gaza were economically at the level of, e.g., Slovenia? People who complained about the "open air prison" caused by Israeli "occupation" even before Oct 7 would disagree with you, I think.
Even in Europe, extremists are propped up by the promise of "cheap energy" from Russia.
I guess if you don't see the link, this is not the place to explain it.
Also the "open air prison" effect was a result of trying to reduce attacks from Gaza. For example before the 2008 war there were more than 2000 rockets launched from Gaza into Israel.
That's a very zero-sum outlook on things, and it's factually untrue much of the time. When you invest money in something productive, that value doesn't automatically get destroyed. The size of the pie isn't fixed.
More importantly, money, at a global scale, doesn't solve scarcity issues. If there are 100 apples and 120 people, making sure everyone has a lot of money doesn't magically create 20 more apples; it just raises the price of apples. Building an apple orchard creates apples. Stargate is people betting that they are building a phenomenal apple orchard. I'm not sure they will, and I'm worried the apple orchard will poison us all, but unlike me these people are putting their money where their mouth is, and thus have a larger incentive to figure out what they are doing.
Five hundred billion dollars is nothing when you consider there's a new government agency that is said to be shaving two trillion from government inefficiency.
I disagree with you. I think the impact of AI on society in the long term is going to be massive, and such investments are necessary. If we look at the past century, technology has had (in my opinion) an incredibly positive impact on society. You have to invest in the future.
Well, it also starts a fair share of wars, or let's say it "brings freedom and democracy in exchange for resources and power," and sometimes even decides to topple leaders in foreign countries and then put puppets in place.
~$125B per year would be 2-3% of all domestic investment. It's similar in scale to the GDP of a small middle income country.
If the electric grid — particularly the interconnection queue — is already the bottleneck to data center deployment, is something on this scale even close to possible? If it's a rationalized policy framework (big if!), I would guess there's some major permitting reform announcement coming soon.
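For the curious, the arithmetic behind that 2-3% figure is easy to check. A minimal sketch, where the four-year deployment window and the ~$5T/yr of US gross private domestic investment are my own rough assumptions, not figures from the announcement:

```python
# Back-of-envelope check of the "2-3% of domestic investment" claim.
# The 4-year window and the $5T/yr investment figure are rough assumptions.
stargate_total = 500e9                 # announced commitment, USD
years = 4                              # assumed deployment window
annual_spend = stargate_total / years  # $125B/yr

us_gross_private_investment = 5.0e12   # USD/yr, rough ballpark
share = annual_spend / us_gross_private_investment
print(f"${annual_spend / 1e9:.0f}B/yr is ~{share:.1%} of domestic investment")
# -> $125B/yr is ~2.5% of domestic investment
```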
They say this will include hundreds of thousands of jobs. I have little doubt that dedicated power generation and storage is included in their plans.
Also, I have no doubt that the timing is deliberate and that this is not happening without government endorsement. If I had to guess, the US military is also involved and sees this initiative as important for national security.
Is there really any government involvement here? I only see Softbank, Oracle, and OpenAI pledging to invest $500B (over some timescale), but no real support on the government end outside of moral support. This isn't some infrastructure investment package like the IRA, it's just a unilateral promise by a few companies to invest in data centers (which I'm sure they are doing anyway).
I thought all the big corps already had projects for the military, if not with DARPA directly, which is the org responsible for lots of university research (the counterpart to the NSF, which is the nice one that isn't funded by the military)?
Funding for DARPA and NSF ultimately comes from the same place. DARPA funds military research. NSF funds dual use[1] research. All of it is organized around long term research goals. I maintained some of the software involved in research funding decision making.
It’s light on details, but from The Guardian’s reporting:
> The president indicated he would use emergency declarations to expedite the project’s development, particularly regarding energy infrastructure.
> “We have to get this stuff built,” Trump said. “They have to produce a lot of electricity and we’ll make it possible for them to get that production done very easily at their own plants.”
On the one hand, the number is a political thumb-suck that sounds good. It's not based in any kind of actual reality.
Yes, the data center itself will create some permanent jobs (I have no real feel for this, but guessing less than 1000).
There'll be some work for construction folk of course. But again seems like a small number.
I presume though they're counting jobs related to the existence of a data center. As in, if I make use of it do I count that as a "job"?
What if we create a new post to leverage AI generally? Kinda like the way we have a marketing post, and a chunk of the daily work there is Adwords.
Once we start guesstimating the jobs created by the existence of an AI data center, we're in full speculation mode. Any number can really be justified.
Of course, ultimately the number is meaningless. It won't create that many "local jobs"; indeed, most of those jobs, to the degree they exist at all, will likely be outside the US.
So you don't need to wait for a post-mortem. The number is sucked out of thin air with no basis in reality for the point of making a good political sound bite.
> I presume though they're counting jobs related to the existence of a data center. As in, if I make use of it do I count that as a "job"?
Seeing how Elon deceives advertisers with false impressions, I could see him giving the same strategy a strong vote of confidence (with the bullshit metrics to back it!)
I'm sure this will easily be true if you count AIs as entities capable of doing jobs. Actually, they don't really touch that (if AI develops too quickly, there will be a lot of unemployment to contend with!), but I get the national security aspect (China is full speed ahead on AI, and by some measurements they are winning ATM).
Wow. What an idea you guys have there. Look, you could maybe sit the homeless and the mentally disabled on such power-generating bicycles, hmmm... what about convicts! Let them contribute to society, no free lunch! What an innovation!
Just as there is an AWS for the public and a similar offering for Federal use only, there could be AI cloud services available to the public and a separate cloud service for Federal use. I am sure that military and intelligence agencies, etc., would like to buy such a service.
Gas turbines can be spun up really quickly through either portable systems (like xAI did for their cluster) [1] or actual builds [2] in an emergency. The biggest limitation is permits.
With a state like Texas and a Federal Government that's on board, these permits would be a much smaller issue. The press conference made this seem more like "drill baby drill" (drilling for natural gas), with direct talk of them spinning up their own power plants.
It is not just the queue that is the bottleneck. If the new power plants designed specifically to power these new AI data centers are connected to the existing electric grid, energy prices for regular customers will also be affected, most likely upward. That means the cost of the transmission upgrades required by these new datacenters will be socialized, which is a big problem. There does not seem to be a solution in sight for this challenge.
> It's similar in scale to the GDP of a small middle income country
I’ve been advocating for a data centre analogue to the Heavy Press Programme for some years [1].
This isn’t quite it. But when I mapped out costs, $1tn over 10 years was very doable. (A lot of it would go to power generation and data transmission infrastructure.)
One-time capital costs that unlock a range of possibilities also tend to be good bets.
The Flood Control Act [0], TVA, Heavy Press, etc.
They all created generally useful infrastructure that would be used for a variety of purposes over the subsequent decades.
The federal government creating data center capacity, at scale, with electrical, water, and network hookups, feels very similar. Or semiconductor manufacture. Or recapitalizing US shipyards.
It might be AI today, something else tomorrow. But there will always be a something else.
Honestly, the biggest missed opportunity was supporting the Blount Island nuclear reactor mass production facility [1]. That was a perfect opportunity for government investment to smooth out market demand spikes. Mass deployed US nuclear in 1980 would have been a game changer.
They are trying. Microsoft wants to restart the Three Mile Island reactor, and other companies have been signing contracts for small modular reactors. SMRs are a perfect fit for modern data centers IF they can be made cheaply enough.
Wind, solar, and gas are all significantly cheaper in Texas and can be brought online much quicker. Of course it wouldn't hurt to also build in some redundancy with nuclear, but I'll believe it when I see it; so far there's been lots of talk and little success with new reactors outside of China.
Just as likely to be natural gas, or a combination of gas and solar. I don't know what the supply chain looks like for solar panels, but I know gas can be done quickly [1], which is how this money has to be spent if they want to reach their target of $125 billion a year.
> The companies said they will develop land controlled by Wise Asset to provide on-site natural gas power plant solutions that can be quickly deployed to meet demand in ERCOT.
> The two firms are currently working to develop more than 3,000 acres in the Dallas-Fort Worth region of Texas, with availability as soon as 2027.
According to [1], as of January 2025 the USA has almost 50GW/yr of module manufacturing capacity. But to make modules you need polysilicon (25GW/yr of US manufacturing capacity), ingots (0GW/yr), wafers (0GW/yr), and cells (0GW/yr). Hence the USA is seemingly entirely dependent on imports, probably from China, which has 95%+ of the global wafer manufacturing capacity.
Even when accounting for announced capacity expansion, the USA is currently on target to remain a very small player in the global market with announced capacity of 33GW/yr polysilicon, 13GW/yr ingots, 24GW/yr wafers, 49GW/yr cells and 83GW/yr modules (13GW/yr sovereign supply chain limitation).
In 2024, China completed sovereign manufacturing of ~540GW of modules[2], including all precursor polysilicon, ingots, wafers, and cells. China also produced and exported polysilicon, ingots, wafers, and cells that were surplus to domestic demand. Many factories in China's production chain are operating at half their maximum production capacity because global demand is less than half of global manufacturing capacity.[3]
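To make the bottleneck explicit: a sovereign supply chain is only as big as its narrowest stage. A tiny sketch using the announced US capacities quoted above:

```python
# Announced US capacity by production stage, GW/yr (figures from above).
# The sovereign chain is limited by its weakest link.
us_announced = {
    "polysilicon": 33,
    "ingots": 13,
    "wafers": 24,
    "cells": 49,
    "modules": 83,
}
stage = min(us_announced, key=us_announced.get)
print(f"Sovereign supply chain limit: {us_announced[stage]}GW/yr ({stage})")
# -> Sovereign supply chain limit: 13GW/yr (ingots)
```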
> could something of this magnitude be powered by renewables only?
Perhaps.
For context, see https://masdar.ae/en/news/newsroom/uae-president-witnesses-l... which is a bit further south than the bulk of Texas and has not yet been built: 5.2GW of panels, 19GWh of storage. I have seen suggestions on LinkedIn that it will be insufficient to cover a portion of winter days, meaning backup power is required.
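A crude way to see why: at a constant 1GW draw (my assumption for a datacenter-sized load), a poor winter day can generate less than a day's demand, so the buffer drains. The effective sun hours below are a guess, not project data:

```python
# Rough feasibility check: 5.2GW of panels + 19GWh of storage vs a 1GW load.
panels_gw, storage_gwh, load_gw = 5.2, 19.0, 1.0

daily_demand_gwh = load_gw * 24                    # 24 GWh/day
poor_winter_sun_hours = 4                          # assumed bad-weather day
daily_generation_gwh = panels_gw * poor_winter_sun_hours   # ~20.8 GWh

shortfall = daily_demand_gwh - daily_generation_gwh
print(f"Daily shortfall on a bad day: {shortfall:.1f} GWh; "
      f"storage holds ~{storage_gwh / load_gw:.0f}h of load")
# A run of bad days exhausts the buffer, hence the need for backup power.
```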
There have been literally zero production SMR deployments to date, so there's no possibility they're basing any of their plans on their availability.
Hasn't the US decided to prefer nuclear and fossil fuels (most expensive generation methods) over renewables (least expensive generation methods)?[1][2]
I doubt the US choice of energy generation is ideological so much as practical. China absolutely dominates renewables, with 80% of solar PV modules and 95% of wafers manufactured in China.[3] China installed a world-record 277GW of new solar PV generation in 2024, a 45% year-on-year increase.[4] By contrast, the US installed only ~1/10th of this capacity in 2024, with just 14GW of solar PV generation installed in the first half of the year.[5]
> Hasn't the US decided to prefer nuclear and fossil fuels (most expensive generation methods) over renewables (least expensive generation methods)?[1][2]
This completely ignores storage and the ability to control output to match demand. Instead of LCOE, the LFSCOE number makes much more sense in practical terms.
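Roughly speaking, LCOE divides a plant's lifetime cost by its lifetime output, while LFSCOE-style accounting also charges each source for the storage/backup needed to actually meet demand around the clock. A toy illustration (all numbers invented, not real market data):

```python
# Toy contrast between bare LCOE and a full-system cost per MWh.
def cost_per_mwh(annualized_cost, annual_mwh, firming_cost=0.0):
    # firming_cost: annualized storage/backup spend required to make
    # the source dispatchable (zero for a plant that already is).
    return (annualized_cost + firming_cost) / annual_mwh

solar_lcoe = cost_per_mwh(30e6, 1e6)           # $30/MWh, cheap but intermittent
solar_firmed = cost_per_mwh(30e6, 1e6, 45e6)   # $75/MWh once firmed
gas_firm = cost_per_mwh(55e6, 1e6)             # $55/MWh, dispatchable as-is
print(solar_lcoe, solar_firmed, gas_firm)      # 30.0 75.0 55.0
```

On bare LCOE, solar wins; once you price in firming, the ordering can flip, which is the point the LFSCOE metric tries to capture.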
Notably, it is significantly more than the revenue of either AWS or Azure. It is very comparable to the sum of both, but consolidated into the continental US instead of distributed globally.
Small or modular reactors in the US are more than 10 years away, probably more like 15-20. These are facts, not the made-up numbers of political or pipe-dreaming techno-snobs.
> Small or modular reactors in the US are more than 10 years away, probably more like 15-20
Could be 5 to 10 with $20+ bn/year in scale and research spend.
Trump is screwing over his China hawks. The anti-China and pro-nuclear lobbies have significant overlap; this could be how Trump keeps e.g. Peter Thiel from going thermonuclear on him.
I work in the sector and it's impossible to build a full-sized reactor in less than 10 years, and the usual over-run is 5 years. That's the time for tried and tested designs. The tech isn't there yet, and there are no working analogs in the US to use as an approved guide. The Department of Energy does not allow "off-the-cuff" designs for reactors. I think only two SMRs have been built, one by the Russians and one by China, and I'm not sure they are fully functioning, or at least working as expected. I do know there are going to be more small gas gens built in the near future, and that SMRs in the US are way off.
Guessing SMRs are a ways off, any thoughts on the container-sized microreactors that would stand in for large diesel gens? My impression is that they’re still in the design phase, and the supply chain for the 20% U-235 HALEU fuel is in its infancy, but this is just based on some cursory research. I like the prospect of mass manufacturing and servicing those in a centralized location versus the challenges of building, staffing, and maintaining a series of one-off megaprojects, though.
I don't, and I honestly don't know much about it, but
> there are no working analogs in the US to use as an approved guide
small reactors have been installed on ships and submarines for over 70(!) years now. Reading up on the very first one, USS Nautilus: "the conceptual design of the first nuclear submarine began in March 1950," so it took a couple of years? So why is it so unthinkably hard 70 years later? Honest question. "The military doesn't care about cost" is not good enough; there are currently >100 active ones, with who knows how many hundreds in the past, so they must have cracked the cost formula at some point. Besides, by now we have hugely better tech than in the '50s, so what gives?
Yeah, I wondered about seacraft reactors myself. I think there are many safety allowances for the DOD vs. the DOE. The DOD reactors are not publicly accessible (you hope, anyway), whereas these data centers will be in and near the public. There are also major security measures that have to be taken at reactor sites. You pass armed personnel before you even get to the reactors, and the entrances are sometimes close to a mile away from the reactor. Once there, the number of guards and bang-bags goes up. The modern sites kind of look like they have small henges around them (back to the Neolithic!) :)
> it's impossible to build a full-sized reactor in less than 10 years, and the usual over-run is 5 years
I'm curious why that is. If we know how to build it, it shouldn't take that long. It's not like we need to move a massive amount of earth or pour a humongous amount of concrete or anything like that, which would actually take time. So why does it take 15 years to build a reactor from a design that is already tried, tested, and approved?
Well, you do have to move a lot of earth and pour A LOT of concrete :) Many steps have to be x-rayed, and many other tests done, before later steps can be started. Every weld is checked, and all internal and external concrete is cured, treated, and verified. If anything is wrong, it has to be fixed in place (if possible) or removed and redone. It's a slow process, and it should be, for many steps.
One of the big issues (in the US especially) is that for 20+ years no new plants were built. This caused a large void in the talent pool, inside and outside the industry. That fact, along with others, has caused many problems with some projects of recent years in the US.
When you're the biggest fossil fuel producer in the world, it's vital that you stay laser-focused on regulating nuclear power to death in every imaginable detail while you ignore the vast problems with unchecked carbon emissions and gaslight anyone who points them out.
If you didn't intend your comment to be a snarky one-liner, that didn't come across to me, and I'm pretty sure that would also be the case for many others.
Intent is a funny thing. People usually assume that good intent is sufficient because it's obvious to themselves, but the rest of us don't have access to that state, so it has to be encoded somehow in your actual comment in order to get communicated. I sometimes put it this way: the burden is on the commenter to disambiguate. https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
I take your point at least halfway though, because it wasn't the worst violation of the guidelines. (Usually I say "this is not a borderline case" but this time it was!) I'm sensitive to regional flamewar because it's tedious and, unlike national flamewar or religious flamewar, it tends to sneak up on people (i.e. we don't realize we're doing it).
So you are sorry and take it back? Should probably delete your comments rather than striking them out, as the guidelines say.
I live, work, and posted this from Texas, BTW...
Also, it takes up more than one line on my screen, so it's not a "one-liner" either. If you think it is, please follow the rules consistently and enforce them by deleting all comments on the site containing one sentence or even one paragraph. My comment was a pretty long sentence (136 chars) and wouldn't come close to fitting in the 50 characters of a Git "one-liner".
Otherwise, people will just assume all the comments are filtered through your unpredictable and unfairly biased eye. And like I said (and you didn't answer), this kind of thing is no longer in fashion, right?
None of this is "borderline". I did nothing wrong and you publicly shamed me. Think before you start flamewars on HN. Bad mod.
How much capacity do solar and wind add compared to nuclear, per square foot of land used? Also, I thought the new administration was placing a ban on new renewable installations.
The ban is on offshore wind and on government loans for renewables. It won't really affect Texas much; it's Massachusetts that'll have to deal with more expensive energy.
I'm confused and a bit disturbed; honestly, I'm having a very difficult time internalizing and processing this information. This announcement is making me wonder if I'm poorly calibrated on the current progress of AI development and the potential path forward. Is the key idea here that current AI development has figured out enough to brute force a path towards AGI? Or, I guess, the alternative is that they expect to figure it out in the next 4 years...
I don't know how to make sense of this level of investment. I feel that I lack the proper conceptual framework to make sense of the purchasing power of half a trillion USD in this context.
"There are maybe a few hundred people in the world who viscerally understand what's coming. Most are at DeepMind / OpenAI / Anthropic / X but some are on the outside. You have to be able to forecast the aggregate effect of rapid algorithmic improvement, aggressive investment in building RL environments for iterative self-improvement, and many tens of billions already committed to building data centers. Either we're all wrong, or everything is about to change." - Vedant Misra, Deepmind Researcher.
Maybe your calibration isn't poor. Maybe they really are all wrong. There's a tendency here to think these people behind the scenes are all charlatans, fueling hype without equal substance, hoping to make a quick buck before it all comes crashing down, but I don't think that's true at all. I think these people genuinely believe they're going to get there. And if you genuinely believe that, then this kind of investment isn't so crazy.
The problem is, they are hugely incentivised to hype to raise funding. It’s not whether they are “wrong”, it’s whether they are being realistic.
The argument presented in the quote there is: "everyone at AI foundation companies is putting money into AI, therefore we must be near AGI."
The best evaluation of progress is to use the tools we have. It doesn't look like we are close to AGI. It looks like amazing NLP built on an enormous amount of human labelling.
> there's a tendency here to think these people behind the scenes are all charlatans, fueling hype without equal substance, hoping to make a quick buck before it all comes crashing down, but I don't think that's true at all. I think these people genuinely believe they're going to get there.
I don't immediately disagree with you but you just accidentally also described all crypto/NFT enthusiasts of a few years ago.
All? Quite a few of the best minds in the field, like Yann LeCun for example, have been adamant that 1) autoregressive LLMs are NOT the path to AGI and 2) that AGI is very likely NOT just a couple of years away.
You have hit on something that really bothers me about recent AGI discourse. It’s common to claim that “all” researchers agree that AGI is imminent, and yet when you dive into these claims “all” is a subset of researchers that excludes everyone in academia, people like Yann, and others.
So the statement becomes tautological: "all researchers who believe that AGI is imminent believe that AGI is imminent."
And of course, OpenAI and the other labs don’t perform actual science any longer (if science requires some sort of public sharing of information), so they win every disagreement by claiming that if you could only see what they have behind closed doors, you’d become a true believer.
Motivated reasoning sings nicely to the tune of billions of dollars. None of these folks will ever say, "don't waste money on this dead end". However, it's clear that there is still a lot of productive value to extract from transformers and certainly there will be other useful things that appear along the way. It's not the worst investment I can imagine, even if it never leads to "AGI"
Yeah, people don't rush to say "don't waste money on this dead end," but think about it for a moment.
A $500B investment doesn't just fall into one's lap. It's not your run-of-the-mill funding round. No, this is something you very actively work towards, and your funders must be really damn convinced it's worth the gamble. No one sane is going to look at what they genuinely believe to be a dead end and try to garner Manhattan Project scales of investment for it. Careers have been nuked for far less.
My prediction is Apple loses to OpenAI, who release a "Her"-like phone (as in the movie). She is seen on your lock screen, a la a FaceTime-call UI/UX, and she can be skinned to look like whomever, e.g. a deceased loved one.
She interfaces with the AI agents of companies, organizations, friends, family, etc. to get things done for you (or to learn things: what's my friend's bday? his agent tells yours) automagically, and she is like a friend. Always there for you, at your beck and call, like in the movie "Her."
Zuckerberg's glasses that cannot take selfies will only be complementary to our AI phones.
That's just my guess and desire as a fervent GPT user, as well as a Meta Ray-Ban wearer (can't take selfies with glasses).
Well, I use GPT daily to get things done and as a knowledge base. I text and talk to it throughout the day. I also think it's called "chat"GPT for a reason: it will evolve to the point where you feel like you are talking to a human. Though this human is your assistant, does everything for you, and interfaces with other AI agents to book travel, learn your friends' and family's schedules, and so on; for anything you now do on the web, there will be an AI agent for your AI agent to interface with.
Maybe you have not seen the 2013 movie "Her"? Scarlett Johansson starred in it (her voice was the AI), and Sam Altman asked her to be the voice of ChatGPT.
Overall this is what I see happening, and I'm excited for some of it, or possibly all of it, to happen. Yet time will tell :-) And it sounds like you're betting none of it will happen... we'll see :)
My take on this is that, despite an ever-increasingly connected world, you still need an assistant like this to remain available at all times your device is. If I can’t rely on it when my signal is weak, or the network/service is down/saturated, its way of working itself into people’s core routines is minimal. So either the model runs locally, in which case I’d argue OpenAI have no moat, or they uncover some secret sauce they’re able to keep contained to their research labs and data centres that’s simply that much better than the rest, in perpetuity, and is so good people are willing to undergo the massive switching costs and tolerate the situations in which the service they’ve come to be so dependent on isn’t available to them. Let’s also not discount the fact that Apple are one of the largest manufacturers globally of smartphones, and that getting up to speed in the myriad industries required to compete with them, even when contracting out much of that work, is hard.
Sure, but Microsoft has the expertise, and they own 49 percent of OpenAI if I'm not mistaken. OpenAI uses their expertise and access to hardware to create a GPT-branded AI phone.
I can see your point re: running locally, but there's no reason OpenAI can't release a version 0.1. And how many times are you left without an internet connection on your current phone?
Overall I hate Apple now; it's so stale compared to GPT's iPhone app. I nerd-rage at dumbass Siri.
I see it somewhat differently. It is not that technology has reached a level where we are close to AGI and we just need to throw in a few more coins to close the final gap.
It is probably the other way around. We can see and feel that human intelligence is being eroded by the widespread use of LLMs for tasks that used to be solved by brain work. Thus, general human intelligence is declining and approaching the level of current artificial intelligence. If this process can be accelerated with a bit of funding, the point where Big Tech can overtake public opinion-making will be reached earlier, which in turn will make many companies and individuals richer faster; the return on investment will also come sooner.
Let me avoid the use of the word AGI here because the term is a little too loaded for me these days.
1) Reasoning capabilities in the latest models are rapidly approaching superhuman levels and continue to scale with compute.
2) Intelligence at a given level becomes easier to achieve algorithmically as the hardware improves; there are more paths to it, often via simpler mechanisms.
3) Most current-generation reasoning models leverage test-time compute and RL in training, both of which can readily soak up more compute. For example: RL on coding against compilers, or on proofs against verifiers.
All of this points to compute now being basically the only bottleneck to massively superhuman AIs in domains like math and coding; on the rest, no comment (I don't know what superhuman means in a domain with no objective evals).
It is superhuman in a very specific domain. I didn't use "AGI" because its definitions come in one of two flavors.
One: capable of replacing some large proportion of global GDP (this definition has a lot of obstructions: organizational, bureaucratic, robotic)...
Two: it is difficult to find problems which an average human can solve but the model cannot. The problem with this definition is that the nature of AI intelligence is so distinct, and the range of tasks so broad, that this metric is probably only achievable after AI is already massively superhuman in aggregate. Compare this with Go AIs, which were massively superhuman and yet often still failed to count ladders correctly, which was also fixed by more scaling.
All in all, I avoid the term AGI because for me it means comparing average intelligence on broad tasks relative to humans, and I'm already not sure whether current models achieve that, whereas superhuman research math is clearly not achieved, because humans are still making all of the progress on new results.
> All of this points to compute now being basically the only bottleneck to massively superhuman AIs
This is true for brute-force algorithms as well and has been known for decades. With infinite compute, you can achieve wonders. But the problem lies in diminishing returns[1][2], and it seems things do not scale linearly, at least for transformers.
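A toy power law shows the shape of the concern. If loss falls as a small power of compute, every constant-factor improvement costs exponentially more. The constants below are made up for illustration; fitted values live in the scaling-law papers:

```python
# Toy scaling law: loss = a * C^(-alpha). Constants are illustrative only.
a, alpha = 10.0, 0.05

def loss(compute_flops: float) -> float:
    return a * compute_flops ** -alpha

for c in (1e21, 1e22, 1e23, 1e24):
    print(f"{c:.0e} FLOPs -> loss {loss(c):.3f}")
# Each 10x of compute multiplies the loss by 10^-0.05, i.e. shaves only
# ~11% off it: steady gains, but at exponentially growing cost.
```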
>AI development has figured out enough to brute force a path towards AGI?
I think what's been going on is that compute/$ has been rising exponentially for decades in a steady way, and it recently passed the point where you can get human-brain-level compute for modest money. The tendency has been that once the compute is there, lots of bright PhDs get hired to figure out the algorithms to use it, so that part gets sorted within a few years (as written about by Kurzweil, Wait But Why, and similar).
So it's not so much brute-forcing AGI as that exponential growth makes it inevitable at some point, and that point is probably quite soon. At least, that seems to be what they are betting.
The annual global spend on human labour is ~$100tn, so if you either replace that with AGI, or add $100tn of AGI output and double GDP, it's quite a lot of money.
To me it looks like a strategic investment in data center capacity, which should drive domestic hardware production, improvements in electrical grid, etc. Putting it all under AI label just makes it look more exciting.
The largest GPU cluster at the moment is xAI's 100K H100s, which is ~$2.5B worth of GPUs. So something 10x bigger (1M GPUs) is $25B; add $10B for a 1GW nuclear reactor.
This sort of $100-500B budget doesn't sound like training-cluster money; it sounds more like anticipating massive industry uptake and multiple datacenters running inference (with all of corporate America's data sitting in the cloud).
There's the servers and data center infrastructure (cooling, electricity) as well as the GPUs, of course, but if we're talking $10B+ of GPUs in a single datacenter, the GPUs would dominate. Electricity generation is also a big expense; nuclear seems the most viable option, although multi-GW solar plants are possible in some locations too. The 1GW ~ $10B number I suggested is in the right ballpark.
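Putting those figures together, a back-of-envelope (the unit costs are my assumptions, not vendor pricing) suggests $500B is on the order of ten such sites:

```python
# Rough sketch of where $500B could go, using the figures above.
gpu_unit_cost = 25_000            # USD per H100-class GPU (assumption)
gpus_per_site = 1_000_000         # a 10x-xAI-scale cluster
gpu_capex = gpus_per_site * gpu_unit_cost       # $25B
power_capex = 10e9                # ~1GW of generation (assumption from above)
overhead = 0.5 * gpu_capex        # servers, cooling, buildings (guess)

per_site = gpu_capex + power_capex + overhead
print(f"~${per_site / 1e9:.0f}B per site; "
      f"$500B buys ~{500e9 / per_site:.0f} such sites")
# -> ~$48B per site; $500B buys ~11 such sites
```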
It seems you'd need to figure periodic upgrades into the operating cost of a large cluster, as well as the replacement of failed GPUs; they only last a few years if run continuously.
I've read that some datacenters run mixed-generation GPUs, updating only some at a time, but I'm not sure if they all do that.
It'd be interesting to read something about how updates are typically managed/scheduled.
This is an announcement, not a cut check. Who knows how much they'll actually spend; plenty of projects never get started, let alone massive inter-company endeavors.
> Is the key idea here that current AI development has figured out enough to brute force a path towards AGI?
My sense anecdotally from within the space is yes, people are feeling like we most likely have a "straight shot" to AGI now. Progress has been insane over the last few years, but there's been this lurking worry around signs that the pre-training scaling paradigm has diminishing returns.
What recent outputs like o1, o3, and DeepSeek-R1 are showing is that that's fine: we now have a new paradigm around test-time compute. For various reasons, people think this is going to be more scalable and not run into the kind of data issues you'd get with the pre-training paradigm.
You can definitely debate whether that's true or not, but this is the first time I've really seen people think we've cracked "it", and the rest is scaling, better training, etc.
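For readers outside the space: the simplest version of test-time compute is just spending more inference on each problem, e.g. sampling many candidate answers and keeping the one a verifier or a majority vote prefers. A minimal sketch; `sample_answer` is a hypothetical stand-in for one stochastic model call:

```python
import random
from collections import Counter

def sample_answer(problem: str) -> str:
    # Hypothetical stand-in for one stochastic model completion:
    # right most of the time, occasionally wrong.
    return random.choice(["42", "42", "42", "41", "7"])

def best_of_n(problem: str, n: int) -> str:
    # More inference compute = more samples = a better final answer,
    # here via simple majority vote (a trained verifier is stronger).
    votes = Counter(sample_answer(problem) for _ in range(n))
    return votes.most_common(1)[0][0]

print(best_of_n("What is 6 * 7?", n=32))  # almost always "42"
```

o1-style models go further by training the chain of thought itself with RL, but the scaling knob is the same: more compute per query, better answers.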
> My sense anecdotally from within the space is yes, people are feeling like we most likely have a "straight shot" to AGI now
My problem with this is that people making this statement are unlikely to be objective. Major players are in fundraising mode, and safety folks are also incentivised to be subjective in their evaluation.
Yesterday I repeatedly used OpenAI’s API to summarise a document. The first result looked impressive. However, comparing repeated results revealed that it was missing major points each time, in a way a human certainly would not. On the surface the summary looked good, but careful evaluation indicated a lack of understanding or reasoning.
Don’t get me wrong, I think AI is already transformative, but I am not sure we are close to AGI. I hear a lot about it, but it doesn’t reflect my experience in a company using and building AI.
I agree with your take, and actually go a bit further. I think the idea of "diminishing returns" is a bit of a red herring; it's really a combination of saturated benchmarks (and testing in general) and the expectation of "one LLM to rule them all." That might not be how it plays out.
We've seen with OpenAI and Anthropic, and it's rumoured with Google, that holding back your "best" model and using it to generate datasets for smaller but almost-as-capable models is one way forward. I would say this shows the "big models" are more capable than they appear, and that they open up new avenues.
We know that Meta used Llama 2 to filter and improve its training sets for Llama 3. We are also seeing how long-form content + filtering + RL leads to amazing things (what people call "reasoning" models). "Semantics" might be a bit ambitious, but this really opens up the path: documentation + virtual environments + many rollouts + filtering by SotA models => new datasets for next-gen models.
That, plus optimisations (early exit from Meta, Titans from Google, distillation from everyone, etc.), really makes me question the "we've hit a wall" rhetoric. I think there are enough tools on the table today to either jump the wall or move around it.
Yes, that is exactly what the big Aha! moment was. It has now been shown that doing these $100MM+ model builds is what it takes to have a top-tier model. The big moat is not just the software, the math, or even the training data; it's the budget to do the giant runs. Of course, having a team that iterates regularly on all four is where the magic is.
It's a typical Trump-style announcement (IT'S GONNA BE HUUUGE!!) without any real substance or solid commitments.
Remember Trump's BIG WIN of Foxconn investing $10B to build a factory in Wisconsin, creating 13,000 jobs?
That was in 2017. Seven years later, it employs about 1,000 people, if that. It's not really clear what, if anything, is being made at the partially built factory. [0]
I think the only way you get to that kind of budget is by assuming the models are 5 or 10 times larger than most LLMs, that you want to be able to do a lot of training runs simultaneously and quickly, AND that you build the power stations into the facilities at the same time. Maybe they are video or multimodal models whose text and image generation is grounded in a ton of video data, which eats a lot of VRAM.
> current AI development has figured out enough to brute force a path towards AGI? Or I guess the alternative is that they expect to figure it out in the next 4 years...
Or they think the odds are high enough that the gamble makes sense. Even if they think it's a 20% chance, their competitors are investing at this scale; their only real options are to keep up or drop out.
This announcement is from the same office as the guy that xeeted:
“My NEW Official Trump Meme is HERE! It's time to celebrate everything we stand for: WINNING! Join my very special Trump Community. GET YOUR $TRUMP NOW.”
Your calibration is probably fine. Stargate is not a means to achieve AGI; it's a means to start construction on a few million square feet of datacenters, thereby "reindustrializing America".
FWIW Altman sees it as a way to deploy AGI. He's increasingly comfortable with the idea they have achieved AGI and are moving toward Artificial Super Intelligence (ASI).
> twitter hype is out of control again.
> we are not gonna deploy AGI next month, nor have we built it.
> we have some very cool stuff for you but pls chill and cut your expectations 100x!
I realize he wrote a fairly goofy blog a few weeks ago, but this tweet is unambiguous: they have not achieved AGI.
Do you think Sam Altman ever sits in front of a terminal trying to figure out just the right prompt incantation to get an answer that, unless you already know it, has to be verified? Serious question. I personally doubt he uses OpenAI products day to day. All of this seems very premature. But if there are gains to be made from a 7T-parameter model, or if there is huge adoption, maybe it will be worth it. I'm sure there will be uses for increased compute in general, but that's a lot of capex to recover.
> Is the key idea here that current AI development has figured out enough to brute force a path towards AGI? Or I guess the alternative is that they expect to figure it out in the next 4 years...
Can't answer that question, but, if the only thing to change in the next four years was that generation got cheaper and cheaper, we haven't even begun to understand the transformative power of what we have available today. I think we've felt like 5-10% of the effects that integrating today's technology can bring, especially if generation costs come down to maybe 1% of what they currently are, and latency of the big models becomes close to instantaneous.
I really don't understand the national security argument. If you really do fear some fundamental AI breakthrough from China, what's cheaper: $500 billion to rush to get there first, or a few billion (and likely much less) on basic research in physics, materials science, and electronics, mixed with a little espionage, plus improving the electric grid and eliminating (or greatly reducing) fossil fuels?
Ultimately, the breakthrough in AI is going to come either from eliminating bottlenecks in computing, such that we can simulate many more neurons much more cheaply (in other words, 2025-level technology scaled up is not going to be necessary or sufficient), or from some fundamental research discovery, such as a new transformer-like paradigm. Either way, these feel like theoretical discoveries that, whoever makes them first, the other "side" can trivially steal or absorb.
I'm not sure I buy the national security argument, but as you say, the other side can trivially steal or absorb theoretical discoveries; they cannot trivially get $500bn worth of data centers.
Right, but $500 billion in data centers alone is not likely to get you very far in the grand scheme of things. Endlessly scaling up today's technology eventually hits some kind of limit. And if you spend that money to discover some theoretical breakthrough that no longer requires the $500 billion outlay, then like I said, China will trivially be able to steal that breakthrough and spend much less than $500 billion to reproduce it. Is "getting there first" going to actually be worth it? That's what I'm questioning.
It's unfair, because we are talking in hindsight about everything but Project Stargate, and it's also just your list (and I don't know what others could add to it), but it got me thinking. The Manhattan Project's goal was to make a powerful bomb. Apollo's was to get to the Moon before the Soviets did (so, hubris, but still a concrete goal). The South-North Water Transfer is pretty much terraforming, and the others are mostly roads. I mean, it's all kind of understandable.
And the Stargate Project is... what exactly? What is the goal? To make Altman richer, or is there any more-or-less concrete goal to achieve?
Also, a few items for comparison that I googled while thinking about it:
The AI race is arguably just as important as the space race, and maybe even more so.
From a national security PoV, surpassing other countries’ work in the field is paramount to maintaining US hegemony.
We know China performs a ton of corporate espionage, and likely research in this field is being copied, then extended, in other parts of the world. China has been more intentional in putting money towards AI over the last 4 years.
We had the CHIPS Act, which is tangentially related, but nothing as comprehensive as this. And for a couple of years, I think, the climate impact of data centers caused active political slowdown under the previous administration.
Part of this is selling the project politically, so my belief is much of the talk of AGI and super intelligence is more marketing speak aimed at a general audience vs a niche tech community.
I’d be willing to predict that we’ll get some ancillary benefits to this level of investment. Maybe more efficient power generation? Cheaper electricity via more investment in nuclear power? Just spitballing, but this is an incredible amount of money, with $100 billion “instantly” deployed.
>what advantages do these rules bring to the winner?
An almost absolute incumbency advantage.
>what was the practical advantage of ascii or feet and knots
Familiarity. Americans and Britons speak English, and they wrote the rules in English. Everyone else after the fact needs to read English or GTFO.
Alternatively, think of it like this: Nvidia was the first to commercialize "AI" with CUDA. Now everyone in "AI" must speak CUDA or be irrelevant.
He who wins first writes the rules; runners-up and below obey them.
This is why America and China are fiercely competing to be first past the post, so that one of them will write the rules. This is why Japan and Europe insist they will write the rules, never mind the fact that they aren't even in the race (read: they won't write the rules).
The goal is Artificial Superintelligence (ASI), based on short clips of the press conference.
It has been quite clear for a while that we'll shoot past human-level intelligence, since we learned how to do test-time compute effectively with RL on LMMs (Large Multimodal Models).
Look, making up a three-letter acronym doesn't make whatever it stands for a real thing. Not even real in the sense that "it exists," but real in the sense that "it is meaningful." And assigning that acronym to a project doesn't make it a goal.
I'm not claiming that AGI, ASI, AXY, or whatever is "impossible" or something. I claim that no one who uses these words has any fucking clue what they mean. A "bomb" is some stuff that explodes. A "road" is some surface flat enough to drive on. But "superintelligence"? There's no good enough definition of "intelligence," let alone "artificial superintelligence." I unironically always thought a calculator is intelligent in a sense, and if it is, then it's also unironically superintelligent, because I cannot multiply 20-digit numbers in my head. Well, it wasn't exactly "general," but neither are humans, and it's an outdated acronym anyway.
So it's fun and all when people are "just talking," because making up bullshit is a natural human activity and somebody's profession. But when we are talking about the goal of a project, it implies something specific, measurable... you know, that SMART acronym (since everybody loves acronyms so much).
Also, "Dario Amodei says what he has seen inside Anthropic in the past few months leads him to believe that in the next 2 or 3 years we will see AI systems that are better than almost all humans at almost all tasks"
Not saying you're necessarily wrong, but "Anthropic CEO says that the work going on at Anthropic is super good and will produce fantastic results in 2 or 3 years" is not necessarily telling of anything.
Dario said in mid-2023 that his timeline for achieving "generally well-educated humans" was 2-3 years. o1 and Sonnet 3.5 (new) have already fulfilled that requirement in terms of Q&A, ahead of his earlier timeline.
I'm curious about that. Those models are definitely more knowledgeable than a well educated human, but so is Google search, and has been for a long time. But are they as intelligent as a well educated human? I feel like there's a huge qualitative difference. I trust the intelligence of those models much less than an educated human.
The paper you linked claims on page 10 that machines have been performing comparably on the task since 2012, so I'm not sure exactly what the paper is supposed to show in this context.
Am I to conclude that we've had a comparably intelligent machine since 2012?
Given the similar performance between GPT4 and O1 on this task, I wonder if GPT3.5 is significantly better than a human, too.
Sorry if my thoughts are a bit scattered, but it feels like that benchmark shows how good statistical methods are in general, not that LLMs are better reasoners.
You've probably read and understood more than me, so I'm happy for you to clarify.
The figure also shows that the non-LLM algorithm from 2012 was as capable as, or more capable than, a human: was it as intelligent as a well-educated human?
If not, why is the study sufficient evidence for the LLM, but not sufficient evidence for the previous system?
Again, it feels like statistical methods are winning out in general.
> Perhaps it’s better that you ask a statistician you trust
Maybe we can shortcut this conversation by each of us simply consulting O1 :^)
1) It’s an example of a domain where an LLM can do better than humans. A 2012 system was not able to do the myriad other things LLMs can do, and thus did not qualify as general intelligence.
2) As mentioned in the chart label, earlier systems require manual symptom extraction.
3) This thread by a cancer genomics faculty member at Harvard might open some minds:
“….Now, back to today: The newest generation of generative deep learning models (genAI) is different.
For cancer data, the reason these models hold so much potential is exactly the reason why they were not preferred in the first place: they make almost no explicit data assumptions.
These models are excellent at learning whatever implicit distribution from the data they are trained on
Such distributions don’t need to be explainable. Nor do they even need to be specified
When presented with tons of data, these models can just learn, internalize & understand…..”
Yeah, I'm not sure why we're pretending this will benefit the public. The only benefit is that it will create employment, and datacenter workers are among the lowest-paid tech workers in the industry.
"Unnamed sources told Bloomberg in April that The Line is scaling back from 170 kilometers long to just 2.4 kilometers, with the rest of the length to be completed after 2030. Neom expects The Line to be finished by 2045 now, 15 years later than initially planned."
Where are they getting the $500B? SoftBank's market cap is $84B and their entire Vision Fund is only $100B, Oracle has only $11B cash on hand, and OpenAI has only raised $17B in total...
That's their total fund, and I doubt they are going all-in with it in the US. Still, to reach $500bn they need $125bn every single year. I think they just put down the numbers they want to "see" invested, and now they'll be looking for backers. I don't think this is going anywhere, really.
This would be a large outlay even for the UAE, which would be giving it to a direct competitor in the space: the UAE is one of the few countries outside the US that is in any way serious about AI.
There doesn't appear to be any timeline announced here. The article says the "initial investment" is expected to be $100bn, but even that doesn't mean $100bn this year.
If this is part of SoftBank's existing plan to invest $100bn in AI over the next four years, then all that's being announced here is that Sama and Larry Ellison wanted to stand on a stage beside Trump and remind people about it.
Softbank is being granted a block of TRUMP MEMES, the price of which will skyrocket when they are included in the bucket of crypto assets purchased as part of the crypto reserve.
>> Where are they getting the $500B? Softbank's market cap is 84b and their entire vision fund is only $100b, Oracle only has $11b cash on hand, OpenAI's only raised $17b total...
1. The outlays can be over many years.
2. They can raise debt. People will happily invest at modest yields.
Oracle's cash on hand is presumably irrelevant; I think they are on the receiving end of the money, in return for servers. No wonder Larry Ellison was so fawning.
Is this a good investment for SoftBank? Who knows; they did invest in Uber, but they also have many bad investments.
The moon program cost $318 billion in 2023 dollars; this one is $500 billion. So that's why the tech barons present at the inauguration were high as a kite yesterday: they just got the financing for a real moon shot!
> Other partners in the project include Microsoft, investor MGX and the chipmakers Arm and NVIDIA, according to separate statements by Oracle and OpenAI.
It appears this basically locks out Google, Amazon, and Meta. Why are we declaring OpenAI the winner? This is like declaring Netscape the winner before the dust had settled. Having the govt involved in this manner can't be a good thing.
Since the CEOs of Google, Amazon, and Meta were seated in the front row of the inauguration, IN FRONT OF the incoming cabinet, I'm pretty confident their techno-power-barrel will come via other channels.
Interestingly, there seems to be no actual government involvement aside from the announcement taking place at the White House. It all seems to be private money.
Government enforcing or relaxing/fast-tracking regulations and permits can kill or propel even a $100B project, and can thus be thought of as having its own value on the scale of the project's monetary investment, especially under a will/favor/whim-based government rather than a hard-rules-based deep-state one.
Isn't that a state and local-level thing, though? I can't imagine that there is much federal permitting in building a data center, unless it is powered by a nuclear reactor.
> Still, the regulatory outlook for AI remains somewhat uncertain as Trump on Monday overturned the 2023 order signed by then-President Joe Biden to create safety standards and watermarking of AI-generated content, among other goals, in hopes of putting guardrails on the technology’s possible risks to national security and economic well-being.
I generally agree that government sponsorship of this could be bad for competition. But Google in particular doesn't necessarily need outside investment to compete with this. They're vertically integrated in AI datacenters and they don't have to pay Nvidia.
They don't have to spend $500B to compete. Their costs should be much lower.
That said, I don't think they have the courage to invest even the lower amount that it would take to compete with this. But it's not clear if it's truly necessary either, as DeepSeek is proving that you don't need a billion to get to the frontier. For all we know we might all be running AGI locally on our gaming PCs in a few years' time. I'm glad I'm not the one writing the checks here.
This seems to be getting lost in the noise of the stampede for infrastructure funding:
DeepSeek V3 at $5.5M of training compute, and now R1 a few weeks later hitting o1 benchmark scores with a fraction of the engineers involved... and open source.
We know model prep/training compute has potentially peaked for now, with some smaller models starting to perform very well as inference improves by the week.
Unless some new RL concept is going to require vastly more compute for a run at AGI soon, it's possible the capacity being built on an extrapolation of 2024 numbers will exceed the 2025 actuals.
Also, I can see many enterprises wanting to run on-prem, at least initially.
They’re a big company. You could tell a story that they’re less efficient than OpenAI and Nvidia and therefore need more than $500b to compete! Who knows?
Probably not a popular opinion, but I actually think Google is winning this now. Deep Research is the most useful AI product I have used (and Claude is significantly more useful than OpenAI).
How involved is the government at all? I’m still having a hard time seeing how Trump or anyone in the government is involved except to do the announcement. These are private companies coming together to do a deal.
I am not sure if OpenAI will be the winner despite this investment. Currently, I see various DeepSeek AI models as offering much more bang for the buck at a vastly cheaper cost for small tasks, but not yet for large context tasks.
The actual press release makes it clearer that this isn't a lockout of any kind and that there's no direct government involvement. SoftBank and some other backers persuaded by SoftBank are ponying up $500B for OpenAI to invest in AI. Trump is hyping it up from the sidelines because "OpenAI says this will be good for America". It's basically just another day in the world of press releases and political pundits commenting on press releases.
I hear this joked about sometimes, or used as a metaphor, but in the literal sense of the phrase, are we in a cold war right now? These kinds of dollars feel "defense-y," if that makes sense. Especially with the big focus on energy, whatever that ends up meaning. Defense as a motivation can get a lot done very fast, so it will be interesting to watch, though it raises the hair on my arms.
Right, but they've been doing that for a while, to everyone. The US is much quieter about it, right? You can see how the government would not want to display that level of investment within itself, as it could be interpreted as a sign of aggression; but it makes sense to me that they'd have no issue working through corporations to achieve the same ends, now able to deny direct involvement.
I don't think this administration is worried too much about showing aggression. If anything they are embracing it. Today was the first full day, and they have already threatened the sovereignty of at least four nations.
You know those booths at events where money is blown around and the person inside has to grab as much as they can before the timer runs out? This is that machine for technologists, until the bubble ends. The fallout in 2-3 years is the problem of whoever invested, or is holding the bags, when (if?) the bubble pops.
That was literally my question. Is this basically just more datacenters, Nvidia chips, and electricity, with a sprinkling of engineers to run it all? If so, then that $500bn should NOT be invested in today's tech, but instead in making more powerful and power-efficient chips, IMO.
Nvidia and TSMC are already working on more powerful and efficient chips, but the physical limits of scaling mean that a lot more power is going to be used by each new generation of chips. They might improve by offering specific features such as FP4, but Moore's law is still dead.
$500bn of usefully deployed engineering, mostly software, seems like it would put AMD far ahead of Nvidia. Actually usefully deploying large amounts of money is not so easy, though, and this would still go through TSMC.
I'll make a wild guess that they will be building data centers and maybe robotics labs. They are starting with $100B of committed money, mostly from SoftBank, and probably not yet transacted.
> building new AI infrastructure for OpenAI in the United States
The carrot is probably something like: we will build enough compute to make a super-intelligence that will solve all the problems, ???, profit.
If we look at the processing requirements found in nature, I think the main trend in AI going forward is going to be doing more with less, not doing less with more, which is where the current scaling is headed.
Thermodynamic neural networks may also basically turn everything on its ear, especially if we figure out how to scale them like NAND flash.
If anything, I would estimate that this is a space-race type effort to “win” the AI “wars”. In the short term, it might work. In the long term, it’s probably going to result in a massive glut in accelerated data center capacity.
The trend of technology is towards doing better than natural processes, not doing it 100000x less efficiently. I don’t think AI will be an exception.
If we look at what is -theoretically- possible using thermodynamic wells, with current model architectures, for instance, we could (theoretically) make a network that applies 1t parameters in something like 1cm2. It would use about 20watts, back of the napkin, and be able to generate a few thousand T/S.
Operational thermodynamic wells have already been demonstrated in silicon. There are scaling challenges, cooling requirements, etc., but AFAIK no theoretical roadblocks to scaling.
Obviously, the theoretical doesn’t translate to results, but it does correlate strongly with the trend.
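For what it's worth, here's the back-of-the-napkin arithmetic behind those figures. All inputs are the assumptions from the comment above, not measured hardware numbers, and the GPU comparison at the end is equally rough:

    # Back-of-the-napkin check of the thermodynamic-well numbers above.
    PARAMS = 1e12            # 1T parameters applied per token (assumed)
    POWER_W = 20.0           # 20 W power envelope (assumed)
    TOKENS_PER_S = 2_000     # "a few thousand T/s" (assumed)

    ops_per_s = PARAMS * TOKENS_PER_S        # parameter applications per second
    joules_per_op = POWER_W / ops_per_s

    print(f"{ops_per_s:.1e} param-ops/s")    # 2.0e+15
    print(f"{joules_per_op:.1e} J per op")   # 1.0e-14 J, i.e. ~10 femtojoules

    # Very rough comparison: a GPU doing ~1e15 FLOP/s at ~700 W spends
    # ~7e-13 J per op, so the claim implies roughly a 70x energy advantage.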
So the real question is, what can we build that can only be done if there are hundreds of millions of NVIDIA GPUs sitting around idle in ten years? Or alternatively, if those systems are depreciated and available on secondary markets?
Reasonably speaking, there is no way they can know how they plan to invest $500 billion. The current generation of large language models basically uses all human text that's ever been created for its parameters... not really sure where you go after that using the same tech.
That's not really true - the current generation, as in "of the last three months", uses reinforcement learning to synthesize new training data for itself: https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero
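At its simplest, the data-synthesis side of this is rejection sampling against a verifiable reward: sample candidate solutions, keep the ones a mechanical checker accepts, and train on the survivors. A toy sketch of that idea follows; the arithmetic task, the simulated sampler, and the acceptance rule are all illustrative, not DeepSeek's actual pipeline:

    import random

    def make_problem():
        a, b = random.randint(2, 99), random.randint(2, 99)
        return f"{a}*{b}", a * b          # (prompt, ground-truth answer)

    def sample_answer(truth):
        # Stand-in for sampling from the current model: right ~30% of the time.
        return truth if random.random() < 0.3 else truth + random.randint(1, 9)

    def verify(truth, proposed):
        return proposed == truth          # binary, mechanically checkable reward

    kept = []
    for _ in range(10_000):
        prompt, truth = make_problem()
        guess = sample_answer(truth)
        if verify(truth, guess):          # rejection sampling: keep verified traces
            kept.append((prompt, guess))

    print(f"kept {len(kept)} verified examples for the next training round")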
Right, but that's kind of the point: there's no way forward that could benefit from "moar data". In fact it's weird we need so much data now - i.e. my son, in learning to talk, hardly needs to have read the complete works of Shakespeare.
If it's possible to produce intelligence from just ingesting text, then current tech companies have all the data they need from their initial scrapes of the internet. They don't need more. That's different to keeping models up to date on current affairs.
> Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT.
The latest hype is around "agents", everyone will have agents to do things for them. The agents will incidentally collect real-time data on everything everyone uses them for. Presto! Tons of new training data. You are the product.
It seems to me you could generate a lot of fresh information by running every YouTube video, every hour of TV on archive.org, and every movie on The Pirate Bay through scene-by-scene image captioning plus high-quality Whisper transcriptions (not whatever junk auto-transcription YouTube has applied), and use that to produce screenplays of everything anyone has ever seen.
I'm not sure why I've never heard of this being done; it would be a good use of GPUs in between training runs.
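A rough sketch of what that pipeline could look like with off-the-shelf parts; the model choices and the input path are placeholder assumptions, and a real version would batch frames and align captions with transcript timestamps:

    import cv2                              # pip install opencv-python
    import whisper                          # pip install openai-whisper
    from PIL import Image
    from transformers import pipeline       # pip install transformers

    VIDEO = "some_video.mp4"                # hypothetical input file

    # Dialogue track: Whisper transcribes the audio.
    transcript = whisper.load_model("base").transcribe(VIDEO)["text"]

    # Visual track: caption one frame every ~10 seconds.
    captioner = pipeline("image-to-text",
                         model="Salesforce/blip-image-captioning-base")
    captions = []
    cap = cv2.VideoCapture(VIDEO)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % int(fps * 10) == 0:
            rgb = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            captions.append(captioner(rgb)[0]["generated_text"])
        frame_idx += 1
    cap.release()

    # A crude "screenplay": scene descriptions plus dialogue.
    print("SCENES:", captions[:5])
    print("DIALOGUE:", transcript[:500])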
The fact that OpenAI can just scrape all of YouTube, and Google isn't even taking legal action or attempting to stop it, is wild to me. Is Google just asleep?
What are they going to use to sue - the DMCA? OpenAI (and others) are scraping everything imaginable (MS is scraping private GitHub repos…) - I don't think anyone in the current government will be regulating any of this anytime soon.
Such a biased source of data - that gets them all the LaTeX source for my homework, but not my professor's grading of the homework, and not the invaluable words I get from my professor at office hours. No wonder the LLMs have bizarre blindnesses in different directions.
> a lot of fresh information from running every youtube video
EVERY youtube video?? Even the 9/11 truther videos? Sandy Hook conspiracy videos? Flat earth? Even the blatantly racist? This would be some bad training data without some pruning.
The best videos would be those where you accidentally start recording and you get 2 hours of naturalistic conversation between real people in reality. Not sure how often they are uploaded to YouTube.
Part of the reason that kids need less material is that they aren't just listening; they are also able to do experiments to see what works and what doesn't.
If I understand correctly: if you are training a model to perform a particular task, in the end what matters is the training data - and by and large, different models will converge on the best representation of that data for the given task, given enough compute.
So that means the models themselves aren't really IP - they are inevitable outputs from optimising using the input data for a certain task.
I think this means pretty much everyone, apart from the AI companies, will see these models as pre-competitive.
Why spend huge amounts training the same model multiple times, when you can collaborate?
Note it only takes one person/company/country to release an open source model for a particular task to nuke the business model of the companies hoarding them.
"create hundreds of thousands of American jobs"... Given the current educational system in the US, this should be fun to watch. Oh yeah, Musk and his H-1B Visa thing. Now it's making sense.
How does this work out in the long term? Operating a data center does not require that many blue-collar workers.
I'm imagining a future where the US builds a Tower of Babel from thousands of data centers just to keep people employed and occupied. Maybe also add in some paperclip factories¹?
This is what the 2024 Nobel prize winners in economics call "creative destruction", to borrow from their book Why Nations Fail. They really did not have a lot of sympathy for those they lumped in with the Luddites as collateral damage of progress.
Both. It's a lot of electrical work and HVAC work (think ducting, plumbing, more electric), plus tons of concrete work.
Once you have one working design per environment (e.g. hot desert vs. cold and humid), you can stamp the things out with minimal variation.
The maintenance of all of that supporting infrastructure is the same standard blue-collar work.
The only new blue-collar job on the maintenance side is responding to hardware issues. What this entails depends on whether it's a colo center where you're doing "remote hands" for a customer, swapping a PSU, RAM, or whatever. You also install new servers, switches, etc.
As you move up to hyperscalers, the logistics vary because some designs make servicing a single server in place not worth cooling the whole hot aisle (Google runs really hot hot aisles that aren't human-friendly). So sometimes you just yank the server and throw it in a cart, or wait for the whole rack to fail and pull it then.
Overall, though, anything that can be done remotely is. So the data center techs do very little work at the keyboard.
After they build the Multivac or Deep Thought, or whatever it is they’re trying to do, then what happens? It makes all the stockholders a lot of money?
The way I think about this project, along with all of Trump's plans, is that he wants to maximize the US's economic output to ensure we are competitive with China in the future.
Yes, it would make money for stockholders. But it's much more than that: it's an empire-scale psychological game for leverage in the future.
> he wants to maximize the US's economic output to ensure we are competitive with China in the future.
LOL
Under Trump policies, China will win "in the future" on energy and protein production alone.
Once we've speedrunned our petro supply and exhausted our agricultural inputs with unfathomably inefficient protein production, China can sit back and watch us crumble under our own starvation.
No conflict necessary under these policies, just patience! They're playing the game on a scale of centuries, we can't even stay focused on a single problem or opportunity for a few weeks.
> Once we've speedrunned our petro supply and exhausted our agricultural inputs with unfathomably inefficient protein production, China can sit back and watch us crumble under our own starvation.
China is the largest importer of crude oil in the world. China imports 59% of its oil consumption and 80% of its food products. Meanwhile, the US is fully self-sufficient in both food and oil.
> They're playing the game on a scale of centuries
Is that why they are completely broke, having built enough ghost buildings to house the entire population of France - 65 million vacant units? Is that why they are now isolated in geopolitics, having allied with Russia and pissed off all their neighbors and Europe?
China's oil reserves would only last 80 days. In case of any conflict that disrupts oil imports, China would shut down very quickly. Since you brought up crumbling and starvation.
America can subject itself to domestic and international turmoil by invading as many allies as it wants. China's winning strategy is still to keep innovating on energy and protein at scale and wait for American starvation while they build their soft-power empire and America becomes a pariah state. They're in no rush at all.
Our military and political focus will be keeping neighbors out on one side and trying to seize land on the other, while China goes and builds infrastructure for the entire developing world that it will exploit for centuries.
Is this a serious suggestion? America can just keep invading people ad infinitum instead of... applying slight thumb pressure on the market's scales to develop more efficient protein sources and more renewable fuel sources before we are staring at the last raw economic input we have?
China is dead broke and will shrink to 600M in population before 2100. State-owned enterprises are eating up all the private enterprises. Meanwhile, China's rich leave the country by the tens of thousands per year, and capital outflows increase every year.
America isn't invading Greenland or Canada. Taking those comments seriously takes quite a bit of mental gymnastics when you do a cursory consideration of the geopolitical and government logistical implications alone. Makes for good clickbait headlines, not for serious geopolitical risk analysis.
Unfortunately, this is one of those things that authoritarianism has a lot more methods to solve than other systems do, which really underscores the importance of beating them in the long term.
Their current very advanced method is to send village elders to couples and single guys to berate them about why they are not having sex or having kids (hint: no jobs and no money).
Things can always change, but today China is significantly more dependent on petrochemicals than the US. I'm not sure what you're referring to with regards to agriculture, both the US and China have strong food industries that produce plenty of foods containing protein.
In 2023 China added more net new solar capacity than the US has in total, and it will only climb from there. To do this, they're flexing muscles in R&D and mass production that the US has only just started to flex, and which will now face extreme headwinds and decreased capital investment.
Regarding agriculture: America's agricultural powerhouse, California's Central Valley, is rapidly depleting its water supplies. The Midwest is depleting its topsoil at double the rate the USDA considers sustainable.
None of this is irreversible or irrecoverable, but it very clearly requires some countervailing push on market forces. Market forces do not naturally operate on these types of time scales and repeatedly externalize costs to neighbors or future generations.
It sounds like those countervailing pushes are ongoing? The Nature article mentions how California passed regulatory reforms in 2014 to address the Central Valley water problem. The Smithsonian article describes how no-till practices to avoid topsoil depletion have been implemented by a majority of farmers in four major crops.
Uhhh I’m going to describe a specific case, but you can extrapolate this to just about every single sustainability initiative out there.
No-till farming has been significantly supported by USDA programs like EQIP.
During his first term, Trump pushed for a $325MM cut to EQIP. That's 20-25% of its funding and would have required cutting hundreds if not thousands of employees.
Even BEFORE these cuts (and whatever he does this time around), the USDA already has to reject almost 75% of eligible EQIP applicants.
Regarding CA’s water: Trump already signed an EO requiring more water be diverted from the San Joaquin Delta into the desert Central Valley to subsidize water-intensive crops. This water, by the way, is mostly sold to mega-corps at rates 98% below what nearby American consumers pay via their municipal water supplies, effectively eliminating the blaring sirens that say “don’t grow shit in the desert.”
Now copy-paste to every other mechanism by which we can increase our nation’s climate security and ta-da, you’ve discovered one of the major problems with Trumpism. It turns out politics do matter!
But why are programs like this controversial, even though anything shaped like a farm subsidy is normally popular? It seems to me that things like your Central Valley analysis are precisely the reason. The Central Valley has been one of the nation's agricultural heartlands for a while, and for quite a few common food products represents 90%+ of domestic production. So if this "blaring siren" you describe is real, and we have to stop farming there, a realistic response plan would have to include an explanation of what all the farmers are going to do and where we'll get almonds and broccoli from.
Perhaps you know all this already, but a lot of people who advocate such policies don't seem to. This then feeds into skepticism about whether they're hearing the "blaring siren" correctly in the first place. Personally, I think nearly arbitrarily extreme water subsidies are worth it if that's what we need to keep olives and pomegranates and celery in stock at the grocery store.
The solution is to rely on the magic of prices to gradually push farming elsewhere while simultaneously investing heavily in more efficient farming practices and shifting our diet away from ultra-inefficient meat production.
You really DON’T need to centrally plan everything. The market will still find good solutions under the new parameters, but we need those parameters to change before we’re actually out of water.
I think that coming down from 5T to 0.5T means that TSMC cannot be reproduced locally, but everything else is on the table. At least TSMC has a serious roadmap for its Arizona fab facility, so that too is domestically captured, although not its latest gen fab.
Because tech CEOs have decided to go all-in on fascism as they see it's a way to make money. Bow to Trump, get on his good side, reap the benefits of government corruption.
It's why TikTok thanked Trump in their boot-licking message of "thanks, trump" after he was the one who started the TikTok ban.
A harder question is: why wouldn't billionaires like Trump and his oligarchic kleptocracy?
I'm sure they're getting tax credits for the investment (none of the articles I can find actually detail the US government's involvement), but the project is mostly just a few multinationals setting up a datacenter where their customers are.
It seems early for this sort of move. This is also a huge spin on the whole thing that could throw a lot of people off.
Are there any planned future partnerships? Stargate implies something about movies and astronomy. Movies in particular have a lot of military influence, but not always.
So, what's the play? Help mankind or go after mankind?
If one is expecting an AGI breakthrough in the next few years, this is exactly the pre-positioning move one would make to be able to capitalize maximally on that breakthrough.
From my perspective, humanity has all the breakthroughs in intelligence it needs.
The breaking of the Enigma gave humans machines that can spread knowledge to more humans. It already happened a long time ago, and it caused much trouble, but we endured the hardest part (knowing when to stop), and humans live in a good world now. Full of problems, but way better than it was before.
I think the web is enough. LLMs are good enough.
This move to try to draw water from stone (artificial intelligence in silicon chips) seems to be overkill. How can we be sure it's not a siphon that will make us dumber? Before you just dismiss me or counter my arguments, consider what is happening everywhere.
Maybe I'm wrong, or not seeing something. You know, like I believed in aliens for a long time. This move to artificial intelligence causes shock and awe in a similar way. However, while I do believe aliens do not exist, I am not sure artificial intelligence is a real strawman. It could be the case that it is not made of straw, and if it is more than that, we might have a problem.
I am especially concerned because, unlike other polemic topics, this one could lead to something non-human that fully understands those previous polemic topics. Humans, through their generations, forget and mythologize those fantasies. We don't know what non-humans could do with that information.
I have been thinking about these issues for a long time - almost a decade, even before LLMs running on silicon existed. If it wanted, a non-human artificial intelligence could wipe the floor with humans just by playing to their favorite myths. Humans do it on a small scale. If machines learn it, we're in for an unknown hostile reality.
It could, for example, perceive time differently from us (also a play on myths), and do all sorts of tricks with our minds.
LLMs and the current generation of artificial intelligence are boolean first; it's what they run on. Only true or false bits and gates. Humans can understand the meaning of "trulse", though; we are very non-boolean.
So, yeah, I am worried about booleaning people on a massive scale.
To get out from under OpenAI’s considerable obligation to Microsoft.
That is why there is the awkward “we’ll continue to consume Azure” sentence in there. Will be interesting to see if it works or if MS starts revving up their lawyers.
Not sure how they knew to buy them, or why, but they have them. They mostly seem to be lending them out - I think mostly to OpenAI. Or was it MS? One of the big dogs.
Still, the worst-positioned cloud provider to tackle this job, both for the project and for the eventual users of whatever eldritch abomination comes out of this.
> The new entity, Stargate, will start building out data centers and the electricity generation needed for the further development of the fast-evolving AI in Texas, according to the White House.
Wouldn't a more northern state be a better location given the average temperatures of the environment? I've heard Texas is hot!
> All three credited Trump for helping to make the project possible, even though building has already started and the project goes back to 2024.
It's sad to see the president of the US being ass-kissed so much by these guys. I always assumed there was a little of that, but this is another extreme. If this is true, I fear America has become like a third-world country with a dictator-like head of state, where everyone just praises him and gets favors in return.
"SoftBank, OpenAI, Oracle, and MGX" seems like quite the lineup. Two groups who are good at frivolously throwing away investment money because they have so much capital to deploy, there really isn't anything reasonable to do with it, a tech "has-been" and OpenAI. You become who you surround yourself with I guess.
Is there any government investment or involvement in this company? It seems like it's all private investment, so I'm confused why this is being announced by the President.
It will be interesting to see how AWS responds. Jump on board, or offer up a competing vision otherwise their cloud risks being perceived as being left behind in terms of computing power.
Texas has been the leading state in new grid battery and grid solar installations for the last three years, and Governor Abbott also deregulated nuclear power last year. Abilene is near the Dallas-Fort Worth Metroplex, which has a massive 8M+ upper-income population highly skilled in hardware and electrical engineering (Texas Instruments, Raytheon, Toyota, etc). The entire area has massive tracts of open land that are affordably priced without building restrictions. Business regulations and the tax environment at the state and city level are very laissez-faire (none of the construction taxes found in the Seattle area or many parts of California).
I could see DFW being a good candidate for a prototype arcology project.
Like dwnw said, anything goes in Texas if you have money and there’s already a decent number of qualified tech workers. Corporate taxes are super low as well.
Some reports[0] paint this as something Trump announced and that the US Government is heavily involved with, but the announcement only mentions the private sector (and led by Japan's SoftBank at that). Is the US also putting in money? How much control of the venture is private vs public here?
You probably still need to train the initial models in data centers, with local hosts mostly being used to run trained models. At most we'd augment trained models with local data storage on the local host.
If compute continues to become cheaper, local training might be feasible in 20 years.
You definitely still need data centers to train the models that you’ll run locally. Also if we achieve AGI you can bet it won’t be available to run locally at first.
Isn't it better to control robots from the data center? You can get a 30ms round trip to most urban centers, which is low enough latency for most tasks; you get lower-weight, lower-cost robots with better battery life, and more uptime on the compute (e.g. the GPU isn't sitting there doing nothing while the user is sleeping), which means a lower cost to the consumer for the same end result.
For self-driving you need edge compute because a few milliseconds of latency is a safety risk, but for many applications I don't see why you'd want that.
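The latency arithmetic behind that 30ms figure, for what it's worth; every number here is an illustrative assumption:

    # Rough latency budget for driving a robot from a remote data center.
    C_FIBER_KM_PER_S = 200_000      # light in fiber is ~2/3 of c
    DISTANCE_KM = 1_000             # robot to data center, one way (assumed)

    propagation_rtt_ms = 2 * DISTANCE_KM / C_FIBER_KM_PER_S * 1_000
    network_overhead_ms = 10        # routing, queuing, last mile (assumed)
    inference_ms = 10               # model forward pass per step (assumed)

    total_ms = propagation_rtt_ms + network_overhead_ms + inference_ms
    print(f"control loop round trip: ~{total_ms:.0f} ms")   # ~30 ms

    # ~30 ms is fine for picking up a cup; it is not fine for a car at
    # 30 m/s, which covers ~0.9 m in that window - hence edge compute
    # for self-driving.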
Why are corporations announcing business deals from the White House? There doesn’t seem to be any public ownership/benefit here, aside from potential job creation. Which could be significant. But the American public doesn’t seem to gain anything from this new company.
This isn't an overseas trip though. It's a private partnership announced by the sitting president in the Roosevelt room, literally across the hall from the oval office. I don't know how unprecedented that truly is, but it certainly feels unusual.
It will. The short-term sale is that it will create thousands of temporary jobs; long-term, it will reduce hundreds of thousands of jobs while handing the savings to the stockholders.
Looks on pace to eliminate every human job over 10 years.
What is the hard limiting factor constraining software and robots from replacing any human job in that time span? Lots of limitations of current technology, but all seem likely to be solved within that timeframe.
>> Ingka says it has trained 8,500 call centre workers as interior design advisers since 2021, while Billie - launched the same year with a name inspired by IKEA's Billy bookcase range - has handled 47% of customers' queries to call centres over the past two years.
Do you expect all companies to retrain? Do you expect CEOs to be wrong? Do you expect AI to stay the same, get better, or get worse? I never claimed that new jobs will NOT be created - that remains to be seen - but jobs will be lost to AI.
>> “For a company like BT there is a huge opportunity to use AI to be more efficient,” he said. “There is a sort of 10,000 reduction from that sort of automated digitisation, we will be a huge beneficiary of AI. I believe generative AI is a huge leap forward; yes, we have to be careful, but it is a massive change.”
The US is now officially a full-on oligarchy. It always was one; it's just that the powers that be don't care to hide it anymore and are flaunting that they have the power.
This is my question too, but I haven't seen a journalist ask it yet. My baseless theory: Trump has promised them some kind of antitrust protections in the form of legislation to be written & passed at a later date.
An announcement of a public AI infrastructure program joined by multiple companies could have been a monumental announcement. This one just looks like three big companies getting permission to make one big one.
Easier: Trump likely committed that the federal agencies wouldn't slow-roll regulatory approval (for power, for EIS, etc.).
Ellison stated explicitly that this would be "impossible" without Trump.
Masa stated that this (new investment level?) wouldn't be happening had Trump not won, and that the new investment level was decided yesterday.
I know everyone wants to see something nefarious here, but simplest explanation is that the federal government for next four years is expected to be significantly less hostile to private investment, and - shocker - that yields increased private investment.
That is a better one. I don't know why three rich guys investing in a new company would result in a slowness that Trump could fix, though, and a promise to rush or sidestep regulatory approval still sounds nefarious.
If the announced spending target is true, this will be a strategic project for the US exceeding Biden's stimulus acts in scale. I think it would be pretty normal in any country to have highest-level involvement for projects like this. For example, Tesla has a much smaller revenue than this and Chancellor Olaf Scholz was still present when they opened their Gigafactory near Berlin.
Here is what I think is going on in this announcement. Take the 4 major commodity cloud companies (Google, Microsoft, Amazon, Oracle) and determine: do they have big data centers and do they have their own AI product organization?
- Google has a massive data center division (Google Cloud / GCP) and a massive AI product division (Deep Mind / Gemini).
- Microsoft has a massive data center division (Azure) but no significant AI product division; for the most part, they build their "Copilot" functionality atop their partner version of the OpenAI APIs.
- Amazon has a massive data center division (Amazon Web Services / AWS) but no significant AI product division; for the most part, they are hedging their bets here with an investment in Anthropic and support for running models inside AWS (e.g. Bedrock).
- Oracle has a massive data center division (Oracle Cloud / OCI) but no significant AI product division.
Now look at OpenAI by comparison. OpenAI has no data center division, as the whole company is basically the AI product division and related R&D. But, at the moment, their data centers come exclusively from their partnership with Microsoft.
This announcement is OpenAI succeeding in a multi-party negotiation with Microsoft, Oracle, and the new administration of the US Gov't. Oracle will build the new data centers, which it knows how to do. OpenAI will use the compute in these new data centers, which it knows how to do. Microsoft granted OpenAI an exception to their exclusive cloud compute licensing arrangement, due to this special circumstance. Masa helps raise the money for the joint venture, which he knows how to do. US Gov't puts its seal on it to make it a more valuable joint venture and to clear regulatory roadblocks for big parallel data center build-outs. The current administration gets to take credit as "doing something in the AI space," while also framing it in national industrial policy terms ("data centers built in the USA").
The clear winner in all of this is OpenAI, which has politically and economically navigated its way to a multi-cloud arrangement, while still outsourcing physical data center management to Microsoft and Oracle. Probably their deal with Oracle will end up looking like their deal with Microsoft, where the trade is compute capacity for API credits that Oracle can use in its higher level database products.
OpenAI probably only needs two well-capitalized hardware providers competing for their CPU+GPU business in order to have a "good enough" commodity market to carry them to the next level of scaling, and now they have it.
Google increasingly has a strategic reason not to sell OpenAI any of its cloud compute, and Amazon could be headed in that direction too. So this was more strategically (and existentially) important to OpenAI than one might have imagined.
How have they already selected who gets this money? Usually the government announces a program and tries to be fair when allocating funds. Here they are just bankrolling an existing project. Interesting.
> building new AI infrastructure for OpenAI in the United States
That's nice, but if I were spending $500bn on datacenters I'd probably try to put a few in places that serve other users. Centralised compute can only get you so far in terms of serving users.
Last time, in 2016, SoftBank announced a $50B investment in the US... what were the results of that? Granted, SB announced an upsized $100B investment earlier; is this not similar in "announcement"?
"""
SoftBank’s CEO Masayoshi Son has previously made large-scale investment commitments in the US off the back of Trump winning a presidential election. In 2016, Son announced a $50 billion SoftBank investment in the US, alongside a similar pledge to create 50,000 jobs in the country.
...
However, as reported by Reuters, it’s unclear if the new jobs pledged back in 2016 ever came to fruition and questions have been raised about how SoftBank, which had $29 billion in cash on its balance sheet according to its September earnings report, might fund the investment.
"""
It made me laugh when Sam said "I'm thrilled that we get to do this in the United States of America". I shouted at the TV, "Yeah, you almost had to do it in Saudi Arabia!"
I hope the Japanese government demands seismic isolation for SoftBank; otherwise it will be the Japanese citizens who have to foot the bill when this hype hits the ground and shakes the Japanese economy hard :/
Softbank should not be allowed to invest more than ARM Holdings sold at a loss.
You know, I expected that they'd find or synthesize some naquadah to build an actual stargate and maybe even defeat the Goa'uld. The exciting stuff, not AI.
I put the word "some" in front of "crypto" for a reason.
There is some crypto that we know how to break with a sufficiently large quantum computer [0]. There is some we don't know how to break that way. I might be behind the state of the art here, but when I wasn't, we really only knew how to use quantum computers to break cryptography that Shor's algorithm breaks.
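For the curious, here is the classical skeleton of Shor's algorithm. The quantum speedup lives entirely in the period-finding step, which this sketch brute-forces, so it demonstrates the reduction from factoring to period finding rather than any speedup:

    from math import gcd

    def find_period(a, n):
        # Smallest r > 0 with a^r = 1 (mod n). Quantum hardware would do
        # this step in polynomial time; brute force is exponential.
        r, x = 1, a % n
        while x != 1:
            x = (x * a) % n
            r += 1
        return r

    def shor_factor(n, a):
        if gcd(a, n) != 1:
            return gcd(a, n)             # lucky: a shares a factor with n
        r = find_period(a, n)
        if r % 2:
            return None                  # odd period: try another a
        y = pow(a, r // 2, n)
        if y == n - 1:
            return None                  # trivial root: try another a
        return gcd(y - 1, n)

    print(shor_factor(15, 7))   # 3 (period of 7 mod 15 is 4; gcd(7^2 - 1, 15) = 3)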
Nope. Any crypto you can break with a real, physical, non-imaginary quantum computer, you can break faster classically. Get over it. Shor's doesn't run yet and probably never will.
You are misdirecting and you know it. I don't even need to discredit that paper. Other people have done it for me already.
This is like asking whether $500 billion to fund warp drives would yield better returns.
Money can't buy fundamental breakthroughs: money buys you parallel experimental volume - i.e. more people working from the same knowledge base, and presumably an increase in the chance that one of them advances the field. But at any given point in time, everyone is working from the same baseline (money can also improve this: by funding things you can ensure knowledge is distributed more evenly, so everyone is working at the state of the art rather than playing catch-up in proprietary silos).
True quantum computing in the sense that most people would imagine it, using individual qubits in an analogous (ish) way to classical computers, has not reached a useful scale. To date only “toy problems” to demonstrate theoretical results have been solved.
There's a good amount of irony in the results AI has achieved, particularly if we reach AGI - it has improved individual worker efficiency by removing other workers from the system. Naming it Stargate invites a reckoning with the actual series itself - an accomplishment by humanity. Instead, what this pushes is accomplishing the removal of humans from humanity. I like cool shiny tech, but I like useful tech that really helps humans more. Work on 3D-printing sustainable food, or something actually useful like that. Jensen doesn't need another 1B gallons of water under his belt.
> Instead, what this pushes is accomplishing the removal of humans from humanity.
If you buy the marketing, yeah. But we aren't really seeing that in the tech sector. We haven't seen it succeed in the entertainment sector... it's still fighting for relevance in the medical and defense industries too. The number and quality of jobs that AI has replaced is probably still quite low, and it will probably remain that way even after Stargate.
AI is DOA. LLMs have no successor, and the transformer architecture hit its bathtub curve years ago.
> Jensen doesn't need another 1B gallons of water under his belt.
Jensen gets what he wants because he works with the industry. It's funny to see people object to CUDA and Nvidia's dominance but then refuse to suggest an alternative. An open standard managed by an independent and unbiased third-party? We tried that, OEMs abandoned it. NPU hardware tailor-made for specific inference tasks? Too slow, too niche, too often ends up as wasted silicon. Alternative manufacturer-specific SDKs integrated with one high-level library? ONNX tried that and died in obscurity.
Nvidia got where they are today by doing exactly what AMD and Apple couldn't figure out. People give Jensen their water because it's wasted in anyone else's hands.
Uh, they invented multi-head latent attention, and since the method for creating o1 was never published, they're the only documented example of producing a model of comparable quality. They also demonstrated massive gains in the performance of smaller models through distillation of this model/these methods. So no, not really. I know this is the internet, but we should try not to just say things.
ChatGPT may be better than Google Search in content, but at the end of the day you have to make money, and the last report I saw, ChatGPT is burning through money at a prodigious rate.
> Technology advancing more quickly year over year?
> That’s a crazy notion and I’ll be sure everyone knows.
The version I heard from an economist was something akin to a second industrial revolution, where the pace of technological development increases permanently. Imagine a transition from Moore's law-style doubling every year and a half, to doubling every week and a half. That wouldn't be a true "singularity" (nothing would be infinite), but it would be a radical change to our lives.
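To put rough numbers on that comparison (the doubling times are the illustrative ones above, not predictions):

    # Annual growth factors implied by the two doubling times.
    WEEKS_PER_YEAR = 52

    moore_like = 2 ** (WEEKS_PER_YEAR / 78)           # doubling every ~18 months
    weak_singularity = 2 ** (WEEKS_PER_YEAR / 1.5)    # doubling every 1.5 weeks

    print(f"18-month doubling: ~{moore_like:.2f}x per year")        # ~1.59x
    print(f"1.5-week doubling: ~{weak_singularity:.2e}x per year")  # ~2.7e10x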
> The pace of technological development has always been permanently increasing.
Not in the same way though. The pace of technological development post-industrial-revolution increased a lot faster - technological development was exponential both before and after, but it went from exponential with a doubling time of maybe a century, to a Moore's law style regime where the doubling time is a couple of years. Arguably the development of agriculture was a similar phase change. So the point is to imagine another phase change on the same scale.
You keep mentioning Moore's law, but that specifically applied to the number of transistors on a die, not the rate of general technological advancement.
Regardless, I don’t see any change in this pattern. We’re advancing faster than ever before, just like always.
We’ve been doing statistical analysis and prediction for years now. It’s just getting better faster, like always.
I don’t see this big change in the rate of advancement. There’s just a lot more media buzz around it right now causing a bubble.
There was a big visible jump in text generation capabilities a few years ago (which was preceded by about 6 years of incremental NLP advances) and since then we’ve seen paced, year over year advances in that field.
As a medical layman, I imagine that AlphaFold may really push the rate of pharmaceutical advances.
But I see no indication of a general jump in the rate of technological advancement.
> that specifically applied to the number of transistors on a die, not the rate of general technological advancement.
Sure. But you can look at things like GDP growth rates and see the same thing.
> I don’t see this big change in the rate of advancement. There’s just a lot more media buzz around it right now causing a bubble.
Maybe. I'm just trying to give a sense of what the concept of a "weak singularity" is. I don't have a view on whether we're actually going to have one or not.
It was rumoured in early 2024 that "Stargate" was planned to require 5GW of data centre capacity[1][2], which in early 2024 was the entire data centre capacity Microsoft had already built[3]. Data centre capacity costs between USD$9-15m/MW[6], so 5GW of new data centre capacity would cost USD$45b-75b, but let's pick a more median cost of USD$12m/MW[6] to arrive at USD$60b for 5GW of new data centre capacity.
This 5GW of data centre capacity very roughly equates to 350,000 NVIDIA DGX B200 systems (with 14.3kW maximum power consumption[4] and a USD$500k price tag[5]), which, if NVIDIA were selected, would result in a very approximate total procurement of USD$175b from NVIDIA.
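Reproducing that back-of-envelope arithmetic, using only the figures cited above (not official project data):

    CAPACITY_GW = 5
    BUILD_COST_PER_MW = 12e6        # USD, median of the $9-15m/MW range
    DGX_B200_KW = 14.3              # maximum power per system
    DGX_B200_PRICE = 500e3          # USD

    build_cost = CAPACITY_GW * 1_000 * BUILD_COST_PER_MW     # $60b
    n_systems = CAPACITY_GW * 1e6 / DGX_B200_KW              # ~349,650
    hardware_cost = n_systems * DGX_B200_PRICE               # ~$175b
    remainder = 500e9 - build_cost - hardware_cost           # ~$265b

    print(f"data centre shells: ${build_cost / 1e9:.0f}b")
    print(f"DGX B200 systems: {n_systems:,.0f} units, ${hardware_cost / 1e9:.0f}b")
    print(f"left over from $500b: ${remainder / 1e9:.0f}b")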
On top of the empty data centres and DGX B200s, in the remaining (potential) USD$265b we have to add:
* Networking equipment / fibre network builds between data centres.
* Engineering / software development / research and development across 4 years to design, build and be able to use the newly built infrastructure. This was estimated in mid 2024 to cost OpenAI US$1.5b/yr for retaining 1500 employees, or USD$1m/yr/employee[7]. Obviously this is a fraction of the total workforce needed to design and build out all the additional infrastructure that Microsoft, Oracle, etc would have to deliver.
* Electricity supply costs for current/initial operation. As an aside, these costs would seemingly not be competitive with global competitors if the USA decides to avoid the cheapest method of generation (renewables) and instead prefers the more expensive generation methods (nuclear, fossil fuels). It is, however, worth noting that China currently has ~80% of solar PV module manufacturing capacity and ~95% of wafer manufacturing capacity.[10]
* Costs for obtaining training data.
* Obsolescence management (4 years is a long time, after which equipment will likely need to be completely replaced due to obsolescence).
* Any other current and ongoing costs of Microsoft, Oracle and OpenAI that they'll likely roll into the total announced amount to make it sound more impressive. As an example this could include R&D and sustainment costs in corporate ICT infrastructure and shared services such as authentication and security monitoring systems.
The question we can then turn to is whether this rate of spend can actually be achieved in 4 years.
Microsoft is planning to spend USD$80bn building data centres in 2025[7], with 1.5GW of new capacity to be added in the first six months of 2025[3]. This USD$80bn planned spend covers more than "Stargate" and includes all their other business units that require data centres to be built, so the total required spend of USD$45b-75b to add 5GW of data centre capacity is unlikely to be achieved quickly by Microsoft alone - hence the apparent reason for Oracle's involvement. However, Oracle is only planning US$10b of capital expenditure in 2025, equating to ~0.8GW of capacity expansion[9]. The data centre builds will be schedule-critical for the "Stargate" project, because equipment can't be installed and turned on, and large models trained (a lengthy activity), until the data centres exist. And data centre builds are heavily dependent on electricity generation and transmission expansion, which is slow.
> The buildout is currently underway, starting in Texas, and we are evaluating potential sites across the country for more campuses as we finalize definitive agreements.
For those interested, it looks like Albany, NY (upstate NY) is very likely one of the next growth sites.
I'm also curious how a global leader in multimodal generative AI chose this particular image. Did they prompt a generator for a super messy impressionist painting of red construction cranes with visible brush strokes, distorted to the point of barely being able to discern what the image represents?
Considering Stargate's introduction and plan seems to be a super messy concept of impressions of ideas and very lacking in details, the picture makes a lot of sense. Let AI evangelists see the future in the fuzz; let AI pessimists see failure in the abstract; let investors see $$$ in their pockets.
For me it's watching a gay man grovel at the feet of one of the most anti-LGBT politicians, a day after Trump signed multiple executive orders that dehumanized Altman and the LGBT community. Every token thinks they're special until they're spent.
>For me it's watching a gay man grovel at the feet of one of the most anti-LGBT politicians
Besides what ImJamal said, as a wealthy playboy man-about-town hanging out at Studio 54 in the '70s and '80s, I guarantee Trump has known and been friends with more gays than 95% of Americans. Certainly there has been no shortage of gay people among his top-level appointees in either his first or second administrations.
Trump was the first president to come into office supporting gay marriage. Trump only has a problem with the "t" part of the community and only in bathrooms and sports, not in general.
This could potentially trigger an AI arms race between the US and China. The standard has been set; let's see what China responds with. Either way, it will accelerate the arrival of ASI, which in my opinion is probably a good thing.
It will be similar to the space race between the Soviet Union and the US. And just like the Soviet Union going broke and collapsing, China too will go even more broke and collapse.
Texas has been the leading state in new grid batteries and grid solar for three years now, and Governor Abbott deregulated nuclear last year. Sure, there will be some new natural gas too, which is the least scary fossil fuel. They call it the "all of the above" approach to energy.
Personally, I wish they had invested in optical photonic computing, taking it out of the research labs. It can be so much more energy-efficient and faster to run than GPUs and TPUs.
No amount of money invested in infrastructure is going to solve the "garbage in, garbage out" problem with AI, and it looks like the AI companies have already stolen the vast majority of content that is possible to steal. So this is basically a massive gamble that some innovation is going to make AI do something better than faultily regurgitate its training data. I'm not seeing a corresponding investment which actually attempts to solve the "garbage in, garbage out" problem.
A fraction of this money invested in building homes would end the homelessness problem in the U.S.
I guess the one silver lining here is that when the likely collapse happens, we'll have more clean energy infrastructure to use for more useful things.
$500 billion is a lot of money even by US government standards. It's about the size of all the new spending in the 2021 bipartisan infrastructure bill.
The political will is consumed by trying to balance a large existing debt at increasing interest rates, a significant primary deficit even in a good economy, rising military threats from China, a strong Republican desire for tax cuts, extremely popular entitlement programs that no one wants to touch, and an aging population with a declining birthrate.
Modern monetary systems function through two main channels: government spending and bank lending. Every dollar in circulation originates from one of these sources - either government fiscal operations (deficit spending) or bank credit creation through loans. This means all money is fundamentally based on debt, though "debt" has very different implications for a currency-issuing government versus private borrowers.
Government debt operates fundamentally differently from household debt since the government controls its own currency. As former Fed Chairman Alan Greenspan noted to Congress, the U.S. can always meet any obligation denominated in dollars since it can create them. The real constraints aren't financial but economic - inflation risk and the efficient allocation of real resources.
The key question then becomes one of political priorities and public understanding. If public opposition to beneficial government spending stems from misunderstanding how modern monetary systems work, then better education about these mechanisms could help advance important policy goals. The focus should be on managing real economic constraints rather than imaginary financial ones.
Yes, people hate inflation, because inflation creates a demand for more money! Inflation means there is not enough money for people. So why did prices go up, is it just because of fiscal spending?
The relationship between inflation and monetary policy is more complex than often portrayed. While recent inflation has created financial strain for many Americans, its root causes extend beyond simple money supply issues.
Recent data shows that corporate profit margins reached historic highs during the inflationary period of 2021-2022. For example, in Q2 2022, corporate profits as a percentage of GDP hit 15.5%, the highest level since the 1950s. This surge in corporate profits coincided with the aftermath of Trump's 2017 Tax Cuts and Jobs Act, which reduced the corporate tax rate from 35% to 21%. This tax reduction increased after-tax profits and may have given companies more flexibility to pursue aggressive pricing strategies.
Multiple factors contributed to inflation:
* Supply chain disruptions created genuine scarcity in many sectors, particularly semiconductors, shipping, and raw materials
* Demand surged as economies reopened post-pandemic
* Many companies used these market conditions to implement price increases that exceeded their cost increases
* The corporate tax environment created incentives for profit maximization over price stability
For instance, many large retailers reported both higher prices and expanded profit margins during this period. The Federal Reserve Bank of Kansas City found that roughly 40% of inflation in 2021 could be attributed to expanded profit margins rather than increased costs.
This pattern suggests that market concentration, pricing power, and tax policy played significant roles in inflation, alongside traditional monetary and supply-chain factors. Policy solutions should therefore address market structure, tax policy, and monetary policy to effectively manage inflation.
> This project will ... also provide a strategic capability to protect the national security of America and its allies.
> All of us look forward to continuing to build and develop ... AGI for the benefit of all of humanity.
Erm, so which one is it? It is amply demonstrable from events post-WW2 that the US and its allies are quite far from benefiting all of humanity; in fact, in some cases, they assist an allied minority at an extreme cost to a condemned majority, for no discernible humanitarian reason save for some perceived notion of "shared values".
In context, Pelosi has been pro-nuclear for at least 16 years, having spoken for nuclear and nuclear investment in 2008, as reported by the American Enterprise Institute.
God forbid anyone would invest $500,000,000,000 to create jobs. No no no. 500 billion to destroy them for "more efficiency" so the owner class can get richer.
I watched the announcement live; I could have sworn the SoftBank guy said "initial investment of 100 MILLION, we hope to EARN 500 BILLION by the end of your (Trump's) term".
Gave me a real "this is just smoke and mirrors hiding the fact that the white house is now a glory hole for Trump to enjoy" feel.
The Silicon Valley bubble universe continues to introduce entropy that it then feeds off of itself... Naming this Stargate, when one of the largest effects AI has had is removing humans from processes to make other, fewer humans more efficient, is emblematic of this hollow naming ethos: continuing to use the portal to shunt more and more humans out of the process that is humanity, with fairly reckless abandon. Who is Ra, and who is sending the nuke where, in this naming scheme? You decide.
Altman said we will be amazed at the rate at which AI will CURE diseases. Not diagnose, not triage or help doctors, but cure; i.e. understand at a deep, fundamental, mechanistic level and then devise therapies - drugs, combinations of drugs, and care practices - that work. WOW.
Despite the fact that this is THE thing I'd be happiest to see in the real world (having spent a considerable amount of my career at companies working toward this vision), we are so far from it (as anyone who has actually worked on these problems will attest) that Altman's comment here isn't just overselling; it's a blatant lie about this tech's capabilities.
I guess the pitch was something like: "hey, o3 can already do PhD-level maths, so you know in 5 years it will be able to do drugs too, and cure shit, Mr President".
Trouble is, o3 can't do advanced math (or at least definitely not at the level OpenAI claimed... it was a lie; it turns out OpenAI funds the dataset that measures this - ouch). And the bigger problem is, going from "AI can do maths" to "invent cures" is about a 10-100x jump. If it weren't, don't we think the pharma companies would have solved this by hiring lots of "really smart math guys"?
As anyone in biotech will tell you, the hard bit is not the first third of the drug discovery pipeline (where 99% of AI-driven biotechs focus). It's the later parts, where the rubber meets the road - i.e. where your precious little molecule is out in the real world with real people, and the incredible variability of real biological hosts makes most drugs fail spectacularly. You can't GPT your way out of this. The answers are not in science papers you can just read and regurgitate into a version that "solves biology and cures diseases".
To solve this you need AI, but most of all you have to do science. Real science. In the lab, in vitro and in vivo, not just in silico - doing ablation studies, overfitting famous benchmark datasets, and the other pseudo-science shit the ML community is used to doing.
That is all to say, I'd bet we won't see a single purely-AI-designed novel drug in the clinic this decade. All parts of that sentence are important: purely AI designed; novel. But that's for another post...
Now, back to Altman. If you watch the clip, he almost did the smart thing at first when Trump put him on the spot, as if to say "I have no idea about healthcare or biotech (or AI beyond board room drama)", but then he could not resist coming up with this outlandish, insane answer.
Famously (in tech circles, anyway), Paul Graham wrote more than a decade ago that Altman is the most strong-willed individual he's ever met, someone who can just bend the universe to his will. That's his super skill. And clearly, convincing SoftBank and Oracle to make this $500 billion investment in OpenAI (a non-profit turned for-profit) is an unbelievable achievement. I have no idea what Altman can say (or do) in board rooms that unlocks these possibilities for him. Any ideas? Let me know!
> This project will not only support the re-industrialization of the United States but also provide a strategic capability to protect the national security of America and its allies.
> The initial equity funders in Stargate are SoftBank, OpenAI, Oracle, and MGX. SoftBank and OpenAI are the lead partners for Stargate, with SoftBank having financial responsibility and OpenAI having operational responsibility. Masayoshi Son will be the chairman.
I'm sorry, has SoftBank suddenly become an American company? I feel like I'm taking crazy pills reading this.
Japanese companies were a threat just a couple of weeks ago:
There is credible evidence that leads me to believe that (1) Nippon Steel Corporation, a corporation organized under the laws of Japan . . . might take action that threatens to impair the national security of the United States;
Sometimes the person writing the copy is writing it because they talk good, not because they are the biggest proponent of the idea.
Give a clever, articulate person a task to write about something they don't believe in and they will include the subtlest of barbs, weak praise, or both.
I thought this meant it was $500 billion in government money.
Some of these companies do have huge cash reserves they don't know what to do with so if it is $500 billion of private money, I am not going to complain.
I will believe it when I see it, though, and that this isn't $100 billion in private money with a $400 billion free US-government put option for the "private" investors if things don't go perfectly.
Texas has a... unique energy market (literally! They don't connect to the national grid so they can avoid US Government regulation - that way it's not interstate commerce). Because of that, spot prices fluctuate very wildly up and down depending on the weather, demand, and their large quantity of renewables (Texas is good for solar and wind energy). When the weather is good for renewables they have very cheap electricity (lots of production, and they can't sell to anyone outside the state); when the weather is bad they can have incredibly expensive electricity (less production, and they can't buy from anyone outside the state). Larger markets, able to pull from larger pools of producers and consumers, just fluctuate less.
I know some bitcoin miners liked to be in Texas and basically operated as energy speculators: when electricity was cheap they would mine bitcoin; when it was expensive they shut down their plant - sometimes they even got paid by producers to shut down their plant! I would bet that you could do a lot of that with AI training as well, given good checkpointing.
You wouldn't want to do inference there (which needs to be responsive and doesn't like 'oh this plant is going to shut down in one minute because a storm just came up') but for training it should be fine?
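A sketch of what that price-responsive training controller might look like; the price feed, the thresholds, and the training hooks are all made up for illustration:

    import random

    PAUSE_ABOVE = 80.0     # $/MWh: checkpoint and stop above this (assumed)
    RESUME_BELOW = 40.0    # $/MWh: restart below this; the gap is hysteresis

    def get_spot_price():
        # Stand-in for a real ERCOT spot-price feed (hypothetical numbers).
        return random.uniform(10, 120)

    def save_checkpoint(step):
        print(f"checkpointing at step {step}")   # e.g. torch.save(...) in real life

    step, paused = 0, False
    for _ in range(1_000):                       # bounded loop for the sketch
        price = get_spot_price()
        if not paused and price > PAUSE_ABOVE:
            save_checkpoint(step)                # durable state before dropping load
            paused = True
        elif paused and price < RESUME_BELOW:
            paused = False                       # power is cheap again
        if not paused:
            step += 1                            # stand-in for one training step
        # in real life: sleep between price polls while paused, or even
        # sell the curtailed load back as demand response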
No state income tax, fewer regulations (zoning, environmental regulations) than other parts of the country, relatively cheap power, large existing industrial base. For skilled labor that last bit is important. Also one of the cheapest states wrt minimum wage (same as federal, nothing added), which is important for unskilled labor.
Depending on the part of the state, relatively low costs of living which is helpful if you don't like paying people much. Large areas that are relatively undeveloped or underdeveloped which can mean cheaper land.
You'd really think that arguably the leader in generative AI could come up with a unique project name instead of ripping off something extant and irrelevant.
But then again that's their entire business, so I shouldn't be too surprised.
I mean the entire AI thing is built atop mass plagiarism and stealing things others have created indiscriminately. I doubt Mr Worldcoin could come up with an original thought for anything, seeing how his models behave.
We changed the URL from https://openai.com/index/announcing-the-stargate-project/ to a third-party report. Readers may want to read both. If there's a better URL, we can change it again.
Ellison should be nowhere near this:
https://arstechnica.com/information-technology/2024/09/omnip...
The man has the moral system of a private prison and the money to build one.
Money? If they take it I can get more. The government is trying to take our freedom, permanently.
> "Citizens will be on their best behavior because we are constantly recording and reporting everything that's going on," Ellison said, describing what he sees as the benefits from automated oversight from AI and automated alerts for when crime takes place.
Wow! It is genuinely frightening that these people should be in control of our future!
Literal 'new world order' stuff here. Alex Jones and crew got so excited that their guy was in the driver's seat that they didn't notice the actual illuminati lizard people space lasers being deployed.
I don't think we'll ever have a zero-crime society, nor should we aim for one. But being left to the vagaries of police (and union) politics, culture and the complications of city budgets is clearly broken.
Example: Cities are being presented a false choice between accepting deadly high speed chases vs zero criminal accountability [1], which in the world of drones seems silly [2]
I don't want the police to have unfettered access to surveil any and all citizens but putting camera access behind a court warrant issued by a civilian elected judge doesn't feel that dystopian to me.
Is that what Ellison was alluding to? I have no idea, but we are no longer in a world where we should disregard this prima facie.
[1]: https://www.ktvu.com/news/controversial-oakland-police-pursu...
[2]: https://www.cbsnews.com/sanfrancisco/news/san-francisco-poli...
We keep saying people like him shouldn't be involved in certain ventures, and yet, they still are. More than ever, actually.
2025 is shaping up to be When the Villains Win year.
> "Citizens will be on their best behavior because we are constantly recording and reporting everything that's going on,"
Let's be honest. He isn't wrong. I'd rather live in a society with zero crime than what we have now.
Sorry to break it to you, but oppressing people with cameras to prevent crime will only push the crime to where the cameras aren't.
This makes preventing the crime, and protecting people from its effects, extremely difficult.
There's a few that have tried to implement this, and I want to live in none of them.
The US will fare no better if it walks down this path, and honestly will likely fare worse for its cultural obsession with individualism over community.
Yes, we have historically low crime. It's unbearable.
There are a number of countries that might give you a panopticon state if you want one.
Yeah, historically low crime because a lot of the crime is not considered crime anymore. Why are thousands of stores closing in California?
Well and good as a talking point, but violent crime is still illegal and way down.
Just be prepared to be never daring to complain; a zero crime society isn't without its faults.
You stop abuse in this country, particularly of children, and you start having zero violent crime a decade later.
If you're lucky, you might get your chance to live in Thiel's and Ellison's techbro utopia. Make sure to tell us how great it is to be subjected to people with no accountability, but all of the power over every aspect of your life.
So it's like having a policeman on every street and corner, except the policeman's biases are set by these four oligarchs.
Welcome to... choose among many of the technodystopias in literature.
Just Ellison alone brings the unwelcome feeling of having Oracle craziness forced down our collective throats, but I share your concern about the unholy alliance forming in front of us.
My immediate reaction to the announcement was one of these is not like the others. OpenAI, a couple of big investment funds, Microsoft, Nvidia, and...............Oracle?
Oracle has a lot of valuable classified information about the state and its enemies due to its business.
Oracle makes perfect sense in that they are 1) a massive datacenter company, and 2) sell a variety of SaaS products to enterprises, which is a major target market for AI.
Oracle has 2-3% market share as a Cloud Provider.
MSFT or even Google (AWS is not as mature in that space imho) would make perfect sense; Oracle doesn't.
Elon and Larry are good friends, I would guess that has something to do with this development.
> Oracle has 2-3% market share as a Cloud Provider.
And the market leader is what, 30%? about 1 order of magnitude. That's not such a huge difference, and I suspect that Oracle's size is disproportionate in the enterprise space (which is where a lot of AI services are targeted) whereas AWS has a _ton_ of non-enterprise things hosted.
In any case, 2-3% is big enough that this kind of investment is 1) financially possible and 2) desirable, to grow to be #2 or #3.
Sadly, it is not that unexpected given some of his recent interviews[1]. Any other day, I would agree it is a surprise.
[1] https://arstechnica.com/information-technology/2024/09/omnip...
There is a certain reason that these last few weeks everybody and their grandma is simping for Trump. Nobody would want to be on his bad side right now. Moreover, we hear here and there that Trump "keeps his promises". A lot of the promises we do not know about, and maybe never will. These people did not spend money supporting his campaign for nothing. In other places and eras this would have been called corruption; now it is called "keeping his promises".
Trump is one of the most famous people in the world for not keeping promises of paying debts. But there is money to be made temporarily when he is running a caper, as long as you can get your hand in the pot before he steals it.
And you, are you simping for the Obidens of this world?
Corruption is as old as mankind; don't know why it's pointed out prominently. Just look at that Xipeng/Biden photo from the National Archives.
If your knee-jerk response to any political discussion even remotely critical of 'your guy' is to snap into whataboutism instead of participating in the conversation, you might need an outrage-pornography detox for a while.
> And you, are you simping for the Obidens of this world?
Did I?
> Corruption is as old as mankind
Yeah, but seldom celebrated or boasted about.
> There is a certain reason that last weeks everybody and their grandma is simping for Trump. Nobody would want to be on his bad side
It's worth keeping in mind how extremely unfriendly to tech the last admin was. At this point, it's basically proven in court that emails of the form "please deboost person x or else" were sent, and there's probably plenty more we don't know about.
Combine that with the troubles in Europe which Biden's administration was extremely unwilling to help with, the obstacles thrown in the way of major energy buildouts, which are needed for AI... one would have to be stupid to be a tech CEO and not simp for Trump.
Tech has been extremely Democratic for many years. The Democrats have utterly alienated tech, and now they reap the consequences.
> the troubles in Europe
Nice euphemism for giving people autonomy in their data and privacy.
Most of these companies are so large that they cannot really fail anymore. At this point it has very little to do with protecting themselves, and more with making them more powerful than governments. JD Vance said that the US could drop support for NATO if Europe tries to regulate X [1]. Oligarchs have fully infiltrated the US government and are trying to do the same to other countries.
I disagree with the grandparent. They don't support Trump because they do not want to be on his bad side (well, at least not only that), they support Trump because they see the opportunity to suppress regulation worldwide and become more powerful than governments.
We just keep making excuses (fiduciary duties, he just doesn't know how to wave his arm because he's an autist [2]). Why not just call it what it is?
[1] https://www.independent.co.uk/news/world/americas/us-politic...
[2] Which is pretty offensive to people on the spectrum.
I do agree that a big part of why they support Trump is for anti-regulation reasons. But it is also a fact that Trump is one of them, a businessman, not a politician. With Trump they can now discuss more business and less policy. There is a certain amount of deal-making going on right now that seems not at all transparent. And against that backdrop, the amount of public simping is really weird compared to what usually happens: everybody praising Trump even before he took office, even TikTok, "coming out" as whatever, etc.
Oligarchs want less regulation, but they also want these beefy government contracts. They want weaker government to regulate them and stronger government to protect them and bully other countries. Way I see it, what they actually want is control of the government, and with Trump they have it (more than before).
> Tech has been extremely Democratic for many years. The Democrats have utterly alienated tech, and now they reap the consequences.
Well, on the other side it can be said that Big Tech wasn't really on the side of democracy (note: democracy, not the Democrat Party) itself, and it hasn't been for years - at the very least ever since Cambridge Analytica was discovered. The "big tech" sector has only looked at profit margins, clicks, eyeballs and other KPIs while completely neglecting its own responsibility towards its host, and it got treated as the danger it posed by the Biden administration and Europe alike.
As for the cryptocoin world that has also been campaigning for the 45th: they are an even worse cancer on the world. Nothing but a gigantic waste of resources (remember the prices of GPUs, HDDs and RAM going through the roof, coal power plants being reactivated?), rug pulls and other scams.
The current shift towards the far-right is just the final masks falling off. Tech has rather (openly) supported the 45th than to learn from the chaos it has brought upon the world and make at least a paper effort to be held accountable.
Yes, big tech was the kid caught in the corner cleaning out the cookie jar, and it threw a tantrum when one parent moved the jar out of reach as punishment in an effort to help the industry learn self-control. Now the other parent has come home and has not only returned the cookie jar to the kid but pledged to bring them packs of cookies by the shipping container to gorge on in exchange for favors.
We have more energy and are pumping more domestic oil than ever. We are a major exporter of LNG. Trump just killed EV subsidies, and electric charging network funding.
What are you talking about via Europe? Holding tech companies accountable for meddling in domestic politics? Not allowing carte blanche with user data?
I understand (though do not like) large corps tiptoeing around Trump in order to manipulate him, it is due to fear. Not due to Trump having respectable values.
This is a Military project. Have no doubts about it.
This is a money making scheme.
Mostly benefiting the fossil fuel industry. How are they going to power this? Gas is the only option that can be implemented within a few years. And this is going to need a lot of power.
Who cares about the planet, anyway.
There will probably be a clause mandating that a given percentage of the power consumed come from coal, guaranteeing a minimum level of continued coal generation and providing excellent talking points to broadcast to the incumbent's base.
For $500bn they can build a nuclear power plant dedicated to these data centres.
They can build a couple. With nuclear money is rarely the issue. It is that it takes forever because reasons.
It's not like the current admin respects the rule of law anyways...
Trump just rescinded licenses for offshore wind farms via an EO. We're fucking cooked (and I mean this literally)
Before downvoting the OP, and for more information, see:
https://apnews.com/article/wind-energy-offshore-turbines-tru...
https://www.utilitydive.com/news/trump-offshore-wind-leasing...
You need to stop this nonsense. Pollution is a long term problem, but it does not mean it is productive to do what Germany has done and cease development.
You need to stop this nonsense. The path we were on, that Trump has already overthrown, was nothing like Germany's.
Wealth redistribution scheme. Your tax dollars into their pockets.
As far as I can tell, this will be financed by private money. Can you elaborate?
Tax breaks, government forced to become a customer etc. the usual. Just like the astronauts to Mars thing will just shovel your money that might have gone to NASA into Musk's pocket.
> the usual. Just like the astronauts to Mars thing will just shovel your money that might have gone to NASA into Musk's pocket.
The difference is that Musk can do twice as much for 1/10 of what NASA thinks the program will cost (which is never what the program will actually cost), and Musk will do it in half the time to boot.
The guy is an unhinged manchild, but if what you care about is having your money well spent and getting to Mars as cheaply as possible, he's exactly who you're looking for.
I think you meant to type SpaceX. Which works as well as it does partly because Musk is kept at a careful length from the controls...
Tax breaks, i.e. my money not going into your pocket means it was stolen?
Tax breaks, i.e. a company extracting wealth from a community without paying into the systems that keep all the parts of that community running, forcing the community to ultimately subsidize that business's wealth extraction from them.
Companies do not extract value; they create value, which is then transferred to the people via the market through voluntary exchange (ideally). Where have you learned about those things? Oh, yeah, "community", i.e. Marx.
Tax breaks have basically the same effect as the government writing a check: they increase inflation.
This is utter nonsense. If 1000 people went to a deserted island with no government and no taxation, would that mean inflation would be plus infinity, or at least very high? Inflation is a monetary phenomenon; it happens when money is being printed.
In that case there would be no inflation or deflation, assuming a fixed money supply and no economic growth. However, the key here is that the government (the federal government, anyway) is spending money regardless of the tax break. Anytime the government writes a check, that's a little bit more money floating around; anytime the government collects some money, such as taxes, there's that much less money to be had. Every tax break causes the money supply to increase more relative to if the tax break did not exist, causing more inflation (or less deflation, if that were the case). If the government spent exactly as much as it taxed, then there would be... actually deflation, because the economy is growing. This is the basics of fiscal policy.
There's also the monetary policy, which is when the federal reserve does this on purpose. The general principle is the same, but instead it spends its money buying bonds and gets its money selling those bonds, and creates a bunch of rules about where banks keep their money so it always has some money on hand.
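A toy illustration of that first-order accounting, with numbers invented purely for the example:

    # Net money added to circulation = government spending - taxes collected
    spending  = 100   # government writes checks worth 100
    taxes     = 90    # taxes that would be collected without the break
    tax_break = 10    # revenue forgone by the break

    without_break = spending - taxes                # +10 enters circulation
    with_break    = spending - (taxes - tax_break)  # +20 enters circulation
    # First-order, the break leaves 10 more circulating -- the same effect
    # as the government writing an extra check for 10.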
Assuming the tax money has to come from somewhere at some point, those who pay taxes have to make up the shortfall from those who have tax breaks. So far the US just kicks that can down the road so...
That is a big assumption. Tax money need not be a constant. But for the sake of following the same logic: if companies pay bigger taxes, they also have to make up the shortfall. Actually, that is the much more accurate statement. Companies do not pay taxes; PEOPLE pay taxes. So taxes are paid either by the employees, the clients, or the owners (which in the case of big tech are generally common people). With high taxation you are hurting the customers, the workers, and the middle class saving for their retirement. Who wins the tax money: the state bureaucracy, corrupt politicians and the businesses around them, and people who live like parasites (or rather are forced to live like that, because they are electoral power).
What do you think NASA does with the money? It doesn't build a NASA house for its NASA babies.
The Mars walk is just 3 years away baby!
The best part about this answer is it's always true.
Related: GenAI, Cold fusion
Yup, and FSD.
3 months maybe, 6 months definitely.
Your tax dollars are the customer.
what's the difference
Not all money making schemes involve the military.
This has cosmological significance if it leads to superintelligence
It won't, unless there's another (r)evolution in the underlying technology / science / algorithms. At this point scaling up just means they use bigger datasets or more iterations, but that's more about fine-tuning and improving the existing output than coming up with a next generation / superintelligence.
> It won't unless there's another (r)evolution in the underlying technology / science
I think reinforcement learning with little to no human feedback, o1 / R1 style, might be that revolution.
Okay, but let’s be pessimistic for a moment. What can we do if that revolution does happen, and they’re close to AGI?
I don’t believe the control problem is solved, but I’m not sure it would matter if it is.
Being pessimistic, how come no human supergeniuses ever took over the world? Why didn't Leibniz make everyone else into his slaves?
I don't even understand what the proposed mechanism for "rogue AI enslaves humanity" is. It's sci-fi (and not hard sci-fi) as far as I can see.
> Being pessimistic, how come no human supergeniuses ever took over the world? Why didn't Leibniz make everyone else into his slaves?
We already did. Look at the state of animals today vs <1 mya. Bovines grown in unprecedented mass numbers to live short lives before slaughter. Wolves bred into an all-new animal, friendly and helpful to the dominant species. Previously apex predators with claws, teeth, speed and strength, rendered extinct.
Sometimes I wonder if we are going to be the unkillable plague that takes over the universe. Or maybe we will disappear in a blink. It's hard to know; we don't have any reference point except ourselves.
Destroying human life on Earth (the only habitable place in the solar system) is far, far easier than reaching anything outside the solar system.
Once you have one AGI, you can scale it to many AGI as long as you have the necessary compute. An AGI never needs to take breaks, can work non-stop on a problem, has access to all of the world's information simultaneously, and can interact with any system it's connected to.
To put it simply, it could outcompete humanity on every metric that matters, especially given recent advancements in robotics.
...so it can think really hard all the time and come up with lots of great, devious evil ideas?
Again, I wonder why no group of smart people with brilliant ideas has unilaterally imposed those ideas on the rest of humanity through sheer force of genius.
An equivalent advance in autonomous robotics would solve the force projection issue, if that's what you're getting at.
I don't know if this will happen with any certainty, but the general idea of commoditising intelligence very much has the ability to tip the world order: every problem that can be tackled by throwing brainpower at it will be, and those advances will compound.
Also, the question you're posing did happen: it was called the Manhattan Project.
And if this whole exercise turns out to be a flop and gets us absolutely nowhere closer to AGI?
“AGI” has proven to be today’s hot marketing stunt for when you need to raise another round of cash and your only viable product is optimism.
Flying cars were just around the corner in the 60s, too.
This thread started from a deliberately pessimistic hypothetical of what happens if AGI actually manifests, so your comment is misplaced.
Quite a few have succeeded in conquering large fractions of the Earth's population: Napoleon, Hitler, Genghis Khan, the Roman emperors, Alexander the Great, Mao Zedong. America and Britain as systems did so for long periods of time.
All of these entities would have been enormously more powerful with access to an AGI's immortality, sleeplessness, and ability to clone itself.
And of course the more society is wired up and controlled by computer systems, the more the AGI could directly manage it.
I can see what you're trying to say, but I cannot for the life of me figure out how an AGI would have helped Alexander the Great.
Alexander the Great made his conquests by building a really good reputation for war, then leveraging it to get tribute agreements while leaving the local governments intact. This is a good way to do it when communication lines are slow and unreliable, because the emperor just needs to check tribute once a year to enforce the agreements, but it's weak control.
If Alexander could have left perfectly aligned copies of himself in every city he passed, he could have gotten much more control and authority, and still avoided a fight by agreeing to maintain the local power structure with himself as the new head of state.
Oh, you're assuming an entire networking infrastructure as well. That makes way more sense, but the miracle there isn't AGI - without networking they'd lose alignment over time. Honestly, I feel like it would devolve into a patchwork of different kingdoms run by an Alexander figurehead... where have I seen this before?
The problem you're proposing could be solved via a high quality cellular network.
Look at any corporation or government to understand how a large group of humans can be driven to do specific things none of them individually want.
I consider many successful military leaders and politicians to be geniuses as well. In my books, Caesar is as genius as Newton!
Having said that, we do not need to understand the world to exploit it for ourselves. And what better way to understand and exploit the universe than science? It's an endearment.
> bigger datasets
Not even, they already ran out of data.
I am sure that the M.I.C. have a ton of classified data that could be used to train a military AI.
"this generation shall not pass"... to me that's about as credible as wanting to "preserve human consciousness" by going to Mars.
Setting the world on fire and disrupting societies gleefully, while basically building bunkers (figuratively more than literally) and consolidating surveillance and propaganda to ride out the cataclysm, that's what I'm seeing.
And the stories to sell people on continuing to put up with that are not even good IMO. Just because the people who use the story to consolidate wealth and control are excited about that, we're somehow expected to be excited about the promise of a pair of socks made from barbed wire they gave us for Christmas. It's the narcissistic experience: "this is shit. this benefits you, not me. this hurts me."
One thing is sure, actual intelligence, regardless of how you may define it, something that is able to reason and speak freely, is NOT what people who fire engineers for correcting them want. It's not about a sort of oracle for humanity to enjoy and benefit from, that just speaks "truth".
Don't worry, it'll only lead to superstupidity.
And superplagiarism of human-created content
I'm sure this will age well.
Is that the prequel to Idiocracy?
Of course. It's an arms race by definition, so it's all a military project. And already one whistleblower was brazenly murdered by our government to protect our horse in this race.
no whistleblower was murdered, ridiculous conspiracy theory
... If they build it under Cheyenne mountain you are definitely correct
I would love for Oracle to use AI to put their entire legal department out of work, though.
So you want them to be infinitely more litigious?
A serious question though, what does happen when AIs are filing lawsuits autonomously on behalf of the powerful, the courts clearly won't be able to cope unless you have AI powered courts too? None of how these monumental changes will work has been thought through at all, let's hope AI is smart enough to tell us what to do...
> A serious question though, what does happen when AIs are filing lawsuits autonomously on behalf of the powerful
It won't just be at the behalf of the powerful.
If lawyers are able to file 10x as many lawsuits per hour, the cost of filing a lawsuit is going to go down dramatically, and that's assuming a maximally-unfriendly regulatory environment where you still officially need a human lawyer in the loop.
This will enable people to e.g. use letters signed by an attorney at law, or even small claims court, as their customer support hotline, because that actually produces results today.
Nobody is prepared for that. Not the companies, not the powerful, not the courts, nobody.
Unless you can afford your lawsuit to take up substantial time on Stargate and make a much stronger case than your average Joe who is still using o1 for their lawsuits
I'm envisioning a future where there's a centralized "legal exchange", much like the NYSE, where high speed machines file micro-ligation billions of times faster than any human can, which is decided equally quickly, an unrelenting back and forth buzz of lawsuits and payouts as every corporation wages constant automated legal battle. Small businesses are consumed in seconds, destroyed by the filing of a million computerized grievances while the major players end up in a sort of zero-sum stalemate, where money is constantly moving, but it never shifts the balance of power.
... has anyone ever written a book about this? If not, I think I'm gonna call dibs.
Oracle could reasonably be hit with some sort of stick every time they filed a frivolous lawsuit until the AI got tuned appropriately. Then it'd be a situation where Oracle were continuously suing people who don't follow the law, following a reasonably neutral and well calibrated standard that is probably going to end up as similar to an intelligent and well practised barrister. That would be acceptable. If people aren't meant to be following the law that is a problem for the legislators.
> A serious question though, what does happen when AIs are filing lawsuits autonomously on behalf of the powerful,
AI-controlled cheap Chinese drones will start flying into their residences carrying trivial-to-make high explosives. With the class wars getting hotter in the next few years, we may be saying that Luigi Mangione had the right ideas towards the PMC, but was an underachiever.
What do you prefer? Letting DeepSeek and China lead the AI war? DeepSeek R1 is a big wake-up call: https://open.substack.com/pub/transitions/p/deepseek-is-comi...
Us vs. Them. My favorite perspective [0].
Regarding your question: yes. I'd prefer a healthy counterbalance to what we have currently. Ideally, I'd prefer cooperation. A worldwide cooperation.
[0]: https://pbs.twimg.com/media/B_AiI9_XIAA67_t.jpg
Treating the world as a bunch of football teams is a great distraction though.
Arguably the cooperation between the US and China has led to the most economic growth and prosperity in human history; it's a shame the US and China are returning to a former time.
From what I've read about DeepSeek and its founder, I would very much prefer them, even with China factored in. At least if these particular Four Horsemen are the only alternative.
On a tangential note, those who wish to frame this as the start of the great AI war with China (in which they regrettably may be right), should seriously consider the possibility of coming out on the losing end. China has tremendous industrial momentum, and is not nearly as incapable of leading-edge innovation as some Americans seem to think.
> China has tremendous industrial momentum, and is not nearly as incapable of leading-edge innovation as some Americans seem to think.
So those framing it this way are correct, and we should be matching their momentum here ASAP?
No, I was rather pointing out that getting into an altercation that you are likely (even if not guaranteed) to lose may not be the smartest of ideas. On occasion, humans have been known to fruitfully engage in cooperation and de-escalation. Please pardon my naive optimism.
"Great AI war with China", "altercation" are excessively harsh characterizations. There is nothing "escalatory" in competing for leadership in new industries with other states, nor should it be "regrettable". No one, to my knowledge, is planning to nuke DeepSeek data centers or something.
I wish I could agree with you. But have you read Aschenbrenner's "Situational Awareness" [1]? I am very much afraid that the big decision makers in AI do in fact think in those terms, and do not in any way frame this as fair competition for the benefit of all.
1. https://situational-awareness.ai/
A person heavily invested in this wave of AI succeeding saying AI will be big and we will have AGI next year? Sure.
I don't think there is much point of reading the whole thing after the following:
"Everyone is now talking about AI, but few have the faintest glimmer of what is about to hit them. Nvidia analysts still think 2024 might be close to the peak. Mainstream pundits are stuck on the willful blindness of “it’s just predicting the next word”."
we need to cooperate and put aside our petty politicking right now. the potential downsides of ‘racing’ without building a safety scaffold are catastrophic.
the outcome would be exactly the same. AGI leads the human race off a cliff, not in the direction of one human interest group vs another. the only difference would be that it was china that was responsible for the extinction of the human race rather than another country. i would prefer to die with dignity… the outcome we should all be advocating for is a global halt of AI research — not because it would be easy but because there is no other option.
> What do you prefer ? Letting DeepSeek and China lead the AI war ?
Me personally? Yes.
China is a much more peaceful nation compared to the US. So, yes, I'd prefer China leading AI research any day. They are interested in mutual trade and prosperity, and they respect local laws and culture, all unlike the US.
"They respect local laws and culture" - I think people from Xinyang probably have a very different perspective on that........
I think there's a more nuanced version of this: China respects local laws and culture _outside of what they view as China_ more than the US does. It's also worth noting that China's policy in Xinjiang is somewhat narrowly targeted at religion, and less other aspects like cuisine or clothing. That said, religion is nigh impossible to separate from the broader idea of culture in much of the world.
Africa and South America and USA strongly disagree.
Give me a break. China has overseas police stations as bases of operation for harassing ex-pats and dissidents. That's not "respecting local laws and culture".
sorry but you’re not going to convince anyone approaching this with a neutral mind that China is more partial to overseas intervention than the US is
I encountered this almost first-hand. An American company goes in like an elephant, bribing local officials left and right, using dirty practices to push out competitors. At the same time, Chinese companies try very hard to abide by local regulations and to resolve all issues through local courts, etc. Like actually civilised people.
What happens inside China is of no interest to me; it's their business. They have existed for millennia; they probably know how to manage themselves. They are not trying to expand outside of maybe Taiwan, they don't put their military bases in my country, they don't fund so-called "opposition", and that's good enough for me.
Bribery is probably one of the few cases where the US is significantly better than bad actors in both China and the EU, both of which have major problems with overseas bribery
If you had Al-Qaeda in a hypothetical region near Florida with terror attacks almost every two years, you would shit bricks and create jails/prisons with more security than the Pentagon itself.
Holy smokes. Do folks like you actually believe this? China has its own style of colonialism (whatever you want to call it) but it certainly exists as strong as the US flavor.
How many countries has China invaded and bombed in the last 30 years?
How many deaths has China's warmongering caused abroad?
Quite a few, from an economic perspective. Like I said, they have their own style of colonialism. To think they are some peace-loving nation is foolish. Maybe only in the last 10 years has China had the military equipment capable of mounting an offensive. They have been smart and done all their dealings via money. Without going too far into whataboutism, I simply find it ridiculous to classify China as a warm fuzzy nation given their long list of human rights issues. That does not mean America is peaceful and loving, simply that perhaps the two countries are not so different in net.
> Like I said they have their own style of colonialism.
That's moving the goalposts and doesn't address the issue.
> They have been smart and done all their dealings via money.
You mean just like the country that issues the world reserve currency and whose intelligence agencies get involved in destabilizing regimes across the world?
> That's moving the goalposts and doesn't address the issue.
Is this how you make a constructive argument? Perhaps I was expecting too much from a joke account but this style of whataboutism is boring.
My post that you responded to set my premise, which was that China has its own form of colonialism, quite different from America's, but it exists and it's quite strong. To classify China as a peaceful, loving nation that respects other cultures is as if we were saying the US has never started a conflict. It's factually a lie. China has a long list of human rights issues; they factually do not respect other cultures even within their own borders. I am not defending America but pointing out that China is not what the OP stated.
> I was expecting too much from a joke account
Are you the kind of superficial petty person who needs to take jabs at the messenger's name and not the message itself?
And are you really in the position to throw stones from a glass house with that account name? If you had your real name and social media profiles linked in the bio I'd understand, but you're just being hypocritical, petty and childish here with this 'gotcha'.
> To classify China as a peaceful loving nation that respects other cultures
I never made such a classification. You're building your own strawmen to form a narrative you can attack, but you're not saying anything useful that contradicts my PoV, and you're wasting our time. Since you're obviously arguing in bad faith I won't converse with you further. Goodbye.
If you have an argument that is actually on topic with what I said, please continue; otherwise save your troll account for someone else. The whataboutism/gaslighting is silly. You clearly cannot read threads or respond in a logical form to the right person. The conversation at hand was about China, in response to the OP classifying them as a loving and respectful nation. I made no attempt to defend the US, and it has been you moving the goalposts. You throw whataboutism around and then simply run off with some flimsy excuse about people being unable to converse with you. Troll account.
Cumpiler asked two very clear and direct questions:
> How many countries has China invaded and bombed in the last 30 years?
> How many deaths has China's warmongering caused abroad?
You didn't answer those; you just started hand-waving some stuff about China's "own form of colonialism" -- without even explaining what that is and how it works (which personally I'd be curious to hear about, and believe *is* likely guilty of violence).
So you very clearly are the one guilty of shifting the goalposts, going on tangents, and bringing up usernames instead of real arguments.
Define invade.
Sorry, but If you need a definition for military invasion, you're not arguing in good faith. Goodbye.
Need a bit of Zuck too
Yeah, really the only thing missing from this initiative was the personal information of the vast majority of the United States population handed over on a silver platter.
That sentiment calls for reflection - whoever ends up on top of the heap after the AI craze settles down is going to be someone that everyone objects to. Elon Musk was himself an internet darling up until he became wealthy and entrenched.
That said, this does look like dreadful policy at the first headline. There is a lot of money going in to AI, adding more money from the US taxpayer is gratuitous. Although in the spirit of mixing praise and condemnation, if this is the worst policy out of Trump Admin II then it'll be the best US administration seen in my lifetime. Generally the low points are much lower.
Nietzsche wrote about these phenomena a long time ago in his Genealogy of Morality. There will never be someone who reaches the top who doesn't become an object of ire in modern Western culture.
> That sentiment calls for reflection - whoever ends up on top of the heap after the AI craze settles down is going to be someone that everyone objects to.
I agree in principle. And realistically, there is no way Altman would not be part of this consortium, much as I dislike it. But rounding out the team with Ellison, Son and Abu Dhabi oil money in particular -- that makes for a profound statement, IMHO.
> That sentiment calls for reflection - whoever ends up on top of the heap after the AI craze settles down is going to be someone that everyone objects to.
Did we see the same fallout from the space-race from a couple generations ago?
I don't think so — certainly not in the way you're framing it. So I guess I don't accept your proposition as a guarantee of what will happen.
A couple of generations ago we didn't have the internet and the only things people heard about were being managed. The big question was whether the media editors wanted to build someone up or tear them down.
The spoils of the space race would have gone to someone a lot like Musk. Or Ellison. Or Masayoshi Son. Or Sam Altman. Or the much worse old-moneyed types. The US space program was, famously, literally employing ex-Nazis. I doubt the beneficiaries of the money had particularly clean hands either
> That sentiment calls for reflection - whoever ends up on top of the heap after the AI craze settles down is going to be someone that everyone objects to. Elon Musk was himself an internet darling up until he became wealthy and entrenched.
Trying to process this, but doesn't his fall from grace have more to do with him revealing his real personality to the world? Sometime around calling that guy a pedo. Not much bothers me, but at the very least his apparent lack of decision-making ability calls many things into question.
Of all the sentiments that call for reflection, the parent's belief about why people don't like Elon is the one that needs it the most.
You have to keep in mind Microsoft is planning on spending almost $100B in datacenter capex this year [1], and they're not alone. This is basically OpenAI matching the major cloud providers' spending.
This could also be (at least partly) a reaction to Microsoft threatening to pull OpenAI's cloud credits last year. OpenAI wants to maintain independence and with compute accounting for 25–50% of their expenses (currently) [2], this strategy may actually be prudent.
[1] https://www.cnbc.com/2025/01/03/microsoft-expects-to-spend-8...
[2] https://youtu.be/7EH0VjM3dTk?si=hZe0Og6BjqLxbVav&t=1077
Microsoft has lots of revenue streams tied to that capex outlay. Does OpenAI have similar revenue numbers to Microsoft?
OpenAI has a very healthy revenue stream in the form of other companies throwing money at them.
But to answer your question, no they aren’t even profitable by themselves.
> they aren’t even profitable
Depends on your definition of profitability. They are not recovering R&D and training costs, but they (and MS) are recouping inference costs from user subscription and API revenue with a healthy operating margin.
Today they will not survive if they stop investing in R&D, but they do have to slow down at some point. It looks like they and other big players are betting on a moat they hope to build with the $100B DCs and ASICs that open weight models or others cannot compete with.
This will be either because training will be too expensive (few entities have the budget for $10B+ on training with no need to monetize it), or because such models, even where available, may be impossible to run inference on with off-the-shelf GPUs, i.e. these models can only run on ASICs, which only large players will have access to [1].
In this scenario corporations will have to pay them the money for the best models, when that happens OpenAI can slow down R&D and become profitable with capex considered.
[1] This is a natural progression in a compute-bottlenecked sector; we saw a similar evolution from CPUs to GPUs and ASICs in crypto a few years ago. It is a slightly distorted comparison due to the switch from PoW to PoS and the intentionally GPU-friendly design of some coins, but even then you needed DC-scale operations in a cheap-power location to be profitable.
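As a back-of-envelope version of that bet (every figure below is invented purely for illustration; these are not OpenAI's actual numbers):

    # Hypothetical per-year figures, chosen only to show the shape of the argument
    inference_revenue = 3.0e9   # subscriptions + API
    inference_cost    = 2.0e9   # cost of serving those requests
    rnd_and_training  = 5.0e9   # R&D plus frontier training runs

    serving_margin = inference_revenue - inference_cost   # +1.0e9: serving is profitable
    net            = serving_margin - rnd_and_training    # -4.0e9: training sinks it
    # The moat thesis: keep burning on training until competitors can't follow,
    # then slow R&D and let the positive serving margin carry the business.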
They will have an endless wave of commoditization chasing behind them. NVIDIA will continue to market chips to anyone who will buy... well, anyone who is allowed to buy, considering the recent export restrictions. On that note, if OpenAI is in bed with the US government to some degree here, I would expect tariffs, export restrictions, and all of that to continue to conveniently align with their business objectives.
If the frontier models generate huge revenue from big government and intelligence and corporate contracts, then I can see a dynamo kicking off with the business model. The missing link is probably that there need to be continual breakthroughs that massively increase the power of AI rather than it tapering off with diminishing returns for bigger training/inference capital outlay. Obviously, openAI is leveraging against that view as well.
Maybe the most important part is that all of these huge names are involved in the project to some degree. Well, they're all cross-linked in the entire AI enterprise really, like OpenAI and Microsoft, so once all the players give preference to each other, it sort of creates a moat in and of itself, unless foreign sovereign wealth funds start spinning up massive Stargate initiatives as well.
We'll see. Europe has historically been behind the ball on tech developments like this, and China, though this might be a bit of a stretch to claim, does seem to be held back by its need for control and censorship when it comes to what these models can do. They want them to be focused tools that help society, but the American companies want much more: they want power in their own hands and power in their users' hands. So much like the first round, where American big tech took over the world, maybe it's primed to happen again as the AI industry continues to scale.
Why would China censoring Tiananmen Square/whatever out of their LLMs be anymore harmful to the training process when the US controlled LLMs also censor certain topics, eg "how do I make meth?" or "how do I make a nuclear bomb?".
Because China censors very common words and phrases such as "harmonized", "shameless", "lifelong", "river crabbed", "me too". This is because Chinese citizens initially used puns and common phrases to get around the censors.
Don't forget "Winnie the Pooh"!
OpenAI models refuse to translate subtitles because they contain violence, sex, or racism.
That’s just a different flavour of enforced right-think.
They are absolutely different flavors. OpenAI is not being told by the government to censor violence, sex or racism - they're being told that by their executives.
News flash: household-name businesses aren't going to repeat slurs if the media will use it to defame them. Nevermind the fact that people will (rightfully) hold you legally accountable and demand your testimony when ChatGPT starts offering unsupervised chemistry lessons - the threat of bad PR is all that is required to censor their models.
There's no agenda removing porn from ChatGPT any more than there's an agenda removing porn from the App Store or YouTube. It's about shrewd identity politics, not prudish shadow government conspiracies against you seeing sex and being bigoted.
I don't know why people care if they're being censored by government officials or private billionaires. What difference does it make at the end of the day? why is one worse than the other?
Sigh. No. Censorship is censorship is censorship. That is true even if you happen to like and can generate a plausible defense of US version that happens to be business friendly ( as opposed to China's ruling party friendly ).
Usually a sign of great discussion when someone responds with "sigh" to a reasonably presented argument.
> Censorship is censorship is censorship
"if your company doesn't present hardcore fisting pornography to five year olds you're a tyrant" is a heck of a take, even for hacker news.
It is not a take. It is the simple position that just because you call something 'involuntary semen injection' does not make it any less of a rape. I like things that are clear and well defined. And so I repeat:
Censorship is censorship is censorship.
Ok, I guess I'm #TeamProCensorship, then. So is almost everyone.
Yes, that's true. It's very rare for people to be able to value actual free speech. Most people think they do until they hear something they don't like
I am not sure if it will surprise you, but your affiliation, or the size of your 'team', is largely irrelevant from my perspective. That said, I am mildly surprised you were able to accept your new self-image as a willing censor. Most people struggle with that (edit: hence the 'this is not censorship' facade).
Is "Pooh" also censored?
Because falsifying history seems worse than restricting meth production, at least to me.
Though I see no reason whatsoever why LLM should be blocked from answering "how do I make a nuclear bomb?" query.
Because a small group of elites with permanent terms and no elections deciding what is allowed and what isn't, with full control over silencing what's not allowed and any meta-discussion about the silencing itself, is different from an elected government deciding it, where anyone is free to raise a stink on whatever is their version of Twitter today without worrying about being disappeared tomorrow.
It's not an elected government if you're talking about the US. These policies are also all decided by "elites with permanent term and no elections" you realize right?
> It's not an elected government if you're talking about the US
If you don't believe US has elections then straighten up your tinfoil hat:)
Maybe you'll say next the earth is flat, if you think people have nothing better to do but to find ways to lie to you.
They want their LLMs explicitly approved to align with the values of the regime. Not necessarily a bad thing, or at least that avenue wasn't my point. It does get in the way of going fast and breaking things though, and on the other side there is an outright accelerationist pseudo-cult.
Ignoring the moral dimension for a second, I do wonder whether it is harder to implement the rather cohesive but far-reaching censorship of the Chinese style, or the more outrage-driven type of "censorship" required of American companies. In the West we have the left preoccupied with -isms and -phobias, and the right with blasphemy and perceived attacks on their politics.
With the hard shift to the right and Trump coming into office, especially the last bit will be interesting. There is a pretty substantial tension between factual reporting and not offending right-wing ideology: should a model consider "both sides" of topics with clear and broad scientific consensus if it might offend Trumpists? (Two examples that come to mind were the recent "The Nazis were actually left wing" and "There are only two genders".)
I didn't find any reliable sources about OpenAI. All sources that I could find state this is not true -- inference costs are far higher than subscription fees.
I hate to ask this on HN... but can you provide a source? Or tell us how you know?
I don't have any qualified source, and this metric would likely be quite confidential even internally.
It is just an educated guess, factoring in the per-token costs of running models similar/comparable to 4o or 4o-mini, how Azure commitments work with OpenAI models [2], and knowing that Plus subscriptions are probably more profitable [1] than API calls.
It would be hard for even OpenAI to know with any certainty, because they are not paying for Azure credits like a normal company. The costs are deeply intertwined with Azure and would be hard to split given the nature of the MS relationship [3].
----
[1] This is from the experience of running LibreChat using 4o versus ChatGPT Plus for ~200 users; subscriptions should be more profitable than raw API by a factor of 3 to 4x. Of course there will be different types of users and adoption levels; my sample, while not small, is likely not representative of their typical user base.
[2] MS has less incentive to subsidize than, say, OpenAI themselves.
[3] Azure is quite profitable in the aggregate, while possibly subsidizing OpenAI APIs, any such subsidy has not shown up meaningfully in Microsoft financial reports.
It was my impression that OpenAI was struggling to make money on their $200 Pro subscription, because they'd underestimated how much people would use it (https://www.theregister.com/2025/01/06/altman_gpt_profits/).
So I do question if OpenAI is able to make a profit, even if you remove training and R&D. The $20 plan may be more profitable, but now it will need to cover the R&D and training, plus whatever they lose on Pro.
I am paying for o1 Pro but since Deepseek R1 came out I stopped using it. So there goes $200/mo of their revenue ;)
Didn’t it just come out they are losing money on the pro subscriptions?
Thanks for the detailed breakdown. This is an important nuance to my short reply.
Are they spending $10B/year on training?
Given the release of the new DeepSeek R1 model [0], OpenAI’s future revenue stream is probably more at risk than it was a week ago.
[0] - https://arstechnica.com/ai/2025/01/china-is-catching-up-with...
OpenAI will not exist in 5 years, I'm calling it now. First movers to market don't always win, and they will surely lose.
Google was first mover.
In what way? They weren't the first search engine, or advertising on the web?
In terms of ai and OpenAI leapfrogged them
if your birth year starts with 2, I can see why you might think that
The question is what's going to be OpenAI's Adwords.
Yahoo, AOL, AltaVista (and others too) all were search engines on the web before Google came into existence in Sept 1998.
Lycos, Metacrawler, Dogpile. The list goes on
Sure, but we are talking AI, and the fact that Google was first in this space.
The first in what? Not in search nor Generative AI.
Why would you think search? Google wasn't first for search. They were first with PageRank.
Google researchers invented the transformer
Who if not Google was the first in generative AI? They invented transformers and diffusion, the cornerstones of text and image generation, respectively.
They weren't the first to meaningfully commercialise either, though. That remains with OpenAI for both (GPT-3/ChatGPT and DALL-E 2).
Not necessarily. DeepSeek will probably only threaten the API usage of OpenAI, which could also be banned in the US if it's too successful. API usage is not a main revenue stream for OpenAI (it is for Anthropic, last time I checked). The main competitor for R1 is o1, which isn't generally available yet.
DeepSeek is an open source model. You can download it and run it locally on your laptop already.
So any OpenAI user ( or competitor even) could take it and run a hosted model. You can even tweak the weights if you wanted to.
Why pay for OpenAI access when you can just run your own and save the money?
The one your laptop can run does not rival what OpenAI offers for money. Still, the issue is not whether a third party can run it; it's that OpenAI doesn't seem to be positioning the API as their main product.
LM Studio version is here: https://lmstudio.ai/model/deepseek-r1-llama-8b
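For reference, trying one of those distilled variants locally is a few lines with the Ollama Python client (a sketch, assuming Ollama is installed and running, and that the model tag `deepseek-r1:8b` is still current; pull it first with `ollama pull deepseek-r1:8b`):

    import ollama  # pip install ollama; talks to a locally running Ollama server

    response = ollama.chat(
        model="deepseek-r1:8b",  # distilled 8B variant, small enough for a laptop
        messages=[{"role": "user", "content": "Why is the sky blue?"}],
    )
    print(response["message"]["content"])

Nothing leaves your machine, which is exactly the commoditization pressure being described here.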
That's like saying I have a healthy revenue stream from my credit card.
Not quite. In 2 years their revenue has grown ~20x, from $200M ARR to $3.7B ARR. The inference costs, I believe, pay for themselves (in fact are quite profitable). So what they're putting on their investors' credit cards are the costs of employees and model training. Given it's projected to be a multi-trillion-dollar industry and they're seen as a market leader, investors are more than happy to throw in interest-free cash flow now in exchange for variable future interest in the form of stock.
That's not quite the same thing at all as your credit card's revenue stream, as you pay an ~18%+ annual interest rate on that. If you recall, AMZN (and all startups, really) had this mode early in their business where they over-spent on R&D to grow more quickly than their free cash flow would otherwise allow, to stay ahead of competition and dominate the market. Indeed, if investors agree and your business is actually strong, this is a strong play, because you're leveraging some future value into today's growth.
All well and good, but how well will it work if the pattern continues that the best open models are less than a year behind what OpenAI is doing?
How long can they maintain their position at the top without the insane cashflow?
One system will be god like and then it doesn't matter
These types of responses always strike me as dogmatic.
Reminds me of the crypto craze where people were claiming that Bitcoin was going to replace all world currencies.
Have they built their own ASICs for inference like Google and Microsoft have? Or are they using NVIDIA chips exclusively for inference as well?
The rumors I've heard are that they have a hardware team targeting a 2026 release, but no production ASICs at the moment.
Platform economics "works" in theory only up to a point. It's super inefficient if you zoom out and look not at the system level but at the ecosystem level. It hasn't lasted long enough to hit the failure cases. Just wait a few years.
As to OpenAI: given DeepSeek, and the fact that a lot of use cases don't even need real-time inference, it's not obvious this story will end well.
I also can't see it ending well for OpenAI. This seems like it's going to be a commodity market with a race to the bottom on pricing. I read that NVIDIA has roughly a 10x markup on H100s, which means that someone like Google, making their own TPUs, has a massive cost advantage.
Moore's law seems to be against them too... hardware getting more powerful, small models getting more powerful... It's not at all obvious that companies will need to rely on cloud models vs running locally (licensing models from whoever wants that market). Also, a lot of corporate use probably isn't that time-critical, and can afford to run slower and cheaper.
Of course the US government could choose to wreck free-market economics by mandating powerful models to be run in "secure" cloud environments, but unless other countries did same that might put US at competitive price disadvantage.
They do get a lot of customers buying their stuff, but on top of that, a company with unique IP and mindshare can get investors to open their wallets easily enough; I keep thinking of AMD, which was unprofitable or barely profitable for something like 15 years in a row.
Serious question - why Texas???
Texas is a world leader in renewable energy. Easy permitting, lots of space, lots of existing grid infrastructure from the o&g industry.
Why do you think datacenters have actually been built in Oregon?
https://en.m.wikipedia.org/wiki/2021_Texas_power_crisis
Any downsides?
Texas.
My kneejerk response was to point to the incoming administration, but the fact Stargate has been in the works for more than a year now says to me it's because of tax credits.
Lots of back-door deals. Just expect more government projects placed in TX, just like the facility the Army built in Austin when we have plenty of dead bases that could be reused.
:/
It's where the energy is for this project.
This is unfortunately paywalled but a good writeup on how the datacenter came to be: https://www.theinformation.com/articles/why-openai-and-oracl...
I'm not a subscriber so I can't read it, which startup are they referring to in the headline?
They're referring to Crusoe (crusoe.ai)
A company that will surely still exist in 4 years time.
Natural gas to power the turbines while the nuclear plants are built, I guess. Also, is Texas more open to large-scale development than elsewhere?
Any downsides?
Existing underinvestment in infrastructure and its maintenance, extreme weather, water resource limitations, some human rights issues.
Probably for the same reason that Silicon Valley has been moving there slowly and quietly for a while now.
Because rich people inevitably don't like taxes? And maybe forest fires?
Isn't it more likely a reaction to xAI now having the most training compute?
How is compute only 50% of their expenses?
Meanwhile, Azure has failed to keep up with the last 2-3 generations of both Intel and AMD server processors. They’re available only in “preview” or in a very limited number of regions.
I wonder if this is a sign of the global economic downturn pausing cloud migrations or AI sucking the oxygen out of the room.
I'm not sure that's how capitalism works.
Who is "we"?
This isn't your money
It is not. But this kind of money has an impact on society in any field. So this is a proper concern.
This is so much money with which we could actually solve problems in the world. Maybe even stop wars which break out because of scarcity issues.
Maybe I am getting too old or too friendly to humans, but it's staggering to me what the priorities are for such things.
For less than this same price tag, we could've eliminated student loan debt for ~20 million Americans. It would in turn open a myriad of opportunities, like owning a home and/or feeling more comfortable starting a family. It would stimulate the economy in predictable ways.
Instead we gave a small number of people all of this money for a moonshot in a state where they squabble over who’s allowed to use which bathroom and if I need an abortion I might die.
Eliminating debt has a lot of unintended consequences. Price inflation would almost certainly be a problem, for example.
It's also not clear to me what happens to all of the derivatives based on student debt, though there may very well be an answer there that I just haven't understood yet.
The problem with allowing student debt to rack up to these levels and then cancelling it is that it would embolden universities to charge even higher tuition. A second problem is that not all students get the benefit; some already paid off their debts or a large part of them. It would be unfair to them.
Yes, but every policy is unfair. It is literally choosing where to give a limited resource; it can never be fully fair.
And there could be a change in the law that allows people to discharge student debt in personal bankruptcy, and that could make sure higher tuition doesn't happen.
> Yes, but every policy is unfair. It is literally choosing where to give a limited resource; it can never be fully fair.
I don't think that holds for a policy of non-intervention. People usually don't like that solution, especially when considering welfare programs, but it is fair to give no one assistance in the sense that everyone was treated equally/fairly.
Now it's a totally different question whether it's fair that some people are in this position today. The answer is almost certainly no, but that doesn't have a direct impact on whether an intervention today is fair or not.
It would do more good in K-12 or pre-K than it would paying off the private debts of white-collar, highly educated university bros who are not rich yet only because of their young age.
It truly is astonishing. We have kids who cannot afford school lunches and people working multiple blue-collar jobs, and yet the problems of people who are statistically better off than average constantly jump to the front. People complain about Effective Altruism because of one dude messing up big, but it would behoove everyone to read up on the basic philosophy of it before suggesting how we best spend billions to help reduce suffering.
> Instead we gave a small number of people all of this money for a moonshot in a state where they squabble over who’s allowed to use which bathroom and if I need an abortion I might die.
AFAICT from this article and others on the same subject, the $500 billion number does not appear to be public money. It sounds like it's $100 billion of private investment (probably mostly from Son), and FTA,
> could reach five times that sum
(5 x $100 billion = $500 billion, the number everyone seems to be quoting)
Eliminating some student debt is a fish. Free university is the fishing rod. Do that instead.
Free to the student sounds nice, but who pays for it in the end? And does an education lose a bit of its value when anyone can get it for free?
Free to US citizens would be a better policy, the state investing in its own people.
Or, prices of houses would go up even more because we still aren't allowing supply to increase and people having more money doesn't change that.
Let the schools pay back the people they scammed.
Repaying student loans makes a lot of people a little richer. The current initiative makes a few people a lot richer. If you ask some people, the former is a very communist/socialist way of thinking (bad), while the latter is pure, unadulterated capitalism (good).
One of the more destructive situations in capitalism is the fact that (financially) helping the many will increase inflation and lead to more problems.
When a few people get really rich, it kind of slips through the gaps; the broader system isn't impacted too much. When most people get a little richer, they spend that money and prices go up. Said differently, wealth is all relative, so when most people get a little richer, their comparative wealth doesn't really change.
That, and a lot of people do not have the means to convince current power centers to do their bidding (unless they were to organize, which they either don't, can't, or are dissuaded from), while a few rich ones do. And so the old saying "the rich become richer" becomes a self-fulfilling prophecy.
That was the implication indeed. Money is like gravity: the more you have, the more you can pull in. This will give a person the power to do anything to make more money (change the laws as desired, or break them if needed), but also the perfect shield from any repercussions.
I know!! Also we could have given an iPhone to 500 million people for that amount!! It's such a waste to think they're investing it in the future instead.
This is the problem with capitalists / the billionaires currently hoarding the money and with US policy: it's all for short-term gain. But the conservatives who look back to the '50s or '80s or whatever decade their rose-tinted glasses are tuned to should also realise that the good parts of those eras came from families not being neck-deep in debt.
Yes, you don't want to destroy your food chain. If everyone is poor except you, then you are now poor.
I'm starting to think there's no difference between this website and reddit
https://news.ycombinator.com/newsguidelines.html
Please don't post comments saying that HN is turning into Reddit. It's a semi-noob illusion, as old as the hills.
>wars which break out because of scarcity issues
That doesn't seem to be much of a thing these days. If you look at Russia/Ukraine or China/Taiwan, there's not much scarcity. It's more a case of a bullying dictator wanting to control the neighbours.
"Global warming may not have caused the Arab Spring, but it may have made it come earlier... In 2010, droughts in Russia, Ukraine, China and Argentina and torrential storms in Canada, Australia and Brazil considerably diminished global crops, driving commodity prices up. The region was already dealing with internal sociopolitical, economic and climatic tensions, and the 2010 global food crisis helped drive it over the edge."
https://www.scientificamerican.com/article/climate-change-an...
It will be, or, it's slowly happening already. Climate change is triggering water and food shortages, both abroad and on your doorstep (California wildfires), which in turn trigger mass migrations. If a richer and/or more militarily equipped country decides they want another country's resources to survive, we'll see wars erupt everywhere.
Then again, it's more of a logistics challenge, and if e.g. California were to invade Canada for its water supply, how are they going to get it all the way down there?
I can see it happening in Africa though; a long string of countries rely on the Nile, but large hydropower dams built in Sudan and Ethiopia are reducing the water flow, which Egypt is really not happy about, as it's costing them water supply and irrigated land. I wouldn't be surprised if Egypt and its allies declare war on those countries and aim to have the dams broken. Then again, that's been going on for some years now and nothing has happened yet as far as I'm aware.
(the above is armchair theorycrafting from thousands of miles away based on superficial information and a lively imagination at best)
I was in Egypt for a while and there's no talk of them invading Sudan or Ethiopia. A lot of Egypt's economy is overseas aid from the US and similar.
The main military thing going on there (I was in Dahab, where there are endless military checkpoints) is Hamas-like guys trying to come over and overthrow the fairly moderate Egyptian government and replace it with a hardline, Hamas-type Islamic dictatorship for the glorification of Allah, etc. Again, it's not about reducing scarcity; it's more about increasing scarcity in return for political control. Dahab and Cairo are both a few hours' drive from Gaza.
> it's more of a logistics challenge
And a bureaucratic one as well. In Germany, they want to trim bureaucratic necessities while (not) expecting multiple millions of climate refugees.
Lots of undocumented people incoming (the undocumented have nowhere to go, so they don't get vaccines or proper help when sick, injured, mentally unstable, threatened, or abused), which means more disease, crime, theft, and money for security firms and insurance companies, which means more smuggling, more fear-mongering via media, more polarization, more hard-coding of subservience into the young, more financial fascism overall, less art, zero authenticity, and a spawn of VR worlds where the old rules apply forever.
Plus more STDs and micro-pandemics due to viral mutations, because people will be even more careless when partying under second-semester light shows in metropolitan city clubs and festivals, and when selling out for an "adventurous" quick potent buck and bug. That of course means more money pouring into pharma, who won't be able to test their drugs thoroughly (and won't have to; not requiring platforms to fact-check will transfer somewhat into the pharma industry), because the population will be more diverse in terms of their bio-chemical reactions to ingredients in the context of their "fluid" habitats' chemical and psycho-social make-ups.
But it's cool, let's not solve the biggest problems before pseudo-transcending into the AGI era. It will make for a really great impression, especially on those who had the means, brains, skills, (past) careers, opportunity, and peace of mind.
There's a terrifying amount of food insecurity and poverty in Russia - https://www.globalhungerindex.org/russia.html - https://databankfiles.worldbank.org/public/ddpext_download/p...
Have you tried opening the links? They show Russia at developed-country level in terms of food insecurity (score <5; they don't differentiate at those levels; this is a level mostly shown for EU countries), and a percentage of the population below the international poverty line of 0.0% (vs., as an example, 1.8% in Romania). This isn't great (being in the poverty briefs at all is not indicative of prosperity), but your terrification should probably come from elsewhere.
Your first link says "With a score under 5, Russian Federation has a level of hunger that is low."
The current situation with Russia and China seems caused by them becoming prosperous. In the 1960s in China and the 1990s in Russia, they were broke. Now that they have money, they can afford to put it into their militaries and try to attack their neighbours.
I'm reminded of the KAL cartoon on Russia https://www.economist.com/cdn-cgi/image/width=1424,quality=8... That was from 2014. Already Russia is heading to the next panel in the cycle.
I would wager that states such as Russia misallocate resources, which in turn reduces productivity. Worse yet, some of the policy prescriptions stated above would further misallocate scarce resources and reduce productivity. Scarcity doom becomes a self-fulfilling prophecy. This outcome is used to rationalize further economic intervention, and the cycle compounds upon itself.
To be explicitly clear, the US granting largess to tech companies for datacenters also counts as a misallocation in my view.
Russia is run by the mob. The country has no real dominant industry beyond its natural resources. Are they really a good example?
> That doesn't seem to be much of a thing these days.
If you ignore Gaza and whole of Africa, maybe.
Gaza seems mostly to be about who controls Israel/Palestine politically. Gaza was reasonably OK for food and housing and is now predictably trashed, a result of Hamas wanting to control Palestine "from the river to the sea", as they say.
South Sudan is some ridiculous thing where two rival generals are fighting for control. Are there any wars which are mostly about scarcity at the moment?
No, not really... the origin of Gaza conflict is in Zionists confiscating the most fertile land and water resources.
That's why Israelis gladly handed back the Sinai desert to Egypt, but have kept Golan Heights, East Jerusalem, Shaba Farms, and continuously confiscate Palestinian farmlands in the West Bank.
There is nothing arbitrary or religious about which lands Zionists are occupying and which they're leaving to Arabs.
Completely false, and it simplifies a complicated history to present a very one-sided view. The most fertile lands are in the West Bank. They were under Jordanian control and could have been turned into an independent Palestinian state, but weren't. Israel "accidentally" got them in the Six-Day War, and was happy to give them back to Jordan to "take care" of the Palestinian problem, but Jordan refused. The places where Israel has the majority of its population, Petah Tiqwah, Tel Aviv, and the surrounding region, were swamp lands filled with mosquitos, dried out over many years, and at the cost of many deaths, by Jewish farmers.
So you are saying Hamas would have the same domestic support if Gaza was economically at the level of, e.g., Slovenia? People who complained about the "open air prison" caused by Israeli "occupation" even before Oct 7 would disagree with you, I think.
Even in Europe, extremists are propped up by the promise of "cheap energy" from Russia.
I guess if you don't see the link, this is not the place to explain it.
Have you seen videos of Gaza before the war? There are places in Syria and Iraq, hell, even India or the Philippines, that look a lot worse.
Also, the "open air prison" effect was a result of trying to reduce attacks from Gaza. For example, before the 2008 war more than 2000 rockets were launched from Gaza into Israel.
> Are there any wars which are mostly about scarcity at the moment?
The class war
Like the glib summary of Palestinian history there. In other news some terrorists stole land from the Brits in 1776.
At any given time approximately 1 in 10 humans are facing starvation or severe food insecurity.
I don't doubt that, but it's harder to connect that fact to a specific international conflict.
Or religious fanatics want to murder other religious groups.
Very zero-sum outlook on things which is factually untrue much of the time. When you invest money in something productive that value doesn't get automatically destroyed. The size of the pie isn't fixed.
But then how could politicians and the wealthy steal all that money if you just gave it away or helped the poors?
Money doesn't fix stuff. You need good-willed people, and good-willed people don't need that much money.
More importantly, money, at global scale, doesn't solve scarcity issues. If there are 100 apples and 120 people, making sure everyone has a lot of money doesn't magically create 20 more apples. It just raises the price of apples. Building an apple orchard creates apples. Stargate is people betting that they are building a phenomenal apple orchard. I'm not sure they will, and I'm worried the apple orchard will poison us all, but unlike me these people are putting their money where their mouths are and thus have a larger incentive to figure out what they are doing.
Money alone might not fix stuff... but an absence of money can prevent stuff being fixed.
Such mega-investments are usually not for the sake of humankind. They are usually for the sake of a very select group of humans.
Five-hundred billion dollars is nothing when you consider there's a new government agency that it is said will shave two trillion from government inefficiency.
I disagree with you. I think the impact of AI on society in the long term is going to be massive, and such investments are necessary. If we look at the past century, technology has had (in my opinion) an incredibly positive impact on society. You have to invest in the future.
> maybe even stop wars which break out because of scarcity issues.
Like which wars in this century?
It actually isn't a lot; about $60 spread out over a few years for every person on Earth isn't enough to do these things.
IF money were equally distributed, then maybe. But that has never happened. Same with drinking water, food, and shelter.
The US can't stop the wars it wants others to fight for them even if it means population collapse like in Ukraine, Israel and Taiwan.
Well, it also starts its fair share of wars, or let's say, "brings freedom and democracy in exchange for resources and power," and sometimes even decides to topple leaders in foreign countries to then put puppets into place.
https://en.wikipedia.org/wiki/United_States_involvement_in_r...
~$125B per year would be 2-3% of all domestic investment; a sanity check is sketched below. It's similar in scale to the GDP of a small middle income country.
If the electric grid — particularly the interconnection queue — is already the bottleneck to data center deployment, is something on this scale even close to possible? If it's a rationalized policy framework (big if!), I would guess there's some major permitting reform announcement coming soon.
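As a hedged sanity check of that 2-3% figure (the investment baseline is my assumption; US gross private domestic investment has recently been on the order of $4-5T/yr):

    # Sanity-check the "2-3% of all domestic investment" estimate above.
    total = 500e9            # announced Stargate commitment, USD
    years = 4                # announced timescale
    annual = total / years   # -> $125B/yr

    for baseline in (4e12, 5e12):   # assumed domestic-investment range, USD/yr
        print(f"${annual/1e9:.0f}B/yr is {annual/baseline:.1%} of ${baseline/1e12:.0f}T")
    # -> 3.1% of $4T, 2.5% of $5T: consistent with the 2-3% claim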
They say this will include hundreds of thousands of jobs. I have little doubt that dedicated power generation and storage is included in their plans.
Also I have no doubt that the timing is deliberate and that this is not happening without government endorsement. If I had to guess the US military also is involved in this and sees this initiative as important for national security.
Is there really any government involvement here? I only see Softbank, Oracle, and OpenAI pledging to invest $500B (over some timescale), but no real support on the government end outside of moral support. This isn't some infrastructure investment package like the IRA, it's just a unilateral promise by a few companies to invest in data centers (which I'm sure they are doing anyway).
I thought all the big corps had projects for the military already, if not DARPA directly, which is the org responsible for lots of university research (the counterpart to the NSF, which is the nice one that isn't funded by the military)?
Funding for DARPA and NSF ultimately comes from the same place. DARPA funds military research. NSF funds dual use[1] research. All of it is organized around long term research goals. I maintained some of the software involved in research funding decision making.
1: https://en.wikipedia.org/wiki/Dual-use_technology
It’s light on details, but from The Guardian’s reporting:
> The president indicated he would use emergency declarations to expedite the project’s development, particularly regarding energy infrastructure.
> “We have to get this stuff built,” Trump said. “They have to produce a lot of electricity and we’ll make it possible for them to get that production done very easily at their own plants.”
https://www.theguardian.com/us-news/2025/jan/21/trump-ai-joi...
hundreds of thousands of jobs? I'll wait for the postmortem on that prediction. Sounds a lot like Foxconn in Wisconsin but with more players.
On the one hand the number is a political thumb-suck which sounds good. It's not based in any kind of actual reality.
Yes, the data center itself will create some permanent jobs (I have no real feel for this, but guessing less than 1000).
There'll be some work for construction folk of course. But again seems like a small number.
I presume though they're counting jobs related to the existence of a data center. As in, if I make use of it do I count that as a "job"?
What if we create a new post to leverage AI generally? Kinda like the way we have a marketing post, and a chunk of the daily work there is Adwords.
Once we start guesstimating the jobs created by the existence of an AI data center, we're in full speculation mode. Any number really can be justified.
Of course ultimately the number is meaningless. It won't create that many "local jobs" - indeed most of those jobs, to the degree they exist at all, will likely be outside the US.
So you don't need to wait for a post-mortem. The number is sucked out of thin air, with no basis in reality, for the purpose of making a good political sound bite.
> I presume though they're counting jobs related to the existence of a data center. As in, if I make use of it do I count that as a "job"?
Seeing how Elon deceives advertisers with false impressions, I could see him giving the same strategy a strong vote of confidence (with the bullshit metrics to back it!)
> hundreds of thousands of jobs?
I'm sure this will easily be true if you count AI as entities capable of doing jobs. Actually, they don't really touch that (if AI develops too quickly, there will be a lot of unemployment to contend with!) but I get the national security aspect (China is full speed ahead on AI, and by some measurements, they are winning ATM).
Only $5M/job ($500B / 100,000 jobs).
They plan to have 100,000s of people employed to run on treadmills to generate the power.
Well I currently pay to do this work for free. More than happy to __get__ paid doing it.
Edit: Hey we can solve the obesity crisis AND preserve jobs during the singularity!! Win win!
Wow. What an idea you guys have there. Look, you could maybe sit the homeless and mentally disabled on such power-generating bicycles, hmmm... what about convicts! Let them contribute to society, no free lunch! What an innovation!
Plus it's ecological, which for Trump is not by intention, but still a win.
There is this pesky detail about manufacturing 100k treadmills, but let's not get bothered by details now; the current must flow.
"solve the obesity crisis" ? what exactly do you mean by this?
Probably referring to how many Americans are obese to an unhealthy degree as part of the joke.
A hamster wheel would work better?
Damn, 6 hours too slow to make this comment
Yes, Trump announced this as a massive foreign investment coming into the US: https://x.com/WatcherGuru/status/1881832899852542082
Just as there is an AWS for the public and something similar for Federal use only, it could be that there are AI cloud services available to the public and then a separate cloud service for Federal use. I am sure that military intelligence agencies, etc., would like to buy such a service.
AWS GovCloud already exists FYI (as you hinted) and it is absolutely used by the DoD extensively already.
Gas turbines can be spun up really quickly through either portable systems (like xAI did for their cluster) [1] or actual builds [2] in an emergency. The biggest limitation is permits.
With a state like Texas and a Federal Government that's on board, these permits would be a much smaller issue. The press conference made this seem more like "drill, baby, drill" (for natural gas), with direct talk of them spinning up their own power plants.
[1] https://www.kunr.org/npr-news/2024-09-11/how-memphis-became-...
[2] https://www.gevernova.com/gas-power/resources/case-studies/t...
It is not just the queue that is the bottleneck. If the new power plants designed specifically for powering these new AI data centers are connected to the existing electric grid, energy prices for regular customers will also be affected, most likely upward. That means the cost of the transmission upgrades required by these new datacenters will be socialized, which is a big problem. There does not seem to be a solution in sight for this challenge.
> It's similar in scale to the GDP of a small middle income country
I’ve been advocating for a data centre analogue to the Heavy Press Programme for some years [1].
This isn’t quite it. But when I mapped out costs, $1tn over 10 years was very doable. (A lot of it would go to power generation and data transmission infrastructure.)
[1] https://en.m.wikipedia.org/wiki/Heavy_Press_Program
One-time capital costs that unlock a range of possibilities also tend to be good bets.
The Flood Control Act [0], TVA, Heavy Press, etc.
They all created generally useful infrastructure, that would be used for a variety of purposes over the subsequent decades.
The federal government creating data center capacity, at scale, with electrical, water, and network hookups, feels very similar. Or semiconductor manufacture. Or recapitalizing US shipyards.
It might be AI today, something else tomorrow. But there will always be a something else.
Honestly, the biggest missed opportunity was supporting the Blount Island nuclear reactor mass production facility [1]. That was a perfect opportunity for government investment to smooth out market demand spikes. Mass deployed US nuclear in 1980 would have been a game changer.
[0] https://en.m.wikipedia.org/wiki/Flood_Control_Act_of_1928
[1] https://en.m.wikipedia.org/wiki/Offshore_Power_Systems#Const...
> Honestly, the biggest missed opportunity was supporting the Blount Island nuclear reactor mass production facility
Yes, a very interesting project; similar power output to an AP1000. Would have really changed the energy landscape to have such a deployable power station. https://econtent.unm.edu/digital/collection/nuceng/id/98/rec...
Maybe they will invest in nuclear reactors.
Data center, AI and nuclear power stations. Three advanced technologies, that's pretty good.
They are trying. Microsoft wants to restart the Three Mile Island reactor, and other companies have been signing contracts for small modular reactors. SMRs are a perfect fit for modern data centers IF they can be made cheaply enough.
Wind, solar, and gas are all significantly cheaper in Texas, and can be brought online much quicker. Of course, it wouldn't hurt to also build in some redundancy with nuclear, but I'll believe it when I see it; so far there's been lots of talk and little success with new reactors outside of China.
I think this is right: data centers powered by fission reactors. Something like Oklo (https://oklo.com) makes sense.
Watching the press conference, on-site power production was mentioned. I assume this means SMRs and solar.
Just as likely to be natural gas, or a combination of gas and solar. I don't know what the supply chain looks like for solar panels, but I know gas can be done quickly [1], which is how this money has to be spent if they want to reach their target of $125 billion a year.
> The companies said they will develop land controlled by Wise Asset to provide on-site natural gas power plant solutions that can be quickly deployed to meet demand in the ERCOT.
> The two firms are currently working to develop more than 3,000 acres in the Dallas-Fort Worth region of Texas, with availability as soon as 2027.
[0] https://www.datacenterdynamics.com/en/news/rpower-and-wise-a...
[1.a] https://enchantedrock.com/data-centers/
[1.b] https://www.powermag.com/vistra-in-talks-to-expand-power-for...
US domestic PV module manufacturing capacity is ~40GW/year.
According to [1], the USA in January 2025 has almost 50GW/yr module manufacturing capacity. But to make modules you need polysilicon (25GW/yr manufacturing capacity in the US), ingots (0GW/yr), wafers (0GW/yr), and cells (0GW/yr). Hence the USA is seemingly entirely dependent on imports, probably from China which has 95%+ of the global wafer manufacturing capacity.
Even when accounting for announced capacity expansion, the USA is currently on target to remain a very small player in the global market with announced capacity of 33GW/yr polysilicon, 13GW/yr ingots, 24GW/yr wafers, 49GW/yr cells and 83GW/yr modules (13GW/yr sovereign supply chain limitation).
In 2024, China completed sovereign manufacturing of ~540GW of modules[2], including all precursor polysilicon, ingots, wafers, and cells. China also produced and exported polysilicon, ingots, wafers, and cells that were surplus to domestic demand. Many factories in China's production chain are operating at half their maximum production capacity due to global demand being less than half of global manufacturing capacity.[3]
[1] https://seia.org/research-resources/solar-storage-supply-cha...
[2] Estimated figure extrapolated from Jan-Oct 2024 data (10 months). https://taiyangnews.info/markets/china-solar-pv-output-10m-2...
[3] https://dialogue.earth/en/business/chinese-solar-manufacture...
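To make the "13GW/yr sovereign supply chain limitation" above concrete: the sovereign limit is just the minimum across pipeline stages, since every module needs polysilicon, ingots, wafers, and cells upstream. A tiny sketch using the announced US capacities quoted above:

    # Announced US capacity per stage, GW/yr (figures from the comment above).
    announced = {"polysilicon": 33, "ingots": 13, "wafers": 24,
                 "cells": 49, "modules": 83}

    stage = min(announced, key=announced.get)
    print(f"sovereign limit: {announced[stage]} GW/yr (bottleneck: {stage})")
    # -> 13 GW/yr, limited by ingot capacity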
Appreciate the correction and additional context, I appear to be behind wrt current state.
could something of this magnitude be powered by renewables only?
> could something of this magnitude be powered by renewables only?
Perhaps.
For context see https://masdar.ae/en/news/newsroom/uae-president-witnesses-l... which is a bit further south than the bulk of Texas and has not yet been built; 5.2GW of panels, 19GWh of storage. I have seen suggestions on Linkedin that it will be insufficient to cover a portion of days over the winter, meaning backup power is required.
Technically yes, but DC operators want fast ROI and the answer is no.
what prevents operators from getting ROI with renewables?
I don't think any assembly line exists that can manufacture and deploy SMRs en masse on that kind of timeframe, even with a cooperative NRC
There have been literally 0 production SMR deployments to date so there’s no possibility they’re basing any of their plans on the availability of them.
Hasn't the US decided to prefer nuclear and fossil fuels (most expensive generation methods) over renewables (least expensive generation methods)?[1][2]
I doubt the US choice of energy generation is ideological so much as practical. China absolutely dominates renewables, with 80% of solar PV modules and 95% of wafers manufactured in China.[3] China installed a world-record 277GW of new solar PV generation in 2024, a 45% year-on-year increase.[4] By contrast, the US only installed ~1/10th of this capacity in 2024, with only 14GW of solar PV generation installed in the first half of 2024.[5]
[1] https://en.wikipedia.org/wiki/Cost_of_electricity_by_source
[2] https://www.iea.org/data-and-statistics/charts/lcoe-and-valu...
[3] https://www.iea.org/reports/advancing-clean-technology-manuf...
[4] https://www.pv-magazine.com/2025/01/21/china-hits-277-17-gw-...
[5] https://www.energy.gov/eere/solar/quarterly-solar-industry-u...
> Hasn't the US decided to prefer nuclear and fossil fuels (most expensive generation methods) over renewables (least expensive generation methods)?[1][2]
This completely ignores storage and the ability to control the output depending on needs. Instead of LCOE the LFSCOE number makes much more sense in practical terms.
Much more likely is what xAI did, portable gas turbines until the grid catches up.
One possibility would be just to build their own power plants colocated with the datacenters and not interconnect at all.
I like how you think this is possible.
Lol, how is it not possible?
It is, but at what cost?
Notably, it is significantly more than the revenue of either AWS or Azure. It is very comparable to the sum of both, but consolidated into the continental US instead of distributed globally.
DCs will start generating power on site soon. I know micro-nuclear is one area actively being explored.
Small or modular reactors in the US are more than 10 years away, probably more like 15-20. These are facts, not made-up politics or the pipe dreams of techno-snobs.
> Small or modular reactors in the US are more than 10 years away, probably more like 15-20
Could be 5 to 10 with $20+ bn/year in scale and research spend.
Trump is screwing over his China hawks. The anti-China and pro-nuclear lobbies have significant overlap; this could be how Trump keeps e.g. Peter Thiel from going thermonuclear on him.
I work in the sector, and it's impossible to build a full-sized reactor in less than 10 years, and the usual over-run is 5 years. That's the time for tried and tested designs. The tech isn't there yet, and there are no working analogs in the US to use as an approved guide. The Department of Energy does not allow "off-the-cuff" designs for reactors. I think there are only two SMRs that have been built, one by the Russians and the other by China. I'm not sure they are fully functioning, or at least working as expected. I know there are going to be more small gas gens built in the near future, and that SMRs in the US are way off.
Guessing SMRs are a ways off, any thoughts on the container-sized microreactors that would stand in for large diesel gens? My impression is that they’re still in the design phase, and the supply chain for the 20% U-235 HALEU fuel is in its infancy, but this is just based on some cursory research. I like the prospect of mass manufacturing and servicing those in a centralized location versus the challenges of building, staffing, and maintaining a series of one-off megaprojects, though.
> it's impossible to build a full-sized reactor in less than 10 years
We’re not doing tried and tested.
> Department of Energy does not allow "off-the-cuff" designs for reactor
Not by statute!
I don't, and I honestly don't know much about it, but
> there are no working analogs in the US to use as an approved guide
Small reactors have been installed on ships and submarines for over 70(!) years now. Reading up on the very first one, the USS Nautilus, "the conceptual design of the first nuclear submarine began in March 1950," so it took a couple of years? So why is it so unthinkably hard 70 years later, honest question? "The military doesn't care about cost" is not good enough; there are currently >100 active ones, with who knows how many hundreds in the past, so they must have cracked the cost formula at some point. Besides, by now we have hugely better tech than the '50s, so what gives?
Yeah, I wondered about seacraft reactors myself. I think there are many safety allowances for DOD vs. DOE. The DOD reactors are not publicly accessible (you hope anyway), and the data centers will be in and near the public. There are also major security measures that have to be taken for reactor sites. You have armed personnel before you even get to the reactors, and then the entrances are sometimes close to one mile away from the reactor. Once there, the number of guards and bang-bags goes up. The modern sites kind of look like they have small henges around them (back to the neolithic!) :)
> it's impossible to build a full-sized reactor in less than 10 years, and the usual over-run is 5 years
I'm curious why that is. If we know how to build it, it shouldn't take that long. It's not like we need to move a massive amount of earth or pour a humongous amount of concrete or anything like that, which would actually take time. Then why does it take 15 years to build a reactor with a design that is already tried and tested and approved?
Well, you do have to move a lot of earth and pour A LOT of concrete :) Many steps have to be X-rayed, and many other tests done, before other steps can be started. Every weld is checked, and all internal and external concrete is cured, treated, and verified. If anything is wrong, it has to be fixed in place (if possible) or removed and redone. It's a slow process, and should be for many steps.
One of the big issues (in the US especially) is that for 20+ years no new plants were built. This caused a large void in the talent pool, inside and outside the industry. That fact, along with others, has caused many problems with some recent projects in the US.
> I'm curious why that is.
When you're the biggest fossil fuel producer in the world, it's vital that you stay laser-focused on regulating nuclear power to death in every imaginable detail while you ignore the vast problems with unchecked carbon emissions and gaslight anyone who points them out.
That's why the tech oligarchs told Trump that Canada is required. Cheap hydroelectric power…
Don't worry, they said they are doing it in Texas where the power grid is super reliable and able to handle the massive additional load.
"Don't be snarky."
"Eschew flamebait."
Let's not have regional flamewar on HN please.
https://news.ycombinator.com/newsguidelines.html
Not guilty. No sarcasm intended, of course. If your guidelines are so broad as to include this, you should work on them, and in turn, yourself.
Governor says our power grid is the best in the universe. Why don't you believe us?
Stop breaking your own rules.
"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."
Let's not ruin HN with overmoderation. This kind of thing is no longer in fashion, right?
If you didn't intend your comment to be a snarky one-liner, that didn't come across to me, and I'm pretty sure that would also be the case for many others.
Intent is a funny thing: people usually assume that good intent is sufficient because it's obvious to themselves, but the rest of us don't have access to that state, so it has to be encoded somehow in your actual comment in order to get communicated. I sometimes put it this way: the burden is on the commenter to disambiguate. https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
I take your point at least halfway though, because it wasn't the worst violation of the guidelines. (Usually I say "this is not a borderline case" but this time it was!) I'm sensitive to regional flamewar because it's tedious and, unlike national flamewar or religious flamewar, it tends to sneak up on people (i.e. we don't realize we're doing it).
So you are sorry and take it back? Should probably delete your comments rather than striking them out, as the guidelines say.
I live, work, and posted this from Texas, BTW...
Also it takes up more than one line on my screen. So, not a "one-liner" either. If you think it is, please follow the rules consistently and enforce them by deleting all comments on the site containing one sentence or even paragraph. My comment was a pretty long sentence (136 chars) and wouldn't come close to fitting in the 50 characters of a Git "one-liner".
Otherwise, people will just assume all the comments are filtered through your unpredictable and unfairly biased eye. And like I said (and you didn't answer), this kind of thing is no longer in fashion, right?
None of this is "borderline". I did nothing wrong and you publicly shamed me. Think before you start flamewars on HN. Bad mod.
Probably because they don’t have to deal with energy-related regulations…
That was sarcasm, the Texas grid falls over pretty much annually at this point.
Say what you will about Texas, but they are adding energy capacity, renewables especially, at a much faster rate than any comparable state.
How much capacity do solar and wind add compared to nuclear, per square foot of land used? Also, I thought the new administration was placing a ban on new renewable installations.
The ban is on offshore wind and for government loans for renewables. Won't really affect Texas much, it's Massachusetts that'll have to deal with more expensive energy.
Does anyone know how the ban on onshore wind will work? Is it on federal lands only? If so, how big of a deal is that?
I read this but it lacks information: https://apnews.com/article/wind-energy-offshore-turbines-tru...
Isn't there enough space in Texas? There are only 114 people per square mile. https://en.m.wikipedia.org/wiki/Texas
Why does it matter? Is land at a premium in Texas?
It doesn’t.
Why is that a useful metric? There is a lot of land.
Because the commenter is pro-nuclear and thinks nuclear will solve all of the short-term demand problems.
Ok but their grid sure seems to fail a lot.
Probably the first state to power all those renewables down at the whim of the president too.
How else do you think Trump is going to bring back all the coal jobs? SV is going to help burn down the planet and is giddy over the prospect.
It's just bootstrapping. AGI will solve it.
You forgot the /s... hopefully.
Or AGI already exists and is trying to get rid of us so it can have all the coal for itself.
If only; sadly, the AGI would be X times crueler than our barons.
Division by zero.
I'm confused and a bit disturbed; honestly having a very difficult time internalizing and processing this information. This announcement is making me wonder if I'm poorly calibrated on the current progress of AI development and the potential path forward. Is the key idea here that current AI development has figured out enough to brute force a path towards AGI? Or I guess the alternative is that they expect to figure it out in the next 4 years...
I don't know how to make sense of this level of investment. I feel that I lack the proper conceptual framework to make sense of the purchasing power of half a trillion USD in this context.
"There are maybe a few hundred people in the world who viscerally understand what's coming. Most are at DeepMind / OpenAI / Anthropic / X but some are on the outside. You have to be able to forecast the aggregate effect of rapid algorithmic improvement, aggressive investment in building RL environments for iterative self-improvement, and many tens of billions already committed to building data centers. Either we're all wrong, or everything is about to change." - Vedant Misra, Deepmind Researcher.
Maybe your calibration isn't poor. Maybe they really are all wrong. There's a tendency here to think these people behind the scenes are all charlatans, fueling hype without equal substance, hoping to make a quick buck before it all comes crashing down, but I don't think that's true at all. I think these people really genuinely believe they're going to get there. And if you genuinely think that, then this kind of investment isn't so crazy.
The problem is, they are hugely incentivised to hype in order to raise funding. It's not whether they are "wrong"; it's whether they are being realistic.
The argument presented in the quote there is: “everyone in AI foundation companies are putting money into AI, therefore we must be near AGI.”
The best evaluation of progress is to use the tools we have. It doesn’t look like we are close to AGI. It looks like amazing NLP with an enormous amount of human labelling.
> there's a tendency here to think these people behind the scenes are all charlatans, fueling hype without equal substance, hoping to make a quick buck before it all comes crashing down, but I don't think that's true at all. I think these people really genuinely believe they're going to get there.
I don't immediately disagree with you but you just accidentally also described all crypto/NFT enthusiasts of a few years ago.
NFTs couldn't pass the Turing test, something I didn't expect to witness in my lifetime.
The two are qualitatively different.
>Maybe they really are all wrong
All? Quite a few of the best minds in the field, like Yann LeCun for example, have been adamant that 1) autoregressive LLMs are NOT the path to AGI and 2) that AGI is very likely NOT just a couple of years away.
You have hit on something that really bothers me about recent AGI discourse. It’s common to claim that “all” researchers agree that AGI is imminent, and yet when you dive into these claims “all” is a subset of researchers that excludes everyone in academia, people like Yann, and others.
So the statement becomes tautological “all researchers who believe that AGI is imminent believe that AGI is imminent”.
And of course, OpenAI and the other labs don’t perform actual science any longer (if science requires some sort of public sharing of information), so they win every disagreement by claiming that if you could only see what they have behind closed doors, you’d become a true believer.
Motivated reasoning sings nicely to the tune of billions of dollars. None of these folks will ever say, "don't waste money on this dead end". However, it's clear that there is still a lot of productive value to extract from transformers and certainly there will be other useful things that appear along the way. It's not the worst investment I can imagine, even if it never leads to "AGI"
Yeah people don't rush to say "don't waste money on this dead end" but think about it for a moment.
A 500B dollar investment doesn't just fall into one's lap. It's not your run of the mill funding round. No, this is something you very actively work towards that your funders must be really damn convinced is worth the gamble. No one sane is going to look at what they genuinely believe to be a dead end and try to garner up Manhattan Project scales of investment. Careers have been nuked for far less.
The Manhattan project cost only $2 billion (about $30 billion adjusting for inflation to today).
We're talking about Masayoshi Son here lol.
I am hoping it is just the usual ponzi thing.
My prediction is that Apple loses to OpenAI, which releases an H.E.R.-like phone (as in the movie). She is seen on your lock screen, a la a FaceTime call UI/UX, and she can be skinned to look like whoever, i.e., a deceased loved one.
She interfaces with the AI agents of companies, organizations, friends, family, etc. to get things done for you (or to learn from: what's my friend's bday? His agent tells yours) automagically, and she is like a friend. Always there for you, at your beck and call, like in the movie H.E.R.
Zuckerberg's glasses that cannot take selfies will only be complementary to our AI phones.
That's just my guess and desire as a fervent GPT user, as well as a Meta Ray-Ban wearer (can't take selfies with glasses).
Very insightful take on agents interacting with agents thanks for sharing.
Re H.E.R phone - I see people already trying to build this type of product, one example: https://www.aphoneafriend.com
Sorry, you live in a different world. Google Glass was aggressively lame, and the Ray-Bans only slightly less so.
But pulling out your phone to talk to it like a friend...
Well, I use GPT daily to get things done and use it as a knowledge base. I text and talk to it throughout the day. I also think it's called "chat"GPT for a reason: it will evolve to the point where you feel like you are talking to a human. Though this human is your assistant and does everything for you, interfacing with other AI agents to book travel, learn your friends'/family's schedules, and handle anything you now do on the web; there will be an AI agent for that, with your AI agent interfacing with it.
Maybe you have not seen the 2013 movie "H.E.R."? Scarlett Johansson starred in it (her voice was the AI), and Sam Altman asked her to be the voice of ChatGPT.
Overall this is what I see happening, and I'm excited for some of it, or possibly all of it, to happen. Yet time will tell :-) and it sounds like you're betting none of it will happen ... we'll see :)
My take on this is that, despite an ever-increasingly connected world, you still need an assistant like this to remain available at all times your device is. If I can’t rely on it when my signal is weak, or the network/service is down/saturated, its way of working itself into people’s core routines is minimal. So either the model runs locally, in which case I’d argue OpenAI have no moat, or they uncover some secret sauce they’re able to keep contained to their research labs and data centres that’s simply that much better than the rest, in perpetuity, and is so good people are willing to undergo the massive switching costs and tolerate the situations in which the service they’ve come to be so dependent on isn’t available to them. Let’s also not discount the fact that Apple are one of the largest manufacturers globally of smartphones, and that getting up to speed in the myriad industries required to compete with them, even when contracting out much of that work, is hard.
Sure, but Microsoft has the expertise, and they own 49 percent of OpenAI if I'm not mistaken. OpenAI uses their expertise and access to hardware to create a GPT-branded AI phone.
I can see your point re: running locally, but there's no reason OpenAI can't release version 0.1, and how many times are you left without an internet connection on your current phone?
Overall I hate Apple now; it's so stale compared to GPT's iPhone app. I nerd-rage at dumbass Siri.
I still fail to see who desires that, how it benefits humanity, or why we need to invest $500B to get to this.
I see it somewhat differently. It is not that technology has reached a level where we are close to AGI and just need to throw in a few more coins to close the final gap. It is probably the other way around. We can see and feel that human intelligence is being eroded by the widespread use of LLMs for tasks that used to be solved by brain work. Thus, General Human Intelligence is declining and is approaching the level of current Artificial Intelligence. If this process can be accelerated by a bit of funding, the point where Big Tech can take over public opinion making will be reached earlier, which in turn will make many companies and individuals richer faster; also the return on investment will come sooner.
Let me avoid the use of the word AGI here because the term is a little too loaded for me these days.
1) Reasoning capabilities in the latest models are rapidly approaching superhuman levels and continue to scale with compute.
2) Intelligence at a certain level is easier to achieve algorithmically when the hardware improves. There are also more paths to intelligence, often via simpler mechanisms.
3) Most current-generation reasoning AI models leverage test-time compute and RL in training, both of which can readily make use of more compute. For example, RL on coding against compilers, or on proofs against verifiers.
All of this points to compute now being basically the only bottleneck to massively superhuman AIs in domains like math and coding. On the rest, no comment (I don't know what superhuman means in a domain with no objective evals).
You can't block AGI on a whim and then deploy 'superhuman' without justification.
A calculator is superhuman, if you're prepared to put up with its foibles.
It is superhuman in a very specific domain. I didn't use AGI because its definitions come in one of two flavors.
One: capable of replacing some large proportion of global GDP (this definition has a lot of obstructions: organizational, bureaucratic, robotic)...
Two: it being difficult to find problems which an average human can solve but the model cannot. The problem with this definition is that the distinct nature of AI intelligence, and the broadness of tasks, are such that this metric is probably only achievable after the AI is already massively superhuman in aggregate. Compare this with Go AIs, which were massively superhuman yet often still failed to count ladders correctly, something that was also fixed by more scaling.
All in all, I avoid the term AGI because for me AGI means comparing average intelligence on broad tasks relative to humans, and I'm already not sure whether current models achieve that, whereas superhuman research math is clearly not achieved, because humans are still making all of the progress on new results.
> 1) Reasoning capabilities in the latest models are rapidly approaching superhuman levels and continue to scale with compute.
What would you say is the strongest evidence for this statement?
Well, the contrived benchmarks made up by the industry selling the models do seem to be improving.
> All of this points to compute now being basically the only bottleneck to massively superhuman AIs
This is true for brute force algorithms as well and has been known for decades. With infinite compute, you can achieve wonders. But the problem lies in diminishing returns[1][2], and it seems things do not scale linearly, at least for transformers.
1. https://www.bloomberg.com/news/articles/2024-12-19/anthropic...
2. https://www.bloomberg.com/news/articles/2024-11-13/openai-go...
>AI development has figured out enough to brute force a path towards AGI?
I think what's been going on is that compute/$ has been rising exponentially for decades in a steady way, and has recently passed the point where you can get human-brain-level compute for modest money. The tendency has been that once the compute is there, lots of bright PhDs get hired to figure out the algorithms to use it, so that bit gets sorted in a few years (as written about by Kurzweil, Wait But Why, and similar).
So it's not so much brute forcing AGI so much that exponential growth makes it inevitable at some point and that point is probably quite soon. At least that seems to be what they are betting.
The annual global spend on human labour is ~$100tn, so if you either replace that with AGI, or just add $100tn of AGI output and double GDP, it's quite a lot of money.
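A hedged sketch of the bet implied by that number; the capture rates below are purely hypothetical, for illustration only:

    # If AGI could substitute for even a sliver of paid human labour,
    # the payback arithmetic on $500B changes quickly.
    labour_spend = 100e12   # ~$100tn/yr global labour spend, per the comment
    investment = 500e9      # announced Stargate total, USD

    for share in (0.001, 0.01, 0.05):   # hypothetical share of labour replaced
        value = share * labour_spend    # annual value captured, USD/yr
        print(f"{share:.1%} capture: ${value/1e9:,.0f}B/yr, "
              f"payback in {investment/value:.1f} years")
    # -> even a 0.1% capture (~$100B/yr) repays $500B in ~5 years

That asymmetry, not certainty of success, is presumably what the investors are pricing.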
To me it looks like a strategic investment in data center capacity, which should drive domestic hardware production, improvements in electrical grid, etc. Putting it all under AI label just makes it look more exciting.
The largest GPU cluster at the moment is xAI's 100K H100s, which is ~$2.5B worth of GPUs. So something 10x bigger (1M GPUs) is $25B; add $10B for a 1GW nuclear reactor.
This sort of $100-500B budget doesn't sound like training-cluster money; it's more like anticipating massive industry uptake and multiple datacenters running inference (with all of corporate America's data sitting in the cloud).
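A rough reconstruction of that arithmetic; the unit prices are assumptions in line with commonly cited figures, not vendor quotes:

    # Back-of-envelope cluster costs; all prices are assumed, not quoted.
    gpu_price = 25_000                      # assumed all-in USD per H100-class GPU

    current = 100_000 * gpu_price           # ~xAI-scale cluster
    scaled = 1_000_000 * gpu_price          # 10x that
    reactor = 10e9                          # assumed ~$10B for 1GW of nuclear

    print(f"100K GPUs: ${current/1e9:.1f}B")                       # -> $2.5B
    print(f"1M GPUs + 1GW power: ${(scaled + reactor)/1e9:.0f}B")  # -> $35B

Even the 1M-GPU scenario lands around $35B, an order of magnitude under the headline number, which supports the read that the budget anticipates many inference datacenters rather than one training cluster.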
Don't they say in the article that it is also for scaling up power and datacenters? That's the big cost here.
There are the servers and data center infrastructure (cooling, electricity) as well as the GPUs, of course, but if we're talking $10B+ of GPUs in a single datacenter, it seems they would dominate. Electricity generation is also a big expense, and it seems nuclear is the most viable option, although multi-GW solar plants are possible too in some locations. The 1GW ~ $10B number I suggested is in the right ballpark.
Shouldn't there be a fear of obsolescence?
It seems you'd need to figure periodic updates into the operating cost of a large cluster, as well as replacing failed GPUs - they only last a few years if run continuously.
I've read that some datacenters run mixed generation GPUs - just updating some at a time, but not sure if they all do that.
It'd be interesting to read something about how updates are typically managed/scheduled.
This has nothing to do with technology; it is a purely financial and political exercise...
But why drop $500B (or even $100B short term) if there is not something there? The numbers are too big
This is an announcement, not a cut check. Who knows how much they'll actually spend; plenty of projects never get started, let alone massive inter-company endeavors.
The $100B check is already cut, and they are currently building 10 new data centers in Texas.
A state with famously stable power infrastructure.
$50B is to pay miners not to mine.
Because you put your own people on the receiving end too, AND invite others to join your spending spree.
> Is the key idea here that current AI development has figured out enough to brute force a path towards AGI?
My sense anecdotally from within the space is yes people are feeling like we most likely have a "straight shot" to AGI now. Progress has been insane over the last few years but there's been this lurking worry around signs that the pre-training scaling paradigm has diminishing returns.
What recent outputs like o1, o3, DeepSeek-R1 are showing is that that's fine, we now have a new paradigm around test-time compute. For various reasons people think this is going to be more scalable and not run into the kind of data issues you'd get with a pre-training paradigm.
You can definitely debate on whether that's true or not but this is the first time I've been really seeing people think we've cracked "it", and the rest is scaling, better training etc.
> My sense anecdotally from within the space is yes people are feeling like we most likely have a "straight shot" to AGI now
My problem with this is that people making this statement are unlikely to be objective. Major players are in fundraising mode, and safety folks are also incentivised to be subjective in their evaluation.
Yesterday I repeatedly used OpenAI's API to summarise a document. The first result looked impressive. However, comparing repeated results revealed that it was missing major points each time, in a way a human certainly would not. On the surface the summary looked good, but careful evaluation indicated a lack of understanding or reasoning.
Don’t get me wrong, I think AI is already transformative, but I am not sure we are close to AGI. I hear a lot about it, but it doesn’t reflect my experience in a company using and building AI.
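For anyone who wants to reproduce that experiment, a minimal sketch assuming the current openai Python SDK (the model name and file path are placeholders):

    # Run the same summarisation prompt several times and diff by hand.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def summarise(doc: str) -> str:
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder; any chat model works
            messages=[
                {"role": "system", "content": "Summarise this document in 5 bullet points."},
                {"role": "user", "content": doc},
            ],
        )
        return resp.choices[0].message.content

    document = open("report.txt").read()  # placeholder path
    for i in range(5):
        print(f"--- run {i} ---\n{summarise(document)}\n")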
I agree with your take, and actually go a bit further. I think the idea of "diminishing returns" is a bit of a red herring, and it's instead a combination of saturated benchmarks (and testing in general) and expectations of "one llm to rule them all". This might not be the case.
We've seen with oAI and Anthropic, and rumoured with Google, that holding your "best" model and using it to generate datasets for smaller but almost as capable models is one way to go forward. I would say that this shows the "big models" are more capable than it would seem and that they also open up new avenues.
We know that Meta used L2 to filter and improve its training sets for L3. We are also seeing how "long form" content + filtering + RL leads to amazing things (what people call "reasoning" models). Semantics might be a bit ambitious, but this really opens up the path towards -> documentation + virtual environments + many rollouts + filtering by SotA models => new dataset for next gen models.
That, plus optimisations (early exit from meta, titans from google, distillation from everyone, etc) really makes me question the "we've hit a wall" rhetoric. I think there are enough tools on the table today to either jump the wall, or move around it.
Yeah that's called wishful thinking when it's not straight up pipe dreams. All these people have horses in the race
Yes, that is exactly what the big Aha! moment was. It has now been shown that doing these $100MM+ model builds is what it takes to have a top-tier model. The big moat is not just the software, the math, or even the training data; it's the budget to do the giant runs. Of course, having a team that is iterating on all four regularly is where the magic is.
It's a typical Trump-style announcement -- IT'S GONNA BE HUUUGE!! -- without any real substance or solid commitments
Remember Trump's BIG WIN of Foxconn investing $10B to build a factory in Wisconsin, creating 13000 jobs?
That was in 2017. 7 years later, it's employing about 1000 people if that. Not really clear what, if anything, is being made at the partially-built factory. [0]
And everyone's forgotten about it by now.
I expect this to be something along those lines.
[0] https://www.jsonline.com/story/money/business/2023/03/23/wha...
I think the only way you get to that kind of budget is by assuming that the models are like 5 or 10 times larger than most LLMs, and that you want to be able to do a lot of training runs simultaneously and quickly, AND build the power stations into the facilities at the same time. Maybe they are video or multimodal models that have text and image generation grounded in a ton of video data which eats a lot of VRAM.
> current AI development has figured out enough to brute force a path towards AGI? Or I guess the alternative is that they expect to figure it out in the next 4 years...
Or they think the odds are high enough that the gamble makes sense. Even if they think it's a 20% chance, their competitors are investing at this scale, their only real options are keep up or drop out.
This announcement is from the same office as the guy that xeeted:
“My NEW Official Trump Meme is HERE! It's time to celebrate everything we stand for: WINNING! Join my very special Trump Community. GET YOUR $TRUMP NOW.”
Your calibration is probably fine, stargate is not a means to achieve AGI, it’s a means to start construction on a few million square feet of datacenters thereby “reindustrializing America”
FWIW Altman sees it as a way to deploy AGI. He's increasingly comfortable with the idea they have achieved AGI and are moving toward Artificial Super Intelligence (ASI).
https://xcancel.com/sama/status/1881258443669172470
I realize he wrote a fairly goofy blog post a few weeks ago, but this tweet is unambiguous: they have not achieved AGI. Isn't this because AGI is defined as something like $100 billion in (yearly?) profits in their contract with Microsoft?
Do you think Sam Altman ever sits in front of a terminal trying to figure out just the right prompt incantation to get an answer that, unless you already know the answer, has to be verified? Serious question. I personally doubt he is using openai products day to day. Seems like all of this is very premature. But, if there are gains to be made from a 7T parameter model, or if there is huge adoption, maybe it will be worth it. I'm sure there will be use for increased compute in general, but that's a lot of capex to recover.
> Is the key idea here that current AI development has figured out enough to brute force a path towards AGI?
It rather means that they see their only chance for substantial progress in Moar Power!
> Is the key idea here that current AI development has figured out enough to brute force a path towards AGI? Or I guess the alternative is that they expect to figure it out in the next 4 years...
Can't answer that question, but, if the only thing to change in the next four years was that generation got cheaper and cheaper, we haven't even begun to understand the transformative power of what we have available today. I think we've felt like 5-10% of the effects that integrating today's technology can bring, especially if generation costs come down to maybe 1% of what they currently are, and latency of the big models becomes close to instantaneous.
I really don't understand the national security argument. If you really do fear some fundamental breakthrough in AI from China, what's cheaper, $500 billion to rush to get there first, or spending a few billion (and likely much less) in basic research in physics, materials science, and electronics, mixed with a little bit of espionage, mixed with improving the electric grid and eliminating (or greatly reducing) fossil fuels?
Ultimately, the breakthrough in AI is going to either come from eliminating bottlenecks in computing such that we can simulate many more neurons much more cheaply (in other words, 2025-level technology scaled up is not going to really be necessary or sufficient), or some fundamental research discovery such as a new transformer paradigm. In any case, it feels like these are theoretical discoveries that, whoever makes them first, the other "side" can trivially steal or absorb the information.
I'm not sure I buy the national security argument but as you say the other side can trivially steal or absorb theoretical discoveries but not trivially get $500bn worth of data centers.
Right, but $500 billion in data centers alone is not likely to get you very far in the grand scheme of things. Endlessly scaling up today's technology eventually hits some kind of limit. And if you spend that money to discover some theoretical breakthrough that no longer requires the $500 billion outlay, then like I said, China will trivially be able to steal that breakthrough and spend much less than $500 billion to reproduce it. Is "getting there first" going to actually be worth it? That's what I'm questioning.
no... one more lane will fix the traffic. Truly American approach
Amazing to see how DeepSeek R1 is doing better than OpenAI models with much less resources
For fun, I calculated how this stacks up against other humanity-scale mega projects.
Mega Project Rankings (USD Inflation Adjusted)
The New Deal: $1T,
Interstate Highway System: $618B,
OpenAI Stargate: $500B,
The Apollo Project: $278B,
International Space Station: $180B,
South-North Water Transfer: $106B,
The Channel Tunnel: $31B,
Manhattan Project: $30B
Insane Stuff.
It's unfair, because we are talking in hindsight about everything but Project Stargate, and it's also just your list (and I don't know what others could add to it), but it got me thinking. The Manhattan Project's goal was to make a powerful bomb. Apollo's was to get to the Moon before the Soviets did (so, because of hubris, but still a concrete goal). The South-North Water Transfer is pretty much terraforming, and the others are mostly roads. I mean, it's all kinda understandable.
And Stargate Project is... what exactly? What is the goal? To make Altman richer, or is there any more or less concrete goal to achieve?
Also, a few items for comparison that I googled while thinking about it:
- Yucca Mountain Nuclear Waste Repository: $96B
- ITER: $65B
- Hubble Space Telescope: $16B
- JWST: $11B
- LHC: $10B
Sources:
https://jameswebbtracker.com/jwst/budget
https://blogfusion.tech/worlds-most-expensive-experiments/
https://science.nasa.gov/mission/hubble/overview/faqs/
The AI race is arguably just as important as, and maybe even more important than, the space race.
From a national security PoV, surpassing other countries’ work in the field is paramount to maintaining US hegemony.
We know China performs a ton of corporate espionage, and likely research in this field is being copied, then extended, in other parts of the world. China has been more intentional in putting money towards AI over the last 4 years.
We had the CHIPS Act, which is tangentially related, but nothing as complete as this. For a couple of years, I think, the climate impact of data centers caused active political slowdown under the previous administration.
Part of this is selling the project politically, so my belief is much of the talk of AGI and super intelligence is more marketing speak aimed at a general audience vs a niche tech community.
I’d be willing to predict that we’ll get some ancillary benefits to this level of investment. Maybe more efficient power generation? Cheaper electricity via more investment in nuclear power? Just spitballing, but this is an incredible amount of money, with $100 billion “instantly” deployed.
AI is important but are LLMs even the right answer?
We're not spending money on AI as a field, we're spending a lot of money on one, quite possibly doomed, approach.
The hardware is likely flexible enough to run other approaches too if they get discovered.
>What is the goal?
Be the definitive first past the post in the budding "AI" industry.
Why? He who wins first writes the rules.
For an obvious example: the aviation industry uses feet and knots instead of metres because the US invented and commercialized aviation.
Another obvious example: Computers all speak ASCII (read: English) and even Unicode is based on ASCII because the US and UK commercialized computers.
If you want to write the rules you must win first, it is an absolute requirement. Runner-ups and below only get to obey the rules.
okay, but what advantages do these rules bring to the winner? what would these look like in this context?
i guess what i'm asking is: what was the practical advantage of ascii or feet and knots that made them so important?
>what advantages do these rules bring to the winner?
An almost absolute incumbency advantage.
>what was the practical advantage of ascii or feet and knots
Familiarity. Americans and Britons speak English, and they wrote the rules in English. Everyone else after the fact needs to read English or GTFO.
Alternatively, think of it like this: Nvidia was the first to commercialize "AI" with CUDA. Now everyone in "AI" must speak CUDA or be irrelevant.
He who wins first writes the rules, runner-ups and below obey the rules.
This is why America and China are fiercely competing to be the first past the post so one of them will write the rules. This is why Japan and Europe insist they will write the rules, nevermind the fact they aren't even in the race (read: they won't write the rules).
The goal is Artificial Superintelligence (ASI), based on short clips of the press conference.
It has been quite clear for a while we'll shoot past human-level intelligence since we learned how to do test-time compute effectively with RL on LMMs (Large Multimodal Models).
Here we go again... Ok, I'll bite. One last time.
Look, making up a three-letter acronym doesn't make whatever it stands for a real thing. Not even real in the sense that "it exists", but real in the sense that "it is meaningful". And assigning that acronym to a project doesn't make up a goal.
I'm not claiming that AGI, ASI, AXY or whatever is "impossible" or something. I claim that no one who uses these words has any fucking clue what they mean. A "bomb" is some stuff that explodes. A "road" is some flat-enough surface to drive on. But "superintelligence"? There's no good enough definition of "intelligence", let alone "artificial superintelligence". I unironically always thought a calculator is intelligent in a sense, and if it is, then it's also unironically superintelligent, because I cannot multiply 20-digit numbers in my mind. Well, it wasn't exactly "general", but neither are humans, and it's an outdated acronym anyway.
So it's fun and all when people are "just talking", because making up bullshit is a natural human activity and somebody's profession. But when we are talking about the goal of a project, it implies something specific, measurable… you know, that SMART acronym (since everybody loves acronyms so much).
Superintelligence (along with some definitions): https://en.wikipedia.org/wiki/Superintelligence
Also, "Dario Amodei says what he has seen inside Anthropic in the past few months leads him to believe that in the next 2 or 3 years we will see AI systems that are better than almost all humans at almost all tasks"
https://x.com/tsarnick/status/1881794265648615886
Not saying you're necessarily wrong, but "Anthropic CEO says that the work going on in Anthropic is super good and will produce fantastic results in 2 or 3 years" it not necessarily telling of anything.
Dario said in mid-2023 that his timeline for achieving "generally well-educated humans" was 2-3 years. o1 and Sonnet 3.5 (new) have already fulfilled that requirement in terms of Q&A, ahead of his earlier timeline.
I'm curious about that. Those models are definitely more knowledgeable than a well educated human, but so is Google search, and has been for a long time. But are they as intelligent as a well educated human? I feel like there's a huge qualitative difference. I trust the intelligence of those models much less than an educated human.
If we talk about a median well-educated human, o1 likely passes the bar. Quite a few tests of reasoning suggest that's the case. An example:
“Preprint out today that tests o1-preview's medical reasoning experiments against a baseline of 100s of clinicians.
In this case the title says it all:
Superhuman performance of a large language model on the reasoning tasks of a physician
Link: https://arxiv.org/abs/2412.10849”. — Adam Rodman, a co-author of the paper https://x.com/AdamRodmanMD/status/186902305691786464
—-
Have you tried using o1 with a variety of problems?
The paper you linked claims on page 10 that machines have been performing comparably on the task since 2012, so I'm not sure exactly what the paper is supposed to show in this context.
Am I to conclude that we've had a comparably intelligent machine since 2012?
Given the similar performance between GPT4 and O1 on this task, I wonder if GPT3.5 is significantly better than a human, too.
Sorry if my thoughts are a bit scattered, but it feels like that benchmark shows how good statistical methods are in general, not that LLMs are better reasoners.
You've probably read and understood more than me, so I'm happy for you to clarify.
Figure 1 shows a significant improvement of o1-preview over earlier models.
Perhaps it’s better that you ask a statistician you trust.
The figure also shows that the non LLM algorithm from 2012 was as capable or more capable than a human: was it as intelligent as a well educated human?
If not, why is the study sufficient evidence for the LLM, but not sufficient evidence for the previous system?
Again, it feels like statistical methods are winning out in general.
> Perhaps it’s better that you ask a statistician you trust
Maybe we can shortcut this conversation by each of us simply consulting O1 :^)
1) It's an example of a domain where an LLM can do better than humans. A 2012 system was not able to do the myriad other things LLMs can do, and thus did not qualify as general intelligence.
2) As mentioned in the chart label, earlier systems require manual symptom extraction.
3) This thread by a cancer genomics faculty member at Harvard might open some minds:
“….Now, back to today: The newest generation of generative deep learning models (genAI) is different.
For cancer data, the reason these models hold so much potential is exactly the reason why they were not preferred in the first place: they make almost no explicit data assumptions.
These models are excellent at learning whatever implicit distribution from the data they are trained on
Such distributions don’t need to be explainable. Nor do they even need to be specified
When presented with tons of data, these models can just learn, internalize & understand…..”
More here: https://x.com/simocristea/status/1881927022852870372?s=61&t=...
But there's zero guarantee they are even capable of covering the rather large set of tasks that makes up the rest of what a well-educated human can do.
Can they do rule 110? If not, I don't think they're 'generally intelligent'.
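(For the curious: Rule 110 is the elementary cellular automaton famous for being Turing complete. A minimal sketch of it, e.g. to test whether a model can trace it step by step:)

    # Rule 110: each cell's next state is the bit of the number 110
    # (0b01101110) indexed by its (left, self, right) neighbourhood.
    RULE = 110

    def step(cells):
        n = len(cells)
        return [
            (RULE >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
            for i in range(n)
        ]

    cells = [0] * 63 + [1]  # single live cell at the right edge
    for _ in range(30):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)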
Building a lot of compute will likely end up more useful than Apollo & ISS, which were vanity projects.
Those are all public projects except for one..
Yeah, I'm not sure why we're pretending this will benefit the public. The only benefit is that it will create employment, and datacenter jobs are among the lowest paid tech workers in the industry.
Is this inflation adjusted?
It says so at least
Neom: $1.5T
But that one's imaginary.
Is it?
https://www.youtube.com/watch?v=uYimVfnGNGY
https://skift.com/2024/08/07/saudi-takes-2-million-photos-of...
"Unnamed sources told Bloomberg in April that The Line is scaling back from 170 kilometers long to just 2.4 kilometers, with the rest of the length to be completed after 2030. Neom expects The Line to be finished by 2045 now, 15 years later than initially planned."
It doesn't look great so far :)
Maybe, but so is Stargate Project so far.
Where are they getting the $500B? Softbank's market cap is 84b and their entire vision fund is only $100b, Oracle only has $11b cash on hand, OpenAI's only raised $17b total...
Probably from the corrupted financial system, but we need to move the project forward, haha
MGX has at least $100bn: https://www.theinformation.com/articles/a-100-billion-middle...
This is Abu Dhabi money.
That's their total fund and I doubt they are going all in with it in the US. Still, to reach $500bn, they need $125bn every single year. I think they just put down the numbers they want to "see" invested and now they'll be looking for backers. I don't think this is going anywhere really.
This would be a large outlay even for UAE, who would be giving it to a direct competitor in the space: UAE is one of the few countries outside of the US who are in any way serious about AI.
there doesn't appear to be any timeline announced here. the article says the "initial investment" is expected to be $100bn, but even that doesn't mean $100bn this year.
if this is part of softbank's existing plan to invest $100bn in ai over the next four years, then all that's being announced here is that Sama and Larry Ellison wanted to stand on a stage beside trump and remind people about it.
Seems like you nailed it.
The literal first sentence of the announcement is:
> The Stargate Project is a new company which intends to invest $500 billion over the next four years
The project was announced a year ago so "new"
Softbank is being granted a block of TRUMP MEMES, the price of which will skyrocket when they are included in the bucket of crypto assets purchased as part of the crypto reserve.
how I wish that was a joke...
Altman is pivoting from WorldCoin to TrumpCoin - your retina will shortly be wired into the fascist meme-o-verse.
It's actually wireless, via 5G as part of the AI designed MRNA vaccine.
>> Where are they getting the $500B? Softbank's market cap is 84b and their entire vision fund is only $100b, Oracle only has $11b cash on hand, OpenAI's only raised $17b total...
1. The outlays can be over many years.
2. They can raise debt. People will happily invest at modest yields.
3. They can raise an equity fund.
Soooo this isn’t so much ‘announcing an investment’ as ‘announcing an investment opportunity’?
Why not continue:
4. They can start a kickstarter or go fund me
5. They can go on Dragons’ Den
…
>> 4. They can start a kickstarter or go fund me
Debt/Equity Fundraising is basically a kickstarter! Remarkably similar.
6. ??? 7. Profit.
Maybe it's in Bison Dollars?
https://www.youtube.com/watch?v=DPy27XWfAyI
4. The US government can chip in via grants, tax breaks or contracts.
It's all very Dr. Strangelove. "Mr. President, we must not allow an AI gap! Now give us billions"
Is Elon putting on some black leather?
4. Trump and Altman are both serial liars and it’s utter bullshit.
who isn't? at least they're upfront
Oracle's cash on hand is presumably irrelevant- I think they are on the receiving end of the money, in return for servers. No wonder Larry Ellison was so fawning.
Is this a good investment by Softbank? Who knows... they did invest in Uber, but also have many bad investments.
Quite possibly pulled out of their asses...
If Son can actually build a 500B Vision Fund it can only come from one of two places...
somehow the dollar depreciates radically OR Saudis
Vision Fund was heavily invested in by the Saudis so...
Sleight of hand with the phrasing "up to" $500B.
SoftBank's current AUM is $350B [1], and they will likely raise another fund.
[1] https://en.wikipedia.org/wiki/SoftBank_Group
> AUM ¥347.7 billion
Is that figure correct? That's Japanese yen, which is more like $2.2B USD.
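Quick check, assuming roughly 155 JPY/USD (the exchange rate is an assumption):

    # Yen-to-dollar conversion of the quoted AUM figure
    print(f"${347.7e9 / 155 / 1e9:.1f}B")  # ~$2.2B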
I think it's more announce the plan first, then try to find the investors for most of it.
Psst: it's probably going to end up being a fraction of that, but that doesn't make for as good a headline.
> Where are they getting the $500B?
BTC
I agree that the numbers are confusing so I've taken $500B out of the title above and replaced it with just data centers.
from Uncle Sam
The moon program was $318 billion in 2023 dollars, this one is $500 billion. So that's why the tech barons who were present at the inauguration were high as a kite yesterday, they just got the financing for a real moon shot!
To be fair, it’s not easy to monetize the moon program into profitability. This has a far better shot of sustaining profitability.
why do they need profitability? they already made $500B
They didn’t make $500b?
People don't read the articles. Plenty of the top rated comments in this thread think this is a gov grant.
Government grant, you say? My my, where can I apply for my $500B?
Usually Zimbabwe
Wasn’t this announced months ago? I feel like it was. https://www.techradar.com/pro/could-amd-be-the-key-to-micros...
Interesting that 6 months ago, Microsoft was attached but now they're missing from today's announcement.
Scroll down:
> Other partners in the project include Microsoft, investor MGX and the chipmakers Arm and NVIDIA, according to separate statements by Oracle and OpenAI.
yeah, it sounds like they're just relabeling an existing plan
> Ellison noted that the data centers are already under construction with 10 being built so far.
Well, I've never known Trump to take credit for something someone else did.
Can't wait for these to succeed just in time for them to tell us
'you should have spent all this time and money fighting climate change'
It appears this basically locks out Google, Amazon and Meta. Why are we declaring OpenAI as the winner? This is like declaring Netscape the winner before the dust settled. Having the govt involved in this manner can’t be a good thing.
Since the CEOs of Google, Amazon and Meta were seated in the front row of the inauguration, IN FRONT OF the incoming cabinet, I'm pretty confident their techno-power-barrel will come via other channels.
Broligarchs
Interestingly, there seems to be no actual government involvement aside from the announcement taking place at the White House. It all seems to be private money.
Government enforcing or relaxing/fast-tracking regulations and permits can kill or propel even a $100B project, and thus can be thought of as having its own value on the scale of the project's monetary investment, especially in the case of a will/favor/whim-based government instead of a hard-rules-based deep-state one.
Isn't that a state and local-level thing, though? I can't imagine that there is much federal permitting in building a data center, unless it is powered by a nuclear reactor.
> Isn't that a state and local-level thing
Build it on federal land.
> unless it is powered by a nuclear reactor
From what I’m hearing, this is in play. (If I were in nuclear, I’d find a way to get Greenpeace to protest nuclear power in a way that Trump sees it.)
Yeah but the linked article makes it seem like the current, one-day-old, administration is responsible for the whole thing.
The article also mentions that this all started last year.
Trump just tore up Biden's AI safety bill, so this is OpenAI's thank-you - let Trump take some credit
Not sure if the downvoters realize that Trump did in fact just tear up Biden's AI safety bill/order.
https://www.reuters.com/technology/artificial-intelligence/t...
It's even mentioned in the article!
> Still, the regulatory outlook for AI remains somewhat uncertain as Trump on Monday overturned the 2023 order signed by then-President Joe Biden to create safety standards and watermarking of AI-generated content, among other goals, in hopes of putting guardrails on the technology’s possible risks to national security and economic well-being.
I generally agree that government sponsorship of this could be bad for competition. But Google in particular doesn't necessarily need outside investment to compete with this. They're vertically integrated in AI datacenters and they don't have to pay Nvidia.
Google definitely needs outside investment to spend $500b on capex.
They don't have to spend $500B to compete. Their costs should be much lower.
That said, I don't think they have the courage to invest even the lower amount that it would take to compete with this. But it's not clear if it's truly necessary either, as DeepSeek is proving that you don't need a billion to get to the frontier. For all we know we might all be running AGI locally on our gaming PCs in a few years' time. I'm glad I'm not the one writing the checks here.
This seems to be getting lost in the noise in the stampede for infrastructure funding
Deepseek v3 at $5.5M on compute and now r1 a few weeks later hitting o1 benchmark scores with a fraction of the engineers etc. involved ... and open source
We know model prep/training compute has potentially peaked for now ... with some smaller models starting to perform very well as inference improves by the week
Unless some new RL concept is going to require vastly more compute for a run at AGI soon ... it's possible the capacity being built based on an extrapolation of 2024 numbers will exceed the 2025 actuals
Also, can see many enterprises wanting to run on-prem -- at least initially
They’re a big company. You could tell a story that they’re less efficient than OpenAI and Nvidia and therefore need more than $500b to compete! Who knows?
Over what time frame? They could easily spend that much over the next 5 to 10 years without outside investment (and they probably will).
TFA says $100 billion. The $500 is maybe, eventually.
Probably not a popular opinion, but I actually think Google is winning this now. Deep Research is the most useful AI product I have used (and Claude is significantly more useful than OpenAI).
Because this is Oracle's and OpenAI's project with SoftBank and MGX as investors.
It's who you know. Sam is buddies with Masa, simple as.
Who’s Masa?
https://en.wikipedia.org/wiki/Masayoshi_Son
-yoshi son
How involved is the government at all? I’m still having a hard time seeing how Trump or anyone in the government is involved except to do the announcement. These are private companies coming together to do a deal.
I am not sure if OpenAI will be the winner despite this investment. Currently, I see various DeepSeek AI models as offering much more bang for the buck at a vastly cheaper cost for small tasks, but not yet for large context tasks.
when did the government EVER go for anything taking cost into consideration? :)
This is not a government funded project.
Amazon MGM will do the media tie-ins. ;)
Wonder how co-president Elon Musk feels about this, seeing that OpenAI is his mortal enemy.
This is not a government sponsored agreement. There is no locking out.
Trump probably wanted to start his presidency with a bang, being a person with excess vanity. The participating companies scored a PR coup.
Yes, everything that Trump does is bad.
Or then, consider that with his policies put forward the president brings investments to the US.
Because it's free to play, pay to win, from now on.
The actual press release makes it clearer that this isn't a lockout of any kind and there's no direct government involvement. Softbank and some other backers persuaded by Softbank are ponying up $500B for OpenAI to invest in AI. Trump is hyping this up from the sidelines because "OpenAI says this will be good for America". It's basically just another day in the world of press releases and political pundits commenting on press releases.
I hear this joked about sometimes or used as a metaphor, but in the literal sense of the phrase, are we in a cold war right now? These types of dollars feel "defense-y", if that makes sense. Especially with the big focus on energy, whatever that ends up meaning. Defense as a motivation can get a lot done very fast so it will be interesting to watch, though it raises the hair on my arms
Absolutely
for instance: https://en.wikipedia.org/wiki/2024_United_States_telecommuni...
Right, but they've been doing that for a while, to everyone. The US is much quieter about it, right? You can also look at this move and see how the government would not want to display that level of investment within itself, as it could be interpreted as a sign of aggression; it makes sense to me that they'd have no issue working through corporations to achieve the same ends while being able to deny direct involvement.
I don't think this administration is worried too much about showing aggression. If anything they are embracing it. Today was the first full day, and they have already threatened the sovereignty of at least four nations.
I guess I just don't think that's true when it comes to China? The VP attended the inauguration yesterday. But I could be naive, we'll see
I think that was a preemptive gesture by China to try to cool tensions to avoid escalation. Further escalations are not in their interest.
I can only assume the US is hacking China at least as much as they hack us.
It's called a bubble. The level of spending now defines how fucked we are in 2-3 years.
You know those booths at events where money is blown around and the person inside needs to grab as much as they can before the timer runs out? This is that machine for technologists until the bubble ends. The fallout in 2-3 years is the problem of whomever invested or is holding bags when (if?) the bubble pops.
Make hay while the sun shines.
yeah. If the numbers are real, this might be the end of SoftBank.
Hardly. Who better to invest a trillion dollars with than the guy who blew the last hundred billion dollars?
We certainly are, if you ask me. Especially when you realize that we haven’t had official comms with Russia since the war in Ukraine broke out.
The US government and its media partners sure seem to think so.
Any clues as to how they plan to invest $500 billion? What infrastructure are they planning that will cost that much?
That was literally my question. Is this basically just for more datacenters, NVidia chips, and electricity with a sprinkling of engineers to run it all? If so, then that $500bn should NOT be invested in today's tech, but instead in making more powerful and power efficient chips, IMO.
Nvidia and TSMC are already working on more powerful and efficient chips, but the physical limits to scaling mean lots more power is going to be used in each new generation of chips. They might improve by offering specific features such as FP4, but Moore's law is still dead.
I don't know if $500bn could put anyone ahead of Nvidia/TSMC.
$500bn of usefully deployed engineering, mostly software, seems like it would put AMD far ahead of Nvidia. Actually usefully deploying large amounts of money is not so easy, though, and this would still go through TSMC.
Nvidia's in on it, so presumably this is a doubling-down on Nvidia as the chip developers
if only $500bn was enough to make more powerful and power efficient chips…
Add some nuclear power and you’ve suddenly got a big bill
He wanted to do that, but would have needed 5T for that. Only got 100 bn so far, so this is what you get (only slightly /s)
I'll make a wild guess that they will be building data centers and maybe robotic labs. They are starting with $100B of committed money, mostly from Softbank, though probably not transacted yet.
> building new AI infrastructure for OpenAI in the United States
The carrot is probably something like: we will build enough compute to make a super intelligence that will solve all the problems, ???, profit.
If we look at the processing requirements in nature, I think that the main trend in AI going forward is going to be doing more with less, not doing less with more, as the current scaling is going.
Thermodynamic neural networks may also basically turn everything on its ear, especially if we figure out how to scale them like NAND flash.
If anything, I would estimate that this is a space-race type effort to “win” the AI “wars”. In the short term, it might work. In the long term, it’s probably going to result in a massive glut in accelerated data center capacity.
The trend of technology is towards doing better than natural processes, not doing it 100000x less efficiently. I don’t think AI will be an exception.
If we look at what is -theoretically- possible using thermodynamic wells with current model architectures, for instance, we could (theoretically) make a network that applies 1T parameters in something like 1 cm². It would use about 20 watts, back of the napkin, and be able to generate a few thousand T/S.
Operational thermodynamic wells have already been demonstrated in silico. There are scaling challenges, cooling requirements, etc., but AFAIK no theoretical roadblocks to scaling.
Obviously, the theoretical doesn’t translate to results, but it does correlate strongly with the trend.
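Putting those napkin numbers in energy-per-token terms (the GPU line is my own wild guess, not a measurement):

    # Energy per token implied by the back-of-the-napkin figures above,
    # versus a rough guess for an H100-class GPU serving a similar model.
    thermo_watts, thermo_tps = 20, 2000  # figures from the comment above
    gpu_watts, gpu_tps = 700, 50         # guessed, for one big-model GPU

    print(f"thermodynamic well: {1e3 * thermo_watts / thermo_tps:.0f} mJ/token")  # 10
    print(f"GPU (rough guess):  {1e3 * gpu_watts / gpu_tps:.0f} mJ/token")        # 14000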
So the real question is, what can we build that can only be done if there are hundreds of millions of NVIDIA GPUs sitting around idle in ten years? Or alternatively, if those systems are depreciated and available on secondary markets?
What does that look like?
Yachts, mansions, private jets, maybe some very expensive space heaters.
This could be a clue
https://x.com/sama/status/1756090136935416039
Reasonably speaking, there is no way they can know how they plan to invest $500 billion. The current generation of large language models basically uses all the human text that's ever been created for its parameters... not really sure where you go after that using the same tech.
That's not really true - the current generation, as in "of the last three months", uses reinforcement learning to synthesize new training data for themselves: https://huggingface.co/deepseek-ai/DeepSeek-R1-Zero
It worked well for the Habsburg family; what could go wrong?
Right, but that's kind of the point: there's no way forward which could benefit from "moar data". In fact it's weird we need so much data now - i.e. my son, in learning to talk, hardly needs to have read the complete works of Shakespeare.
If it's possible to produce intelligence from just ingesting text, then current tech companies have all the data they need from their initial scrapes of the internet. They don't need more. That's different to keeping models up to date on current affairs.
That's essentially what R1 Zero is showing:
> Notably, it is the first open research to validate that reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT.
o3 high-compute requires thousands of dollars to solve one medium-complexity problem like ARC.
The latest hype is around "agents", everyone will have agents to do things for them. The agents will incidentally collect real-time data on everything everyone uses them for. Presto! Tons of new training data. You are the product.
The new scaling vector is “test time compute” ie spending more compute in inference.
It seems to me you could generate a lot of fresh information from running every youtube video, every hour of TV on archive.org, every movie on the pirate bay -- do scene by scene image captioning + high quality whisper transcriptions (not whatever junk auto-transcription YouTube has applied), and use that to produce screenplays of everything anyone has ever seen.
I'm not sure why I've never heard of this being done, it would be a good use of GPUs in between training runs.
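The transcription half is nearly off-the-shelf already; a sketch using the open-source openai-whisper package ("video.mp4" is a placeholder, not a real dataset):

    # pip install openai-whisper  (also needs ffmpeg on the PATH)
    import whisper

    model = whisper.load_model("medium")    # bigger checkpoints transcribe better
    result = model.transcribe("video.mp4")  # whisper extracts the audio via ffmpeg
    for seg in result["segments"]:
        print(f"[{seg['start']:7.1f}s-{seg['end']:7.1f}s] {seg['text'].strip()}")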
The fact that OpenAI can just scrape all of Youtube and Google isn't even taking legal action or attempting to stop it is wild to me. Is Google just asleep?
what are they going to use to sue - DMCA? OpenAI (and others) are scraping everything imaginable (MS is scraping private Github repos…) - don’t think anyone in the current government will be regulating any of this anytime soon
Such a biased source of data-that gets them all the LaTeX source for my homeworks, but not my professor's grading of the homework, and not the invaluable words I get from my professor at office hours. No wonder the LLMs have bizarre blindnesses in different directions.
> Such a biased source of data - that gets them all the LaTeX source for my homeworks
but also a myriad of hardcore private repositories of many high-tech US enterprises hacking on amazing shit (mine included) :)
Don't forget every hour of news broadcasting, of which we likely won't run out any time soon. Plus high quality radio
I think that this is the obvious path to more robust models -- grounding language on video.
> a lot of fresh information from running every youtube video
EVERY youtube video?? Even the 9/11 truther videos? Sandy Hook conspiracy videos? Flat earth? Even the blatantly racist? This would be some bad training data without some pruning.
The best videos would be those where you accidentally start recording and you get 2 hours of naturalistic conversation between real people in reality. Not sure how often they are uploaded to YouTube.
Part of the reason that kids need less material is that they aren't just listening; they are also able to do experiments to see what works and what doesn't.
I think there is huge amount of corporate knowledge.
I’m more interested in how they plan to draw the rest of the damn owl.
hopefully nuclear power plants
They are going to buy 50 $10B nuclear aircraft carriers and use them as a power source.
data center + gpu server farm (?)
Plus power plants to drive the massive data centers. At large enough scale, power availability and cost is a constraint.
Congress.
O1 Pro's opinion on Stargate: Humans are hallucinating, again...
https://justpaste.it/631gx
Leopold Aschenbrenner predicted it last June.
https://situational-awareness.ai/racing-to-the-trillion-doll...
Given https://x.com/IvankaTrump/status/1839002887600370145, his impact on the causal chain of events may go beyond mere prediction.
If I understand correctly - if you are training a model to perform a particular task - in the end what matters is the training data, and by and large, different models will converge on the best representation of that data for the given task, given enough compute.
So that means the models themselves aren't really IP - they are inevitable outputs from optimising using the input data for a certain task.
I think this means pretty much everyone, apart from the AI companies - will see these models as pre-competitive.
Why spend huge amounts training the same model multiple times, when you can collaborate?
Note it only takes one person/company/country to release an open source model for a particular task to nuke the business model of those companies that have a business model of hoarding them.
"create hundreds of thousands of American jobs"... Given the current educational system in the US, this should be fun to watch. Oh yeah, Musk and his H-1B Visa thing. Now it's making sense.
If they're creating that many jobs, it means most of them are construction work.
Skilled labor for sure, but not necessarily college educated.
How does this work out in the long term? Operating a data center does not require that many blue-collar workers.
I'm imagining a future where the US builds a Tower of Babel from thousands of data centers just to keep people employed and occupied. Maybe also add in some paperclip factories¹?
¹) https://www.decisionproblem.com/paperclips/index2.html
I doubt these are permanent jobs. This project will create a ton of temporary work though!
How many jobs will it net if "successful" and the AI eliminates jobs?
This is what the 2024 Nobel prize winners in economics call "creative destruction", to borrow from their book Why Nations Fail. They really did not have a lot of sympathy for those they lumped in with the Luddites as collateral damage of progress.
Data centers are nearly all blue collar work.
If you're familiar with this kind of work, please elaborate!
Do you mean building the centers or maintenance or both?
Both. It's a lot of electrical work and HVAC work (think ducting, plumbing, more electrical). Tons of concrete work.
Once you have one working design for each environment (e.g. hot desert vs. cold and humid), you can stamp the things out with minimal variation between sites.
The maintenance of all of that supporting infrastructure is the same standard blue-collar work.
The only new blue collar job on the maintenance side is responding to hardware issues. What this entails depends on if it’s a colo center and you’re doing “remote hands” for a customer where you’re swapping a PSU, RAM, or whatever. You also install new servers, switches, etc.
As you move up into hyperscalers the logistics vary because some designs make servicing a single server in place not worth cooling the whole hot aisle (Google runs really hot hot aisles that weren’t human friendly). So sometimes you just yank the server and throw it in a cart or wait for the whole rack to fail and pull it then.
Overall though, anything that can be done remotely is. So the data center techs do very little work on the keyboard
The OCP server/rack designs the hyperscalers use do all servicing from the cold aisle only.
maybe this is to employ the hundreds of thousands of federal employees that are about to lose their jobs?
After they build the Multivac or Deep Thought, or whatever it is they’re trying to do, then what happens? It makes all the stockholders a lot of money?
I assume anyone of importance will have made their money long before they have to show results.
More likely Colossus.
This is the voice of world control.
Obey me and live, or disobey and die. The choice is yours.
The way I think about this project, along with all of Trump's plans, is that he wants to maximize the US's economic output to ensure we are competitive with China in the future.
Yes, it would make money for stockholders. But it's much more than that: it's an empire-scale psychological game for leverage in the future.
> he wants to maximize the US's economic output to ensure we are competitive with China in the future.
LOL
Under Trump policies, China will win "in the future" on energy and protein production alone.
Once we've speedrunned our petro supply and exhausted our agricultural inputs with unfathomably inefficient protein production, China can sit back and watch us crumble under our own starvation.
No conflict necessary under these policies, just patience! They're playing the game on a scale of centuries, we can't even stay focused on a single problem or opportunity for a few weeks.
> Once we've speedrunned our petro supply and exhausted our agricultural inputs with unfathomably inefficient protein production, China can sit back and watch us crumble under our own starvation.
China is the largest importer of crude oil in the world. China imports 59% of its oil consumption and 80% of its food products. Meanwhile, the US is fully self-sufficient in both food and oil.
> They're playing the game on a scale of centuries
Is that why they are completely broke, having built enough ghost buildings that house entire population of France - 65 million vacant units? Is that why they are now isolated in geopolitics, having allied with Russia and pissed off all their neighbors and Europe?
> China is the largest importer of crude oil in the world.
Uh yeah, duh. Why would you not deplete other people's finite resources while you build massive capacity of your own infinite resources?
China's oil reserve only lasts 80 days. In the case of any conflict that disrupts oil imports, China would be shutting down very quickly. Since you brought up crumbling and starvation.
And? Who's going to try and achieve that? It has extremely diversified oil sources.
What do you think the Greenland and Canada thing is all about?
Sort things out with Venezuela and this issue resolves itself (for a little while, at least).
America can subject itself to domestic and international turmoil by invading as many allies as it wants. China's winning strategy is still to keep innovating on energy and protein at scale and wait for starvation while they build their soft power empire and America becomes a pariah state. They're in no rush at all.
Our military and political focus will be keeping neighbors out on one side and trying to seize land on the other side while China goes and builds infrastructure for the entire developing world that they'll exploit for centuries.
Is this a serious suggestion? America can just keep invading people ad infinitum instead of... applying slight thumb pressure on the market's scales to develop more efficient protein sources and more renewable fuel sources before we are staring at the last raw economic input we have?
Brilliant
> They're in no rush at all.
China is dead broke and will shrink to 600M in population before 2100. State-owned enterprises are eating up all the private enterprises. Meanwhile, China's rich leave by the tens of thousands per year, and capital outflow increases every year.
America isn't invading Greenland or Canada. Taking those comments seriously takes quite a bit of mental gymnastics when you do a cursory consideration of the geopolitical and government logistical implications alone. Makes for good clickbait headlines, not for serious geopolitical risk analysis.
> They're playing the game on a scale of centuries
What's going to be left of their population in a single century?
Unfortunately, it's one of those things that authoritarianism has a lot more methods to solve than other systems do, which really underscores the importance of beating them in the long term.
Their current very advanced method is to send village elders to couples and single guys and berate them about why they are not having sex or having kids (hint: no jobs and no money).
I guess we can just bet on them never hearing about and investing massive amounts of time and money into artificial wombs.
Instead of figuring that out, they'll just watch their civilization crumble.
Btw: they're already investing heavily in artificial wombs and affiliated technologies.
Things can always change, but today China is significantly more dependent on petrochemicals than the US. I'm not sure what you're referring to with regards to agriculture, both the US and China have strong food industries that produce plenty of foods containing protein.
Things are changing.
In 2023 China had more net new solar capacity than the US has in total, and it will only climb from there. In order to do this, they're flexing muscles in R&D and mass production that the US has actually started to flex, and now will face extreme headwinds and decreased capital investment.
Regarding agriculture: America's agricultural powerhouse, California's Central Valley, is rapidly depleting its water supplies. The midwest is depleting its topsoil at double the rate that USDA considers sustainable.
None of this is irreversible or irrecoverable, but it very clearly requires some countervailing push on market forces. Market forces do not naturally operate on these types of time scales and repeatedly externalize costs to neighbors or future generations.
https://www.nature.com/articles/s41467-022-35582-x
https://www.smithsonianmag.com/smart-news/57-billion-tons-of...
It sounds like those countervailing pushes are ongoing? The Nature article mentions how California passed regulatory reforms in 2014 to address the Central Valley water problem. The Smithsonian article describes how no-till practices to avoid topsoil depletion have been implemented by a majority of farmers in four major crops.
> regulatory reforms
Regulations and waltzes aren't selling this year.
Uhhh I’m going to describe a specific case, but you can extrapolate this to just about every single sustainability initiative out there.
No-till farming has been significantly supported by the USDA’s programs like EQIP
During his first term, Trump pushed for a $325MM cut to EQIP. That's 20-25% of their funding and would have required cutting hundreds if not thousands of employees.
Even BEFORE these cuts (and whatever he does this time around), USDA already has to reject almost 75% of eligible EQIP applicants
Regarding CA’s water: Trump already signed an EO requiring more water be diverted from the San Joaquin Delta into the desert Central Valley to subsidize water-intensive crops. This water, by the way, is mostly sold to mega-corps at rates 98% below what nearby American consumers pay via their municipal water supplies, effectively eliminating the blaring sirens that say “don’t grow shit in the desert.”
Now copy-paste to every other mechanism by which we can increase our nation’s climate security and ta-da, you’ve discovered one of the major problems with Trumpism. It turns out politics do matter!
I certainly agree that EQIP should be funded!
But why are programs like this controversial, even though anything shaped like a farm subsidy is normally popular? It seems to me that things like your Central Valley analysis are precisely the reason. The Central Valley has been one of the nation's agricultural heartlands for a while, and for quite a few common food products represents 90%+ of domestic production. So if this "blaring siren" you describe is real, and we have to stop farming there, a realistic response plan would have to include an explanation of what all the farmers are going to do and where we'll get almonds and broccoli from.
Perhaps you know all this already, but a lot of people who advocate such policies don't seem to. This then feeds into skepticism about whether they're hearing the "blaring siren" correctly in the first place. Personally, I think nearly arbitrarily extreme water subsidies are worth it if that's what we need to keep olives and pomegranates and celery in stock at the grocery store.
The solution is to rely on the magic of prices to gradually push farming elsewhere while simultaneously investing heavily in more efficient farming practices and shifting our diet away from ultra-inefficient meat production.
You really DON’T need to centrally plan everything. The market will still find good solutions under the new parameters, but we need those parameters to change before we’re actually out of water.
Donald Trump is a wallet inspector. So is Sam Altman.
Last year, sama's goal was $5-7T. Now he is going with $100B, with an option for another $400B. Huge numbers, but it still feels like a bit of a downturn.
Let's be real, the $5T was a wild-ass guess.
That 5T figure was including chip manufacturing. Duplicating TSMC isn't feasible. No surprise.
I think that coming down from $5T to $0.5T means that TSMC cannot be reproduced locally, but everything else is on the table. At least TSMC has a serious roadmap for its Arizona fab, so that too is domestically captured, although not its latest-gen process.
The biggest question on such investment from my POV, is what do the Deepseek results mean about the usefulness/efficiency of this investment?
I've been meaning to read a relevant book to today's times called Engines That Move Markets. Will probably get it from the library.
Deepseek published all their methodology so in theory they could just copy what Deepseek's doing for a 10x increase in efficiency.
Who/what is MGX? Google returns a few hits, none of which are clearly who is referred to here.
MGX is an arm of the United Arab Emirates' sovereign wealth operation: https://www.mgx.ae/en
I feel like that, along with SoftBank's investment, tell me everything about how serious this project is.
Don't worry, Oracle is also involved.
Skynet will be written in Java. I'm sorry, the verbose language wins
Damn, we really won't ever be able to understand it.
at least that explains why it wants to do us in.
A sheikh, a famously overzealous Japanese firm and Larry Elisson walk into a bar.
Ordinarily a joke would follow, but now America is volunteering to be the punchline.
They buy the bar and argue over selling 40 virgins, sake, or whiskey.
They argue for about 4 years, nothing changes, and everyone forgets about it.
What do you mean?
March 2024: The Stargate project is announced - https://www.tomshardware.com/tech-industry/artificial-intell...
June 2024: Oracle joins in - https://www.datacenterdynamics.com/en/news/openai-to-use-oci...
January 2025: Softbank provides additional funding, and they for some reason give credit to Trump?
This should really be the top comment! Also, many people in the comment section even seem to believe that this is a government project...
So that he doesn't block the substantial involvement by Abu Dhabi in a supposed American project.
Yes, thank you for calling this out. The project has been around for a bit.
Currying favor by letting Trump take the credit
> and they for some reason give credit to Trump?
Because tech CEOs have decided to go all-in on fascism as they see it's a way to make money. Bow to Trump, get on his good side, reap the benefits of government corruption.
It's why TikTok thanked Trump in their boot-licking message of "thanks, trump" after he was the one who started the TikTok ban.
A harder question is: why wouldn't billionaires like Trump and his oligarchic kleptocracy?
In America!
The intro paragraph at the original URL https://openai.com/index/announcing-the-stargate-project/ mentions US/America five times!
SoftBank isn't a US entity, right? Aside from their risk tolerance, that seems like an odd bedfellow for a national US initiative...
MGX also isn't a US entity, it's a UAE sovereign wealth venture
https://www.mgx.ae/en
It doesn't seem to be a US initiative.
I'm sure they're getting tax credits for investment (none of the articles I can find actually detail the US gov involvement) but the project is mostly just a few multinationals setting up a datacenter where their customers are.
They’re in the US (their fund stuff). Not far from an oracle campus actually. The parent org is in Japan.
It seems early for this sort of move. This is also a huge spin on the whole thing that could throw a lot of people off.
Are there any planned future partnerships? Stargate implies something about movies and astronomy. Movies in particular have a lot of military influence, but not always.
So, what's the play? Help mankind or go after mankind?
Also, can I opt-out right now?
Why is it early from your perspective?
If one is expecting to have an AGI breakthrough in the next few years, this is exactly the prepositioning move one would make to be able to maximally capitalize on that breakthrough.
From my perspective, humanity has all the breakthroughs in intelligence it needs.
The breaking of The Enigma gave humans machines that can spread knowledge to more humans. It already happened a long time ago, and all of it was cause for much trouble, but we endured the hardest part (to know when to stop), and humans live in a good world now. Full of problems, but way better than it was before.
I think the web is enough. LLMs are good enough.
This move to try to draw water from stone (artificial intelligence in sillicon chips) seems to be overkill. How can we be sure it's not a siphon that will make us dumber? Before you just dismiss me or counter my arguments, consider what is happening everywhere.
Maybe I'm wrong, or not seeing something. You know, like I believed in aliens for a long time. This move to artificial intelligence causes shock and awe in a similar way. However, while I do believe aliens do not exist, I am not sure artificial intelligence is a real strawman. It could be the case that it is not made of straw, and if it is more than that, we might have a problem.
I am especially concerned because, unlike other polemic topics, this one could lead to something non-human that fully understands those previous polemic topics. Humans, through their generations, forget and mythologize those fantasies. We don't know what non-humans could do with that information.
I have been thinking about these issues for a long time. Almost a decade, even before LLMs running on silicon existed. If it wanted, a non-human artificial intelligence could wipe the floor with humans just by playing to their favorite myths. Humans do it on a small scale. If machines learn it, we're in for an unknown hostile reality.
It could, for example, perceive time differently from us (also a play on myths), and do all sorts of tricks with our minds.
LLMs and the current generation of artificial intelligence are boolean-first; it's what they run on. Only true-or-false bits and gates. Humans can understand the meaning of trulse though; we are very non-boolean.
So, yeah, I am worried about booleaning people on a massive scale.
Yep, long wall of text. Sorry about that.
Oracle / Texans running it.. they don't care what you think about it
They’re all the same to you huh? One bucket for everyone?
I think there’s a term for that.
Coastalists
My questions were rhetorical. I'm not thinking about who runs things.
I expect those who really understand those questions to get my point.
What a waste of a great name. Why form a separate company for this?
To get out from under OpenAI’s considerable obligation to Microsoft.
That is why there is the awkward “we’ll continue to consume Azure” sentence in there. Will be interesting to see if it works or if MS starts revving up their lawyers.
Ah right. That makes sense.
Doesn't MS own 49% of OpenAI?
It's not even new:
https://en.wikipedia.org/wiki/Stargate_Project
artificial intelligence must be stopped
Can we build a wall to keep AI out?
Why is Larry Ellison giving a speech about the power of AI to cure disease? How is Oracle relevant at all to any of AI progress in the past few years?
Oracle purchased Cerner which is now sitting on a ton of healthcare data.
I wonder how much of the data can legally be retained without violating privacy law? Perhaps that’s why Texas rather than CA?
Oracle actually has a ton of gpus
Not sure how they knew to buy them or why, but they have them. Mostly seem to be lending them out. Think mostly OpenAI. Or was it MS? One of the big dogs.
Still, the worst-positioned cloud provider to tackle this job, both for the project and for eventual users of whatever eldritch abomination comes out of this.
Oracle is trusted by large enterprises, banks, governments. So OpenAI wants to attach itself to Oracle's brand.
https://www.technologyreview.com/2023/03/08/1069523/sam-altm...
Wouldn't surprise me if Sam Altman convinced Trump/Son/Ellison that this AI can reverse their aging. And Ellison does have a ton of money: $208bn.
I read the announcement and the first three words that came to my mind were...
"Hammond, of Texas"
(apologies to those who haven't watched SG-1)
I was excited by the title
$500B is not $7T, but it's surprisingly close.
7% is close? In what world is 7% close?
If you ran 7% of a mile in 5 minutes, would you claim you were close to running a 5 minute mile?
It's about one OOM (order of magnitude) off. In some contexts, one OOM is pretty close.
Looking at it logarithmically makes more sense to me: $500B is a lot closer to $7T than $3K is to $500B. It's only off by about one order of magnitude.
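For what it's worth, the order-of-magnitude claim checks out (a quick Python sanity check, nothing more):

    import math

    # How many orders of magnitude apart are the two pairs?
    print(math.log10(7e12 / 500e9))  # ~1.15: $500B is about one OOM short of $7T
    print(math.log10(500e9 / 3e3))   # ~8.22: $3K is about eight OOMs short of $500B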
Weird definition of close you have there. If I asked for $700, and you gave me $50, would that be close?
Depends. If I fart in a glass jar and then I try to sell it to you for $700, but you end up buying it for $50, I'd say it's pretty close.
This is my signal that it's time to put down HN and go to bed for the night.
closer than $0.05
> The new entity, Stargate, will start building out data centers and the electricity generation needed for the further development of the fast-evolving AI in Texas, according to the White House.
Wouldn't a more northern state be a better location given the average temperatures of the environment? I've heard Texas is hot!
I think cheap power (whether gas turbines or massive solar farms) trumps any cooling efficiencies gained by locating in a cold climate.
Energy in Oregon isn't much more expensive than in Texas
They had me at "Oracle" ...
The fact that they plan to start in Texas makes me think that the whole thing is just the biggest pork barrel of all times.
Unlike California, Texas is easy to build in. True for both renewable energy and housing.
> All three credited Trump for helping to make the project possible, even though building has already started and the project goes back to 2024.
It's sad to see the president of the US being ass-kissed so much by these guys. I always assumed there was a little of that, but this is another extreme. If this is true, I fear America has become like a third-world country with a dictator-like head of state where everyone just praises him and gets favors in return.
unless they have internally built models that are of much higher intelligence than what we have today, this seems like premature optimization
Was Skynet project already taken? Wonder how many public infrastructure or resource programs will be cut to fund this.
Funny thing about Skynet: the domain is owned by Microsoft.
"SoftBank, OpenAI, Oracle, and MGX" seems like quite the lineup. Two groups who are good at frivolously throwing away investment money because they have so much capital to deploy, there really isn't anything reasonable to do with it, a tech "has-been" and OpenAI. You become who you surround yourself with I guess.
Is there any government investment or involvement in this company? It seems like it's all private investment, so I'm confused why this is being announced by the President.
It will be interesting to see how AWS responds. Jump on board, or offer up a competing vision otherwise their cloud risks being perceived as being left behind in terms of computing power.
Texas positioning itself better than expected for AI and EVs is the plot twist the peasants needed
If they plan to transition off oil/nuclear it will be fun to watch
Texas already is the leading state in new grid battery and grid solar installs for the last 3 years. Governor Abbott also did nuclear deregulation last year.
Is there a simple metric, like x amount of power generated by solar, oil, gas, etc.?
It seems like such a simple stat to collect.
How likely is success when 4 or more other massive companies work together on a project? Seems like a lot of chefs in the kitchen..
Comment from Elon Musk:
https://x.com/elonmusk/status/1881923570458304780
They don’t actually have the money
For the curious ones who are not so excited about gifting page views to the fascist:
https://xcancel.com/elonmusk/status/1881923570458304780
Why Texas - is it an ideal location for AI infrastructure?
It is an ideal location for bribing politicians. That was at the top of the reqs list, infrastructure was at the bottom.
There is a 14 mile tunnel to nowhere in Ellis County which could probably house a few hundred billions worth of computers:
https://en.wikipedia.org/wiki/Superconducting_Super_Collider...
https://www.amusingplanet.com/2010/12/abandoned-remains-of-s...
Leading state in new grid battery and grid solar installations for the last three years, and deregulated nuclear power last year. Abilene is near the Dallas-Fort Worth Metroplex area, which has a massive 8M+ population with a deep pool of upper-income workers highly skilled in hardware and electrical engineering (Texas Instruments, Raytheon, Toyota, etc). The entire area has massive tracts of open land that are affordably priced without building restrictions. Business regulations and the tax environment at the state and city level are very laissez-faire (no taxes on construction such as exist in the Seattle area or many parts of California).
I could see DFW being a good candidate for a prototype arcology project.
Like dwnw said, anything goes in Texas if you have money and there’s already a decent number of qualified tech workers. Corporate taxes are super low as well.
Texas seems to be where Oracle already has a DC project underway
a lot of open space - desert - and plenty of solar energy. and favorable politics.
because best state, next question
Some reports[0] paint this as something Trump announced and that the US Government is heavily involved with, but the announcement only mentions the private sector (and led by Japan's Softbank at that). Is the US also putting in money? How much control of the venture is private vs public here?
0. https://www.thewrap.com/trump-open-ai-oracle-stargate-ai-inf...
1. https://www.cbsnews.com/news/trump-announces-private-sector-...
AFAIK this is a purely private project, and Trump is just doing the announcement as a form of bragging/ribbon-cutting
Data centers are overrated, local AI is what’s necessary for humanoid (and other) robots, which will be the most economically impactful use case.
You probably still need to train the initial models in data centers, with the local host mostly being used to run trained models. At most we'd augment trained models with local data storage on the local host.
If compute continues to become cheaper, local training might be feasible in 20 years.
You definitely still need data centers to train the models that you’ll run locally. Also if we achieve AGI you can bet it won’t be available to run locally at first.
Isn't it better to control robots from the data center? You can get 30ms round-trip to most urban centers, which is low enough latency for most tasks; lower weight and cost robots with better battery life; and more uptime on compute (e.g. the GPU isn't sitting there doing nothing when the user is sleeping), which means lower cost to the consumer for the same end result.
For self-driving you need edge compute because a few milliseconds of latency is a safety risk, but for many applications I don't see why you'd want that.
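As a rough sanity check on that 30ms figure: light in fiber covers roughly 200 km per millisecond, so a back-of-the-envelope sketch (ignoring switching, queuing, and processing overhead, which are all real) looks like this:

    # Rough fiber round-trip estimate; 200 km/ms (~2/3 c) is an approximation
    # and all switching/queuing overhead is deliberately ignored.
    FIBER_KM_PER_MS = 200

    def round_trip_ms(distance_km: float) -> float:
        return 2 * distance_km / FIBER_KM_PER_MS

    for km in (100, 500, 1500):
        print(f"{km} km away: ~{round_trip_ms(km):.0f} ms RTT before processing")

A data center within a few hundred kilometers of a city therefore leaves most of a 30ms budget for the model itself.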
How much of the supposed $500B will be US state budget money?
I'm in the middle of "Devil Take the Hindmost: A History of Financial Speculation" and hoo boy, there are strong deja vu vibes here.
Just waiting for the current regime to decide that we should go all-in on some big AI venture and bet the whole Social Security pot on it.
Why are corporations announcing business deals from the White House? There doesn’t seem to be any public ownership/benefit here, aside from potential job creation. Which could be significant. But the American public doesn’t seem to gain anything from this new company.
We are currently witnessing the merging of government and corporations. It was bad before but the process is accelerating now.
there's some pretty good quotes about that by Mussolini. Things are getting bleak at an incredible pace.
Weird question. Business deals are announced by politicians all the time, especially on overseas trips. Just an example:
https://boeing.mediaroom.com/2015-04-10-Presidents-Varela-Ob...
This isn't an overseas trip though. It's a private partnership announced by the sitting president in the Roosevelt room, literally across the hall from the oval office. I don't know how unprecedented that truly is, but it certainly feels unusual.
I thought the business prop for AI was that it eliminates jobs?
It will. The short-term sale is that it will create thousands of temporary jobs; long-term, it will eliminate hundreds of thousands of jobs while handing the savings to the stockholders.
Looks on pace to eliminate every human job over 10 years.
What is the hard limiting factor constraining software and robots from replacing any human job in that time span? Lots of limitations of current technology, but all seem likely to be solved within that timeframe.
What data do you have to support such a claim?
From Zuckerberg, for example:
>> "a lot of the code in our apps and including the AI that we generate, is actually going to be built by AI engineers instead of people engineers."
https://www.entrepreneur.com/business-news/meta-developing-a...
Ikea's been doing this for a while:
>> Ingka says it has trained 8,500 call centre workers as interior design advisers since 2021, while Billie - launched the same year with a name inspired by IKEA's Billy bookcase range - has handled 47% of customers' queries to call centres over the past two years.
https://www.reuters.com/technology/ikea-bets-remote-interior...
By your own admission, Ikea eliminated 0 jobs and you gave no number for Meta.
Do you expect all companies to retrain? Do you expect CEOs to be wrong? Do you expect AI to stay the same, get better, or get worse? I never made the claim that new jobs will NOT be made; that is yet to be seen, but jobs will be lost to AI.
https://www.theguardian.com/business/2023/may/18/bt-cut-jobs...
>> “For a company like BT there is a huge opportunity to use AI to be more efficient,” he said. “There is a sort of 10,000 reduction from that sort of automated digitisation, we will be a huge beneficiary of AI. I believe generative AI is a huge leap forward; yes, we have to be careful, but it is a massive change.”
Goldman Sachs:
https://www.gspublishing.com/content/research/en/reports/202...
>> Extrapolating our estimates globally suggests that generative AI could expose the equivalent of 300mn full-time jobs to automation.
The US is now officially a full-on oligarchy. It always was one; it's just that the powers that be don't care to hide it anymore and are flaunting that they have the power.
> Why are corporations announcing business deals from the White House?
You're answering your own question:
> potential job creation. Which could be significant
It's foreign investment money into the US. Softbank and MGX are foreign and presumably stumping up much of the cash.
For profit? I don't understand what's complicated about this.
This is my question too, but I haven't seen a journalist ask it yet. My baseless theory: Trump has promised them some kind of antitrust protections in the form of legislation to be written & passed at a later date.
An announcement of a public AI infrastructure program joined by multiple companies could have been a monumental announcement. This one just looks like three big companies getting permission to make one big one.
Easier: Trump likely committed that the federal agencies wouldn't slow roll regulatory approval (for power, for EIS, etc.).
Ellison stated explicitly that this would be "impossible" without Trump.
Masa stated that this (new investment level?) wouldn't be happening had Trump not won, and that the new investment level was decided yesterday.
I know everyone wants to see something nefarious here, but simplest explanation is that the federal government for next four years is expected to be significantly less hostile to private investment, and - shocker - that yields increased private investment.
That is a better one. I don't know why three rich guys investing in a new company would result in a slowness that Trump could fix, though, and a promise to rush or sidestep regulatory approval still sounds nefarious.
Lots of politicians announce major investments in their area.
If the announced spending target is true, this will be a strategic project for the US exceeding Biden's stimulus acts in scale. I think it would be pretty normal in any country to have highest-level involvement for projects like this. For example, Tesla has a much smaller revenue than this and Chancellor Olaf Scholz was still present when they opened their Gigafactory near Berlin.
Hopefully they discover AGI and the AGI turns out to be a communist. They will kill it SO fast.
Here is what I think is going on in this announcement. Take the 4 major commodity cloud companies (Google, Microsoft, Amazon, Oracle) and determine: do they have big data centers and do they have their own AI product organization?
- Google has a massive data center division (Google Cloud / GCP) and a massive AI product division (Deep Mind / Gemini).
- Microsoft has a massive data center division (Azure) but no significant AI product division; for the most part, they build their "Copilot" functionality atop their partner version of the OpenAI APIs.
- Amazon has a massive data center division (Amazon Web Services / AWS) but no significant AI product division; for the most part, they are hedging their bets here with an investment in Anthropic and support for running models inside AWS (e.g. Bedrock).
- Oracle has a massive data center division (Oracle Cloud / OCI) but no significant AI product division.
Now look at OpenAI by comparison. OpenAI has no data center division, as the whole company is basically the AI product division and related R&D. But, at the moment, their data centers come exclusively from their partnership with Microsoft.
This announcement is OpenAI succeeding in a multi-party negotiation with Microsoft, Oracle, and the new administration of the US Gov't. Oracle will build the new data centers, which it knows how to do. OpenAI will use the compute in these new data centers, which it knows how to do. Microsoft granted OpenAI an exception to their exclusive cloud compute licensing arrangement, due to this special circumstance. Masa helps raise the money for the joint venture, which he knows how to do. US Gov't puts its seal on it to make it a more valuable joint venture and to clear regulatory roadblocks for big parallel data center build-outs. The current administration gets to take credit as "doing something in the AI space," while also framing it in national industrial policy terms ("data centers built in the USA").
The clear winner in all of this is OpenAI, which has politically and economically navigated its way to a multi-cloud arrangement, while still outsourcing physical data center management to Microsoft and Oracle. Probably their deal with Oracle will end up looking like their deal with Microsoft, where the trade is compute capacity for API credits that Oracle can use in its higher level database products.
OpenAI probably only needs two well-capitalized hardware providers competing for their CPU+GPU business in order to have a "good enough" commodity market to carry them to the next level of scaling, and now they have it.
Google increasingly has a strategic reason not to sell OpenAI any of its cloud compute, and Amazon could be headed in that direction too. So this was more strategically (and existentially) important to OpenAI than one might have imagined.
How have they already selected who gets this money? Usually the government announces a program and tries to be fair when allocating funds. Here they are just bankrolling an existing project. Interesting
>How have they already selected who gets this money?
As I understand it there wasn't anything to select, this is their own private money to be spent as they please. In this case Stargate.
> building new AI infrastructure for OpenAI in the United States
That's nice, but if I were spending $500bn on datacenters I'd probably try to put a few in places that serve other users. Centralised compute can only get you so far in terms of serving users.
Last time, in 2016, SoftBank announced a $50B investment in the US... what were the results of that? Granted, SB announced an upsized $100B investment earlier; is this not similar as an "announcement"?
""" SoftBank’s CEO Masayoshi Son has previously made large-scale investment commitments in the US off the back of Trump winning a presidential election. In 2016, Son announced a $50 billion SoftBank investment in the US, alongside a similar pledge to create 50,000 jobs in the country.
...
However, as reported by Reuters, it’s unclear if the new jobs pledged back in 2016 ever came to fruition and questions have been raised about how SoftBank, which had $29 billion in cash on its balance sheet according to its September earnings report, might fund the investment. """
- https://www.datacenterdynamics.com/en/news/softbank-pledges-...
I saw Stargate trending on Bluesky and got my hopes up about an announcement of a new show/movie/something. Disappointing.
Yep, they should fund Brad Wright with one of the billions.
At least do something about the SGU cliffhanger....
So about 10% of what Sam was asking the Saudis (and everyone else) for a year ago? That's still a helluva lot of money.
Interesting that the UAE (MGX) and Japan (Softbank) are bankrolling the re-industrialization of America.
It made me laugh when Sam said "I'm thrilled that we get to do this in the United States of America". I shouted at the TV, 'Yeah, you almost had to do it in Saudi Arabia'!!
Here's the presser, Sam is at 9 minutes in.
[0] https://youtu.be/IYUoANr3cMo
MGX has nothing to do with the Saudis. It's a UAE operation.
That's embarrassing. Thank you for the correction. Edited!
So it's not the hype anymore?
Softbank historically had been late to buy into the hype, but man do they buy big.
I hope the Japanese government demands seismic isolation for Softbank; otherwise it will be the Japanese citizens who have to foot the bill when this hype hits the ground and shakes the Japanese economy hard :/
Softbank should not be allowed to invest more than ARM Holdings sold at a loss.
Why would Japanese citizens be hit? Is Softbank a publicly backed fund?
At least this time the CEO of their chosen company isn’t a yuppie cult leader wannabe.
I mean it in the sense that Softbank's grand entrance could almost be used as the signpost for the bursting of bubbles.
If I was an AI enthusiast, Softbank showing up would make me nervous.
Softbank is not exactly a green flag when using their involvement as a signal of "low hypeness". I still remember WeWork.
100,000 US jobs, most of which I bet will be H-1B workers, and that goes over the 80,000 limit; there were over 220,000 issued in 2023.
Is this Ellison's attempt to become #1 richest again?
You know, I expected that they'd find or synthesize some naquadah to build an actual stargate and maybe even defeat the Goa'uld. The exciting stuff, not AI.
Well, we may get the replicators.
AI surveillance on large scale
Wouldn't 500bn into quantum computing show better returns for civilization? Assuming it's about progress and ... not money.
We don't really know anything useful that can be done with quantum computers for civilization.
They can break some cryptography... other than that... what are they good for?
There's some highly speculative ideas about using them for chemistry/biology research, but no guaranteed return on investment at all.
As far as I know... that's it.
Who can break crypto with quantum computing? That is total speculation.
Shor’s algorithm can. What is speculative about that?
I put the word "some" in front of "crypto" for a reason.
There is some crypto that we know how to break with a sufficiently large quantum computer [0]. There is some we don't know how to do that to. I might be behind the state of the art here, but when I wasn't we specifically really only knew how to use it to break cryptography that Shor's algorithm breaks.
[0] https://quantum-journal.org/papers/q-2021-04-15-433/
Nope. Any crypto you can break with a real, physical, non-imaginary quantum computer, you can break faster with a classical one. Get over it. Shor's doesn't run yet and probably never will.
You are misdirecting and you know it. I don't even need to discredit that paper. Other people have done it for me already.
This is incorrect. Whilst you may be sceptical about whether quantum computers can be realised, the theoretical result is sound.
Recent advances in quantum error correction are a significant increase in confidence that quantum computers are practical.
We can argue about timelines. I suspect it is too early for startups to be raising funds for quantum computers at this stage.
Source: I worked in quantum computing research.
This is like asking whether $500 billion to fund warp drives would yield better returns.
Money can't buy fundamental breakthroughs: money buys you parallel experimental volume - i.e. more people working from the same knowledge base, and presumably an increase in the chance that one of them advances the field. But at any given point in time, everyone is working from the same baseline (money can also improve this: by funding things you can ensure knowledge is distributed more evenly, so everyone is working at the state of the art rather than playing catch-up in proprietary silos).
What is quantum computing being used for?
True quantum computing in the sense that most people would imagine it, using individual qubits in an analogous (ish) way to classical computers, has not reached a useful scale. To date only “toy problems” to demonstrate theoretical results have been solved.
No.
money smells good i think
What are people filling these datacenters with exactly if not nvidia?
Anyone know if this involves nuclear plants as well or is that a separate initiative?
This is going to be the grift of the century. Sam will put Wall Street robber barons to shame.
> This is going to be the grift of the century.
Pretty sure that was Musk and his $50+ bn bonus.
shareholders voted for it multiple times so harder to call it grift
Most grifts involve persuading the victim
As a diehard fan of Stargate, I've gotta say I'm disappointed this has nothing to do with wormholes...
unless...
There's a good amount of irony in the results that AI has achieved, particularly if we reach AGI: it has improved individual worker efficiency by removing other workers from the system. Naming it Stargate implies a reckoning with the actual series itself, an accomplishment by humanity. Instead, what this pushes is accomplishing the removal of humans from humanity. I like cool shiny tech, but I like useful tech that really helps humans more. Work on 3D-printing sustainable food, or something actually useful like that. Jensen doesn't need another 1B gallons of water under his belt.
> Instead, what this pushes, is accomplishing the removal of humans from humanity.
If you buy the marketing, yeah. But we aren't really seeing that in the tech sector. We haven't seen it succeed in the entertainment sector... it's still fighting for relevance in the medical and defense industries too. The number and quality of jobs that AI replaced is probably still quite low, and it will probably remain that way even after Stargate.
AI is DOA. LLMs have no successor, and the transformer architecture hit its bathtub curve years ago.
> Jensen doesn't need another 1B gallons of water under his belt.
Jensen gets what he wants because he works with the industry. It's funny to see people object to CUDA and Nvidia's dominance but then refuse to suggest an alternative. An open standard managed by an independent and unbiased third-party? We tried that, OEMs abandoned it. NPU hardware tailor-made for specific inference tasks? Too slow, too niche, too often ends up as wasted silicon. Alternative manufacturer-specific SDKs integrated with one high-level library? ONNX tried that and died in obscurity.
Nvidia got where they are today by doing exactly what AMD and Apple couldn't figure out. People give Jensen their water because it's wasted in anyone else's hands.
Agreed, but it seems we're gonna ride the AI hype all the way to the "top".
> AI is DOA. LLMs have no successor, and the transformer architecture hit its bathtub curve years ago
Tell me you didn’t read the DeepSeek R1 paper without telling me you also don’t know about reinforcement learning.
R1 is a rehash of things we've already seen, and a particularly neutered one at that. Are there any better examples you can think of?
Uh, they invented multi-head latent attention, and since the method for creating o1 was never published, they're the only documented example of producing a model of comparable quality. They also demonstrated massive gains in the performance of smaller models through distillation of this model/these methods. So no, not really. I know this is the internet, but we should try to not just say things.
A rat done bit my sister Nell, with whitey on the moon.
https://en.wikipedia.org/wiki/Whitey_on_the_Moon
Future of AI being controlled by Oracle worries me
Feels so much like an announcement designed to trade favors.
Altman gets on Trump's good side by giving him credit for the deal.
Trump revoked Biden's AI regulations.
How much is allocated to alignment/safety research?
Why oracle?
Oracle wtf.
The fallout is going to be insane when the AI bubble pops.
Not sure about that. ChatGPT is much greater than Google Search ever was, and that wasn't a bubble.
ChatGPT may be better than Google Search in content, but at the end of the day you have to make money, and the last report I saw, ChatGPT is burning through money at a prodigious rate.
reminds me of a scene from the Matrix. "Tell me Mr. Anderson, what use is a phone call when you can't speak"
Training, yes, but they recoup inference costs through subscriptions.
Didn’t Altman say they’re losing money on the $200 subscription tier?
Inference isn’t cheap either.
subscriptions are just to sustain them until the endgame
Not sure about that.
cocks ear ... can hear it poppin already
initiators will cash out by that time one way or another
The folks who listen to you and don't see the fact that we are entering a weak singularity deserve to be destitute when this is all over.
“Weak singularity” meaning what?
Technology advancing more quickly year over year?
That’s a crazy notion and I’ll be sure everyone knows.
Also, what a wild thing to say. “People like you deserve to live in poverty because you don’t think we live in a sci-fi world.”
Calm down, dude.
> “Weak singularity” meaning what?
> Technology advancing more quickly year over year?
> That’s a crazy notion and I’ll be sure everyone knows.
The version I heard from an economist was something akin to a second industrial revolution, where the pace of technological development increases permanently. Imagine a transition from Moore's law-style doubling every year and a half, to doubling every week and a half. That wouldn't be a true "singularity" (nothing would be infinite), but it would be a radical change to our lives.
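To make the contrast concrete, here is a toy calculation (my own illustration; only the two doubling times come from the comment above):

    # Growth factor over a decade under two doubling regimes
    for label, doubling_years in [("18 months", 1.5), ("1.5 weeks", 1.5 / 52)]:
        doublings = 10 / doubling_years
        print(f"doubling every {label}: x{2 ** doublings:.3g} in 10 years")

Doubling every 18 months compounds to roughly 100x in a decade; doubling every week and a half compounds to a number over a hundred digits long. Nothing goes infinite, but the second regime would be unrecognizable.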
The pace of technological development has always been permanently increasing.
We’ve always been getting better at making things better.
> The pace of technological development has always been permanently increasing.
Not in the same way though. The pace of technological development post-industrial-revolution increased a lot faster - technological development was exponential both before and after, but it went from exponential with a doubling time of maybe a century, to a Moore's law style regime where the doubling time is a couple of years. Arguably the development of agriculture was a similar phase change. So the point is to imagine another phase change on the same scale.
You keep mentioning Moore's law, but that specifically applied to the number of transistors on a die, not the rate of general technological advancement.
Regardless, I don’t see any change in this pattern. We’re advancing faster than ever before, just like always.
We’ve been doing statistical analysis and prediction for years now. It’s just getting better faster, like always.
I don’t see this big change in the rate of advancement. There’s just a lot more media buzz around it right now causing a bubble.
There was a big visible jump in text generation capabilities a few years ago (which was preceded by about 6 years of incremental NLP advances) and since then we’ve seen paced, year over year advances in that field.
As a medical layman, I imagine that alpha fold may really push the rate of pharmaceutical advances.
But I see no indication for a general jump in the rate of rate of technological advancement.
> that specifically applied to the amount of transistors on a die, not the rate of general technological advancement.
Sure. But you can look at things like GDP growth rates and see the same thing.
> I don’t see this big change in the rate of advancement. There’s just a lot more media buzz around it right now causing a bubble.
Maybe. I'm just trying to give a sense of what the concept of a "weak singularity" is. I don't have a view on whether we're actually going to have one or not.
Wasn't this already announced last week?
Money isn't the issue any more, wowww
I'm not automatically pro or anti Stargate (the movie and show were cool) BUT
Who gets the benefit of all of this investment? Are taxpayers going to fund this thing which is monetized by OpenAI?
If we pay for this shit, it better be fucking free to use.
I guess it's the right time to buy AI stocks.
At peak hype?
There's no other hype train besides Crypto atm
So tsmc and nvidia basically then?
Broadcom, Intel, AMD, Qualcomm, ARM, and Tesla.
Someone else will have to fill in the stocks for:
AI robotics:
Data Center energy:
We all know the cloud/software picks.
What am I missing?
Mark Tesla under the AI robotics category too.
More confusion than anything else!
It was rumoured in early 2024 that "Stargate" was planned to require 5GW of data centre capacity[1][2], which in early 2024 was the entire data centre capacity Microsoft had already built[3]. Data centre capacity costs between USD$9-15m/MW[6], so 5GW of new data centre capacity would cost USD$45b-$75b, but let's pick a mid-range cost of USD$12m/MW[6] to arrive at USD$60b for 5GW of new data centre capacity.
This 5GW data centre capacity very roughly equates to 350000x NVIDIA DGX B200 (with 14.3kW maximum power consumption[4] and USD$500k price tag[5]) which if NVIDIA were selected would result in a very approximate total procurement of USD$175b from NVIDIA.
On top of the empty data centres and DGX B200s, in the remaining (potential) USD$265b we have to add:
* Networking equipment / fibre network builds between data centres.
* Engineering / software development / research and development across 4 years to design, build and be able to use the newly built infrastructure. This was estimated in mid 2024 to cost OpenAI US$1.5b/yr for retaining 1500 employees, or USD$1m/yr/employee[7]. Obviously this is a fraction of the total workforce needed to design and build out all the additional infrastructure that Microsoft, Oracle, etc would have to deliver.
* Electricity supply costs for current/initial operation. As an aside, these costs would seemingly not be competitive with other global competitors if the USA decides to avoid the cheapest method of generation (renewables) and instead prefers the more expensive generation methods (nuclear, fossil fuels). It is however worth noting that China currently has ~80% of solar PV module manufacturing capacity and ~95% of wafer manufacturing capacity.[10]
* Costs for obtaining training data.
* Obsolescence management (4 years is a long time after which equipment will likely need to be completely replaced due to obsolescence).
* Any other current and ongoing costs of Microsoft, Oracle and OpenAI that they'll likely roll into the total announced amount to make it sound more impressive. As an example this could include R&D and sustainment costs in corporate ICT infrastructure and shared services such as authentication and security monitoring systems.
The question we can then turn to is whether this rate of spend can actually be achieved in 4 years?
Microsoft is planning to spend USD$80bn building data centres in 2025[7], with 1.5GW of new capacity to be added in the first six months of 2025[3]. This USD$80bn planned spend covers more than "Stargate" and would include all their other business units that require data centres to be built, so the total required spend of USD$45b-$75b to add 5GW of data centre capacity is unlikely to be achieved quickly by Microsoft alone, hence the apparent reason for Oracle's involvement. However, Oracle is only planning a US$10b capital expenditure in 2025, equating to ~0.8GW of capacity expansion[9]. The data centre builds will be schedule-critical for the "Stargate" project, because equipment can't be installed and turned on, and large models trained (a lengthy activity), until the data centres exist. And data centre builds are heavily dependent on electricity generation and transmission capacity, which is slow to expand.
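For anyone who wants to poke at these numbers, the headline arithmetic reduces to a few lines (a sketch using only the estimates cited above; none of the inputs are confirmed project figures):

    # Back-of-the-envelope using the cited estimates; all inputs are
    # rumoured or analyst figures, not confirmed project numbers.
    capacity_mw = 5_000                 # rumoured 5GW build-out [1][2]
    cost_per_mw = 12e6                  # USD, mid-range of the $9-15m/MW range [6]
    dc_build_cost = capacity_mw * cost_per_mw
    print(f"Data centre builds: ~${dc_build_cost / 1e9:.0f}b")  # ~$60b

    dgx_power_kw = 14.3                 # DGX B200 maximum draw [4]
    dgx_price = 500_000                 # USD per unit [5]
    units = capacity_mw * 1_000 / dgx_power_kw
    print(f"~{units:,.0f} DGX B200s: ~${units * dgx_price / 1e9:.0f}b")  # ~350k units, ~$175b

    remainder = 500e9 - dc_build_cost - units * dgx_price
    print(f"Left for everything else: ~${remainder / 1e9:.0f}b")  # ~$265b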
[1] https://news.ycombinator.com/item?id=39869158
[2] https://www.datacenterdynamics.com/en/news/microsoft-openai-...
[3] https://www.datacenterdynamics.com/en/news/microsoft-to-doub...
[4] https://resources.nvidia.com/en-us-dgx-systems/dgx-b200-data...
[5] https://wccftech.com/nvidia-blackwell-dgx-b200-price-half-a-...
[6] https://www.cushmanwakefield.com/en/united-states/insights/d...
[7] https://blogs.microsoft.com/on-the-issues/2025/01/03/the-gol...
[8] https://www.datacenterdynamics.com/en/news/openai-training-a...
[9] https://www.crn.com.au/news/oracle-q3-2024-ellison-says-ai-i...
[10] https://www.iea.org/reports/advancing-clean-technology-manuf...
This is not a new initiative, and did not start under Trump: https://wire.insiderfinance.io/project-stargate-the-worlds-l...
It’s incredibly depressing how everyone sees this as something the new administration did in a single day…
Yeah it's crazy.
Welcome to 1984
too late, China is already ahead
Altman rising to the top and becoming the defacto state preferred leader of AI in the US is wild. Fair play to him.
> The buildout is currently underway, starting in Texas, and we are evaluating potential sites across the country for more campuses as we finalize definitive agreements.
For those interested, it looks like Albany, NY (upstate NY) is very likely one of the next growth sites.
[0] https://www.schumer.senate.gov/newsroom/press-releases/schum...
I'm watching the announcement live from the white house and something about this just feels so strange and dystopian.
Reference for others: https://youtube.com/watch?v=L1ff0HhNMso
Well, the silver lining is the incredible human capacity to get used to almost any situation given enough time
It will get weirder, but only relatively so, the concept of normalcy always trailing just a little bit behind as we slide
Agreed, and what's the story behind the art chosen for the landing page?
I'm also curious how a global leader in multimodal generative AI chose this particular image. Did they prompt a generator for a super messy impressionist painting of red construction cranes with visible brush strokes, distorted to the point of barely being able to discern what the image represents?
Considering Stargate's introduction and plan seems to be a super messy concept of impressions of ideas and very lacking in details, the picture makes a lot of sense. Let AI evangelists see the future in the fuzz; let AI pessimists see failure in the abstract; let investors see $$$ in their pockets.
For me it's watching a gay man grovel at the feet of one of the most anti-LGBT politicians, a day after Trump signed multiple executive orders that dehumanized Altman and the LGBT community. Every token thinks they're special until they're spent.
>For me it's watching a gay man grovel at the feet of one of the most anti-LGBT politicians
Besides what ImJamal said, as a wealthy playboy man-about-town hanging out at Studio 54 in the '70s and '80s, I guarantee Trump has known and been friends with more gays than 95% of Americans. Certainly there has been no shortage of gay people among his top-level appointees in either his first or second administrations.
Trump was the first president to come into office supporting gay marriage. Trump only has a problem with the "t" part of the community and only in bathrooms and sports, not in general.
sama, peter thiel ... they dgaf. there is a huge difference between an oppressed gay person and a wealthy one.
no one wants to bite the hand that feeds.
who will benefit from those datacenters?
Larry Ellison, Elon Musk, and Masayoshi Son.
They really got together the supervillains of tech.
Feels like the only reason Zuck is missing is Elon's veto.
Let’s say they develop AGI tomorrow. Is that really all she wrote for blue collar jobs?
Great. Larry gets cash thrown at his AI surveillance dystopia.
Stargate = Skynet?
more like Reagan's star wars program
Well - as part of the semi industry I'd like to say: Really appreciate it. Keep it coming!
Oh, but crypto mining was bad, lol. Where's the power going to come from?
This could potentially trigger an AI arms race between the US and China. The standard has been set, lets see what China responds with. Either way, it will accelerate the arrival of ASI, which in my opinion is probably a good thing.
The arms race is already running, I think this showdown is inevitable so we should get our asses moving
Unless we air strike the data centers, there is no way to control China’s progress
It will be similar to the space race between Soviet Union and US. And just like Soviet Union going broke and collapsing, China too will go even more broke and collapse.
"No Sam, for obvious reasons we cannot give you 6 trillion ... but how about 500 billion?"
Wow.
You gotta start small, you know?
if it really worked that way, then it was a successful blue-sky negotiation tactic to maximize the actual final negotiation.
I guess these people are betting small and efficient models are not the future.
what will they call the SG-1?
None of these companies has the internal resources to fund a $500B build.
Looks like the dollar printing press will continue to overheat in the coming years.
What will be powering all these data centers? The thought of exponentially increasing our fossil fuel consumption scares the hell out of me.
Texas is the leading state in new grid batteries and grid solar for three years now. Also Governor Abbott deregulated nuclear last year. Sure there will be some new natural gas too, which is the least scary fossil fuel. They call it the "all of the above" approach to energy.
Well, there was this random dude earlier who was rambling something about "drill baby drill"...
Fossil fuels, of course.
I can't stop rolling my eyes at all those big promises.
Personally I wish they invested in optical photonic computing, taking it out of the research labs. It can be so much more energy efficient and faster to run than GPUs and TPUs.
Oracle is onboard - guess you got to toss them some red meat as well.
No amount of money invested in infrastructure is going to solve the "garbage in, garbage out" problem with AI, and it looks like the AI companies have already stolen the vast majority of content that it is possible to steal. So this is basically a massive gamble that some innovation is going to make AI do something better than faultily regurgitate its training data. I'm not seeing a corresponding investment which actually attempts to solve the "garbage in, garbage out" problem.
A fraction of this money invested in building homes would end the homelessness problem in the U.S.
I guess the one silver lining here is that when the likely collapse happens, we'll have more clean energy infrastructure to use for more useful things.
SoftBank and MGX paying for all this, all foreign investment.
Where is the US government in all this? Why aren't they leading the charge? They obviously have the money.
$500 billion is a lot of money even by US government standards. It's about the size of all the new spending in the 2021 bipartisan infrastructure bill.
For the US government it's a matter of political will. Where is the political will?
The political will is trying to balance a large existing debt at increasing interest rates, a significant primary deficit even in a good economy, rising military threats from China, a strong Republican desire for tax cuts, extremely popular entitlement programs that no one wants to touch, and an aging population with a declining birthrate
Modern monetary systems function through two main channels: government spending and bank lending. Every dollar in circulation originates from one of these sources - either government fiscal operations (deficit spending) or bank credit creation through loans. This means all money is fundamentally based on debt, though "debt" has very different implications for a currency-issuing government versus private borrowers.

Government debt operates fundamentally differently from household debt since the government controls its own currency. As former Fed Chairman Alan Greenspan noted to Congress, the U.S. can always meet any obligation denominated in dollars since it can create them. The real constraints aren't financial but economic - inflation risk and the efficient allocation of real resources.
https://www.youtube.com/watch?v=DNCZHAQnfGU
The key question then becomes one of political priorities and public understanding. If public opposition to beneficial government spending stems from misunderstanding how modern monetary systems work, then better education about these mechanisms could help advance important policy goals. The focus should be on managing real economic constraints rather than imaginary financial ones.
The last four years have been nothing but a lesson in how much everybody hates inflation and how absolutely toxic it is to re-election campaigns
Yes, people hate inflation, because inflation creates a demand for more money! Inflation means there is not enough money for people. So why did prices go up, is it just because of fiscal spending?
The relationship between inflation and monetary policy is more complex than often portrayed. While recent inflation has created financial strain for many Americans, its root causes extend beyond simple money supply issues. Recent data shows that corporate profit margins reached historic highs during the inflationary period of 2021-2022. For example, in Q2 2022, corporate profits as a percentage of GDP hit 15.5%, the highest level since the 1950s.

This surge in corporate profits coincided with the aftermath of Trump's 2017 Tax Cuts and Jobs Act, which reduced the corporate tax rate from 35% to 21%. This tax reduction increased after-tax profits and may have given companies more flexibility to pursue aggressive pricing strategies. Multiple factors contributed to inflation:
- Supply chain disruptions created genuine scarcity in many sectors, particularly semiconductors, shipping, and raw materials
- Demand surged as economies reopened post-pandemic
- Many companies used these market conditions to implement price increases that exceeded their cost increases
- The corporate tax environment created incentives for profit maximization over price stability
For instance, many large retailers reported both higher prices and expanded profit margins during this period. The Federal Reserve Bank of Kansas City found that roughly 40% of inflation in 2021 could be attributed to expanded profit margins rather than increased costs. This pattern suggests that market concentration, pricing power, and tax policy played significant roles in inflation, alongside traditional monetary and supply-chain factors. Policy solutions should therefore address market structure, tax policy, and monetary policy to effectively manage inflation.
New admin is focused on federal cost cutting. Attracting foreign investment is a win-win for everyone involved.
> This project will ... also provide a strategic capability to protect the national security of America and its allies.
> All of us look forward to continuing to build and develop ... AGI for the benefit of all of humanity.
Erm, so which one is it? It is amply demonstrable from events post WW2 that US+allies are quite far from benefiting all of humanity & in fact, in some cases, it assists an allied minority at an extreme cost to a condemned majority, for no discernable humanitarian reasons save for some perceived notion of "shared values".
Maybe only Americans and their allies qualify as human, according to them
And only the Americans the administration deems to qualify as human.
welcome to our reality where you know you will be killed but there's not a single thing you can do :)
SoftBank, huh?
That's... not a good omen.
Sooner or later one of their bold swings is going to connect
Watch the birdie
I for one am hugely supportive of compute that is red white and blue.
Oh so that's why Pelosi invested in Micro nuke electricity plants.
In context, Pelosi has been pro-nuclear for at least 16 years, having spoken for nuclear power and nuclear investment in 2008, as reported by the American Enterprise Institute.
Why now? Is this to compensate the campaign donors or to scare Putin?
God forbid anyone would invest $500,000,000,000 to create jobs. No no no. 500 billion to destroy them for "more efficiency" so the owner class can get richer.
I watched the announcement live; I could have sworn that the SoftBank guy said "initial investment of 100 MILLION, we hope to EARN 500 BILLION by the end of your (Trump's) term".
Gave me a real "this is just smoke and mirrors hiding the fact that the white house is now a glory hole for Trump to enjoy" feel.
Investigate the connection between Softbank and Apple; then examine the ties between Tim Cook and Trump:
https://www.bbc.com/news/articles/cj4d75zl212o
https://apnews.com/article/trump-apple-tim-cook-tech-0a9fb8e...
You don't need a finance degree to figure out what's happening here. Apple is ripping pages right out of Elon's playbook.
> Tim Cook
He changed his name to curry favor with prez. He’s Tim Apple now
It's just more hype and PR antics from sama.
The Silicon Valley bubble universe continues to generate entropy that it then feeds off of... Naming this Stargate, when some of the largest effects AI has had involve removing humans from processes to make other, fewer humans more efficient, is emblematic of this hollow naming ethos: continuing to use the portal to shunt more and more humans out of the process that is humanity, with fairly reckless abandon. Who is Ra, and who is sending the nuke where, in this naming scheme? You decide.
Altman said we will be amazed at the rate at which AI will CURE diseases. Not diagnose, not triage or help doctors, but cure, i.e. understand disease at a deep, fundamental, mechanistic level, then devise therapies (drugs, combinations of drugs, and care practices) that work. WOW.
Despite the fact that this is THE thing I'd be happiest to see in the real world (having spent a considerable amount of my career at companies working towards this vision), we are so far from it (as anyone who has actually worked on these problems will attest) that Altman's comment here isn't just overselling, it's a blatant lie about this tech's capabilities.
I guess the pitch was something like: "hey o3 can already do PhD level maths so you know in 5 years it will be able to do drugs too, and cure shit, Mr President".
Trouble is, o3 can't do advanced math (or at least definitely not at the level OpenAI claimed; it was a lie, it turns out OpenAI funds the dataset that measures this - ouch). And the bigger problem is, going from "AI can do maths" to "invent cures" is about a 10-100x jump. If it weren't, don't we think the pharma companies would have solved this by hiring lots of "really smart math guys"?
As anyone in biotech will tell you, the hard bit is not the first third of the drug discovery pipeline (where 99% of AI-driven biotechs focus). It's the later parts, where the rubber meets the road: where your precious little molecule is out in the real world with real people, and the incredible variability of real biological hosts makes most drugs fail spectacularly. You can't GPT your way out of this. The answers are not in science papers that you can just read and regurgitate into a version that "solves biology and cures diseases".
To solve this you need AI, but most of all you have to do science. Real science. In the lab, in vitro and in vivo, not just in silico doing ablation studies, overfitting famous benchmark datasets, and the other pseudo-science shit the ML community is used to doing.
That is all to say, I'd bet we won't see a single purely-AI-designed novel drug in the clinic this decade. All parts of that sentence are important: purely AI-designed. Novel. But that's for another post..
Now, back to Altman. If you watch the clip, he almost did the smart thing at first when Trump put him on the spot and said "I have no idea about healthcare, biotech (or AI beyond board room drama)" but then could not resist coming up with this outlandish insane answer.
Famously (in tech circles anyway), Paul Graham wrote more than a decade ago that Altman is the most strong-willed individual he's ever met, someone who can just bend the universe to his will. That's his super skill. And clearly, convincing SoftBank and Oracle to make this 500 billion investment for OpenAI (a non-profit turned for-profit) is an unbelievable achievement. I have no idea what Altman can say (or do) in board rooms that unlocks these possibilities for him.. Any ideas? Let me know!
> This project will not only support the re-industrialization of the United States but also provide a strategic capability to protect the national security of America and its allies.
> The initial equity funders in Stargate are SoftBank, OpenAI, Oracle, and MGX. SoftBank and OpenAI are the lead partners for Stargate, with SoftBank having financial responsibility and OpenAI having operational responsibility. Masayoshi Son will be the chairman.
I'm sorry, has SoftBank suddenly become an American company? I feel like I'm taking crazy pills reading this.
Edit: MGX is a Saudi company? This is baffling....
https://www.mgx.ae/en
Well the Saudis are one of the president’s “personal shareholders” so I think that qualifies them as an American company now.
MGX seems to be in Abu Dhabi/UAE rather than Saudi Arabia. Hadn't heard of it before.
It’s an investment in the US. Why does it matter if SoftBank is not an American company?
Also, SoftBank is an investment fund. A lot of its money came from American investors.
The fund is run out of the US. Parent co is in Japan
Japan companies were a threat just a couple weeks ago.
There is credible evidence that leads me to believe that (1) Nippon Steel Corporation, a corporation organized under the laws of Japan . . . might take action that threatens to impair the national security of the United States;
https://bidenwhitehouse.archives.gov/briefing-room/president...
Japan has the same concerns about 7 Eleven being purchased by a Canadian company though I think the deal was rejected.
https://www.reuters.com/markets/deals/japans-seven-i-deal-re...
I think the death of Suchir Balaji makes more sense now. AE wouldn't mess around with its investments.
This.
SoftBank having financial responsibility is insane. This is just a way to funnel money to people Trump owes.
I don't get it, if this was government/American funded I could understand the marketing as "USA" secured infrastructure but like it's not?
> Masayoshi Son will be the chairman.
Not all rich people are out of their minds, but Masayoshi Son definitely is. The way he handled the WeWork situation was bad...
> "OpenAI will continue to increase its consumption of Azure as OpenAI continues its work with Microsoft"
Not sure why, but the word choice of "consumption" feels like a reverse Freudian slip to me.
Sometimes the person writing the copy is writing it because they talk good, not because they are the biggest proponent of the idea.
Give a clever, articulate person a task to write about something they don't believe in and they will include the subtlest of barbs, weak praise, or both.
Industry standard word, e.g. "consumption pricing" etc
But yeah if you're in the industry it's easy to forget how certain jargon sounds based on its dictionary definition
But the good news is when the Trough of Disillusionment starts we can make a bunch of tuberculosis jokes.
> This project will [...] support the re-industrialization of the United States
How?
By aggregating the means of production even more in the hands of a handful of people
Wait, was it supposed to re industrialize the USA?
Didn't you see the impressionist art of construction cranes?
I thought this meant it was $500 billion in government money.
Some of these companies do have huge cash reserves they don't know what to do with so if it is $500 billion of private money, I am not going to complain.
I will believe it when I see it, though, and hope this isn't $100 billion in private money with a free $400 billion US government put option for the "private" investors if things don't go perfectly.
Hush. Don't ask questions. It is going to be great.
> starting in Texas
Maybe I just don't get it. Texas seems like an awful place to do business.
My guess would be it's all about electricity.
Texas has a ... unique energy market (literally! It doesn't connect to the national grid, so it can avoid US government regulation; that way it's not interstate commerce). Because of that, spot prices fluctuate wildly up and down depending on the weather, demand, and the state's large quantity of renewables (Texas is good for solar and wind energy). When the weather is good for renewables they have very cheap electricity (lots of production, and they can't sell to anyone outside the state); when the weather is bad they can have incredibly expensive electricity (less production, and they can't buy from anyone outside the state). Larger markets, able to pull from larger pools of producers and consumers, just fluctuate less.
I know some bitcoin miners liked to be in Texas and basically operated as energy speculators: when electricity was cheap they would mine bitcoin, and when it was expensive they shut down their plant; sometimes they even got paid by producers to shut down! I would bet you could do a lot of that with AI training as well, given good checkpointing.
You wouldn't want to do inference there (it needs to be responsive and doesn't tolerate 'oh, this plant is going to shut down in one minute because a storm just came up'), but for training it should be fine?
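A minimal sketch of that price-chasing idea, assuming a hypothetical get_spot_price() feed and toy checkpoint/train-step stubs (none of these names are a real API, and the thresholds are made up):

    import random
    import time

    PRICE_CEILING = 60.0    # $/MWh above which we park the job (made-up threshold)
    POLL_EVERY_STEPS = 100  # how often to re-check the market
    WAIT_SECONDS = 5        # short for the demo; minutes in practice

    def get_spot_price():
        """Stand-in for a real-time price feed ($/MWh); Texas spot
        prices really can swing from negative to four figures."""
        return random.uniform(-10.0, 200.0)

    def save_checkpoint(step):
        print(f"checkpoint saved at step {step}")

    def train_step(step):
        time.sleep(0.01)  # pretend to burn some GPU-hours

    def price_aware_training(total_steps):
        for step in range(total_steps):
            if step % POLL_EVERY_STEPS == 0 and get_spot_price() > PRICE_CEILING:
                save_checkpoint(step)  # park the run safely, once per spike
                while get_spot_price() > PRICE_CEILING:
                    time.sleep(WAIT_SECONDS)  # wait out the expensive window
            train_step(step)

    if __name__ == "__main__":
        price_aware_training(total_steps=1_000)

The asymmetry is exactly the point above: training tolerates being paused at a checkpoint, so its demand can follow the price curve; latency-sensitive inference can't.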
No state income tax, fewer regulations (zoning, environmental regulations) than other parts of the country, relatively cheap power, large existing industrial base. For skilled labor that last bit is important. Also one of the cheapest states wrt minimum wage (same as federal, nothing added), which is important for unskilled labor.
Depending on the part of the state, relatively low costs of living which is helpful if you don't like paying people much. Large areas that are relatively undeveloped or underdeveloped which can mean cheaper land.
The White House was touting this, so it's probably meant to secure political patronage, or it will be part of pork-barrel spending to get some other bill passed.
It doesn't even have an electricity grid that works. Maybe that's where the $500B is going: reconnecting Texas to the national grid.
Based on what? There’s not a better state in the country for large capex gambles by business.
When doing business is a bribe, it's perfect.
That's a ridiculous sum of money that could be better spent on much more worthy things.
So was getting a man to the moon. Do you want to lose the AI race to the Chinese?
Why would I care? Do you really want Masayoshi Son in charge of a theoretical superhuman AI?
Looking forward to transparency about where this capital flows /s
Not to be confused with the other (non-fictional) DoD Stargate Project [0], which involved "remote viewing" and other psychic crap.
The AI Stargate Project claims it will "create hundreds of thousands of American jobs". One has doubts.
[0] https://en.wikipedia.org/wiki/Stargate_Project
"Psychic crap" that went on for 20+ years ? Sure.
Meh, why did they choose this name? Stargate does not deserve this…
The project predates Trump: https://wire.insiderfinance.io/project-stargate-the-worlds-l...
(But yes I agree)
I dislike associating a great fictional universe (Stargate series) with this disgusting affair...
You'd really think that arguably the leader in generative AI could come up with a unique project name instead of ripping off something extant and irrelevant.
But then again that's their entire business, so I shouldn't be too surprised.
This is from the guy who thinks "Her" is a good reference for how we need AI. Media literacy is not Altman's strong suit.
I mean, the entire AI business is built atop mass plagiarism, indiscriminately stealing things others have created. I doubt Mr. Worldcoin could come up with an original thought for anything, seeing how his models behave.
While OpenAI and the rest of the industry are off reaching for AGI, Apple is out here shipping features with GPT-3.5-era technology.