This is very insightful. I remember the epoch of clueless startups wasting venture capital on Sun servers. I worked at one of those startups. Warden is clearly correct that if you want to train your AI faster then the optimal amount to spend on software optimization is at least a substantial fraction of your hardware budget.
However, clueless people who don't know how to optimize probably don't know where to spend money on optimization, either.
If they optimize though - and this is coming at some point - local AI becomes possible, and their entire business case as a cloud monopoly evaporates. I think they know they're in a race between centralized control, and widespread use and control, and that is what is really driving this.
Yes, if you see the LLM as a compressed dictionary of all available information.
But if they succeed with agentic reasoning models (we are absolutely not there yet), then I think meritocracy will be replaced with assetocracy: the better the model, the more expensive it will be, and the better the resulting software will be.
I don’t worry about it myself, but I do worry for my kids. I’m not even sure what to teach them anymore to have a shot at early retirement (and they keep raising the retirement age too).
Teach them basic financial literacy. The time value of money, the power of compounding, the relationship between risk and expected returns. Grade school does not cover any of this.
It does not matter what your income is if you cannot budget and save.
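To make "the power of compounding" concrete, here's a minimal sketch; the $1,000 principal and 7% annual return are illustrative assumptions, not figures from the thread:

```python
# Future value of a lump sum under annual compounding:
# FV = principal * (1 + rate) ** years
def future_value(principal: float, rate: float, years: int) -> float:
    return principal * (1 + rate) ** years

# A one-time $1,000 at an assumed 7% annual return:
for years in (10, 20, 30):
    print(f"after {years} years: ${future_value(1000, 0.07, years):,.2f}")
```

At 7% the balance roughly doubles every decade (rule of 72: 72 / 7 ≈ 10.3 years), which is the kind of intuition the comment argues grade school never builds.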
Military then a trade then a small business with employees doing the trade then done.
Hard disagree. How is putting guns in your children's hands a wise and loving first step?
This assumes you believe in the scaling hypothesis.
I see a lot of comments here criticizing the author, and I think both sides have a point. There's definitely a bubble, because companies are buying up infrastructure that doesn't need to be used right now.
But also, the companies are buying up this infrastructure because whoever controls the infrastructure also controls the industry in around 5 years time.
> There's definitely a bubble, because companies are buying up infrastructure which doesn't need to be used right now.
Source? Satya Nadella seems to disagree with your statement (at least as I understand both): https://uk.finance.yahoo.com/news/microsoft-ceo-satya-nadell...
Can Satya Nadella be honest regarding this subject?
"Ah yes we invested $13B into OpenAI but it's a bubble"
Being the CEO of a notable publicly traded company (and liable if caught lying about what they do with billions of shareholders' dollars) surely counts for a little more than a random HN commenter without sources...?
What also cannot be ignored is that transformer models are a great unifying force. It's basically one architecture that can be used for many purposes.
This eliminates the need for more specialized models and the associated engineering and optimizations for their infrastructure needs.
While I agree there’s a lack of attention to the impact of software engineers on near-term industry growth (rather the opposite, with layoffs, agentic automation attempts, et cetera), the mentioned Scott Gray is working at OpenAI now, so the human capital angle is, I guess, just flying under the mainstream radar.
OTOH garage-startup acquisitions are acquihires.
Sounds like someone that got lucky in big picture (in ML during Alexnet era), but then unlucky in picking the sub-genre.
>I see hundreds of billions of dollars being spent on hardware
>I don’t see are people waving large checks at ML infrastructure engineers like me
Which seemed like a valid question until you look at the GitHub: <1B-parameter, Raspberry Pi-class edge speech models. That's not the game the hyperscalers are playing.
I don't think we can conclude much of anything about the datacenter build out from that
> That's not the game the hyperscalers are playing
The hyperscalers are playing the game hyperscalers are playing - and only them. Where do they expect to find talent, then? If the logic is that you need to work at a hyperscaler to work at a hyperscaler, no wonder they won't find any talent. That would be like NASA only hiring astronauts to send to space if they already had experience being in space.
This is just not correct. Also, nobody is making optimization startups, because if you cared you’d have an in-house team working on it.
That is unfortunate, because these are special skills you may not find in-house. I know some guys who did it in-house for a long time, toured from project to project in the right phase, and saved bigcorp lots of money. Now they are doing it publicly.
https://efficientware.net/how-we-work/
Usually large companies attract or develop these skills through their scale. I do think there are a lot of smaller companies that are underserved in this area, though.
There are a few "optimization startups". But in this context I find it a bit ironic that pretty much everyone is working with the same architecture, and the same hardware for the most part, so actually there isn't really that much demand for bespoke optimizations.
Those that are serious are paying through the nose for their engineers to work on these optimizations. Your competitor working on "the same hardware" does not magically make your MFU go up.
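For readers unfamiliar with the term, MFU (model FLOPs utilization) is the fraction of the hardware's peak FLOP/s a training run actually achieves. A back-of-the-envelope sketch, using the common ~6 FLOPs-per-parameter-per-token approximation for dense transformers; all concrete numbers below are hypothetical:

```python
def mfu(tokens_per_sec: float, n_params: float, peak_flops_per_sec: float) -> float:
    """Model FLOPs utilization: achieved FLOP/s over peak FLOP/s.

    Uses the standard ~6*N FLOPs-per-token approximation for the
    forward+backward pass of a dense N-parameter transformer.
    """
    achieved = 6.0 * n_params * tokens_per_sec
    return achieved / peak_flops_per_sec

# Hypothetical: a 7B-parameter model pushing 2,000 tokens/s per GPU
# on an accelerator with ~1e15 peak FLOP/s.
print(f"MFU: {mfu(2_000, 7e9, 1e15):.1%}")
```

The comment's point, in these terms: nothing your competitor does on "the same hardware" changes any of those inputs; only your own software stack moves `tokens_per_sec`.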
And when you have enough spending to account for 1%+ of revenue for the AI hardware companies?
You can get the engineers from those very hardware companies to do bespoke optimizations for your specific high load use cases. That's something a startup would struggle to match.
Author is definitely correct in pointing out the incentives for companies to buy hardware. What the article misses is that there is in fact a reasonable economic incentive not to invest in software, even if LLMs were not an economic bubble: every single company is developing the same thing, many even develop it as open source, and the closed ones (as well as any company that would hire this guy) have a bunch of industrial spies inside anyway. Buying hardware may increase your moat, but developing software just raises the sea level.
"OpenAI rejected me so the entire industry is going to collapse" is certainly a take. They are still probably one of the less arrogant engineers in Silicon Valley.
There are people who are experts in a generalist sense. When a new field opens up, they quickly snatch up the opportunity and make immense progress and name for themselves in the evolving field. So in this case the first author is the mouse who ate the cheese and died.
>is the mouse who ate the cheese and died.
I don’t follow what this means
That's not the take?
The take is that small incremental improvements to the hardware-software stack at that scale imply massive returns, yet there isn't much work for that use case.
There’s no sour grapes in this article. I went in expecting the same but found that the author actually makes a good point.
It isn't really. The assumption that these companies aren't hiring any infrastructure engineers is absurd. They all have massive in-house teams doing GPU optimization and everything else that the author brings up. They just don't need an external consultant for it.
He didn't say they aren't hiring _any_, but that they are hiring few, and that he finds it strange that despite his multi-decade record of squeezing performance out of the GPU-software stack he isn't getting many collaboration proposals.
Great insight & nice read! Thanks
Every part of this is nonsense.
Spending a lot (on capex or opex) certainly is not providing any kind of signaling benefit at this level. It's the opposite, because obviously every single financial analyst in the market is worried about the rapid increase in capex. The companies involved are cutting everything else to the bone to make sure they can still make those (necessary) investments without degrading their top-line numbers too much. Or in some cases actively working to hide the debt they're financing this with from their books.
Even if we imagined that the author's conspiracy theory were true, there would still be massive incentives for optimization because everyone is bottlenecked on compute despite expanding it as fast as is physically possible. Like, are we supposed to believe that nobody would run larger training runs if the compute was there? That they're intentionally choosing to be inefficient, and as a result having to rate-limit their paying customers? Of course not.
The reality is that any serious ML operation will have teams trying to make it more efficient, at all levels. If the author's services are not wanted, there are a few more obvious options than the outright moronic theory of intentional inefficiency. In this case most likely that their product is an on-edge speech to text model, which is not at all relevant to what is driving the capex.
> Spending a lot (on capex or opex) certainly is not providing any kind of signaling benefit at this level.
It's not providing any benefit now but there's still signalling going on, and it absolutely provided benefit at the beginning of this cycle of economy-shattering fuckwittery.
It's good that you didn't give up! So if I understand correctly, they spend more even though they could optimize and spend less?
OP here, I didn't write the post, but found it interesting and posted it here.
> So if I understand correctly, they spend more even though they could optimize and spend less
This is what I understand as well: we could utilise the hw better today and make things more efficient, but instead we are focusing on making more. TBH I think both need to happen; money should be spent to make better, more performant hw while we squeeze out any performance we can from what we already have.
I believe the author is making the point that the companies spending all this money on hardware aren't concerned at all with how the hardware is actually used.
Optimization isn't even being considered, because it's the total cost spent on hardware that is the goal, not the output from the hardware.
I have slight trouble believing that Mr “Stop wasting tokens by saying please to LLMs” Altman is not considering how his models can be optimized. I suppose the real question is how accurate the utilization numbers in the article are.
But can that really be the case? It takes a long time to train and tune the models; even a low-single-digit percentage of extra efficiency implies much faster iteration.
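The iteration-speed argument is simple arithmetic. A sketch with made-up numbers (the 90-day baseline is purely illustrative):

```python
def days_saved(baseline_days: float, speedup_pct: float) -> float:
    # A throughput speedup of s% shrinks a run to 1/(1 + s/100)
    # of its original wall-clock length.
    return baseline_days - baseline_days / (1 + speedup_pct / 100)

# Hypothetical 90-day training run:
for pct in (2, 5, 10):
    print(f"{pct}% faster: ~{days_saved(90, pct):.1f} days saved")
```

Even the low-single-digit cases buy back days of wall-clock time per run, which compounds across every experiment in the iteration loop.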
> When I look around, I see hundreds of billions of dollars being spent on hardware – GPUs, data centers, and power stations. What I don’t see are people waving large checks at ML infrastructure engineers like me and my team.
That doesn't seem to be the case to me. I guess the author wants to do everything on his own terms and maybe companies aren't interested in that.
There's probably a bit more to it. It really only takes one company to bet on optimizing infrastructure, to the degree the author suggests, to undermine the entire house of cards currently being built on Nvidia GPUs. Yet not one AI company is willing to take that bet?
The author could also be correct. Investors tend to be herd animals, and if you're not buying into the same tech as everyone else, your proposal is higher risk. It might very well be easier to say to an investor that you're going to buy a million Nvidia GPUs and stuff them in a datacenter in Texas like everyone else.
I'm interested in the one company that does take the bet on infrastructure optimization. If that works, then a lot of people are going to lose a lot of money really quickly.