Missing "OpenAI sidesteps" from the beginning of the title article title
Yeah. Completely changes the meaning of the article. I thought Nvidia was now competing with Cerebras. That's not the case...
Very excited for Cerebras; hopefully Nvidia/AMD will see fewer AI sales and bring back more consumer options once they realize they've abandoned/neglected the market that made them who they are.
Nvidia bought Groq, so they might be working on their own answer to low-latency serving. (I found this good explanation of Groq compared to TPU [1].)
[1] https://reddit.com/r/LocalLLaMA/comments/1pw8nfk/nvidia_acqu...
the hardware diversification story here is more interesting than the speed numbers. OpenAI going from a planned $100B Nvidia deal to "actually we're unsatisfied with your inference speed" within a few months is a pretty dramatic shift. AMD deal, Amazon cloud deal, custom TSMC chip, and now Cerebras. that's not hedging, that's a full migration strategy.
1,000 tok/s sounds impressive but Cerebras has already done 3,000 tok/s on smaller models. so either Codex-Spark is significantly larger/heavier than gpt-oss-120B, or there's overhead from whatever coding-specific architecture they're using. the article doesn't say which.
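to put those rates in wall-clock terms, a rough back-of-envelope (the 100 tok/s GPU baseline is my own ballpark, not a figure from the article):

    # Time to emit a 2,000-token response at different decode speeds.
    RESPONSE_TOKENS = 2000
    for tok_per_s in (100, 1000, 3000):
        print(f"{tok_per_s:>5} tok/s -> {RESPONSE_TOKENS / tok_per_s:5.1f} s per response")
    # 100 tok/s -> 20.0 s; 1,000 -> 2.0 s; 3,000 -> 0.7 s

so even the 1,000 tok/s figure turns a multi-second wait into something near-instant for a typical response.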
the part I wish they'd covered: does speed actually help code quality, or just help you generate wrong code faster? with coding agents the bottleneck isn't usually token generation, it's the model getting stuck in loops or making bad architectural decisions. faster inference just means you hit those walls sooner.
If you are OpenAI, why wouldn't you naturally want more than a single supplier? Especially at a time when no one can get enough chips.
With agent teams I've found CC significantly better at catching its own mistakes before it finishes a task. Having several agents challenge the implementation agents seems to produce better results. If so, faster is always better, since you can then run more adversarial/verification passes before finishing.
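For the curious, a minimal sketch of that adversarial-review loop in Python. run_agent is a hypothetical stand-in for whatever actually invokes your coding agent (CC exposes no such public API; this is just the shape of the pattern):

    from typing import Callable

    # run_agent(role, prompt) -> reply: hypothetical hook into whatever runs
    # your agents; this is an assumption, not a real CC interface.
    def implement_with_review(run_agent: Callable[[str, str], str],
                              task: str,
                              n_reviewers: int = 3,
                              max_rounds: int = 4) -> str:
        draft = run_agent("implementer", f"Implement this task:\n{task}")
        for _ in range(max_rounds):
            # Several reviewer agents independently try to break the draft.
            objections = [run_agent("reviewer",
                                    f"Find bugs or design flaws, or reply LGTM:\n{draft}")
                          for _ in range(n_reviewers)]
            findings = [o for o in objections if "LGTM" not in o]
            if not findings:
                return draft  # every reviewer signed off
            feedback = "\n---\n".join(findings)
            # The implementer revises against the combined findings.
            draft = run_agent("implementer",
                              f"Revise to address these findings:\n{feedback}\n\nDraft:\n{draft}")
        return draft  # out of review rounds; return the latest revision

Faster inference raises how many reviewer passes and revision rounds fit in the same wall-clock budget, which is exactly why speed helps here.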
I'm 99% sure this 20-hour-old user is an LLM posting on HN. Specifically, ChatGPT.
> OpenAI going from a planned $100B Nvidia deal to "actually we're unsatisfied with your inference speed" within a few months is a pretty dramatic shift.
A different way to read this might be: "Nvidia isn't going to agree to that deal, so we now need to save face by dumping them first."
I imagine sama doesn't like rejection.
> On Thursday, OpenAI released its first production AI model to run on non-Nvidia hardware,
They used AMD GPUs before: MI300X via Azure, a year-plus ago.
Previous discussion on 5.3 codex Spark (sharing as the article doesn’t add tremendous value to it): https://news.ycombinator.com/item?id=46992553
Ever since the recent revelation that Ars has used AI-hallucinated quotes in their articles, I have to wonder whether any of these quotes are AI-hallucinated, or if the piece itself is majority or minority AI generated.
If so, I have to ask: If you aren’t willing to take the time to write your own work, why should I take the time to read your work?
I didn’t have to worry about this even a week ago.
>I didn’t have to worry about this even a week ago
No, you didn’t realize you had to worry about this until a week ago.
I'm actually very concerned people have yet to realize they don't need to put truth values on internet content.
Once you default to "doesn't matter if true", you end up being a lot more even-keeled.
I have a question for those who closely follow Cerebras: do they have a future beyond being an inference platform built on their (unusual) in-house silicon?
My mental model of Cerebras is that they have a way of giving you 44GB of SRAM (plus more compute than you'll probably need relative to that). So if you have applications whose memory access patterns would benefit massively from basically having 44GB of L3-ish-speed SRAM, and that's worth $1-3m to you, then it's a win.
Honestly not sure what else fits that bill. Maybe some crazy radar applications? The amount of memory is awfully small for traditional HPC.
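For scale, a quick back-of-envelope on what actually fits in that SRAM (my arithmetic; assumes weights dominate and ignores KV cache and activations):

    SRAM_BYTES = 44 * 1024**3  # 44 GB of on-wafer SRAM

    for fmt, bytes_per_param in [("fp16", 2), ("fp8/int8", 1)]:
        billions = SRAM_BYTES / bytes_per_param / 1e9
        print(f"{fmt}: ~{billions:.0f}B params fit on one wafer")
    # fp16: ~24B params; fp8/int8: ~47B params. Anything gpt-oss-120B-sized
    # or larger means multiple wafers or streaming weights from off-chip.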
If chip manufacturing advances let them eventually run leading-edge models much faster than the competition, that seems like a bright future all on its own. Their current chip is reportedly already on 5nm, yet still far too small, memory-wise, for the real 5.3-codex: https://www.cerebras.ai/press-release/cerebras-announces-thi...
They can also train models using this silicon. They're advertising 24T parameter models on their site right now.
Do they need any future beyond inference? It's going to be a huge market.
In principle? No. In practice? I think others, e.g. the TPUs and Trainiums of the world, will cannibalize a lot of Cerebras's share. I'm not an expert though; that's why I'm asking for others' opinions.
tl;dr: possibly. Their packaging does offer an inherent advantage in giving you maximal compute without external communication, and that seems likely to remain true unless 3D stacking advances a lot further.
Title is currently: "OpenAI sidesteps Nvidia with unusually fast coding model on plate-sized chips"
One thing I don't get about Cerebras: they say it's wafer-scale, but the chips they show are square. I thought wafers were circular?
Their chips aren't actually square; they get an extra 2.9mm in both dimensions by having slightly rounded corners. They are wasting the rest of the circle, though, yes.
They cut off the sides. It's the largest square you can make from a wafer.
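The arithmetic checks out, assuming a standard 300mm wafer (my numbers, not Cerebras's):

    import math

    d = 300.0                                # wafer diameter, mm
    side = d / math.sqrt(2)                  # largest inscribed square
    kept = side**2 / (math.pi * (d / 2)**2)  # fraction of circle area kept
    print(f"side ~{side:.1f} mm, area utilization ~{kept:.0%}")
    # side ~212.1 mm, area utilization ~64% (= 2/pi)

212mm per side, plus the ~2.9mm reclaimed by rounding the corners per the comment above, lands right around the ~215mm square die Cerebras actually shows.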
I believe the discs are a product of the manufacturing process, which has to spin them. The entire disc is never usable anyway, so it's not as if only the full circle counts as the wafer. If the entire chip comes from one wafer, it's wafer-scale.
Mark my words:
The era of “Personal computing” is over
Large-scale capital is not going to make any more investments in microelectronics going forward.
Capital is incentivized to build large data centers and very high-speed private internet, not public internet; private internet like Starlink.
So, the same way the 1970s were the mainframe era of server-side computing, which turned into server-side rendering, then client-side rendering, and culminated in the era of the personal computer in your home and finally in your pocket,
we're going back to server-side model communication. That will effectively become the gateway to all other information, which will be increasingly compartmentalized into remote data centers and high-speed access.