The first implementation of inference on the Wafer Scale Engine utilized only a fraction of its peak bandwidth, compute, and IO capacity. Today’s release is the culmination of numerous software, hardware, and ML improvements we made to our stack to greatly improve the utilization and real-world performance of Cerebras Inference.
We’ve re-written or optimized the most critical kernels such as MatMul, reduce/broadcast, element wise ops, and activations. Wafer IO has been streamlined to run asynchronously from compute. This release also implements speculative decoding, a widely used technique that uses a small model and large model in tandem to generate answers faster.
It turns out someone has written a plugin for my LLM CLI tool already: https://github.com/irthomasthomas/llm-cerebras
You need an API key - I got one from https://cloud.cerebras.ai/ but I'm not sure if there's a waiting list at the moment - then you can do this:
Then you can run lightning-fast prompts. Here's a video of that running, it's very speedy: https://static.simonwillison.net/static/2024/cerebras-is-fas...
It has a waiting list
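For reference, the setup with that plugin looks something like the following. This is a sketch based on the plugin's README; the model alias below is an assumption and may have changed, so check `llm models` for what is actually available:

```shell
# Install the Cerebras plugin for Simon Willison's `llm` CLI
llm install llm-cerebras

# Store the API key obtained from cloud.cerebras.ai
llm keys set cerebras

# Run a prompt against a Cerebras-hosted model
# (alias taken from the plugin docs; verify with `llm models`)
llm -m cerebras-llama3.1-70b 'Summarize speculative decoding in two sentences'
```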
The "AI overview" in google search seems to be a similar speed, and the resulting text of similar quality.
I wonder which of their models they use. Might even be Gemini 1.5 Flash 8B which is VERY quick.
I just tried that out with the same prompt and it's fast, but not as fast as Cerebras: https://static.simonwillison.net/static/2024/gemini-flash-8b...
I suspect it is its own model. Running it on 10B+ user queries per day, you're gonna want to optimize everything you can about it, so you'd want something really optimized for the exact problem rather than using a general-purpose model with careful prompting.
Wow, software is hard! Imagine an entire company working to build an insanely huge and expensive wafer scale chip and your super smart and highly motivated machine learning engineers get 1/3 of peak performance on their first attempt. When people say NVIDIA has no moat I'm going to remember this - partly because it does show that they do, and partly because it shows that with time the moat can probably be crossed...
Cerebras really has impressed me with their technical execution and their approach in the modern LLM era. I hope they do well; I've heard they are en route to an IPO. It will be interesting to see if they can make a dent vs. NVIDIA and the other players in this space.
Apparently so. You can also buy in via various PE outfits before IPO, if you so desire. I did.
When Meta releases the quantized 70B it will give another > 2X speedup with similar accuracy: https://ai.meta.com/blog/meta-llama-quantized-lightweight-mo...
You don't need quantization-aware training on larger models. 4-bit 70B and 405B models exhibit close to zero degradation in output with post-training quantization [1][2].
[1]: https://arxiv.org/pdf/2409.11055v1
[2]: https://lmarena.ai/
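To illustrate why 4-bit PTQ loses so little, here is a minimal sketch of group-wise round-to-nearest 4-bit quantization (not the exact method from [1]; group size and scaling scheme are assumptions). Each group of weights shares one scale, and values are snapped to 16 integer levels:

```python
import numpy as np

def quantize_4bit(weights, group_size=128):
    """Group-wise 4-bit round-to-nearest quantization.

    Each group of `group_size` weights shares one scale, and values
    are rounded to the 16 signed levels in [-8, 7].
    """
    w = weights.reshape(-1, group_size)
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0  # per-group scale
    q = np.clip(np.round(w / scale), -8, 7).astype(np.int8)
    return q, scale

def dequantize(q, scale, shape):
    """Reconstruct approximate fp32 weights from codes and scales."""
    return (q * scale).reshape(shape).astype(np.float32)

rng = np.random.default_rng(0)
w = rng.normal(size=(1024, 1024)).astype(np.float32)  # toy weight matrix
q, s = quantize_4bit(w)
w_hat = dequantize(q, s, w.shape)

rel_err = np.linalg.norm(w - w_hat) / np.linalg.norm(w)
print(f"relative error: {rel_err:.4f}")  # small for Gaussian-like weights
```

The per-group scale is what keeps outlier weights from blowing up the error; real schemes add refinements (e.g. clipping search, activation-aware scaling) on top of this skeleton.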
Probably not. The Cerebras chip only has 16-bit and 32-bit operators.
Wonder if they'll eventually release Whisper support. Groq has been great for transcribing 1hr+ calls at a significantly lower price than OpenAI ($0.04/hr vs. $0.36/hr).
https://Lemonfox.ai is another alternative to OpenAI's Whisper API if you need support for word-level timestamps and diarization.
Whisper runs so well locally on any hardware I’ve thrown at it, why run it in the cloud?
Does it run well on CPU? I've used it locally but only with my high end (consumer/gaming) GPU, and haven't got round to finding out how it does on weaker machines.
That's pretty much exactly how I started. Ran whisper.cpp locally for a while on a 3070Ti. It worked quite well when n=1.
For our use case, we may get 1 audio file at a time, we may get 10. Of course queuing them is possible but we decided to prioritize speed & reliability over self hosting.
Damn, those are some impressive speeds.
At that rate it doesn't matter if the first try results in an unwanted answer; you'll be able to run it once or twice more in fast succession.
I hope their hardware stays relevant as this field continues to evolve
The biggest time sink for me is validating answers so not sure I agree on that take.
Fast iteration is a killer feature, for sure, but at this time I'd rather focus on quality for it to be worthwhile the effort.
If you're using an LLM as a compressed version of a search index, you'll be constantly fighting hallucinations. Respectfully, you're not thinking big-picture enough.
There are LLMs today that are amazing at coding, and when you allow it to iterate (eg. respond to compiler errors), the quality is pretty impressive. If you can run an LLM 3x faster, you can enable a much bigger feedback loop in the same period of time.
There are efforts to enable LLMs to "think" using chain-of-thought, where the LLM writes out reasoning as a "proof"-style list of steps. Sometimes, like a person, they reach a dead end, logic-wise. If you can run 3x faster, you can run the "thought chain" as more of a "tree", where the logic is critiqued and adapted and many different solutions can be tried. This can all happen in parallel (well, each sub-branch).
Then there are "agent" use cases, where an LLM has to take actions on its own in response to real-world situations. Speed really impacts user-perception of quality.
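The compile-and-iterate loop mentioned above can be sketched in a few lines. This is a toy illustration, with a stub `llm_fix` standing in for a real model call; a real system would prompt an LLM with the source plus the compiler error:

```python
# Toy sketch of the generate -> compile -> feed-errors-back loop.

def llm_fix(source: str, error: str) -> str:
    # Hypothetical repair step standing in for an LLM call.
    # Here it only knows one trick: close an unclosed bracket.
    if "never closed" in error or "EOF" in error:
        return source + ")"
    return source

def iterate_until_compiles(source: str, max_rounds: int = 5) -> str:
    for _ in range(max_rounds):
        try:
            compile(source, "<generated>", "exec")
            return source                       # success: code parses
        except SyntaxError as e:
            source = llm_fix(source, str(e))    # feed the error back
    raise RuntimeError("gave up after max_rounds")

fixed = iterate_until_compiles("print(1 + 2")   # missing closing paren
print(fixed)
```

A 3x faster model means three times as many of these repair rounds in the same wall-clock budget, which is the point being made above.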
> There are LLMs today that are amazing at coding, and when you allow it to iterate (eg. respond to compiler errors), the quality is pretty impressive. If you can run an LLM 3x faster, you can enable a much bigger feedback loop in the same period of time.
Well, now the compiler is the bottleneck, isn't it? And you would still need a human to check for bugs that aren't caught by the compiler.
Still nice to have inference speed improvements, though.
If the speed is used to get better quality with no more input from the user then sure, that is great. But that is not the only way to get better quality (though I agree that there are some low hanging fruit in the area).
To be honest, most LLMs are reasonable at coding; they're not great. Sure, they can code small stuff. But they can't refactor large software projects, or upgrade them.
Upgrading large Java projects is exactly what AWS wants you to believe their tooling can do, but the ergonomics aren't great.
I think most of the capability problems with coding agents aren't in the AI itself; it's that we haven't cracked how to let them interact with the codebase effectively yet. When I refactor something, I'm not doing it all at once; it's a step-by-step process. None of the individual steps is that complicated. Translating that over to an agent feels like we just haven't got the right harness yet.
> The biggest time sink for me is validating answers so not sure I agree on that take.
But you're assuming that it'll always be validated by humans. I'd imagine that most validation (and subsequent processing, especially going forward) will be done by machines.
And who validates the validation?
The compiler/interpreter is assumed to work in this scenario.
If that is the way to get quality, sure.
Otherwise I feel that power consumption is the bigger issue than speed, though in this case they are interlinked.
Humans consume a lot of power and resources.
The basic efficiency is pretty high.
How does the next machine/LLM know what’s valid or not? I don’t really understand the idea behind layers of hallucinating LLMs.
By comparison with reality. The initial LLMs had "reality" be "a training set of text"; when ChatGPT came out, everyone rapidly expanded into RLHF (reinforcement learning from human feedback), and now that there are vision and text models, the training and feedback are grounded in a much broader slice of reality than just text.
Given that there are more and more AI-generated texts and pictures, that grounding will be pretty unreliable.
Could you link to a paper or working POC that shows how this “turtles all the way down“ solution works?
I don't understand your question.
This isn't turtles all the way down, it's grounded in real world data, and increasingly large varieties of it.
How does the AI know it’s reality and not a fake image or text fed to the system?
Exactly, validating and rewriting the prompt are the real time consuming tasks.
So what is inference?
I wonder if there is a tokens/watt metric. AFAIU Cerebras uses plenty of power/cooling.
I found this on their product page, though just for peak power:
> At 16 RU, and peak sustained system power of 23kW, the CS-3 packs the performance of a room full of servers into a single unit the size of a dorm room mini-fridge.
It's pretty impressive looking hardware.
https://cerebras.ai/product-system/
What made it so much faster based on just a software update?
Ex-Cerebras engineer here. The chip is very powerful, and there is no 'one way' to do things. Rearchitecting data flow, changing up data layout, etc. can lead to significant performance improvements. That's just my informed speculation; there's likely more perf somewhere.
They said in the announcement that they've implemented speculative decoding, so that might have a lot to do with it.
A big question is what they're using as their draft model; there are ways to do it losslessly, but they could also choose to trade off accuracy for a bigger increase in speed.
It seems they also support only a very short sequence length. (1k tokens)
Speculative decoding does not trade off accuracy. You reject the speculated tokens if the original model does not accept them, kind of like branch prediction. All these providers and third parties benchmark each other's solutions, so if there is a drop in accuracy, someone will report it. Their sequence length is 8k.
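The accept/reject mechanism can be shown with a toy greedy sketch. Both "models" here are lookup-table stubs (my invention, purely for illustration); production systems use a probabilistic acceptance rule over token distributions, but the lossless property is the same — rejected draft tokens are replaced by the target model's own choice, so the output matches running the target model alone:

```python
def draft_next(ctx):            # stub "small" draft model
    table = {"the": "cat", "cat": "sat", "sat": "on", "on": "a"}
    return table.get(ctx[-1], "the")

def target_next(ctx):           # stub "large" target model (ground truth)
    table = {"the": "cat", "cat": "sat", "sat": "on", "on": "the", "a": "mat"}
    return table.get(ctx[-1], "the")

def speculative_decode(ctx, n_tokens, k=4):
    out = list(ctx)
    while len(out) - len(ctx) < n_tokens:
        # 1. Draft proposes k tokens autoregressively (cheap).
        proposal, cur = [], list(out)
        for _ in range(k):
            t = draft_next(cur)
            proposal.append(t)
            cur.append(t)
        # 2. Target verifies; keep the longest matching prefix.
        accepted, cur = [], list(out)
        for t in proposal:
            if target_next(cur) == t:
                accepted.append(t)
                cur.append(t)
            else:
                break               # reject the rest, like a mispredicted branch
        # 3. Take one token from the target at the rejection point
        #    (or a bonus token if everything was accepted).
        accepted.append(target_next(out + accepted))
        out += accepted
    return out[len(ctx):][:n_tokens]

spec = speculative_decode(["the"], 5)

# Baseline: run the target model alone.
plain, cur = [], ["the"]
for _ in range(5):
    t = target_next(cur)
    plain.append(t)
    cur.append(t)

print(spec, plain)  # identical sequences
```

The speedup comes from step 2: the target model checks k draft tokens in one batched pass instead of k sequential passes, while the output distribution is unchanged.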
Could someone please bring Microsoft's Bitnet into the discussion and explain how its performance relates to this announcement, if at all?
https://github.com/microsoft/BitNet
"bitnet.cpp achieves speedups of 1.37x to 5.07x on ARM CPUs, with larger models experiencing greater performance gains. Additionally, it reduces energy consumption by 55.4% to 70.0%, further boosting overall efficiency. On x86 CPUs, speedups range from 2.37x to 6.17x with energy reductions between 71.9% to 82.2%. Furthermore, bitnet.cpp can run a 100B BitNet b1.58 model on a single CPU, achieving speeds comparable to human reading (5-7 tokens per second), significantly enhancing the potential for running LLMs on local devices."
It is an inference engine for 1-bit LLMs; not really comparable.
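For context on what "1-bit" (really 1.58-bit) means here: BitNet b1.58 constrains weights to the ternary set {-1, 0, +1} with a shared scale, which is why it needs its own inference engine rather than standard GPU kernels. A minimal per-tensor sketch of the absmean quantizer (simplified; the real scheme is applied per layer during training, not post hoc):

```python
import numpy as np

def absmean_ternary(w, eps=1e-8):
    """Quantize weights to {-1, 0, +1} with a per-tensor scale,
    in the spirit of BitNet b1.58's absmean scheme (simplified)."""
    gamma = np.abs(w).mean() + eps            # per-tensor scale
    q = np.clip(np.round(w / gamma), -1, 1)   # ternary codes
    return q.astype(np.int8), gamma

rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)  # toy weight matrix
q, gamma = absmean_ternary(w)
w_hat = q * gamma                              # dequantized approximation

print(sorted(np.unique(q).tolist()))  # -> [-1, 0, 1] for Gaussian weights
```

With ternary weights, matrix multiplies reduce to additions and subtractions, which is where bitnet.cpp's CPU speedups and energy savings come from — but it only applies to models trained under this constraint, unlike the 16/32-bit inference discussed above.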
The novelty of the inexplicable bitnet obsession has worn off I think.
IDK, they remind me of Sigma-Delta ADCs [0], which are single bit ADCs but used in high resolution scenarios.
I believe we'll get to hear more interesting things about Bitnet in the future.
[0] https://en.wikipedia.org/wiki/Delta-sigma_modulation
We have yet to see a large model trained using it, have we?
Demo, API?
Demo: https://inference.cerebras.ai/
API: https://cloud.cerebras.ai/
That's odd, attempting a prompt fails because auth isn't working.
I filled out a lengthy prompt in the demo and submitted it. An auth window popped up. I don't want to log in; I want the demo. Such a repulsive approach.
Chill with the emotionally charged words. Their hardware, their rules. If this upsets you, you will not have a good time on the modern internet.