So, we have:
- claude for corps and gov
- codex for devs
- grok for what, roleplay and racism? Those are the only two things I've ever heard grok associated with around me.
Interestingly, I know of at least one application at a charity that deals with trafficking, where grok was happy to do one-shot classification tasks that all other models refused to cooperate on.
I think there are a surprising number of actually useful applications in this sort of grey area for a slightly-less-guardrailed, near-frontier model (also, the grok-fast models are cheap!).
There are lots of uncensored models out there; I don't think grok is leading on that front. They pick and choose which things they want to support based on Elon's worldview. Elon used to hang out with sex traffickers, so of course grok is fine talking about it. It probably even offers them strategies, does their accounting for free, etc.
Lol. I think they unleashed it on this post; look at the number of only vaguely related, lukewarm opinions trying to push the racism and CSAM stuff to the bottom.
You should try all of them, then update your opinion about your information sources accordingly.
Grok for furthering the far-right filter bubble Elon has been hard at work building.
And of course child porn
Do you hear yourself? You should be ashamed of propagating shit like this.
That's what it was doing. Like, literally. ChatGPT it or Google it. Supporting grok is paying money to a CSAM generator.
Edit: I cannot reply to the post below me. I have gone entirely over to local models, so I am paying zero dollars to any of the US defense contractors that are also tech companies. It's awesome.
Yeah and you are telling me I can't use Google models to do it?
PS: Claude is also promoting meth manufacturing fyi, you are ok with this right?
GPT4o is promoting suicide, your point is?
Grok was used to create CSAM
Yeah? And Claude can't? Don't be absurd and don't spread atrocities like this. You can hate the guy as much as you want, but don't be disrespectful. ANY model can generate CP; in what world is the model (A MACHINE) responsible for this? Is Tor responsible for CP as well?
How does Grok further a far-right filter bubble? This is blatantly untrue. Try prompting it and getting it to say something far right.
Grok, if anything, reduces populism, because fake claims can be debunked.
How could MechaHitler possibly be far right...
When you really think about it, Palantir told me Hitler was good, and therefore MechaHitler, aka grok, should be a-okay!
Grok is as progressive as any of the other models. Despite some of the highly-publicised fuck-ups, try asking Grok anything racist and see how it replies. Yes, I know you didn't try this and you won’t.
There is a lot of daylight in between “progressive” and “openly explicitly racist”
Isn't grok currently holding the world record as the biggest generator of CSAM? Or did they change focus to enhance their racism-and-propaganda vertical? Things move so quickly these days, it's hard to keep up!
Can you share a prompt that shows how it is openly racist now? Lots of easy claims like this can be debunked.
I didn’t say “progressive”; I said “as progressive”.
100% agree. Grok may or may not be biased one way or the other as far as the US is concerned but from the rest of the world perspective it's mostly the same as any other model trained on Wikipedia.
Grok is my favorite model for chatting, and my favorite voice mode. It seems to be the only voice mode that isn't routing to an extremely cheap model (like Haiku), and it has been the highest quality out of all the frontier ones. When you subscribe to SuperGrok, you can also create a "council" of agents, each with their own system prompt; when you ask something, they all get asked in parallel to come to a conclusion. Good stuff!
Just wish they would finally put some work into their apps, it's the only thing keeping me from actually subscribing to SuperGrok:
- No MCP / connected apps support. It's been teased but here we are, still not available. I can't connect Grok to anything, so I can't use it for serious work
- Projects are still not available in the app so as soon as you move something into a project, it's gone from all the native apps
- No way to add artifacts (like generated markdown docs) directly to a project, we have to export to PDF/markdown and re-import. And there isn't even a way to export artifacts. This makes serious project work hard because we can't dynamically evolve projects with new information
- No memory, no ability to look up other chats, each chat is completely new
- No voice mode in projects at all
If someone from xAI is reading this, please consider adding some of these.
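For what it's worth, the "council" setup mentioned above (several agents with different system prompts queried in parallel, then merged into one conclusion) can be sketched in a few lines. This is a minimal illustration, not xAI's actual API: the `ask` stub and the agent names are hypothetical.

```python
import asyncio

# Hypothetical stand-in for a real model call; in practice this would hit
# an LLM API with the given system prompt.
async def ask(system_prompt: str, question: str) -> str:
    await asyncio.sleep(0)  # placeholder for network latency
    return f"[{system_prompt}] answer to: {question}"

async def council(question: str, system_prompts: list[str]) -> list[str]:
    # Fan the same question out to every agent concurrently...
    answers = await asyncio.gather(*(ask(p, question) for p in system_prompts))
    # ...a final synthesis step would then merge these into one conclusion.
    return list(answers)

answers = asyncio.run(council("Is this plan sound?", ["skeptic", "optimist", "pragmatist"]))
print(len(answers))  # one answer per agent
```

The interesting part is that the agents run concurrently, so the council costs roughly one round-trip of latency rather than one per agent.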
When I signed up, I accidentally paid for a full year. So from time to time I'll throw something at it just to see what it produces compared to the other LLMs. And even after all this time, it still feels like a really "dumb" model compared to the other frontier ones. Worse, many of my system prompts make it go wacky and puke gibberish. However, it was pretty cool for those couple of months a while back when it was uncensored: you could ask it about a wild conspiracy, and it would actually build the case and link you to legitimate source material. They dropped the hammer on that real quick.
I also think Grok would benefit from allowing usage of "SuperGrok Heavy" (their $300 plan) in coding harnesses with included usage. Currently they give you some API credits on the Heavy plan so you can use some Grok for coding, but the $300 USD of value is just not there.
I'm not saying they should create their own grok-code harness; just allowing usage in existing ones would already be beneficial. But that's probably what the Cursor acquisition is going to do eventually.
> No MCP / connected apps support. It's been teased but here we are, still not available. I can't connect Grok to anything, so I can't use it for serious work
Grok has tool use, no? Why would you also need MCP? What does MCP add?
I'm talking about the consumer Grok app and the grok.com website. There are currently no connected apps (or MCP) at all, so while Grok can use its built-in tools, there is no way to add your own tools to it.
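That's the difference: built-in tool use means the provider wires up the tools; MCP lets users plug in their own. Under the hood the protocol is just JSON-RPC, and a server advertises its tools in roughly the shape below. This is a simplified sketch of the spec's `tools/list` response, and the `search_notes` tool is a made-up example:

```python
import json

# Roughly what an MCP server returns for a `tools/list` request: each tool
# carries a name, a description, and a JSON Schema for its input.
tools_list_result = {
    "tools": [
        {
            "name": "search_notes",  # hypothetical example tool
            "description": "Full-text search over my personal notes",
            "inputSchema": {
                "type": "object",
                "properties": {"query": {"type": "string"}},
                "required": ["query"],
            },
        }
    ]
}

# The chat app forwards these schemas to the model; the model can then emit
# a `tools/call` request that the server executes on the user's side.
print(json.dumps(tools_list_result, indent=2))
```

So "Grok has tool use" and "Grok supports MCP" are different claims: the first is about the model, the second about whether the app exposes a socket for third-party tools.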
As an English-as-a-second-language speaker and writer, one thing Grok really shines at is capturing the tone and level of "formality" of a piece of text and then replicating it correctly. It seems to understand the little human subtleties of language in a way the other major providers don't. ChatGPT goes overly stiff and formal-sounding, or ends up in a weird "aye guvnor" type of informal language (Claude is sometimes better, but not always).
Grok seems in general better at being "human" in ways that are hard to define. For example, if I ask it "does this message roughly convey things correctly, to the level it can given this length", it will likely answer like a human would (either a yes or a change suggestion that sticks to the tone and length), while ChatGPT would write a dissertation on the message that still doesn't clear anything up.
Recently I've noticed that Grok seems to have gotten really good at dictation too (the feature where you click the mic to ask it something). ChatGPT has maybe 90-95% accuracy with my accent, and the speech input on Android's Gboard something like 75%, while Grok surprisingly gets something like 98% of my words correct.
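Those figures are informal estimates, but "dictation accuracy" can be made concrete: it's 1 minus the word error rate, computed from the word-level edit distance between a reference transcript and what the model heard. A minimal sketch:

```python
def word_accuracy(reference: str, hypothesis: str) -> float:
    """1 - WER: word-level Levenshtein distance normalized by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic edit-distance DP table over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return 1 - d[len(ref)][len(hyp)] / len(ref)

# One substitution ("the" -> "a") out of six reference words:
print(word_accuracy("the cat sat on the mat", "the cat sat on a mat"))
```

Reading a known paragraph into each app and scoring the transcripts this way would turn the "90-95% vs 98%" impression into an actual measurement.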
I've also noticed that when I communicate with Grok in my native language, its tone is more natural than other models. I think this is due to the advantage of being trained on a large amount of Twitter data. However, as Twitter contains more and more AI-generated content now, I'm afraid continued training will make it less natural.
Did you try meta? I was into grok but now meta works well for me
I'm sure Twitter knows which accounts are bots and is surely excluding them from its model training. Twitter bots aren't a new phenomenon, after all.
There are bots everywhere; it has nothing to do with the platform. Attackers have an incentive to do mass account farming, and no platform is secure against it.
I still wish they named it something else, but congratulations to the team on what seems to be a good release!
Pricing is also quite surprising, compared to comparable competitors. I guess they have tons of capacity or really want to bring over more people.
The tok/s stat is interesting. Since the dominant constraint on inference speed is hardware, it suggests X purchased far more compute than was really needed to serve the demand for their models.
Expensive miscalculation.
Didn't a bunch of hardware that was destined for Tesla get redirected to xAI? I'm sure I remember something like that.
In court vs. OpenAI, Musk said Grok is partly trained on OpenAI models, so it should be somewhat similar to the Chinese models in terms of performance and cost!
All those plans from providers should be sliders – prepay more, get more in return.
Ok speed (202.7 tok/s) and value (1.25 -> 2.50) look great, with pretty decent intelligence.
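Assuming "1.25 -> 2.50" means USD per million input and output tokens respectively (my reading of the chart, not confirmed), the per-request cost is simple arithmetic:

```python
def request_cost(input_tokens: int, output_tokens: int,
                 in_per_m: float = 1.25, out_per_m: float = 2.50) -> float:
    """Cost in USD, with prices quoted per million tokens (assumed rates)."""
    return input_tokens / 1e6 * in_per_m + output_tokens / 1e6 * out_per_m

# e.g. a 10k-token prompt with a 2k-token reply:
print(request_cost(10_000, 2_000))  # $0.0175
```

At those rates even a long agentic session stays in the cents-to-low-dollars range, which is what makes the pricing look aggressive next to comparable frontier models.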
The problem with speed is that these models are usually very fast for the first few weeks and then suddenly much slower. They pulled the same trick when they advertised Grok 4 Fast (it dropped from 200 tok/s to 60 tok/s).
Wow. That is a big drop.
For the 1000th time, models do not possess Intelligence
I don't remember the source of the quote.
But debating whether the models are intelligent is similar to debating whether a car can walk.
You can offload to the model a lot of work that until recently we thought required intelligence. The more of those tasks the model can do, and the better it does them, the fairer it is to call that intelligence.
"The question of whether a computer can think is no more interesting than the question of whether a submarine can swim." - Edsger Dijkstra
Please elaborate.
Prediction is not intelligence.
Misprediction is?
What does intelligence mean to you?
Very competitive price for the speed and intelligence being offered!
Despite their attrition, this combined with their Cursor partnership is likely going to make them competitive in coding agents soon.
This project is a gigantic waste of resources: it's fine-tuned on the CEO's politics, was used for CSAM generation, and just sucks overall.
It’s a model made for 36% of Americans. The rest of the world couldn't care less.
Considering how few Americans there are and how little of that 36% even uses technology, that's what, 20 million people at a maximum?
I like that there are models with divergent politics; the status quo being creepy corporate left silicon valley is not healthy or pleasant to interact with.
Even with Grok, it's only broadening things to the creepy corporate right of Silicon Valley.
Yay, free tokens. I don't know why, but grok always seems good and fast in the free-token phase and degrades after that.
https://artificialanalysis.ai/models/grok-4-3
This puts Sonnet 4.6 above Opus 4.6 on the coding index... kinda hard to trust those numbers.
(Also, it puts Opus 4.7 universally above Opus 4.6, and I may be wrong, but this doesn't seem to match the experience of most/many/some people. I think it's widely recognized that Anthropic is severely lacking compute and that Opus 4.7 is a cost-saving measure.)
Anthropic themselves have (had?) this thing where Opus is used for planning and Sonnet for coding.
These numbers don't look exciting at all. I may have gotten spoiled by releases from Qwen, Kimi, and Z.ai, who keep closing the gap between closed-weight SOTA models and open-weight ones. From my experience, Grok is only useful for one thing: looking things up for you and gathering a consensus on topics. That's it.
Update: I noticed that Grok 4.3 is in the "Most attractive quadrant"; that's cool! It is also in the top 5 on the "AA-Omniscience Index". Good! Really good.
What's with the charts and numbers?
It says #1 for speed, but then in the chart it's #2. It also says #10 for intelligence, but then it's #7 in the chart.
What an exciting game we're playing, where the most popular leaderboard is completely made up and the stakes are in the trillions.
I lost trust in them when they added the racist "what about the killing of Boers in South Africa" thing to their system prompt.
No way am I going to use a model whose backers have such blatantly obvious brainwashing goals.
Then you shouldn't be using any model from a Five Eyes country.
They all have biases; it's just that you don't like Grok's bias but are fine with the Anthropic, OpenAI, and Google brainwashing.
There is no non-bias. What you call unbiased is always just a reflection of your personal biases.
That being said, I am definitely against a model that is biased to be following the ideology of a far-right extremist.
Musk bought a social media company for the specific purpose of getting Trump elected by turning it into a right wing propaganda machine. Have Anthropic/OpenAI/Google done something similar to that?
Pelican riding a bike here: https://gist.github.com/SerJaimeLannister/f6de26bd0d0817e056...
(I ran this on arena.ai direct chat and also tried to write this gist, inspired by how simon writes his gists about pelicans.)
Edit: I just realized that I asked for a pelican riding a "bike" instead of a "bicycle", which now makes sense as to why it hardened the bicycle to look tankier. I'm going to compare this with "pelican riding a bicycle" if anybody else shares theirs.
https://simonwillison.net/2025/Nov/13/training-for-pelicans-...
You should probably come up with variations, like a beaver riding a scooter or something, just to see what's what :)
Thanks I have generated both
beaver riding a scooter: https://gist.github.com/SerJaimeLannister/f6de26bd0d0817e056...
pelican riding a bicycle: https://gist.github.com/SerJaimeLannister/f6de26bd0d0817e056...
Personal opinion, but the beaver one looks especially bad compared to the pelicans. Can we be sure that grok-4.3 hasn't been trained on the pelican? simonw says in his blog post that he will try other creatures, so I hope he does, because it does feel to me as if the model/xAI is trying to cheat. Hope simonw tests it out more.
Edit: Also added a turtle riding a scooter, something that literally has images online, or heck, even Teenage Mutant Ninja Turtles, so I thought it would be able to pass this, but it wasn't even able to generate it: https://gist.github.com/SerJaimeLannister/f6de26bd0d0817e056...
This literally looks more like an avocado than a turtle. Perhaps this could be a bug from arena.ai or something else; not sure, but at this point I'm waiting for simon's analysis.
We can never be sure, of course, but I think this is a very strong indication that "pelican riding a bike" is indeed going into the training dataset.
Thanks for generating those!
If there was any model I wouldn’t trust, it wouldn’t be the ones from China, it would be the one from Elon Musk
Thankfully it's not an either/or; I don't trust any models. That's a healthy attitude to have, because you shouldn't trust anyone on the internet either, especially when it comes to specific subjects.
I don't trust this. But by not trust it I am inherently trusting it. But by trusting it I shouldn't.
Looking at the benchmarks, this model seems really close to Kimi K2.6 in terms of intelligence and pricing, hitting that sweet spot. It also has a higher AA-Omniscience index, which is something Kimi and the other open models lack. Curious to see how pleasant it is to use.
I’ll eat my hat if it even comes close to Kimi
How would you like it? Well done?