Notice how pricing is the top discussion theme. People love free shit, and it's hard to deny Codex usage limits are more generous. My 2c as someone who uses both tools pretty consistently in an enterprise context:
- Codex-medium is better if you have a well-articulated plan you "merely" need to execute on, need help finding a bug, have some specific complex piece of logic to tweak, or truly need a ton of long-range context to reason about an issue. It's great, and usage limits are very generous!
- Sonnet 4.5 is better for everything else. That means for me: non-coding CLI ops, git ops, writing code with it as a pair programmer, OOD tasks, big new chunks of functionality that are highly conceptual, architectural discussion, etc. I generally approve every edit and often interrupt it. The fast iteration and feedback are key.
I probably use CC 80% of the time with Codex the other 20%. My company pays for CC and I don't even look at the cost. Most of my coworkers use CC over Codex. We do find the Codex PR reviewer to be the best of any tool out there.
Codex gets a lot of play on Twitter also because a lot of the most prolific voices there are solo devs who are "building in public". A greenfield, solo project is the ideal (only?) use case for running 5 agents in parallel or whatever. Codex is probably amazing at that. But it's not practical for building in enterprise contexts, IMO.
They are similar enough that using one over the other is at most a small mistake. I prefer Claude models (perhaps I'm more used to them?) but Codex is also very good.
Totally agree. A lot of it is simply personal preference at this point.
Interesting, my experience has been the opposite. I've been running Codex and Sonnet 4.5 side by side the past few weeks, and Codex gives me better results 90% of the time, pretty much across all tasks. Where Claude really shines is speed: it's much faster than Codex. So if I know exactly what I want, or if it's a simpler task, I feel comfortable giving it to Claude, because I don't want to wait for Codex to work through it. The Claude CLI is also a much better user experience than the Codex CLI. But Codex gets complex things right more consistently.
My experience is similar. I do most of my work with Claude, as I like the small-tasks / fast-iteration pair-coding experience. When I need to investigate some issue, I let Codex handle it and check back in 10 minutes when it's ready. But Codex is way too slow for the pair-programming style of work.
Also, most of the time Codex opts to use Python to edit files. Those edits are unreviewable, so it's even less interactive; you just have to let it finish and check the outcome.
> better for everything else. That means for me: non-coding CLI ops, git ops, writing code with it as a pair programmer, OOD tasks, big new chunks of functionality that are highly conceptual, architectural discussion…
I would argue this is the wrong way of using these tools. Writing out a defined plan in plain English and then having Codex/Claude implement it is better, since that way we understand the intention. You can always have Codex come up with an abstract plan first, iterate on it, and then implement. Kind of like how we would implement software in real life.
For larger tasks that I know are parallelizable, I just tell Claude to figure out which steps can be parallelized and then have it go nuts with sub-agents. I’ve had pretty good success with that.
I need to try this, because I've never deliberately told it to, but I've had it do it on its own before. Now I'm wondering if that project had instructions somewhere about that, which could explain why it happened.
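For the curious, here is a minimal sketch of what that kind of fan-out looks like if you drive it yourself rather than letting Claude spawn sub-agents. It assumes Claude Code's headless `claude -p` print mode for non-interactive runs; the task list is hypothetical:

```python
import asyncio

# Hypothetical independent sub-tasks; each gets its own fresh context window.
TASKS = [
    "Add unit tests for the parser module",
    "Update the README usage section",
    "Fix the type errors flagged in utils.py",
]

async def run_task(prompt: str) -> str:
    # `claude -p` runs one non-interactive turn and prints the result.
    proc = await asyncio.create_subprocess_exec(
        "claude", "-p", prompt,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    out, _ = await proc.communicate()
    return out.decode()

async def main() -> None:
    # Fan out: all tasks run concurrently, none sharing context.
    results = await asyncio.gather(*(run_task(t) for t in TASKS))
    for task, result in zip(TASKS, results):
        print(f"=== {task} ===\n{result}\n")

asyncio.run(main())
```

Telling Claude to "go nuts with sub-agents" amounts to the same shape: independent slices of work, each in an isolated context, gathered at the end.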
In my experience, gpt5-codex (medium) and codex-cli are notably better than Sonnet 4.5 and claude-code. (Note: never tried Opus.)
It is slower, but the results are much more often correct and it doesn't rush into half-baked solutions/dumb approaches as eagerly.
I'd much rather wait 5 minutes than have to clean up manually or try to coax a model into doing things differently.
I also wouldn't be surprised if the slowness was partially due to OpenAI being quite resource constrained. They are repeatedly complaining about not having sufficient compute.
Bigger picture: I think all the AI coding environments are incredibly immature. There are many improvements to be unlocked.
That’s quite easily falsifiable by measuring tokens per second. Rather, the real reason Codex takes longer is that it does more work to read more context.
IMO the results are much better with Codex, not even close.
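A back-of-the-envelope version of that tokens-per-second test, with entirely made-up numbers: if throughput is similar but one run reads and emits far more tokens, the extra wall time is workload, not throttling.

```python
# Hypothetical measurements from the same task run in each tool.
runs = {
    "claude": {"tokens": 12_000, "seconds": 180},
    "codex": {"tokens": 45_000, "seconds": 600},
}

for tool, r in runs.items():
    tps = r["tokens"] / r["seconds"]
    print(f"{tool}: {tps:.1f} tokens/sec over {r['seconds']}s")

# Comparable tokens/sec despite a much longer wall time suggests the
# slower tool is processing more tokens, not being served more slowly.
```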
Where Codex falls short is background processing: both running a daemon in the background and using its output as context while staying interactive for the user, and sub-agents, i.e., doing multiple things in parallel. Presumably Codex will catch up, but for now, that puts Claude Code ahead for me.
As far as which one is better, it's highly dependent on what we're each doing, but I will say that I have this one project where bare "make" won't work and a script needs to be run instead. I have instructions to call that script in multiple .md files, and Codex is able to call the script instead of make, but it keeps forgetting that and tries to run make, which fails, and it gets confused. (Claude Code running on a macOS host, but building on a Linux VM.) I could work around it, but that really takes the "shiny" factor off of Codex+GPT-5 for me.
Honestly, I think the simplicity of Codex not doing anything fancy-pants like background coding is what gives it an edge. I am happy to wait for a while, and even to repeat context to it (helps me remember stuff anyway), if it types out the right thing.
Seems like HN is split between "AI sucks" and everyone else slowly discovering what it can do, while Twitter is leagues ahead, using other tools to build stuff.
If you want to compare Codex and Claude Code side by side, you can do it in Crystal, in worktrees, from one prompt: https://github.com/stravu/crystal
I really like Codex… but without the ability to launch sub-agents, it kinda struggles with context.
The biggest thing I use agents for is getting good search with less context.
Codex just struggles when the model needs to search too much because of this. Codex also struggles with too much context: there have been a number of times when it has just run up against the context limit and couldn't compact, so you lose everything since your last message, which has meant a lot of lost context/work for me.
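The search-with-less-context pattern being described works roughly like this: the search runs in a throwaway agent that spends its own context window on grep/read calls, and only a short summary flows back to the main session. A minimal sketch under that assumption, again using the headless `claude -p` print mode; the question string and function name are hypothetical:

```python
import subprocess

def search_with_subagent(question: str) -> str:
    """Run a repository search in a separate agent process; only the
    summary returned here enters the main conversation's context."""
    prompt = (
        f"Search this repository to answer: {question}\n"
        "Reply with at most five bullet points citing file paths."
    )
    # The sub-agent burns its own context window on the search;
    # the parent session only ever sees the short answer.
    result = subprocess.run(
        ["claude", "-p", prompt],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

print(search_with_subagent("Where is the request retry logic implemented?"))
```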
For a good month I juggled between Claude Code and Codex CLI and found that Codex CLI did the job better. I recently ditched Claude Code and am currently only using Codex CLI.
For me, I don't understand Codex the same way I don't understand Gemini.
In my day-to-day tasks, the only models that actually do what I want are the Anthropic ones; all the others just fall flat on their face most of the time and end up creating more work than the Anthropic models.
I wonder if it's because I tend to abuse my models and constantly tell them that they're stupid.
Ah, bots analyzing bots. Seems OpenAI has a larger bot army than Anthropic right now.
Need to go to conferences and actually talk to people to understand what Real People (TM) think of GenAI.
And what would that be? Please save me a plane ticket.
OpenAI's crawling is the best. Just following Anthropic's way.
Comparing vibe coding tools based on vibes — makes sense!
It's interesting to me that Codex has such high sentiment. I'm definitely an outlier on the more principled end of the spectrum, but I refuse to use OpenAI products.
I take issue with the AI industry in general and its hand-wavy approach to risk, but OpenAI really is on another level in my book. While I don't trust the industry's approach to AI development, with OpenAI I don't trust the leadership's intentions.
> It's interesting to me that Codex has such high sentiment.
Me too, so much so that I doubt this is legitimate. This blog post is the only place I've seen people 'raving' about Codex.
Claude Code is the current standard all others are measured against.
That was true until GPT-5. That model hugely improved Codex, so Codex being comparable with CC is a recent thing.
Regular Codex user. It's my typing assistant. It allows me to be the ideas guy when writing software. Codex makes plenty of mistakes when generating large blocks of code, but it's easier to clean up and consolidate with a refactoring pass once the typing has been done.
It looks like you have not reviewed r/ClaudeAI. This is a much larger subreddit and most of the posts are about Claude Code. Many comparisons of CC vs Codex.
This sub is full of "vibe coders" who use "prompt engineered" 1000-line prompts with 500 MCPs and then complain that they reach their limit on the first day while using the $200 Max plan.
I'm still using Cursor, and it seems fine. What do CC and Codex offer that's so much better than Cursor? I don't get it.
Is Aider done for?
Reading the comments and posts about both Claude Code and Codex on Reddit (and often hacker news), it’s hard to imagine they’re not extremely astroturfed.
There seems to be a constant stream of not terribly interesting or unique "my Claude Code/Codex success story" blog posts that manage to solicit so many upvotes.
I dunno.
I've been coding for 30 years.
Using Codex, I'm finally enjoying it again for the first time in maybe 15 years. Outsource all that annoying part? Heck yeah - bring it on.
And I tell everyone I can how transformational it has been for me.
Did you also use Claude, and you like Codex better, or are you making a more general observation about the leapfrog in creative power agents are bringing to engineering?
I’ll tell you something. I love working with Claude. It’s enthusiastic, it’s nice, it’ll give you suggestions. It’s an all around pleasant experience.
I hate working with Codex. It feels like a machine. You tell it to do something, and it just does it. No pretension at being human, or enthusiastic, or anything really.
But Codex almost always does it right. And the comments are right: I never run into random usage limits. Codex doesn't arbitrarily decide to shrink the context window, or start compacting again after 3 messages.
The Codex client sucks; Claude Code's is much better. But the Codex client is consistent, which is much more important. Claude was amazing 3 months ago. The model is still fine, but the quality of the experience has degraded so far that it's hard to consider using it.
This is my experience as well. Codex is very verbose, which is annoying considering the limits. My workflow tends to be: have Claude Code describe the problem (as succinctly as it can) based on my keyboard-mashing description of what I want done, then send that to Codex. I've tried it the other way around; it doesn't work nearly as well. Disclaimer: not using the 5-prompts-per-week Opus.
Personally, I find Codex's less-chatty nature nice. I prefer to save my human emotions for humans.
Your comment is obviously not AI-generated, but since we were talking about astroturfing on Reddit, which is presumably done a lot by bots, it's interesting to notice, when I read comments, what kinds of things trigger my inner LLM detector.
> Outsource all that annoying part? Heck yeah - bring it on.
This sentence, and some of your other cadence, really triggers my sense a lot. The comment somehow feels sloganish, formulaic. Not trying to criticise or offend; I just thought it's interesting how it triggers this in my brain. And I do agree with you.
I think I'm partly responsible. I've been having a lot of fun with these tools, and so seeing other people doing the same just makes me want to engage even if the discussion isn't particularly sophisticated. I swear I'm not paid to do this (actually I pay out the wazoo for Claude..)
Truthfully, I find Sonnet 4.5 better at Rust code than Codex (medium/high). Haven't tried anything else (like React/TypeScript), since I only use AI for issues/problems I don't understand.
> since I only use AI for issues/problems I don't understand
I only use [coding assistants] for problems I DO understand.
My suspicion is that much (or at least some) of the negative sentiment towards Claude Code is from folks who were on it early (when CC was even more widely used than Codex) and created intensive workflows using it. When Anthropic tightened quotas to make things more equitable across plan users, they were much more likely to be impacted.
This is obviously pure conjecture, but perhaps the OE (overemployed) folks had automated their multiple roles and now they need to be more involved.
Eh, honestly, I've had some health issues since the vibe coding craze started. Normally I'm one of the people who try things like that out, mostly cuz I don't actually have any hobbies beyond coding and generally find such things funny.
As I got better around June/July, I finally found the energy to try it out. It was working incredibly well at the time. It was so fun (for me) that I basically kept playing with it every day after finishing work. So for roughly 1.5 months, basically every free minute of each day, along with side explorations during work hours when I could get away with it.
Then I had to take another business trip mid-August. When I finally came back in September, it was unrecognizable, and from my perspective it definitely hasn't recovered to how ultrathink+Opus performed back then.
You can definitely still use it, but you need to take a massively more hands-on approach.
At least my opinion is not swayed by their reduced quota... But to stay in line with the sentiment analysis this article is about: neither have I tried Codex to this point. Which I will, eventually.
In life, it helps to be skeptical, so the real question is: where do I find real-life humans to ask about their experiences? And even then, they could still be paid actors. Though I've often wondered how that would work. Like, the marketing department staffed by hot people finds developers and then offers to Venmo them $500 to write something nice online about the product? It's a big Internet, and there are a lot of people on Upwork, so I'm not saying it isn't happening, but I've never gotten an email asking me to write something nice about Claude Code in exchange for a couple of bucks.
One thing worth taking into account is the practice of finding people who actually like the product, and then paying them to write an honest review. I find this to be much closer to ethical than paying exclusively for positive reviews to people who may not have ever used the product, but it has a similar net effect of distorting the sentiment by amplifying a subset of opinions, so still not ideal but at least it’s rooted in honesty.
If you haven’t been vocal about your support of products in general, you wouldn’t show up on the radar for these “opportunities.”
I'm quite confused by the comments you got on this one; surely half of them must be satirical?
Meanwhile I am talking about unique shit with Claude Code trying to draft on that sentiment for little to no traction with them. We've built the best way to automate and manage production infrastructure using these models and no one gives a shit. It's so weird.
> Meanwhile I am talking about unique shit with Claude Code trying to draft on that sentiment for little to no traction with them.
What does this mean? What do you mean by "unique shit"? What do you mean when you say you’re trying to draft on the sentiment? What is "them" referring to?
Genuinely. I’m not being (deliberately) obtuse, just trying to follow. Thanks.
Thanks for asking this, because for a moment I thought I was too dense to read this correctly.
Best according to?
Sucks-Rules-o-Meter, but 2025.