Notable re author: “Addy Osmani is an Irish Software Engineer and leader currently working on the Google Chrome web browser and Gemini with Google DeepMind. A developer for 25+ years, he has worked at Google for over thirteen years, focused on making the web low-friction for users and web developers. He is passionate about AI-assisted engineering and developer tools. He previously worked on Fortune 500 sites. Addy is the author of a number of books including Learning JavaScript Design Patterns, Leading Effective Engineering Teams, Stoic Mind and Image Optimization.”
Also a winner of the Irish Young Scientist competition, 2 years before Patrick Collison. https://en.wikipedia.org/wiki/Young_Scientist_and_Technology...
Gemini CLI sucks. Just use Opencode if you have to use Gemini. They need to rebuild the CLI just as OAI did with Codex.
YMMV I guess, but it's my go-to tool; fast and reliable results, at least for my use cases
Agreed. Been using Claude Code daily for the past year and Codex as a fallback when Claude gets stuck. Codex has two problems: its Windows support sucks, and it's way too "mission driven" vs the collaborative Claude. Gemini CLI falls somewhere in the middle, has some seriously cool features (Ctrl+X to edit prompt in notepad), and its web research capability is actually good.
I'm constantly floored by how well claude-cli works, while gemini-cli stumbled on something simple the first time I used it, and Gemini 3 Pro's release availability was just bad.
Well, Opencode also completely replaced its TUI a few weeks ago.
BTW Gemini 3 via Copilot doesn't currently work in Opencode: https://github.com/sst/opencode/issues/4468
Copilot on Opencode is not good. It's all over the place, which is a shame because Copilot is one of the best values.
what happened with Codex? Did they rebuild it?
Codex CLI switched from a TypeScript implementation to a Rust-based one.
I too am curious. My daily driver has been Claude Code CLI since April. I just started using Codex CLI and there are a lot of gaps--the most annoying being that permissions don't seem to stick. I am so used to plan mode in Claude Code CLI and really miss that in Codex.
The model needs to be trained to use the harness. Sonnet 4.5 and gpt-5.1-codex-max are "weaker" models in the abstract, but you can get much more mileage out of them due to post-training.
> To use OpenCode, you’ll need:
> A modern terminal emulator like:
> WezTerm, cross-platform
> Alacritty, cross-platform
> Ghostty, Linux and macOS
> Kitty, Linux and macOS
What's wrong with any terminal? Are those performance gains that important when handling a TUI? :-(
Edit:
Also, I don't see Gemini listed here:
https://opencode.ai/docs/providers/
Only Google Vertex AI (?): https://opencode.ai/docs/providers/#google-vertex-ai
Edit 2:
Ah, Gemini is the model, and Google Vertex AI is like AWS Bedrock: it's the Google service actually serving Gemini. I wonder if Gemini can be used from OpenCode when made available through a Google Workspace subscription...
It's silly of them to say you need a "modern terminal emulator", it's wrong and drives people away. I'm using xfce4-terminal.
Gemini 3 via any provider except Copilot should work in Opencode.
All these tips and tricks just to get out-coded by some guy rawdogging Copilot in VS Code.
My tip: Move away from Google to an LLM that doesn't respond with "There was a problem getting a response" 90% of the time.
I had a terrible first impression with Gemini CLI a few months ago when it was released because of the constant 409 errors.
With the Gemini 3 release I decided to give it another go, and now the error changed to: "You've reached the daily limit with this model", even though I have an API key with billing set up. It wouldn't even let me try Gemini 3, and even after switching to Gemini 2.5 it would still throw this error after a few messages.
Google might have the best LLMs, but its agentic coding experience leaves a lot to be desired.
I had to make a new API key. My old one got stuck with this error; it's on Google's end. A new key resolved it immediately.
and then losing half a day setting up billing - with a limited virtual credit card so you have at least some cost control
For me, I had just set up a project and set billing on it. Making a second key and assigning the billing to it was instant; I got to reuse the project.
I have sympathy for any others who did not get so lucky
Are we getting billed for these? The billing is so very not transparent.
My experience working in FAANG... nobody knows
we need a Nate Bargatze skit for these quips
Would be nice to have an official confirmation. Once tokens get back to the user, those are likely already counted.
Sucks when the LLM goes on a rant only to stop because of hardcoded safeguards, or what I encounter often enough with Copilot: it generates some code, notices it's part of existing public code and cancels the entire response. But that still counts towards my usage.
Copilot definitely bills you for all the errors.
Looking through this, I think a lot of these also apply to Google Antigravity, which I assume just uses the same backend as the CLI and wraps a lot of these commands in a UI (e.g. checkpointing).
A lot of the time, Gemini models get stuck in a loop of errors, and a lot of the time they fail at edit/read or other simple function calling
it's really really terrible at agentic stuff
Not so much with Gemini 3 Pro (which came out a few days ago)... to the point that the loop detection that they built into gemini-cli (to fight that) almost always over-detects, thinking that Gemini 3 Pro is looping when it in fact isn't. Haven't had it fail at tool calls either.
Interesting, I run into loop detection in 2.5-pro but haven't seen it in 3 Pro. Maybe it's the type of tasks I throw at it though; I only use 3 at work, and that code base is much more mature and well defined than my random side projects.
Tried it in V0; it always gets into an infinite loop
will give the CLI another shot
Gemini 3 with the CLI is relentless if you give it detailed specs, and other than API errors, it's just great. I'd still rank Claude models higher, but Gemini 3 is good too.
And the GPT-5 Codex has a very somber tone. Responses are very brief.
Gemini-CLI on Termux does not work anymore. Gemini itself found a way to fix the problem, but I did not totally grok what it was going to do. It insisted my Termux was old and rotten.
Make sure you've turned off the "alternate buffer" setting
>this lets you use Gemini 2.5 Pro for free with generous usage limits
Considering that access is limited to the countries on the list [0], I wonder what motivated their choices, especially since many Balkan countries were left out.
[0]: https://developers.google.com/gemini-code-assist/resources/a...
For Europe it's EU + UK + EFTA, plus, for some reason, Armenia.
Agentic coding seems like it's not the top priority; it's more about capturing search engine users, which is understandable.
Still, I had high hopes for Gemini 3.0 but was let down by the benchmarks. I can barely use it in the CLI; in AI Studio, however, it's been pretty valuable, though not without quirks and bugs.
Lately it seems like all the agentic coders, like Claude and Codex, are starting to converge, differentiated only by latency and overall CLI UX and usage.
I would like to use Gemini CLI more, even Grok, if it were possible to use them like Codex.
A lot of it seems to mirror the syntax of Claude Code.
Integration with Google Docs/Spreadsheets/Drive seems interesting, but it seems to be via MCP, so nothing exclusive/native to Gemini CLI, I presume?
There seem to be an awful lot of "could"s and "might"s in that part. Given how awfully limited the Gemini integration inside Google Docs is, it's an area that's just made me feel Google is executing really slowly on this.
I've built a document editor that has AI properly integrated - provides feedback in "Track Changes" mode and actually gives good writing advice. If you've been looking for something like this - https://owleditor.com
Nice breakdown. Curious if you’ve explored arbitration layers or safety-bounded execution paths when chaining multiple agentic calls?
I’m noticing more workflows stressing the need for lightweight governance signals between agents.
It would/will be interesting to see this modified to include Antigravity alongside Gemini CLI.
Am I stupid? I run /corgi, nothing happens and I don't see a corgi. I have the latest version of the gemini CLI. Or is it just killedbygoogle.com
I have never had any luck using Gemini. I had a pretty good app created with Codex. Due to the hype, I thought I'd give Gemini a try. I asked it to find all the ways to improve security and architecture/design. Sure enough, it gave me a list of components etc. that didn't match best patterns and practices. So I let it refactor the code.
It fucked up the entire repo. It hardcoded tenant IDs and user IDs, it completely destroyed my UI, and it broke my entire GraphQL integration. Set me back 2 weeks of work.
I do admit the browser version of Gemini chat does a much better job at providing architecture and design guidance from time to time.
Do you use AI agents on repos without version control?
> Set me back 2 weeks of work.
How did this happen?
Did you let the agent loose without first creating its own git worktree?
What's the benefit of git worktree? I imagine you can just not give the agent access to git and you're in the same spot?
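The benefit is containment without losing history: the agent gets its own checkout on its own branch, so your working copy stays untouched, and even non-git damage (emptied or deleted files) is confined to the worktree directory. Just taking git access away from the agent doesn't help with that; it can still overwrite files in your one and only checkout. A rough sketch (paths and branch names are made up):

    # separate checkout on a throwaway branch the agent can safely trash
    git worktree add ../myapp-agent -b agent/refactor-experiment

    # point the agent at ../myapp-agent, then review its work as a plain diff
    git diff main...agent/refactor-experiment

    # happy? merge the branch. unhappy? delete everything:
    git worktree remove --force ../myapp-agent
    git branch -D agent/refactor-experiment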
tfw people are running agents outside containers
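Roughly, for anyone who hasn't set this up (a sketch assuming Docker and a Node project; the image, mount, and install command are illustrative, adjust for your stack):

    # disposable container whose only view of the host is the mounted repo
    docker run --rm -it -v "$PWD":/work -w /work node:20 bash

    # inside the container, install and run the agent CLI of your choice,
    # e.g. npm install -g @google/gemini-cli && gemini

Worst case, the agent wrecks /work, which is still just your repo directory; combined with a worktree as above, even that is disposable.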
Yeah, this is something I need to get to.
Apologies. I meant branch. I nuked the branch. But it set me back a lot of time, as I thought it was only a few things here and there.
I really tried to get Gemini to work properly in Agent mode. Though it went crazy way too often: it started rewriting files to be empty, left comments like "here you could implement the requested function", and more, including running into permanent printing loops of stuff like "I've done that. What's next on the debugger? Okay, I've done that. What's next on the with? Okay, I've done that. What's next on the delete? Okay, I've done that. What's next on the in? Okay, I've done that. What's next on the instanceof? Okay, I've done that. What's next on the typeof? Okay, I've done that. What's next on the void? Okay, I've done that. What's next on the true? Okay, I've done that. What's next on the false? Okay, I've done that. What's next on the null? Okay, I've done that. What's next on the undefined? Okay, I've done that..." which went on for like 1 hour (yes, I waited to see how long it takes for them to cut it).
It's just not really good yet.
I recently tried IntelliJ's Junie, and I have to say it works rather well.
I mean, at the end of the day all of them need a human in the loop, and the result is only as good as your prompt. Though with Junie I at least got something of a result most of the time, while with Gemini, 50% would have been a good rate.
Finally: I still don't see agentic coding in production - it's just not there yet in terms of quality. For research and fun? Why not.
Why is this AI generated slop so highly upvoted?
Even though the doc _might_ be AI generated, that repo is Addy Osmani's.
Of Addy Osmani fame.
I seriously doubt he went to Gemini and told it "Give me a list of 30 identifiable issues when agentic coding, and tips to solve them".
Because it's good slop.