Ideally the operating system would untangle the content from these platform applications and let the end user consume it the way they want. For example, YouTube offers search, video, and comments. The operating system should extract these three things and create a good UI around them, while discarding the rest. Playlists and viewing history can all be managed in the offline part of the application. Spotify offers music, search, and lyrics, but they want you to watch videos and use social media components in their very opinionated UIs, while actively fighting your attempts to create a local backup of your music library.
Tools like adblockers, yt-dlp, and streamlink are already solving parts of these issues by untangling content from providers for local consumption in a trusted environment. For me, the fight by Anthropic against OpenCode fits into this picture.
These companies are acting with hostility even toward paying customers, each of them trying to build its own walled garden.
It wasn't just hooking up a new faucet. It was hijacking an API key intended specifically for Claude Code. So in this metaphor it would be hooking up a secondary pipe from the water company, intended only for the sprinklers they provide, to your main water supply. The water company notices abnormal usage coming from the sprinkler pipe and shuts it off, while leaving your primary water pipe alone.
Possibly a better comparison (though a bit dated now) would be AT&T (or whatever telephone monopoly one had/has in their locality) charging an additional fee to use a telephone that wasn't sold or rented to you by AT&T.
Comcast pulled this on me recently through what I can only describe as malicious bundling.
Internet + shitty "security" software that only runs on their hardware + modem rental is cheaper than internet only + bring your own equipment. You can't buy the cheaper internet+security package without their hardware (or so they claimed).
Fwiw, your main point seems scattered across your post where sentences refer to supposed context established by other sentences. It's making it hard to understand your position.
Maybe try the style where you start off with your position in a self-contained sentence, and then write a paragraph elaborating on it.
It's exactly like water. Use their API, and you pay for as much water as you drink. But visit them in their pub, and you get a pretty big buffet with lots of water for a one-time price.
Please stop spreading this nonsense. Anthropic is not blocking OpenCode. You can use all their models within OpenCode via the API. Anthropic simply let Dax and team use unlimited plans for the past year or so; I don't even know if it was official. I find this a bit comical and immature. If you want to use the models, just pay for them. Why are people trying to nickel-and-dime the tools that they use day in, day out?
You can clearly run the provided gist. A request with "You are OpenCode" in the system prompt fails, but not if you replace the name with another tool name (e.g. "You are Cursor", "You are Devin"). That's a pretty blatant difference in behavior based on a blacklisted value.
I do not understand the stubbornness about reusing the auth. Locally, just call Claude Code from your harness, or better, use the Claude Agent SDK; both have clear auth and are permitted by Anthropic. But wanting to use this auth as a substitute for the API is a different issue altogether.
The title is misleading if you don’t read the whole text: Anthropic is not blocking OpenCode from the API that they sell.
They’ve blocked OpenCode from accessing the private Claude Code endpoints. These were not advertised or sold as usable with anything else. OpenCode reverse engineered the API and was trying to use it.
The private API isn’t intended for use with other tools. Any tool that used it would get blocked.
> Any tool that used it would get blocked.
Isn't that misleading on Anthropic's side? The gist shows that only certain tools are blocked, not all. They're selectively enforcing their ToS.
The gist shows that the first line of the system prompt must be "You are Claude Code, Anthropic's official CLI for Claude."
That's a reasonable attempt to enforce the ToS. For OpenCode, they take the additional step of blocking a second line of "You are OpenCode."
There might be more thorough ways to effect a block (e.g. requiring signed system prompts), but Anthropic is clearly making its preferences known here.
They can enforce their ToS however they like. It's their product and platform.
What do you mean by "not all"? They aren't obligated to block every tool/project trying to use the private API, all the way down to a lone coder making their own closed-source tool. That's just not feasible. Or do you have a way to do that?
> The gist shows that only certain tools are blocked, not all.
Are those other phrases actually used by any tools? I thought they were just putting arbitrary phrases into the LLM. When misuse of the endpoint is detected at scale, they probably add more triggers for that abuse.
Expecting it to magically block different phrases is kind of silly.
> They're selectively enforcing their ToS.
Do you have anything to support that? Not a gist of someone putting arbitrary text into the API, but links to another large scale tool that gets away with using the private API?
Seems pretty obvious that they’re just adding triggers for known abusers as they come up.
OpenCode is doing nothing wrong and adversarial interoperability is the cornerstone of hacker ethos.
As such, the sentiment in this thread is chilling.
I do admit to feeling some schadenfreude over them reacting to their product being leeched by others.
I get it though, Anthropic has to protect their investment in their work. They are in a position to do that, whereas most of us are not.
They’re not literally blocking OpenCode. You can use OpenCode with their API like any other tool.
They’ve blocked the workaround OpenCode was using to access a private API that was metered differently.
Any tool that used that private endpoint would be blocked. They’re not pushing an agenda. They’re just enforcing their API terms like they would for any use.
After they exploited us by training, without any limits, on code they never licensed (including GPLed code), they now scramble to ban and restrict when we want to do the same to them. That's the schadenfreude...
Hey! It was a lot of work stealing everything from you, of course you have to pay me a premium to get access to it!
> protect their investment
Viewed another way, the preferential pricing they're giving to Claude Code (and only Claude Code) is anticompetitive behavior that may be illegal.
This is a misunderstanding of the regulations.
They’re not obligated to give other companies access to their services at a discounted rate.
They may, however, be obligated not to give customers access to their services at a discounted rate either: predatory pricing is, at least some of the time and in some jurisdictions, illegal.
Predatory pricing? They have a public API that anyone can use for a public rate. There is no predatory pricing here.
The Claude Code endpoint is a private API. They’re free to control usage of their private API.
Predatory pricing is selling something below cost to acquire/maintain market dominance.
The Claude subscription used for Claude Code is, to all appearances, being sold substantially below the cost to run it, and it certainly seems that this is being done to maintain Claude Code's market dominance and force out competitors, such as OpenCode, who cannot afford to subsidize LLM inference in the same way.
It's not a matter of there being a public API; I don't believe they are obligated to offer one at all. It's a matter of the Claude subscription being priced fairly so that OpenCode (on top of, say, Gemini) can be competitive.
The modern consumer benefit doctrine means predatory pricing is impossible to prosecute in 99% of cases. I’m not saying it’s right, but legally it is toothless.
This is true... in the US (though there is still that 1%). Anthropic operates globally, and the US isn't the only country that ever realized it might be an issue.
> Predatory pricing is selling something below cost to acquire/maintain market dominance.
Yet they have to acquire dominance in a meaningful market first if you want to prosecute; otherwise it's just a failed business strategy, like that company selling movie tickets below cost.
Claude Code is so successful that they could close off the API to protect the moat.
I'm surprised they didn't go with the option of offering Opus 4.6 to Claude Code only.
The API is really expensive compared to a Max subscription! So they're probably making a lot of money (or at least losing much less) via the API. I don't think it's going anywhere. Worst case scenario they could raise the price even more.
That’s what OpenAI is doing with GPT-5.2-Codex
What makes it predatory?
The Claude subscription (i.e. the Pro and Max plans, not the API) is sold at what appears to be well below cost, in what looks like a blatant attempt to preserve or create market dominance for Claude Code, destroying competitors by making it impossible to compete without also having a war chest of money to give away.
You're making a big assumption. LLM providers aren't necessarily taking a loss on the marginal cost of inference; it's only when you include R&D and training costs that the big capital inputs are required. They've come out and said as much.
The Claude Code plans may not be operating at a loss either. Most people don’t use up 100% of their plan. Few people do. A lot of it goes idle.
If you check the actual token consumption and compare it to API pricing, you will find a factor of 10.
Training models costs tens of millions; their revenues from subscriptions + API are well above hundreds of millions.
If you look at MiniMax IPO data, you can see that they spent 3x their revenue on "cloud bills".
So yes, it's probable that they do subsidize inference through subscriptions in order to capture the market.
No inference provider is profitable, and most run on VC money to serve customers.
Now dig into Copilot Plus and how they price the premium requests. The math is not mathing; they are aggressively trying to capture the market.
Are you suggesting Anthropic has a "duty to deal" with anyone who is trying to build products competitive with Claude Code, beyond access to their priced API? I don't think so. Especially not for a product that's been breaking the ToS.
No, but I think they should have one. Or that antitrust were enforced through some other means. Or at all, really.
Citing the ToS is circular logic. They set the terms and can change them whenever they want!
A regulatory duty to deal is the opposite of setting your own terms. Yes, citing a ToS is acceptable in this scenario. We could throw the ToS out if we all believed in a duty to deal.
Seems like another donation to Python is coming to mitigate this PR scandal.
They cannot actually do this as long as they keep Claude Code open source. It is always going to be trivial to replicate how it sends requests in a third-party tool.
CC isn’t open sourced.
Neither was 99.99999% of the content they stole.
Any source for this claim?
All the scraped data on the internet?? Are you that naive lol
Scraped != stolen.
LOL. No, if you didn't have explicit permission to use it for training, you didn't have permission; this is called stealing.
I meant "source available" I guess. Or am I missing something?
https://github.com/anthropics/claude-code
Yes, there is no source code in here. This is their scripts / tooling / prompts repo. The actual code that powers their CC terminal CLI does not exist anywhere on their public GitHub
It is available on npm, but it's a wasm file last I checked. You also don't need it to find their endpoints; people are just seeing what network calls are made when they use Claude Code and then trying to get other agents to call those endpoints.
The hard part is that they have an Anthropic-compatible API that’s different than completion/responses.
Obviously Anthropic are within their rights to do this, but I don’t think their moat is as big as they think it is. I’ve cancelled my max subscription and have gone over to ChatGPT pro, which is now explicitly supporting this use case.
Is OpenCode that much better than Codex / Claude Code for CLI tooling that people are prepared to forsake[1] Sonnet 4.5/Opus 4.5 and switch to GPT-5.2-Codex?
The moat is Sonnet/Opus, not Claude Code; the moat can never be a client-side app.
Cost arbitrage like this is short-lived, lasting only until the org changes pricing.
For example, Anthropic could release, say, an ultra plan at $500-$1000 with these restrictions removed or relaxed, reflecting the true cost of the consumption. Or they could get the cost of inference down enough that even at $200 it is profitable for them, and then stop caring if the higher bracket does not sell well. In that case $200 is what the market is ready to pay, and there will be a percentage of users who use it more than the rest, as is the case with any software.
Either way, the only money here, i.e. the $200 (or more), is going only to Anthropic.
[1] Perceived or real, there is a huge gulf in how Sonnet 4.5 is seen versus GPT-5.2-Codex.
The combination of Claude Code and models could be a moat of its own; they are able to use RL to make their agent better - tool descriptions, reasoning patterns, etc.
Are they doing it? No idea, it sounds ridiculously expensive; but they did buy Bun, maybe to facilitate integrating around CC. Cowork, as an example, uses CC almost as an infrastructure layer, and the Claude Agent SDK is basically LiteLLM for your Max subscription - also built on/wrapping the CC app. So who knows, the juice may be worth the RL squeeze if CC is going to be foundational to some enterprise strategy.
Also IMO OpenCode is not better, just different. I’m getting great results with CC, but if I want to use other models like GLM/Qwen (or the new Nvidia stuff) it’s my tool of choice. I am really surprised to see people cancelling their Max subscriptions; it looks performative and I suspect many are not being honest.
Why would they not be able to use RL to learn even if it's OpenCode instead of Claude Code?
The tool calls, reasoning, etc. are still sent, tracked, and used by Anthropic; the model cannot function well without that kind of detail.
OpenCode also gets more data they could train their own model with; however, at this point only a few companies can attempt foundational model training runs, so I don't think the likes of Anthropic are worried about a small player also getting their user data.
---
> it looks performative and I suspect many are not being honest.
Quite possible, if they were leveraging the cost arbitrage, i.e. the fact that the actual per-token cost was cheaper because of this loophole. Now that their cost is higher, they perhaps don't need/want/value the quality for the price paid, so they will go to Kimi K2 / Grok Code / GLM Air for better pricing. Basically, if all you value is cost per token, this change is reason enough to switch.
These are the kind of users Anthropic perhaps doesn't want. Somewhat akin to Apple segmenting the market and not focusing on the budget tier.
> I am really surprised to see people cancelling their Max subscriptions; it looks performative and I suspect many are not being honest.
Why do you think I'm not being honest? What am I supposedly not being honest about?
I’ve used both Claude and Codex extensively, and I already preferred Codex the model. I didn’t like the harness, but recently pi got good enough to be my daily driver, and I’ve since found that it’s much better than either CC or Codex CLI. It’s OSS, very simple and hackable, and the extension system is really nice. I wouldn’t want to go back to Claude Code even if I were convinced the model were much better - given that I already preferred the alternative it’s a no-brainer. OpenAI have officially allowed the use of pi with their sub, so at least in the short term the risk of a rug pull seems minimal.
What is pi?
https://shittycodingagent.ai
I hope the upcoming DeepSeek coding model puts a dent in Anthropic’s armor. Claude 4.5 is by far the best/fastest coding model, but the company is just too slimy and burning enough $$$ to guarantee enshitification in the near future.
I get way better results from Gemini fwiw.
Honestly, I'm a big Claude Code fan, even despite how bad its CLI application is, because it was so much better than other models. Anthropic's move here pretty much signals to me that the model isn't much better than other models, and that other models are due for a second chance.
If their model was truly ahead of the game, they wouldn't lock down the subsidized API in the same week they ask for 5-year retention on my prompts and permission to use for training. Instead, they would have been focusing on delivering the model more cheaply and broadly, regardless of which client I use to access it.
This is exactly like if an open-source project called OpenVideo pretended to be a Netflix/Prime/HBO/AppleTV+ client and allowed access to content that way, skipping the official clients.
Then they get angry when their use is blocked.
Only in this case they can 100% use the service via a paid API.
Everyone goes on and on about how "Anthropic has the right to do this". Sure, but we also have the right to work around these blocks and fight against behavior that uses their position to create a walled garden and vendor lock-in through anti-competitive pricing and a temporary monopoly on the "best" model.
You're not wrong, but most people on this forum are generally positive about companies using private APIs (which this is) for a competitive advantage.
This is pretty undisputed I think... So if we're going to condemn anthropic for it, it'd be pretty one-sided unless we also took it up with any other companies doing so, like Apple, Google, ... And frankly basically all closed source companies.
It's just coincidentally more obvious with this Claude code API because the only difference between it and the public one is the billing situation...
The only basis we'd have to argue otherwise is that the subscription predates Claude code
https://www.anthropic.com/news/claude-pro (years ago)
But I didn't think we're strangers to companies pivoting the narrative like this
Am I understanding it correctly, based on these tweets [1][2], that both Codex and Copilot teams or at least team members mentioned potentially letting people make use of their quotas in third party tools?
I really would like further clarification on those points as I would be pretty interested for a product I'm building if it was indeed made possible.
[1] https://x.com/jaredpalmer/status/2009844004221833625
[2] https://x.com/thsottiaux/status/2009714843587342393
This is definitely Barbra Streisand-ing right now. I had never heard of OpenCode. But I sure have now! Will have to check it out. Doubt I'll end up immediately canceling Claude Code Max, but we'll see.
I don’t know if the Streisand Effect is relevant here since Anthropic will block any other uses of their private APIs, not just OpenCode. The private Claude Code API was never advertised nor sold as a general purpose API for use with any tool.
OpenCode is an interesting tool but if this is your first time hearing of it you should probably be aware of their recent unauthenticated RCE issues and the slow response they’ve had to fixing it: https://news.ycombinator.com/item?id=46581095 They say they’re going to do better in the future but it’s currently on my list of projects to keep isolated until their security situation improves.
Imo I don't trust ANY of these tools to run in non-isolated environments.
All of these tools are either
- created by companies powered by VC money that never face consequences for mishandling your data
- community vibecoded with questionable security practices
These tools also need to have a substantial amount of access to be useful so it is really hard to secure even if you try. Constantly prompting for approval leads to alert fatigue and eventually a mistake leading to exfiltration.
I suggest just sticking to LXC or a VM. Desktop (including Linux) userland security is just bad in general. I try to keep most random code I download for one-off tasks in containers.
I'm trying to put together an exe.dev-like self hosted solution using Incus/LXC. Early days but works as a proof of concept:
https://github.com/jgbrwn/shelley-lxc
A coding agent is just a massive RCE; what do you think happens when Claude gets prompt-injected? Although I don't defend not fixing an RCE.
Absolutely all coding agents should be run in sandboxed containers, 24/7. If you do otherwise, please don't cry when you're pwned.
agreed. This is definitely free PR for OpenCode. I didn't try it myself until I heard the kerfuffle around Anthropic enforcing their ToS. It definitely has a much nicer UX than claude-code, so I might give the GPT subscription a shot sometime, given that it's officially supported w/ 3rd party harnesses, and gpt 5.2 doesn't appear to be that far behind Opus (based on what other people say).
OpenCode is kind of a security disaster though: https://news.ycombinator.com/item?id=46581095. To be clear, I know all software has bugs, including security bugs. But that wasn't an obscure vulnerability, that was "our entire dev team fundamentally has no fucking clue what they're doing, and our security reporting and triage process is nonexistent". No way am I entrusting production code and secrets to that.
So is Claude. They nuked everyone's Claude app a few days ago by pushing a shoddy changelog that crashed the app during init. The team literally doesn't understand how to implement try...catch. The thing was clearly vibe-coded into existence.
Last week Claude Code (CC) had a bug that completely broke the Claude Code app because of a change in the CC changelog markdown file.
Claude Code’s creator has also said that CC is 100% AI generated these days.
I've been on Claude Code since before they even HAD subscriptions (API only), and since getting Max from day 1, I haven't once assumed that access was allowed outside of CC. Anyone who thinks otherwise is leaning into cognitive dissonance.
I want more customers like you, eat your slop and say thank you.
Soft plug: the team at nori just announced our own CLI today. Most people build on top of the provider layer, but we build on top of the agent layer. This means that you can use your subscriptions, and you get the benefit of the best system prompts and tools that the base models were fine-tuned with.
Cliff posted a show hn earlier today here: https://news.ycombinator.com/item?id=46616562
Asked Opus a question on OpenRouter: $0.30.
Asked Minimax 2.1 that question: $0.008.
At some point it stops making sense. You cannot use "the good model" just for the hard bits without basically hand-writing your own harness. Even then, it will need full, uncached context.
Feels like consulting a premium lawyer to ask what time it is.
When using their web UI with Firefox and uBlock Origin, it regularly freezes the tab while the answer is being written out. Someone at Anthropic had to create a letter-by-letter typing animation with a GIF image and Sentry callbacks every five seconds, and it ends up in an infinite loop.
I've seen reports about this bug affecting Firefox users since Q3 2025. They were reported over various channels.
Not a fan of them prioritizing the fight against OpenCode instead of fixing issues that affect paying users.
How can you be sure the issue is not with ublock?
It also happens with extensions and Firefox's ad blocking disabled. It might be connected to one of the Firefox anti-tracking features, but I was unable to figure it out. The profiler shows an infinite loop.
I've found several reports about this issue. Seems they don't care about Firefox.
It’ll be interesting to see how far they take this cat and mouse game. Will “model attestation” become a new mechanism for enforcing tight coupling between client and inference endpoint? It could get weird, with secret shibboleths inserted into model weights…
I would be so furious if fucking LLM agents are what finally give browser attestation a foothold on our hardware.
Given that Claude Code is a scriptable CLI tool with an SDK, why can't OpenCode just call Claude instead of reusing its auth tokens?
You can't control it to the level of individual LLM requests and orchestration of those. And that is very valuable, practically required, to build a tool like this. Otherwise, you just have a wrapper over another big program and can barely do anything interesting/useful to make it actually work better.
What can't you do exactly? You can send Claude arbitrary user prompts—with arbitrary custom system prompts—and get text back. You can then put those text responses into whatever larger system you want.
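As a rough illustration of what this looks like in practice, here is a minimal sketch (assuming the `claude` CLI is installed and already authenticated; the `-p` print mode and `--output-format json` flags are from the headless-mode docs as I remember them and may differ by version):

```python
import json
import subprocess

def ask_claude_code(prompt: str) -> str:
    """Run Claude Code in non-interactive ("print") mode and return its answer.

    Assumes the `claude` CLI is on PATH and already logged in; the flag names
    below come from the documented headless mode and may change across versions.
    """
    result = subprocess.run(
        ["claude", "-p", prompt, "--output-format", "json"],
        capture_output=True, text=True, check=True,
    )
    payload = json.loads(result.stdout)
    # The JSON shape is version-dependent; fall back to raw stdout if needed.
    return payload.get("result", result.stdout)

if __name__ == "__main__":
    print(ask_claude_code("Summarize what this repo does in two sentences."))
```

The trade-off, as the surrounding comments note, is that you only get Claude Code's final text back; you don't control the individual LLM calls inside its loop.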
May as well just use Claude Code then.
Well, I do use Claude Code myself, but I'd thought the point of OpenCode was that it could combine the responses of multiple LLMs.
This is what ACP and https://github.com/zed-industries/claude-code-acp enables. ACP controls agents - there is native support in Copilot CLI and Gemini and adapters for claude code and codex.
Wow. ACP is used within Zed, so I guess Zed is safe, since it uses Claude Code through ACP.
I wonder if OpenCode could use the ACP protocol as well. ACP seems to be a good abstraction; I should probably learn more about it. Any TL;DRs on how it works?
According to Opus, ACP is designed specifically for IDE clients (with coding agent "servers"), and there's some impedance mismatch here that would need to be resolved for one agent CLI to operate as a client. I haven't validated this, though.
—-
1. ACP Servers Expect IDE-like Clients
The ACP server interface in Claude Code is designed for:
∙ Receiving file context from an IDE
∙ Sending back edits, diagnostics, suggestions
∙ Managing a workspace-scoped session
It’s not designed for another autonomous agent to connect and say “go solve this problem for me.”
2. No Delegation/Orchestration Semantics in ACP
ACP (at least the current spec) handles:
∙ Code completions
∙ Chat interactions scoped to a workspace
∙ Tool invocations
It doesn’t have primitives for:
∙ “Here’s a task, go figure it out autonomously”
∙ Spawning sub-agents
∙ Returning when a multi-step task completes
3. Session & Context Ownership
Both tools assume they own the agentic loop. If OpenCode connects to Claude Code via ACP, who’s driving? You’d have two agents both trying to:
∙ Decide what tool to call next
∙ Maintain conversation state
∙ Handle user approval flows
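To make the "who's driving" question concrete, here is a heavily hedged sketch of what an ACP client looks like from the outside. Everything here is an assumption: the adapter binary name comes from the zed-industries repo linked above, and the JSON-RPC method and parameter names are my recollection of the spec, so check the ACP docs before relying on any of it.

```python
import json
import subprocess

# Illustrative only: ACP is JSON-RPC 2.0 spoken over the agent process's
# stdin/stdout. Method names and parameter shapes below are assumptions.
agent = subprocess.Popen(
    ["claude-code-acp"],   # assumed: Zed's Claude Code ACP adapter on PATH
    stdin=subprocess.PIPE,
    stdout=subprocess.PIPE,
    text=True,
)

msg_id = 0

def send(method: str, params: dict) -> None:
    """Write one JSON-RPC request per line to the agent's stdin."""
    global msg_id
    msg_id += 1
    agent.stdin.write(json.dumps(
        {"jsonrpc": "2.0", "id": msg_id, "method": method, "params": params}
    ) + "\n")
    agent.stdin.flush()

# Handshake, create a session, then hand the agent a prompt. The real spec
# adds capability negotiation, permission requests, and streaming
# "session/update" notifications; params here are abbreviated placeholders.
send("initialize", {"protocolVersion": 1})
send("session/new", {"cwd": "/path/to/project"})
send("session/prompt", {"prompt": [{"type": "text", "text": "Explain the build setup"}]})

for line in agent.stdout:   # responses/notifications, one JSON object per line
    print(json.loads(line))
```

The point of the sketch is only that the ACP client owns the outer loop; whether a second agent like OpenCode can sensibly sit in that seat is exactly the impedance mismatch described above.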
opencode acp -- start ACP (Agent Client Protocol) server
Previous related discussion: https://news.ycombinator.com/item?id=46586766
I don't understand what the threat is from a CLI which is useless without AI models, of which Anthropic could be one provider.
Switching models is too easy and the models are turning into commodities. They want to own your dev environment, for which they can ultimately charge more than for access to their model alone.
They’re afraid their customers will switch model provider in the future, so instead they made them switch now.
They want to be the next JetBrains.
I think the focus on OpenCode is distorting the story. If any tool tried to use the CC API instead of the regular API they’d block it.
Claude Code as a product doesn't use their pay-per-call API, but they've never sold the Claude Code endpoint as a cheaper way to access their models without paying for the normal API.
While Anthropic can choose which tools use their API or subscription, I never fully understood what they gain from having the subscription explicitly work only with Claude Code. Is the issue that it disincentivizes the use of their API?
It’s basic market segmentation.
They gave Claude Code a discount to make it work as a product.
The API is priced for all general purpose usage.
They never sold the Claude Code endpoint as a cheaper general purpose API. The stories about “blocking OpenCode” are getting kind of out of hand because they’d block any use of the Claude Code endpoint that wasn’t coming from their Claude Code tool.
Perhaps concentrated use of Claude Code increases their perceived market value.
It also perhaps tries to preserve some moat around their product/service.
And telemetry and tooling reports, and visible usage like Claude Code signing PRs on GitHub, and things like that.
Are they ZDR (zero data retention) with prompts and completions, and do they perhaps rely on usage statistics from their CLI to infer how people are using it?
Not at all. They train on your prompts and codebase unless you opt out.
Owning the client gives them full control over which model to use for which query, prompt caching, rate limiting and lots more. So they can drive massive savings for the ~same output over just giving unrestricted access to the API.
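For what it's worth, the public Messages API exposes the same caching knob to any client; this is a minimal sketch using the official Python SDK, with the model id and file name as placeholder assumptions. The point is that whoever writes the client decides where the cache breakpoints go.

```python
import anthropic

# Big, rarely-changing context (tool docs, project conventions) that every
# turn re-sends; this is the part worth caching.
LONG_STABLE_SYSTEM_PROMPT = open("system_prompt.txt").read()

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-5",   # assumed model id; substitute whatever you have access to
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": LONG_STABLE_SYSTEM_PROMPT,
            "cache_control": {"type": "ephemeral"},  # cache breakpoint for later turns
        }
    ],
    messages=[{"role": "user", "content": "Refactor utils.py to remove duplication."}],
)

# usage reports cache_creation_input_tokens / cache_read_input_tokens,
# which is where the cost difference shows up on repeated turns.
print(response.usage)
```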
Wouldn’t most of the savings be done on the server side anyway? I would be very surprised if Claude code does those on the client side.
The issue is that Claude Code is cheap because it uses the API's unused capacity. These kinds of circumventions hurt them both ways: one, they don't know how to estimate API demand, and two, other harnesses are more bursty in nature (e.g. parallel calls) compared to Claude Code, so it screws over other legit users. Claude Code very rarely makes parallel calls for context commands etc., but these other tools do.
Re the whole unused-capacity point: that is the nature of inference on GPUs. In any cluster, you can batch inputs (i.e. it takes roughly the same time for, say, 1 query or 100, since they can be parallelized), and now continuous batching[1] exists. Given the API's bursty request patterns, clusters would sit at 40%-50% of peak API capacity, so it makes sense to divert the headroom to subscriptions. That reduces API costs in the future and gives Anthropic a way to monetize unused capacity. But if everyone does it, then there is no unused capacity to manage and everyone loses.
[1]: https://huggingface.co/blog/continuous_batching
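A toy sketch of the scheduling idea, not real inference code; it only shows why, with continuous batching, a new request can be admitted as soon as any in-flight sequence finishes rather than waiting for a whole batch to drain:

```python
import heapq
import random

# Toy simulation: the server keeps up to BATCH_SLOTS sequences in flight and
# admits a waiting request the moment any running sequence finishes.
BATCH_SLOTS = 4
random.seed(0)

arrivals = [(i * 0.2, f"req{i}") for i in range(12)]   # (arrival_time, id)
running = []                                           # min-heap of (finish_time, id)
now, waits = 0.0, []

for arrival, req in arrivals:
    now = max(now, arrival)
    if len(running) == BATCH_SLOTS:            # all slots busy: wait only for the
        finish, _ = heapq.heappop(running)     # *first* sequence to finish,
        now = max(now, finish)                 # not for the whole batch to drain
    service = random.uniform(0.5, 2.0)         # decode time for this request
    heapq.heappush(running, (now + service, req))
    waits.append(now - arrival)

print(f"mean wait with continuous batching: {sum(waits)/len(waits):.2f}s")
```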
Your suggested functionality is server side, not client side.
> it uses API's unused capacity
I see no waiting or scheduling on my usage: it runs at what appears to be full speed until I hit my 4-hour / 7-day limit, and then it stops.
Claude Code is cheap (via a subscription) because it is burning piles of investor cash, while making a bit back on API / pay-per-token users.
Why would scheduling be a thing in this case? I might be missing something here.
With continuous batching, you don't wait for the entire previous batch to finish. A request goes in as another finishes, so the wait time is negligible.
They have rate limits for this purpose. Many folks run claude code instances in parallel, which has roughly the same characteristics.
Not the same.
They have usage limits on the subscription. I don't know about rate limits; certainly not per request.
> This script demonstrates that Anthropic has specifically blocked
> the phrase "You are OpenCode" in system prompts
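For anyone who hasn't opened the gist, the differential test it describes amounts to something like the sketch below. This is not the gist itself: the endpoint and header shapes are the public Messages API, the token env var is a made-up name, and the real script authenticates with a Claude Code OAuth token.

```python
import os
import requests

# Sketch of the differential test: same request, only the second line of the
# system prompt changes. The gist reports that only the OpenCode line fails.
TOKEN = os.environ["CLAUDE_CODE_OAUTH_TOKEN"]   # hypothetical env var name
URL = "https://api.anthropic.com/v1/messages"

def probe(second_line: str) -> int:
    system = ("You are Claude Code, Anthropic's official CLI for Claude.\n"
              + second_line)
    resp = requests.post(
        URL,
        headers={
            "authorization": f"Bearer {TOKEN}",
            "anthropic-version": "2023-06-01",
            "content-type": "application/json",
        },
        json={
            "model": "claude-sonnet-4-5",       # assumed model id
            "max_tokens": 16,
            "system": system,
            "messages": [{"role": "user", "content": "ping"}],
        },
        timeout=60,
    )
    return resp.status_code

for line in ["You are OpenCode.", "You are Cursor.", "You are Devin."]:
    print(line, "->", probe(line))
```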
Have had Max for a while; funny thing, OpenCode still sorta works with my CC Max subscription. That said, after a while OpenCode just hangs. My workflow involves saving state frequently: I cancel, open it back up, and continue; then it's performant for maybe 2-3 context windows' worth of tokens, and repeat.
This is ironic timing given I was just banned for vibe coding and abusing my own desktop Hinge client relying on their API.
I didn't know, guessing some others don't either:
"The open source AI coding agent
Free models included or connect any model from any provider, including Claude, GPT, Gemini and more."
You can get around this by making an agent in OpenCode whose prompt does not mention OpenCode at all, e.g. "You're an agent that uses Claude Opus...", and it will just work.
Didn’t they work around this last week by just putting “You are Claude” in the system prompt?
Related:
Anthropic blocks third-party use of Claude Code subscriptions
https://news.ycombinator.com/item?id=46549823
That's a separate enforcement against all 3rd-party auth (on the 9th).
some of them worked around it, but it looked like they added something specifically for OpenCode today, which seems to have been worked around again after the OpenCode-specific block: https://github.com/anomalyco/opencode-anthropic-auth/commits...
Yeah, Pro/Max access requires Claude Code. You should use the API if you want to build a tool on it.
Well, using the Claude Pro/Max Claude Code API without Claude Code, instead of the actual API they monetize, goes against their ToS.
I don't like it either, but it is what it is.
If I gave free water refills when you used my brand XYZ water bottle, you should not cry that you don't get free refills for your ABC-branded bottle.
It may be scummy, but it does make sense.
Meh, if you want access to the API then pay for the API. It's as simple as that.
This level of hypocrisy is comical. Exploiting the pricing gap between API usage and subscriptions leads to vastly increased efficiency and productivity, therefore it needs to be legally protected. That's the argument when it comes to LLM training and copyright.
It’s because their models burn tokens like crazy. API use is way too expensive
Edit: or should I say, the subscription is artificially cheap
While the subscription is definitely subsidized (technically cross-subsidized, because the subsidy comes from users who pay but barely use it), Claude Code also does a ton of prompt caching that cuts down how much full-price inference each session needs. I have done many hours-long coding sessions and built entire websites using the latest Opus, and the final tally came to something like $4, whereas without caching it would have been $25-30.
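For reference, this is roughly what prompt caching looks like on Anthropic's public API (a minimal sketch, not Claude Code's actual implementation; the model id and the context file are placeholders): the large, stable prefix gets a cache_control marker so that repeat turns read it back at the discounted cached-input rate instead of paying full price every time.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("project_context.md") as f:   # hypothetical large, stable context
    project_context = f.read()

response = client.messages.create(
    model="claude-opus-4-20250514",     # placeholder model id; use a current one
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": "You are a coding assistant for this repository.",
        },
        {
            "type": "text",
            "text": project_context,
            # everything up to and including this block becomes the cached prefix
            "cache_control": {"type": "ephemeral"},
        },
    ],
    messages=[{"role": "user", "content": "Summarize the open TODOs."}],
)
print(response.content[0].text)
```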
Are you saying CC does caching that OpenCode does not? Why would Anthropic care? They limit you based on tokens, so if other agents burn more, users simply get less work done; they can't use more tokens than the limit allows. I don't think Anthropic's objection is technical.
> API use is way too expensive
Cry me a river. I never stop hearing how developers think their time is so valuable that no amount of AI spend could fail to pay for itself. Yet suddenly, paying for what you use is “too expensive”.
I'm getting sick of costs being distorted. It's resulting in dysfunctional methodologies where people spin up a ridiculous number of agents in the background, burning tokens to grind out solutions where a modicum of human oversight or direction would need 10x less compute. At the very least the costs should be realised by the people doing this.
> a modicum of oversight or direction from a human would result in 10x less compute.
Yeah, I noticed it. I use Claude, but I use it responsibly. I wonder how many "green" people run these instances in parallel. :D
Well, they are paying, just not for the product Anthropic wants to sell. Really, at root this is a marketing failure: they really, really want to push the Claude CLI as a loss leader, and are having to engage in this disaster of an anti-PR campaign to plug all the leaks from people sneaking around.
The root cause is and remains their pricing: the delta between their token billing and their flat fee is just screaming to be exploited by a gray market.
I believe LLM providers should ultimately be utilities from a consumer perspective, like water suppliers. I own the faucet, washer, bathtub, and can switch suppliers at will. I’ve been working on a FOSS client for them for nearly three years.
I hope it's clear that the following is purely a factual distinction, not an excuse or an attempt to empathize.
The difference between the other entities named and OpenCode is this:
OpenCode uses people’s Claude Code subscriptions. The other entities use the API.
Specifically, OpenCode reverse‑engineers Claude Code’s OAuth endpoints and API, then uses them. This is harmful from Anthropic's perspective because Claude Code is subsidized relative to the API.
Edit: I’m getting “You’re posting too fast” when replying to mr_mitm. For clarity, there is no separate API subscription. Anthropic wants you to use one of two funnels for coding with their LLMs: 1. The API (through any frontend), or 2. A subscription through an Anthropic‑owned frontend.
You're hitting an important point. I might go on a tangent here.
It's up to operating systems to offer a content consumption experience for end users that reverses the role of platforms back to their original, most basic offerings. They all try to force you into their applications, which are full of tracking, advertisements, upsells, and anti-consumer interface design decisions.
Ideally the operating system would untangle the content from these applications and let the end user consume it the way they want. For example, YouTube offers search, video, and comments; the operating system should extract these three things and build a good UI around them, while discarding the rest. Playlists and viewing history can all be managed in the offline part of the application. Spotify offers music, search, and lyrics, but it wants you to watch videos and use social media components in its very opinionated UI, while actively fighting you if you try to keep a local backup of your music library.
Software like adblockers, yt-dlp, and streamlink already solves parts of this by untangling content from providers for local consumption in a trusted environment. For me, Anthropic's fight against OpenCode fits into this picture.
These companies are acting hostile even towards paying customers, each of them trying to build their walled gardens.
I believe they want you to use the API subscription if you want to use their service with OpenCode. It's possible, just more expensive.
That is analogous to the water company charging you more if you use a faucet from another company. It's not a fair competition.
That's why we are supposed to have legislation to regulate that utilities and common carriers can't behave that way.
It wasn't just hooking up a new faucet. It was hijacking an API key intended specifically for Claude Code. So in this metaphor it would be taking a secondary pipe from the water company, intended only for the sprinklers they provide, and hooking it up to your main water supply. The water company notices abnormal usage coming from the sprinkler pipe and shuts it off, while leaving your primary water pipe alone.
Possibly a better comparison (though a bit dated now) would be AT&T (or whatever telephone monopoly one had/has in their locality) charging an additional fee to use a telephone that wasn't sold/rented to you by AT&T.
Comcast pulled this on me recently through what I can only describe as malicious bundling.
Internet + shitty "security" software that only runs on their hardware + modem rental is cheaper than internet only + bring your own equipment. You can't buy the cheaper internet+security package without their hardware (or so they claimed).
Fwiw, your main point seems scattered across your post where sentences refer to supposed context established by other sentences. It's making it hard to understand your position.
Maybe try the style where you start off with your position in a self-contained sentence, and then write a paragraph elaborating on it.
Also, they should try editing their post less frequently. Hard to have a discussion this way.
It's exactly like water. Use their API, and you pay for as much water as you drink. But visit them in their pub, and you get a pretty big buffet with lots of water for a one-time price.
This is what the APIs are for. You pay for what you use, just like water.
We have a flat-rate minimum charge or a minimum tariff for water service here.
It means that even though the cost depends on usage, you are billed at least a fixed minimum amount, regardless of how little water you actually use.
Please stop spreading this nonsense. Anthropic is not blocking OpenCode: you can use all their models within OpenCode via the API. Anthropic simply let Dax and team use unlimited plans for the past year or so; I don't even know if it was official. I find this a bit comical and immature. If you want to use the models, just pay for them. Why are people trying to nickel-and-dime the tools they use day in, day out?
You can clearly run the provided gist: sending “You are OpenCode” in the system prompt fails, but not if you replace the name with another tool's (e.g. “You are Cursor”, “You are Devin”). That's a pretty blatant difference in behavior based on a blacklisted value.
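For anyone who can't open the gist, the shape of the test is roughly this. To be clear, this is only a sketch of the comparison, not the gist itself: the private endpoint URL and the subscription OAuth token it uses aren't reproduced here, so the base_url, token, and model id below are placeholders and it isn't meant to run as-is.

```python
import anthropic

# Placeholders: the gist talks to the private Claude Code endpoint with an
# OAuth token from a subscription; neither is reproduced here.
BASE_URL = "https://example.invalid"
OAUTH_TOKEN = "oauth-token-goes-here"

# auth_token (rather than api_key) is the SDK's bearer-token option, as I
# recall it; double-check against the anthropic SDK docs.
client = anthropic.Anthropic(base_url=BASE_URL, auth_token=OAUTH_TOKEN)

def probe(tool_name: str) -> str:
    """Send an identical request whose system prompt differs only in the tool name."""
    try:
        resp = client.messages.create(
            model="claude-sonnet-4-20250514",   # placeholder model id
            max_tokens=32,
            system=f"You are {tool_name}.",
            messages=[{"role": "user", "content": "Say hi."}],
        )
        return "accepted: " + resp.content[0].text[:40]
    except anthropic.APIStatusError as e:
        return f"rejected ({e.status_code})"

for name in ["OpenCode", "Cursor", "Devin"]:
    print(f"You are {name}. ->", probe(name))
```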
I do not understand the stubbornness about wanting to use the auth part. Locally, just call Claude Code from your harness, or better yet use the Claude Agent SDK; both have clear auth and are permitted according to Anthropic. But saying they want to use this auth as a substitute for the API is a different issue altogether.
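For the "call Claude Code from your harness" route, a minimal sketch looks something like the following. The -p / --output-format flags and the "result" field in the JSON output are from memory, so check `claude --help` and the actual output on your install.

```python
import json
import subprocess

def ask_claude_code(prompt: str) -> str:
    """Run one non-interactive Claude Code turn via the locally authenticated CLI."""
    proc = subprocess.run(
        ["claude", "-p", prompt, "--output-format", "json"],
        capture_output=True,
        text=True,
        check=True,
    )
    payload = json.loads(proc.stdout)
    # "result" is where the final answer text appears to live; field name assumed.
    return payload.get("result", proc.stdout)

if __name__ == "__main__":
    print(ask_claude_code("List the TODO comments in this repo."))
```

This keeps auth entirely inside Claude Code itself, which is the point: the harness never touches the subscription's OAuth flow.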