Would be cool to have a $5-10/month plan that only works off-peak, for people who want to do the occasional side project after work. Right now it's hard to justify anything but Copilot (because it's cheaper, offers the same models, and I'm nowhere near the usage limits).
I suspect that any GPU cycle not spent on inference will just be dedicated to training (which as I understand it can “soak up” essentially unlimited compute at constant value per token), and I’d not expect to see time-based billing until that changes.
I canceled my plan today and wrote my reason as: now that I have a job again I don’t have the time or needs for the pro plan. If there was a $5 a month option, I would gladly take it to make use of Opus for my rare side ideas.
Pay as you go. I never spent more than $10/month working on my side project (usually a few evenings per month).
Hard to justify? $20/month for like 5x the output is a great deal (be it Claude or Codex or whatever), even if it only lasts 2-3 hours per day.
You’re not using Claude Code?
Claude Pro is $20/month.
The $20 Pro plan would also have doubled off-peak limits; just set it to Sonnet and you'll get a reasonable level of output.
Presumably they have unused compute in those hours and figure they may as well enable people to use it and get more invested into their ecosystem.
What I wish Anthropic would do is be a lot more explicit about what windows apply when. Surely they have the data to say "you get X usage from hours A to B, Y usage from B to C"
This is a psyop to recruit more Australians I'm sure of it
Can't complain honestly!
Using a local timezone instead of UTC for a global service is a crime, especially mixed with daylight saving.
This is how they say Wall St is all using Anthropic without saying Wall St is all using Anthropic.
Interesting to see more demand shaping mechanisms applied to LLM inference. Even though the "batch processing" feature is already available. I guess this "promotion" is to test the hypothesis of sliding along the spectrum towards more "real-time" demand shaping.
I just know there has to be some psychology in play with these promos. The promo during December got me to upgrade to the $100 plan, and I know I'm not the only one.
You're probably right. I've been thinking about why Anthropic's revenue keeps soaring. I think in terms of "new users trying the product" we're definitely somewhere in the slowing part of the S-curve (at least in the US), but there are other growth contributors. Two big ones are people finding new use-cases and people figuring out how to scale up current use-cases to use more tokens. Perhaps little temporary usage boosts like this give people permission to attempt new use-cases or more scale and realize they could use a higher-tiered plan.
I suspect it’s much more about understanding user behavior, i.e: given more allowance off-peak, do users change when they use Claude? And from there, that will inform how plans are designed long term. If they discover that offering higher off-peak limits meaningfully changes how/when users interact with the service, they can use discounted off-peak plans to flatten usage. I would be very surprised if this promotion had anything to do with encouraging people to upgrade.
Interesting - the first thing my mind went to was the DoD supply chain risk designation, and wanting to boost metrics to calm investors' nerves.
There's definitely psychology in play, but I think it might be less "trying to get you to spend more" and more "trying to incentivize load-shifting", which (to me at least) is a lot less sinister-- my utility does this too for electricity, and nobody attributes malicious intent to it.
We all know these services see huge load spikes and sometimes service degradation when America wakes up, and I bet they'd appreciate it if as many "chug-and-plug" agent workflows moved to overnight hours as possible.
My assumption was always that the December promo was a combination – they were presumably way under capacity because everyone was on holiday given how enterprise-heavy they are, so giving people a bunch of extra usage with a loud promo meant a whole bunch of people would try Claude and see how good it had gotten at very little cost to Anthropic.
The psychology is to hook you on the usage. A lot of people see a little movement in the usage meter and get cold feet about heavy usage. The prior $70 credit deal and now this offering are to try to get people to dive in, and hopefully retain that usage pattern afterwards.
Anthropic's models are obviously superior at coding right now but using 2-3 $20 accounts between different providers is still a very effective way to get good value. Gemini CLI and Codex seem to be at least 2x more permissive on usage. The models are good enough.
Plus we are technologists, we want to try out different stuff and compare.
That's precisely what I do, with subscriptions to all of them. Gemini almost seems unlimited...like I never hit limits with it. Don't even know how to check my usage for the subscription plans on that.
But increasingly I'm using Claude for basically all real coding. I ask Gemini and Codex questions, but I'm honestly in awe at Opus' ridiculous capabilities.
/stats session shows you the remaining quota in Gemini CLI and when the quota resets, and they dropped the quota badly in the last few days.
Before that I would have totally agreed with you; it felt really endless.
I found the $250 in free credit for Claude Code hard to actually use before it expired. I think I got down to less than $50
The travelling salesman problem in 2026 is the travelling engineer problem: find the optimal location to maximize token usage.
That is doubled usage between 5AM and 11PM for anyone playing along from Sydney/Melbourne.
Are you sure? It's a 6h window on the page
JST here; it basically adds a day.
Would be happy to utilize but didn't see a promocode or voucher. Do I need more coffee?
Living in Tasmania as competitive advantage
Tassie represent
Have they accounted for time zones? This is my daytime in Australia/Sydney
Just open the page; they specify the UTC-4 band. You can adjust accordingly (you'll be fine).
Dear line manager, I will be taking a very long lunch 12-6pm in London's Chinatown then heading back to the office half cut to vibe code
This company is clearly on a mission. I would just like to know what that mission is. I mean this in a good way.
So afternoon in Germany or am I misreading?
DST shenanigans aside (we're in the "US has changed but Europe hasn't" window), 10:00 in SF is 18:00 in London. Meaning their peak time window is 13:00–19:00 London time, or 14:00–20:00 Berlin time.
So us European folks get promotional rates during the morning and evening.
EDIT: Actually, because the promo ends at the end of March, it'll all be within DST shenanigans. So peak times are 12:00–18:00 London, 13:00–19:00 Berlin.
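For anyone double-checking the DST arithmetic, here's a quick `zoneinfo` sketch. It assumes the peak window is 05:00–11:00 Pacific (as inferred elsewhere in the thread; the promo page is authoritative) and picks a date inside the "US on DST, Europe not yet" gap:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Assumed peak window: 05:00-11:00 Pacific. The date (2026-03-15) is
# chosen between the US DST switch (Mar 8, 2026) and the European one
# (Mar 29, 2026), when the offset to Europe shrinks by an hour.
pacific = ZoneInfo("America/Los_Angeles")
start = datetime(2026, 3, 15, 5, 0, tzinfo=pacific)
end = datetime(2026, 3, 15, 11, 0, tzinfo=pacific)

windows = {
    tz: (start.astimezone(ZoneInfo(tz)).strftime("%H:%M"),
         end.astimezone(ZoneInfo(tz)).strftime("%H:%M"))
    for tz in ("Europe/London", "Europe/Berlin")
}
print(windows)  # peak window expressed in European local time
```

Which confirms the edit above: 12:00–18:00 London, 13:00–19:00 Berlin while the DST gap lasts.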
Outside 4pm to 10pm
This is great, but I guess they're feeling the heat from Codex resetting limits quite a bit in the last month.
I think they're feeling the heat from growing too quickly so they want to incentivize people to spread the load more evenly.
Very much like electric utility time of day pricing, using economic incentives to shift demand to trough periods.
Perhaps an opportunity for them to improve workload scheduling orchestration, like submitting a job to a distributed computing cluster queue, to smooth demand and maximize utilization.
Everything bursty will use economic incentives to smooth the load. I'm not sure how they'd do that with workload scheduling orchestration when you have latency-sensitive loads and there are e.g. twice as many requests at midday as at midnight.
You decouple the workloads from human interaction (ie when you submit the job to the queue vs when it is scheduled to execute) so when they run is not a consideration, if possible. The economic incentives encourage solving this, and if it can’t be solved, it buckets customer cohort by willingness (or unwillingness) to pay for access during peak times.
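The decoupling described above can be sketched as a tiny scheduler that holds submitted jobs until an off-peak window opens. The window here is hypothetical, not anything Anthropic actually offers:

```python
from datetime import datetime, time, timedelta

# Hypothetical off-peak window: 22:00-06:00 local time (wraps midnight).
OFF_PEAK_START = time(22, 0)
OFF_PEAK_END = time(6, 0)

def is_off_peak(t: datetime) -> bool:
    """True if t falls inside the wrapping off-peak window."""
    now = t.time()
    return now >= OFF_PEAK_START or now < OFF_PEAK_END

def next_run_time(submitted: datetime) -> datetime:
    """Run immediately if submitted off-peak; otherwise defer to window start."""
    if is_off_peak(submitted):
        return submitted
    start = submitted.replace(hour=OFF_PEAK_START.hour, minute=0,
                              second=0, microsecond=0)
    return start if submitted < start else start + timedelta(days=1)
```

A job submitted at 14:00 gets scheduled for 22:00 the same day; one submitted at 23:30 runs right away. The pricing signal does the rest: users who can tolerate the delay opt in, and the latency-sensitive cohort keeps paying peak rates.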
Sure, but if I ask the LLM a question, I'd like it to respond now, instead of tonight.
Certainly, interactive workloads aren’t realistic for time shifting, but agentic coding likely is. Package everything up and ship it as a job, getting a bundle back asynchronously.
I don't know, my agentic coding is pretty interactive. Maybe once the plan is done, sure. That would be interesting, though OpenAI already does this with batch workloads.
The insanely competitive market for LLMs is great for us, but if I were one of the investors in these companies it wouldn't exactly fill me with confidence that my $500 billion spent on datacenters and Nvidia cards is going to get repaid ten times over like they're claiming. I'm still getting very strong "this is a commodity; margins will be driven inexorably to zero" vibes from these products.
I’m trying to figure out how this affects weekly limits, since those overlap peak hours. My observation is that it doesn’t. But I could be wrong.
If they are doing it “right” I think any off peak usage should count 50% toward your weekly limits.
Edit: it does look like they are doing it the "right" way.
> Does bonus usage count against my weekly usage limit?
> No. The additional usage you get during off-peak hours doesn’t count toward any weekly usage limits on your plan.
So the first 100% of five-hour usage counts against weekly usage at the normal rate, but the additional 100% doesn't?
I just watched my "weekly limit" get used while I ran a claude code command.
I'm not sure how to square that with the quote you gave.
Did you exhaust the five-hour usage limit already? As I understand it, the ”additional usage” refers to anything beyond the standard five-hour usage limit.
> Does bonus usage count against my weekly usage limit?
> No. The additional usage you get during off-peak hours doesn’t count toward any weekly usage limits on your plan.
Oops! Looks like we posted at the same time.
all weekend is off-peak
Wild conspiracy theory: this is targeted at decreasing usage from Indian users.
There is no way 5-11 AM PT is peak traffic
It's basically the whole time Wall Street & stock markets run. And the entire afternoon and early evening of Europe. Plenty of usage in this window, AWS-East|Azure-East max usage window.
I didn't understand "your five-hour usage". I thought plans were per interaction or per token, not per hour.
There's a limit that resets every five hours and one that resets every week.
My usage only shows daily and weekly, though. I never got that.
It has "current session" and "weekly". If you notice, "current session" is never more than five hours away from expiration.
Oh, you're right. I don't know why I've always misread "current session" as daily.
Thanks for clearing that up. It'll help me schedule stuff in the future.
For Claude Code, you use up 12% of your weekly allotment every session, so 8 sessions per week.
If you are only using a session a day, you're wasting a session. :)
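A rough model of the two overlapping limits discussed in this subthread (the percentages are illustrative, not Anthropic's actual accounting):

```python
# Model: a rolling five-hour session budget plus a weekly budget, where
# one fully-used session consumes 1/8 of the week (~12%, per the comment
# above). Numbers are illustrative only.
WEEKLY_SESSIONS = 8

def weekly_remaining(sessions_used: int, current_session_pct: float) -> float:
    """Percent of the weekly allotment left, counting the in-progress session."""
    per_session = 100 / WEEKLY_SESSIONS  # 12.5% of the week per full session
    used = sessions_used * per_session + (current_session_pct / 100) * per_session
    return max(0.0, 100 - used)
```

So after four fully-used sessions you'd have 50% of the week left, and maxing out one session from a fresh week leaves 87.5%; one session a day at full tilt never quite exhausts the weekly cap in this model.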
You can pay either for API usage or a fixed monthly plan (which is way cheaper but you can't use it for applications, just personal use).
Long ago in the ancient days of punchcards and IBM mainframes, you’d write your programs during the day, then submit them to run overnight and pick up your results in the morning. It would be funny and sort of romantic if time-based LLM pricing returned us to that: write your specs all day, run agents on them overnight, check out the results in the morning.
They have this. It’s called batch pricing and it’s 50% off.
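For reference, batch submissions are expressed as a list of independent requests, each carrying a `custom_id` plus ordinary Messages parameters. A sketch of building such a payload as plain dicts, with the shape recalled from Anthropic's batch docs (verify against the current API reference; the model name is a placeholder):

```python
# Builds a Message Batches-style request payload as plain dicts, without
# calling any API. "claude-example" is a placeholder, not a real model name.
def batch_request(custom_id: str, prompt: str, model: str = "claude-example") -> dict:
    return {
        "custom_id": custom_id,
        "params": {
            "model": model,
            "max_tokens": 1024,
            "messages": [{"role": "user", "content": prompt}],
        },
    }

# The "overnight mainframe" workflow: queue up the day's specs as one batch.
overnight_jobs = [
    batch_request(f"job-{i}", p)
    for i, p in enumerate(["refactor module A", "write tests for B"])
]
```

Results come back asynchronously per `custom_id`, which is exactly the punchcard loop: submit in the evening, collect in the morning.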
I find that incredibly optimistic.
But the best part is, those usage levels are hidden, arbitrary, and they change them all the time.
So they could “double” your usage by keeping it the same and then simply halving peak usage.
I don't really understand why AI providers don't charge like the electric company, or AWS. Instead of increasing usage limits, just charge less for off-hours use.
LLM inference is much more geographically fungible than electricity, so maybe it’s just not worth the complexity yet and there is enough (not highly latency sensitive) load on average globally.
So we now have just pure marketing slop on the HN front page? How is this interesting or "curious" again? The AI slop season is affecting HN in clever ways.
I still hate Claude for turning down limits. I use z.ai in Claude code now, haven't hit the limit yet.
I guess extra compute opened up after they were canned by Department of War.
Australia here we come.
They are learning from Codex
https://hascodexratelimitreset.today
AI psychosis intensifies
changes sleep schedule
Ah crap, I was hoping to benefit more from my sub because I'm in an off-hours tz.
Wtf is ET? Is it an alien time?
My fellow Californians would agree that, yes, ET is an alien time. https://en.wikipedia.org/wiki/Eastern_Time_Zone
Is this going to cause another outage?
These promos should be based on when more renewable energy is available for inference not when less people are likely to be using the AI. We need to adjust usage to when supply is more renewable for both training and inference in order to better protect our grid and the planet.
I believe Claude is still designated a supply chain risk by the United States government. Whether this affects usage of it or not, that's up to each individual, but it's definitely a curious fact (by HN standards).
That sounds to me similar to "Telegram banned by Russian government", more of a seal of approval than anything.
That is only relevant if you are in the government/military. The US government has not made using Claude Code a crime, yet.