The “Better Offline” podcast has been warning that this was coming for months now. It’s incredible to me that AI evangelists have managed to focus the entire conversation on LLM “usefulness” without even mentioning costs. It allows them to play a game of smoke and mirrors where no one actually knows the true cost of an API call and there is no predictable cost model. Most AI companies can safely be assumed to be running at a loss, and if they can’t figure out how to make these models a lot more efficient, dreams of replacing software engineers will face the same reality as dreams of self-driving semi trucks. I’m looking forward to the AI winter myself… https://m.youtube.com/@BetterOfflinePod/videos
Reading the article and the linked GitHub post, as well as the original pricing announcement and the clarification post afterwards, this whole thing seems like some sort of Monty Python sketch. I can't believe that an actual enterprise-targeted product comes up with something like:
> "AWS now defines two types of Kiro AI request. Spec requests are those started from tasks, while vibe requests are general chat responses. Executing a sub-task consumes at least one spec request plus a vibe request for "coordination"".
I still don't understand why pricing can't be as simple as it was initially, presented in a clear and understandable way: tokens cost this much, you used this many tokens, and that is it. Probably because if people could see how much they actually consume for real tasks, they would realize that the "vibes" cost more than an actual developer.
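For contrast, the per-token accounting being asked for here is trivial to express. A minimal sketch, with made-up per-million-token rates rather than Kiro's or Anthropic's actual prices:

```python
# Transparent per-token pricing: cost is a simple linear function of usage.
# The rates below are illustrative placeholders, not any vendor's real prices.

def request_cost(input_tokens: int, output_tokens: int,
                 usd_per_m_input: float = 3.00,
                 usd_per_m_output: float = 15.00) -> float:
    """Dollar cost of one request, given token counts and per-million rates."""
    return (input_tokens / 1_000_000) * usd_per_m_input + \
           (output_tokens / 1_000_000) * usd_per_m_output

# Example: a large agentic turn with a big context and a modest reply.
print(f"${request_cost(120_000, 8_000):.2f}")  # -> $0.48
```

With that model, the monthly bill is just the sum over requests; nothing has to be bucketed into "spec" versus "vibe" categories.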
Just give me dollar amounts; I feel like I'm paying these companies with V-Bucks at this point.
Clear pricing makes it easy for you to control costs.
Vibe pricing makes it easy for the vendor to maximize revenue.
They have little incentive to make pricing transparent.
If you care about how much money you're spending, you're not in AWS's target market.
I think it’s clear that these tools are going to get more expensive.
What is not clear to me is that they’ll get expensive enough to not be worth it for a company.
A good engineer costs a lot of money and comes with limitations derived from being human.
Let’s say AI manages to be a 2x multiplier in productivity. Prices for that multiplier can rise a lot before they reach $120k/year for 40 hours a week, the point at which you’re better off hiring someone.
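A rough sketch of that break-even arithmetic, using this comment's assumed figures ($120k/year fully loaded cost, 2x multiplier); none of these numbers are real data:

```python
# If a tool makes one developer as productive as two, the most it can be
# worth is roughly the cost of the extra developer you no longer hire.
dev_cost_per_year = 120_000        # assumed fully loaded cost of an engineer
productivity_multiplier = 2.0      # assumed uplift from the tool

extra_output_value_per_year = dev_cost_per_year * (productivity_multiplier - 1)
break_even_per_month = extra_output_value_per_year / 12

print(break_even_per_month)  # 10000.0 -> roughly a $10k/month ceiling
```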
> It’s incredible to me that AI evangelists have managed to focus the entire conversation on LLM “usefulness” without even mentioning costs.
LLMs are highly useful.
AI-assisted services are in general costly.
There are LLMs you can run locally on your own hardware. You pay AI-assisted services to use their computational resources.
I love Better Offline, but I also very much see its faults. Ed's angle is sharp and pointy, which is lovely, but having to take it as entertainment rather than informed journalism is a bit hard. His anti-AI bias is rewarding, but it's not well informed; he's just trying to go as hard as possible.
Honestly I liked some of the non-AI content a lot more (but AI seems to be more of a focus lately). He also had such an amazing run of fantastic guests: it's nonsense to say this, but he's running through the list of awesome people to talk to fast, and I hope he's not afraid to invite many of these fantastic people back again!
Ed Zitron is a PR man who runs a PR company and who found widespread traction when he started posting AI shit-takes on Twitter. He now claims to be a tech guy but has zero tech background. His entire schtick is saying smarmy things that get clicks, and with AI skepticism he's found a very wide audience.
If he's right about something in this field, it's by accident and not a result of experience or research. He's a social media creation and one should consider his spicy takes in that light.
I agree, though honestly at this point I'm a little more cynical. I'm open to seeing the best that anyone can get these things to do regardless of cost.
"Vibe requests are useless because the vibe agent constantly nags me to switch to spec requests, claiming my chats are 'too complex'"
How can you trust a tool that refuses to do the work just so it can take more money from you?
Great tool, but nothing that cannot be done in Claude. I started using it when it was first posted here. The pricing they are offering, though, is a bit absurd for retail in the current market.
>He estimated that light coding will cost him around $550 per month and full time coding around $1,950 per month. As an open source developer who builds for the community, "this pricing is a kick in the shins," he said.
Well, maybe you shouldn't be using an enterprise AI coding toolset to do work that has historically been done for the love of coding and helping others. Paying for AI to do uncompensated labor is almost never going to work out financially. If it's really good enough to replace a decent engineer on a team, then those costs aren't "wallet-wrecking"; it just means he needs to stay away from commercial products with commercial pricing. It's like complaining about the cost of one of VMware's enterprise licenses for your home lab.
I will also add that it's rare these days to see a new AWS product or service that doesn't cost many times what it should. Most of these things are geared towards companies who are all in on the AWS platform and for whom the cost of any given service is a matter of whether it's worth paying an employee to do whatever AWS manages with that service.
AWS is so far behind on GenAI they’re just flailing at this point.
Their infrastructure is a commodity at best and their networking performance woes continue. Bedrock was a great idea poorly executed; performance there is also a big problem. They have the Anthropic models, which are good, but in our experience one is better off just using Anthropic directly. On the managed services front there’s no direction or clear plan. They seem to have just slapped “Q” on a bunch of random stuff as part of some disjointed, panicked plan to have GenAI products. Each “Q” product is woefully behind other offerings in the market. Custom ML chips were again a good idea poorly executed, failing to recognize that chips without an ecosystem (CUDA) do not make a successful go-to-market strategy.
I remain a general fan of AWS for basic infrastructure, but they’re so far behind on this GenAI stuff that one has to scratch one's head at how they messed it up so badly. They also don’t have solid recognized leaders or talent in the space, and it shows. AWS is still generally doing well, but recent financial results have shown chinks in the armor. Without a rapid turnaround it’s unlikely they’ll be the number one cloud provider for much longer.
It’s less that they’re flailing and more that it’s become some sort of Lord of the Flies culture, with senior leaders directly competing with each other to try to take their bite of the pie.
The way AWS is structured, with strongly owned independent businesses, just doesn’t work, as GenAI needs a cohesive strategy, which requires:
1. An overall strategy
2. A culture that fosters collaboration, not competition
Or at least an org chart that keeps them from competing with each other. (Example: Q Developer vs. Kiro.)
I bet if you looked at the org charts you would see these teams don’t connect as they should.
I'm fine if AWS doesn't pursue AI that much; they should focus on infrastructure. It's a solid business as-is. AWS doesn't need "GenAI" to continue doing what they've been doing for a long time. Not doing "GenAI" the absolute best does not mean they won't be the "number one cloud provider for much longer". Nobody is moving off AWS just because they don't have the world's best "GenAI", which are honestly all pretty mediocre.
I agree. Although based on the last earnings call, where folks were poking at why they’re behind, it seems a sensitive spot for AWS leadership. They’re not used to lagging in a hot space, which seems to be driving the panicked, disjointed strategy at the moment.
I too would like to see them just admit they’re behind, state it’s not a priority, and focus on what they do well which is the boring (but super important) basic infrastructure stuff.
> They also don’t have solid recognized leaders or talent in the space and it shows.
AWS was built by exceptional technical leaders like James Hamilton, Werner Vogels, and Tim Bray, and I would include Bezos too, who people seem to forget has a computer science degree. But the company has consistently underpaid developers while relying heavily on H-1B workers, and it treats technical talent as poorly as Amazon treats delivery drivers.
When skilled engineers can get better opportunities elsewhere, they leave. AWS's below-market compensation has driven away the technical expertise needed for innovation.
AWS has shifted from technical leadership to MBA-driven management, and lately it has been aggressively hiring senior middle management from Oracle. The combination of technical talent exodus, cultural deterioration, and MBA-style management has left AWS poorly positioned for the AI era, where technical excellence and innovation speed are everything.
During major technological shifts like AI, you need an engineering-first culture and in-house technical skills. AWS has neither.
Here's a good analysis of why this is happening in AI in general: https://blog.dshr.org/2025/08/the-drugs-are-taking-hold.html
I started using it this weekend to build a greenfield project from scratch. Vibe requests ran out very quickly, so I just opened up VS Code in another window and had Copilot's agent handle those. I just use Kiro for the spec requests at the moment, which it does quite well. I think it's a great tool, but if someone can copy what they're doing into Copilot, then I'll go back to using Copilot exclusively.
The pricing really turned me off after a fantastic initial experience.
Aren't they just using bog standard Anthropic models? Their 'secret sauce' is the prompts.
Is pricing due to AWS massively overcharging because they know Enterprise might pay it due to ease of use with purchasing? Or is this realistic pricing for LLM coding assistance and AWS has decided to be upfront with actual costs?
Likely both. AWS was often 70+% more expensive on GPUs for a while and typically remains the most expensive provider when put in competitive scenarios. Remember that for Microsoft, Google, and others, cloud is their low-margin fun side business… for AWS/Amazon, cloud is the high-margin business that makes the money.
Whereas others are willing to lose a bit to get ahead in a competitive space like GenAI, AWS has been freaking out, since if the cloud gets driven down to a low-margin commodity then Amazon is in big trouble. Thus they’ll do everything possible to keep the margins high, which translates to “realistic” pricing based on actual costs in this case. And yes, they're seemingly hoping enterprises will buy some expensive second-rate product so they can say they have customers here, while hoping those customers don’t notice that better, cheaper offerings are readily available.
This is also a signal of what the “real cost” of these services will be once the VC subsidies dry up.
> This is also a signal of what the “real cost” of these services will be once the VC subsidies dry up.
Sooooo true. Waiting for AI’s “Uber” moment, aka when Uber suddenly needed to actually turn a profit and, overnight, drivers were paid less while prices rose 3x.
Yeah, honestly, the 'upstarts' in the space need to get users and burn VC money to do so, and part of that is selling below a reasonable market price.
Amazon / AWS might not want (or need) to play that game.
What about MS and Copilot?
It's possible they have decided to take more of the VC route of burning money trying to gain market share, in hopes they can lock everyone in and then jack up the prices.
Obviously, they are not taking VC money but using revenue from other parts of the company.
Well, having personally used over $120 in Claude API credit on my $200/mo. Claude Code subscription... in a single day, without parallel tasks, yeah, it sounds like the actual price. (And keep in mind, Claude's API is still running on zero-margin, if not even subsidized, AWS prices for GPUs; combined with Anthropic still lighting money on fire and presumably losing money on the API pricing.)
The future is not that AI takes over. It's that the accountants realize that, for a $120K-a-year developer, if it makes them even 20% more efficient (I doubt that), you have a ceiling of $2,000/mo. on AI spend before you break even. Once the VC subsidies end, it could easily cost that much. When that happens... who cares if you use AI? Some developers might use it, others might not; it doesn't matter anymore.
This is also assuming Anthropic or OpenAI don't lose any of their ongoing lawsuits, and aren't forced to raise prices to cover settlement fees. For example, Anthropic is currently in the clear on the fair use "transformative" argument; but they are in hot water over the book piracy from LibGen (illegal regardless of use case). The worst case scenario in that lawsuit, although unlikely, is $150,000 per violation * 5 million books = $750B in damages.
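For what it's worth, both figures in this comment check out arithmetically; the inputs are the commenter's assumptions, not verified numbers:

```python
# Sanity check of the $2,000/mo ceiling and the $750B worst-case damages.
salary = 120_000                  # assumed $/year developer cost
uplift = 0.20                     # assumed efficiency gain from AI
print(salary * uplift / 12)       # 2000.0 -> the $2,000/mo break-even ceiling

per_work_statutory_max = 150_000  # worst-case statutory damages per work
works = 5_000_000                 # books at issue in the commenter's scenario
print(per_work_statutory_max * works)  # 750000000000 -> the $750B figure
```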
> 120K a year developer, if it makes them even 20% more efficient (doubt that), you have a ceiling of $2000/mo
I don't think businesses see it this way. They sort of want you to be 20% more efficient by being 20% better (with no added cost). I'm sure the math is: if their efficiency is increased by 20%, then that means we can reduce headcount by 20% or not hire new developers.
Oh, it's much worse than that: they think that most developers don't do anything and that the core devs are just supported by the ancillary devs, with 80% of the work done by core devs and 20% otherwise.
In many workplaces this is true. That means an "ideal" workspace is 20% of the size of its current setup, with AI doing all the work that the non-core devs used to do.
> Claude's API is still running on zero-margin, if not even subsidized, AWS prices for GPUs; combined with Anthropic still lighting money on fire and presumably losing money on the API pricing.
Source? Dario claims API inference is already “fairly profitable”. They have been optimizing models and inference, while keeping prices fairly high.
> dario recently told alex kantrowitz the quiet part out loud: "we make improvements all the time that make the models, like, 50% more efficient than they are before. we are just the beginning of optimizing inference... for every dollar the model makes, it costs a certain amount. that is actually already fairly profitable."
https://ethanding.substack.com/p/openai-burns-the-boats
Most of these “we’re profitable on inference” comments gloss over the depreciation cost of developing the model, which is essentially a capital expense. Given the short lifespan of models, it seems unlikely that the fully loaded cost looks pretty. If you can sweat a model for 5 years, the financials would likely look decent. With new models every few months, it’s likely really ugly.
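A toy illustration of why model lifespan dominates that picture; every number below is invented for the example, not a real figure for any lab:

```python
# Amortizing a fixed training bill over the model's service life.
training_cost = 1_000_000_000          # hypothetical $1B to train a model
monthly_inference_margin = 40_000_000  # hypothetical profit from serving it

def monthly_profit(lifespan_months: int) -> float:
    """Inference margin minus straight-line amortization of training cost."""
    return monthly_inference_margin - training_cost / lifespan_months

print(monthly_profit(60))  # ~5-year lifespan: about +23.3M/month
print(monthly_profit(6))   # ~6-month lifespan: about -126.7M/month
```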
Interesting. But it would depend on how much of model X is salvaged in creating model X+1.
I suspect that the answer is almost all of the training data, and none of the weights (because the new model has a different architecture, rather than some new pieces bolted on to the existing architecture).
So then the question becomes, what is the relative cost of the training data vs. actually training to derive the weights? I don't know the answer to that; can anyone give a definitive answer?
There are some transferable assets, but the challenge is the commoditization of everything, which means others have easy access to “good enough” assets to build upon. There’s very little moat to build in this business, and that’s making all the money dumped into it look a bit frothy and ready to implode.
GPT-5 is a bellwether there. OpenAI had a huge head start and basically access to whatever money and resources they needed, and after a ton of hype they released a pile of underwhelming meh. With the pace of advances slowing rapidly, the pressure will be on to make money from what’s there now (which is well short of what the hype had promised).
If it was a small player, everybody would laugh and forget about them.
But now we're talking about AWS... so aren't other players going to see this as an opportunity to start increasing their pricing and stop bleeding millions?
Only if users actually pay.
Past a certain point, using tools like RooCode or anything else that lets you connect to arbitrary APIs feels like the way to go - whether you want to pay for one provider directly, or just use something like OpenRouter.
That way you have:
* the ability to use regular VSC instead of a customized fork (that might die off)
* the ability to pick what plugin suits your needs best (from a bunch available, many speak with both OpenAI- and Ollama-compatible APIs)
* the ability to pick whatever models you want and can afford, local ones (or at least ones running on-prem) included
If vibe coding fulfills its promises, even those crazy numbers are a small percentage of the price of a full-time dev. I'm not saying it does, but I'm just following along with the alleged value prop.
Yup. I’m not giving 1/5th of my salary away.
Google’s free Gemini CLI might be catching up to the $200/month Claude Code.
You hit the limits on this almost immediately (it’s 1,000 requests, and each tool call counts as a request; even then I’m convinced Pro isn’t really giving you 1,000).
There's also the data retention Gemini has, which you have to download a VS Code extension to turn off for the CLI.
It isn’t an enterprise product; as far as I see it (as it currently stands), it’s a way to gather tool-calling data for training.
The AWS logo is always smiling when startups that attempt to scale as if they were Google want to use their services, burn all that VC money, and continue to raise every month to avoid shutting down.
Now the cost of using tools like Kiro will just make AWS laugh at all the free money they're getting, including the hidden charges on simple actions.
Ask yourself if you were starting a startup right now if you really need AWS.
Remember, you are not Google.
This is like the Kiro team read one of Corey Quinn's many pieces on Cloud NAT pricing and was like "Hold my beer".
I know he thought it was "not terrible" at first, I'm excited to see his take on the pricing now :)
Tools that don’t also control the model are doomed to fail due to costs.
I cannot imagine anyone signing up for that. How will this save on salaries? Looks like job security for good developers.
And it's still the early customer-acquisition era for all of this stuff. Wait until the dust settles and you see what pricing and user experience will be like in the coming enshittification era.
It is interesting that AWS cannot code, let alone vibe code, Kiro itself, but relies yet again on a fork from GitHub.
The prices are for corporations who buy the hype until they find out in a year that vibe coding is utterly useless.
Many of them are based on Code OSS.
Windsurf started out as just extensions, as did continue.dev.
I wonder what the delta is in the API support needed for VS Code + a paid extension vs. Code OSS + bundled extensions.
What if we get AGI, but it's too expensive, and you can't build power plants because of the green lobby? Plot twist: AGI arrives and despises environmentalists. Seriously, that wasn't how it was supposed to go according to the dominant narrative (TM). AGI was supposed to hate humans and love the earth.
I'm working on a new fictional meta-narrative where what emerges from super-intelligence is an AGI whose dominant belief is commerce above all else: above nations, above politics, above war, above morality.
You are a petulant child: "you can't build power plants because of the green lobby"
China is massively building out solar power to meet their needs. https://www.youtube.com/watch?v=MX_PeNzz-Lw&t=50