Context: a few weeks ago, Anthropic signed a deal to buy "multiple gigawatts of next-generation TPU capacity" from Google and Broadcom [1]. There have been several previous deals, too.
Some people call this sort of thing a "circular deal", but perhaps a better way to think of it is as a very large-scale version of vendor financing? The simple version of vendor financing is when a vendor gives a retailer time to pay for goods they purchased for resale. This is effectively a loan that's backed by the retailer's ability to resell the goods. There's a possibility that the retailer goes broke and doesn't pay, but the vendor has insight into how well the retailer is doing, so they know if they're a good risk.
Similarly, Google likely knows quite a lot about Anthropic because Anthropic buys computing services from Google for resale. They're making an equity investment rather than a loan, but the money will be coming back to Google, assuming Anthropic's sales continue to rise as fast as they have been.
Also, if you own Google stock, some small part of that is an investment in Anthropic?
The risks are different, but there's no getting around that the value of any investment is based on future cash flows and that's speculating about the future.
To the extent that Google and Anthropic are competing for AI business, Google is somewhat hedged against Anthropic winning market share. They still get data center revenue and they own equity.
There was lots and lots of vendor financing during the dotcom era, and it ended up being a material part of those vendors' own difficulties, especially where service providers were concerned (e.g. the huge crash in optical in particular).
Obviously it's not a perfect comparison, but you have to wonder how much of NVIDIA's income (for instance) is ultimately funded by its own money.
I think the subtext of the last few weeks is that Anthropic was becoming severely capacity constrained (or approaching that). They seem to have had to sign two somewhat adverse contracts with Amazon and Google in short succession. Suddenly model quality is back up again.
What is all this AI doing? People are spending tens to hundreds of billions and no service or technology seems better or cheaper. Everything is more expensive and worse.
- Development velocity is very noticeably much higher across the board. Quality is not obviously worse, but it's LLM assisted, not vibe coding (except for experiments and internal tools).
- Things that would have been tactically built with TypeScript are now Rust apps.
- Things that would have been small Python scripts are full web apps and dashboards.
- Vibe coding (with Claude Desktop, nobody is using Replit or any of the others) is the new Excel for non-tech people.
- Every time someone has any idea it's accompanied by a multi-page "Clauded" memo explaining why it's a great idea and what exactly should be done (about 20% of which is useful).
- 80% of what were web searches now go to Claude instead (for at least a significant minority of people, could easily be over 50%).
- Nobody talks about ChatGPT any more. It's Claude or (sometimes) Gemini.
- My main job isn't writing code but I try to keep Claude Code (both my personal and corpo accounts) and OpenCode (also almost always Claude, via Copilot) busy and churning away on something as close to 100% of the time as I can without getting in the way of my other priorities.
We (~20 people) are probably using 2 orders of magnitude more inference than we were at the start of the year, and it's consolidated from a mix of Cursor, ChatGPT and Claude down to almost all Claude (plus a little Gemini, as that's part of our Google Whateverspace plan and some people like it, mostly for non-engineering tasks).
No idea if any of this will make things better, exactly, but I think we'd be at a severe competitive disadvantage if we dropped it all and went back to how things were.
Is your team measuring how much of your code is being written with Claude and comparing across the team, like what works best in your codebase? How are you learning from each other?
I’m making a team version of my buildermark.dev open source project and trying to learn about how teams would like to use it.
Different teams are using it in very different ways so it can be tough to compare meaningfully.
Backends handling tens to hundreds of thousands of messages per second with extremely high correctness and resilience requirements are necessarily taking a different approach to less critical services that power various ancillary sites/pages or to front end web apps.
That said, there's a lot of very open discussion around tooling ("skills", MCP, harnesses, etc.) and approaches, and plenty of sharing and cross-pollination of techniques.
It would be great to find ways to better quantify the actual value add from LLMs and from the various ways of using them, but our experience so far is that the landscape in terms of both model capability and tooling is shifting so fast that that's quite hard to do.
Thanks for the feedback. I agree that it’s changing very fast, which is why my thesis is that this tooling will be needed to help everyone on the team keep up.
I am a hobbyist playing around. Recently dropped CC (which gave me a sense of awe 2 months ago) because of recent shenanigans, then GH Copilot because I couldn't understand their cost structure and ran out of quota half a month in; now on Codex. I don't really see any difference for little stuff.
Have you shipped anything? It's all romantic, but except for layoffs, who's done anything with this? I am not a pessimist and it's too late for that anyway, but what's been done besides uncovering some 0-days?
It sounds very similar to my shop. I have QA people and Product Managers using Claude to develop better integration and reporting tools in Python. Business users are vibe coding all kinds of tools shared as Claude Artifacts; the more ambitious ones are building single-page app prototypes. We ported one prototype to Next.js and hosted it on Vercel in a couple of days, then handed it back to them with a Devcontainer and Claude Code so they can iterate on it themselves; and we also developed all the security infrastructure, scaffolding, agent instructions & policy required to do this for low-stakes apps in a responsible way.
It hardly seems worth it to try to iterate on design when they can just build a completely functional prototype themselves in a few hours. We're building APIs for internal users in preference to UIs, because they can build the UIs themselves and get exactly what they need for their specific use cases and then share it with whoever wants it.
We replaced an expensive, proprietary vendor product in a couple of weeks.
I have no illusions about the scale or complexity limits of these projects. They can help with large, complex systems but mostly at the margins: help with impact analysis, production support, test cases, code review. We generate a lot of code too, but we're not vibe coding a new system of record, and review standards have actually increased because refactoring is so much cheaper.
The fact is that ordinary businesses have a LOT of unmet demand for low stakes custom software. The ones that lean into this will not develop superpowers but I do think they will out-compete slow adopters and those companies will be forced to catch up in the next few years.
I develop presentations now by dumping a bunch of context in a folder with a template and telling Claude Cowork what I want (it does much better than the web version because of its Python and shell tools, and it can iterate: render, review, repeat until it's excellent). The copy is quite good, I rewrite less than a third of it, and the style and graphics are so much better than I could do myself in many hours.
No one likes reading a bunch of vibe-coded slop and cultural norms about this are still evolving; but on balance it's worth it by far.
I'm burning an insane number of tokens 8-12 hours a day for the dramatic improvement of some internal tooling at a big tech company. Using it heavily for an unannounced future project as well.
We suddenly have a proliferation of new internal tools and resources, nearly all of which are barely functional and largely useless, with no discernible impact on the overall business trajectory, but they sure do seem to help come promo time.
Barely an hour goes by without a new 4-page document that everyone is apparently meant to read, digest and respond to, despite its 'author' having done none of those steps. It's starting to feel actively adversarial.
Without good management AI is just a new way to make terrible work in unprecedented quantities.
With good management you will get great work faster.
The distinguishing feature between organisations competing in the AI era is process. AI can automate a lot of the work, but the human side owns process. If it's no good, everything collapses. Functional companies will become hyper-functional while dysfunctional companies collapse.
Bad ideas used to be warded off by workers who, through some form of malicious compliance, would slow down and redirect the work while advocating for better solutions.
That can’t happen as much anymore as your manager or CEO can vibe code stuff and throw it down the pipeline for the workers to fix.
If you have bad processes your company will die, or shrivel or stagnate at best. Companies with good process will beat you.
My main use of vibecoding is creating dozens of internal tools that have sped up tasks, or made possible tasks that previously weren't. These tools would have taken weeks to build manually and would have been hard to justify, compared with just struggling through manual processes every now and again. AI has been life-changing in creating these kinda janky tools with janky UIs that do everything they're supposed to perfectly, but are ugly as hell.
Are you able to describe any of those internal tools in more detail? How important are they on average? For example, at a prior job I spent a bit of time creating a slackbot command, "/wtf acronym", which would query our company's giant glossary of acronyms and return the definition. It wasn't very popular (read: not very useful/important), but it saved me some time looking things up (more time than it took to create, I'm sure). I'd expect modern LLMs to be able to recreate it within a few minutes as a one-shot task.
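For a sense of scale, a tool like that is roughly this much code. A minimal sketch in Python/Flask, with a hardcoded glossary standing in for the real one (all names hypothetical):

    # Minimal sketch of a "/wtf" Slack slash command. Slack POSTs form data
    # with a "text" field containing whatever followed the command.
    from flask import Flask, request, jsonify

    app = Flask(__name__)

    GLOSSARY = {  # stand-in for the company's real glossary
        "ARR": "Annual Recurring Revenue",
        "TPU": "Tensor Processing Unit",
    }

    @app.route("/wtf", methods=["POST"])
    def wtf():
        acronym = request.form.get("text", "").strip().upper()
        definition = GLOSSARY.get(acronym, "Not in the glossary (yet).")
        # "ephemeral" means only the person who ran the command sees the reply
        return jsonify({"response_type": "ephemeral",
                        "text": f"{acronym}: {definition}"})

    if __name__ == "__main__":
        app.run(port=3000)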
My team has also adopted this - it's much easier to add another layer than to refine or simplify what exists. We have AI skills to help us debug microservices that call microservices that have circular dependencies.
This was possible before but someone would maybe notice the insane spaghetti. Now it's just "we'll fix it with another layer of noodles".
Unfortunately I saw this pre-AI with microservices, where while empowering developers with their beloved microservices, we create intense complexity and deployment headaches. AI will fix the slop with an obscuring layer of complexity on top.
I answered this in a different comment below, but a lot of the friction is around the amount of time it takes to test/review/submit etc., and a lot of this is centered on tooling that no one has had the time to improve, perf problems in clunky processes that have been around longer than any one individual, and other things of this nature. Addressing these issues is now approachable and doable in one's "spare time".
The point of that friction is to keep the human in the loop with respect to code quality; it's not meant to be meaningless busywork. It's difficult to believe you can sustain the benefit of those systems without it. Anthropic and Microsoft publicly failed to keep up code quality. They would probably be in a better spot currently if they used neither: no friction, no AI. But that friction exists for a reason, and AI doesn't have the "context length" to benefit from it.
This is the difference between intentional and incidental friction: if your CI/CD pipeline is bad it should be improved, not sidestepped. The first step in large projects is paving over the lower layer so that all that incidental friction, the kind AI can help with, is removed. If you are constantly going outside that paved area, sure, AI will help, but not with the success of the project, which is more contingent on the fact that you've failed to lay the groundwork correctly.
it's crazy that the experiences are still so wildly varying that we get people that use this strategy as a 'valid' gotcha.
AI works for the vast majority of nowhere-near-the-edge CS work -- you know, all the stuff the majority of people have to do every day.
I don't touch any kind of SQL manually anymore. I don't touch iptables or UFW. I don't touch polkit, dbus, or any other human-hostile IPC anymore. I don't write cron jobs, or systemd unit files. I query for documentation rather than slogging through a stupid web wiki or equivalent. A decent LLM does it all with fairly easy 5-10 word prompts.
ever do real work with a mic and speech-to-text? It's 50x'd by LLM support. Gone are the days of saying "H T T P COLON FORWARD SLASH FORWARD SLASH W W W".
this isn't some untested frontier land anymore. People that embrace it find it really empowering except on the edges, and even those state-of-the-art edge people are using it to do the crap work.
This whole "Yeah, well let me see the proof!" ostrich-head-in-the-sand thing works about as long as it takes for everyone to make you eat their dust.
None of that is concrete though; it's all alleged speed-ups with no discernible (though a lot of claimed) impact.
> This whole "Yeah, well let me see the proof!" ostrich-head-in-the-sand thing works about as long as it takes for everyone to make you eat their dust.
People will stop asking for the proof when the dust-eating commences.
That's all well and good, but what happens when the price to run these AIs goes up 10x or even 100x?
It's the same model as Uber, and I can't afford Uber most of the time anymore. It's become cost prohibitive just to take a short ride, but it used to cost like $7.
It's all fun and games until someone has to pay the bill, and these companies are losing many billions of dollars with no end in sight for the losses.
I doubt the tech and costs for the tech will improve fast enough to stop the flood of money going out, and I doubt people are going to want to pay what it really costs. That $200/month plan might not look so good when it's $2000/month, or more.
It's an important concern for those footing the bill, but I expect companies really facing the impact to do a cost-benefit calculation and use a mix of models. For the sorts of things GP described (iptables whatever, recalling how to scan open ports on the network; the sorts of things you could usually answer for yourself with 10-600 seconds in a manpage, help text, Google search, or Stack Overflow thread), local/open-weight models are already good enough and fast enough on a lot of commodity hardware. Right now companies might just offload such queries to the frontier $200/mo plan because why not: tokens are plentiful and it's already being paid for. If in the future it goes to $2000/mo with more limited tokens, you might save those tokens for the actually important or latency-sensitive work and use lower-cost local models for the simpler stuff. That lower cost might involve a $2000 GPU to be really usable, but it pays for itself quickly by comparison. To use your Uber analogy: people might have used it to get downtown and to the airport, but now that it's way more expensive they'll take a bus, walk, or drive downtown instead; the airport trip, even though it's more expensive than it used to be, is still attractive against competing alternatives like taxis and long-term parking.
We’re seeing the exact same where I work. Our main Slack channels have become inundated with “new tool announcements!”, multiple per day, often solving duplicate problems or problems that don’t exist. We’ve had to stop using those channels for any real conversation because most people are muting them due to the slop noise.
And what’s worse is that when someone does build a decent tool, you can’t help but be skeptical because of all the absolute slop that has come out. And everyone thinks their slop doesn’t stink, so you can’t take them at their word when they say it doesn’t. Even in this thread, how are you to know who is talking about building something useful vs something they think is useful?
A lot of people that have always wanted to be developers but didn’t have the skills are now empowered to go and build… things. But AI hasn’t equipped them with the skill of understanding if it actually makes sense to build a thing, or how to maintain it, or how to evolve it, or how to integrate it with other tools. And then they get upset when you tell them their tool isn’t the best thing since sliced bread. It’s exhausting, and I think we’ve yet to see the true consequences of the slop firehose.
>Barely an hour goes by without a new 4-page document that everyone is apparently meant to read, digest and respond to, despite its 'author' having done none of those steps. It's starting to feel actively adversarial.
Well, isn't that what AI can be used effectively for: generating [auto]responses to the AI-generated content?
I'm convinced none of these people have any training in corporate finance. For if they did, they'd realise they were wasting money.
I guess you gotta look busy. But the stick will come when the shareholders look at the income statement and ask... "So I see an increase in operating expenses. Let me go calculate the ROIC. Hm, it's lower, what to do? Oh I know, let's fire the people who caused this." (It won't be the C-Suite or management who takes the fall.) lmao
Do you really think companies have started spending millions on tokens and no one from finance has been involved?
You could argue that all the spending is wasted (doubtless some is), but insisting that the decision is being made in complete ignorance of financial concerns reeks of that “everyone’s dumb but me” energy.
> Do you really think companies have started spending millions on tokens and no one from finance has been involved?
Oh, they were involved all right. They ran their analyses and realized that the increase in Acme Corp's share price from becoming "AI-enabled" will pay for the tokens several times over. For today. They plan to be retired before tomorrow.
More that there is a poor incentive structure, just like how PE can make money by leveraged buyouts and running businesses into the ground. Many of the financial instruments that make both that and the current AI bubble possible were legal, then made illegal, within the lifetimes of the last 16 presidents.
Round-tripping used to be regulated. SPVs used to be regulated. If you needed a loan you used to have to go to something called a bank; now it comes from who knows where: drug cartels, child traffickers, Blackstone, Russian & Chinese oligarchs. Even assuming it doesn't collapse tomorrow, why should they make double-digit returns on AI datacenters built on the backs of Americans?
AI is truly perfect for internal tooling. Security is less of a concern (or none at all), bugs are more acceptable, and performance/scalability is rarely a concern. It's the quickest way to get things done, and to speed up production development, MVP development, etc.
"We are writing down X billions over 4 years, and have cancel several ambitious programs related to our AI experiments. We were following standard practice in the industry, so [shareholders] can't blame us for these chickens coming to roost. If everyone is guilty, is anyone really guilty?"
No problems at all, except unauthorized access to a model they were claiming was a weapon that couldn't be released to the public, and having their CLI code leaked, in the last two weeks. Everything's just fine.
I am, oddly, able to get really quite a lot of mileage out of a $20/mo OpenAI plan, and I have never hit a usage limit. I have gotten warnings that I was close a couple of times.
I wonder what I’m doing differently.
I did spend quite a bit of time, mostly manually, improving development processes such that the agent could effectively check its work. This made the difference between the agent mostly not working and mostly working. Maybe if I had instead spent gobs of money it would have worked without the tooling improvements?
I'm not them, but we have vastly improved our internal pipeline monitoring/triage/root-cause analysis/etc. with a new system whose whole purpose is basically to hook into all of our other systems and consolidate them under a single view, with an emphasis on shortening the amount of time it takes to triage and refine issues.
This would previously have been too ambitious to ever scope, but we've been able to build essentially all of it in just two months. Since it sits on top of our other systems and acts as more of a window/pass-through control plane, the fact that it's vibe coded poses little risk, since we still have all the existing infrastructure under it if something goes awry.
I have a coworker who says something similar. He vibe coded tons of cryptic code, which does indeed solve some problems, though it could be far more compact and well structured. Now it's hitting a complexity limit, since the LLM can no longer comprehend it, and a human can't comprehend it by an even larger margin.
It's a bit of workplace politics: I would need to call the guy out to say that he is not a hyper-performer, but has just pushed lots of low-quality code that will have a lot of negative impact in the long term.
Also, I am not sure it is trivial to replace. The code is injected into many scenarios and workflows, so replacement will be painful and risky if the new solution breaks some edge case.
It sounds like you might have some larger process problems if someone can just inject a bunch of vibe-coded slop into critical workflows while more discerning eyes are dubious of the quality/reliability etc.
In some sense, sure. There are a lot of processes that weren't previously needed, because sloppy people who couldn't or wouldn't think things through were mostly incapable of producing PRs that passed all the existing tests.
It's partially/largely a management problem. One of the tier-1 productivity metrics in the group is # of LoC created by engineers, so it creates a dynamic of people exchanging favors to push AI slop into the codebase, or be labeled as low performers.
It will comprehend it well enough to complicate it further into a rat's nest that only Opus 4.9 can comprehend, and so on. Good luck if you run into a bug before the N+1 version launches.
I guess that's one way to tout a technology as revolutionary without actually needing to provide any proof of it. Just say you're using it for "internal tooling" and "unannounced projects", that way nobody can look at them and notice they're indistinguishable from the slop that clogs up Show HN nowadays.
It's better than the "here's my code, it's a giant pile of spaghetti, but only luddites care about code quality and maintainability anyway" method, at least.
I'm wondering whether the layoffs are partly targeting people who haven't adapted to using AI tools, particularly those who are openly dismissive of AI-assisted work.
It's a great tool, and at 1/10th or 1/100th the cost of actual developers. In the context of YC, I guess, watch out for getting re-disrupted by a smaller, faster team. But that's really been the trend for the past 40 years, so nothing is new. Well, maybe the velocity, combined with the US losing its footing at the same time.
But yeah, it's not gonna make Facebook 20% better tomorrow; it's just that you need 5 people instead of 40 to build the next Facebook.
It's not just code generation, either - more and more people in my own org are using Claude Code for infrastructure automation, devops, etc. Obviously some amount of code in there, but an absolute ton of tokens being consumed just dealing with Kubernetes work at scale.
I'm spending a ton of tokens because it insists on manually correcting code that fails the linter, despite the instructions in the AGENTS.md to run the linter with autocorrect.
And also because the Plan agent generates a huge plan, asks me a couple yes/no questions with an obvious answer, and then regenerates the entire plan again. Then the Build agent gets confused anyway and does something else, and I have to round-trip about 5 times with that full context each time.
I can say in one role in my job, I'm getting a lot of use and I know my colleagues are at least trying a lot of things. One use is a first-pass review of animal care and use protocols. The Claude project was given all of the relevant policies and guidelines as well as a fairly long prompt that explains the things we look for in protocol review. It's checking some things that the software we use makes very tedious to check and raising inconsistencies between sections. Some places have a full time "protocol reader" who does this kind of first check, but we've never had that, so it's helpful.
Another project I'm seeing in the same realm is taking an approved protocol and some study results and checking that the records of what was done match what they said they could do in the approved protocol. It can also make sure that surgical records have all the things they should have. This can help meet one of the requirements from the national accreditation organization to do "post approval monitoring".
Another way I've used it is to have it collate and compare a particular kind of policy across many institutions who transparently put their policies online. Seeing the commonality between the policies and where some excel helped me rewrite our policy.
This is work that just wasn't happening before or, more accurately, it was being spread over lots of people, and any improvement in efficiency or consistency is hard to measure.
For myself, it's a massive boost when solo developing. Perhaps this is a different use case than most. It can work across multiple programming languages and frameworks that I had zero experience in. I use my existing knowledge of programming to ensure the new code written is correct. It also really excels at translating from one language/framework to another: I can spend time getting something working well on a platform I know, then just ask it to convert to another platform. It gets it 90% right on the first prompt, then it's just a matter of fine-tuning, reviewing, etc. That last 10% is where I supercharge my learning of those languages/frameworks. To learn all the new languages and frameworks would have taken me months before I was productive. Now, with a single prompt, we get 90% of the way there. That is incredible value for us.
>What is all this AI doing? People are spending tens to hundreds of billions and no service or technology seems better or cheaper. Everything is more expensive and worse.
That "more expensive" is someone's revenue. Maybe AI is the kind of technology that makes it possible to grow revenue by making things more expensive and worse, rather than by making them better and cheaper.
You seem to be under the impression that making services better or cheaper _for the consumer_ is the goal of any corporation. The goal is to make their own operations better and cheaper for them. They are laying off employees and adding features of questionable value as a pretext to raise prices. The playbook has not changed, it has only accelerated.
And yet.. building shit is no longer the sole domain of the software engineer.
That's the sea change.
I've literally had finance and GTM stand things up for themselves in the last few weeks. A few tweaks (obviously around security and access), and they are good to go.
They've gone from wrangling spreadsheets to smooth automated workflows that allow them to work at a higher level in a matter of months.
That's what all this AI is doing. The shit we could never get the time to get around to doing.
The only thing that matters is the impact on the financials. The shareholders (the people who employ you) don't care about any of this if it does not enhance value.
They're pointing out that run-rate revenue is based on essentially sampling revenue over some limited time interval, then extrapolating from there assuming revenue always occurs at the same rate (or greater) over all similar intervals in the future. More specifically, they're pointing out that estimates of ARR derived from this kind of sampling are fundamentally prone to error and can be arbitrarily inflated based on how the time interval is sampled.
As far as I understand, run-rate revenue is just a fancy way of saying "last month we had sales, and if that continues for a year we will have an ARR of $30B." Meaning it's not $30B yet, but the sales numbers indicate that we get there by continuing to sell at the current speed. But to have revenue of $100 and get $30B in ARR, I guess the period looked at needs to be seconds...
(Run Rate = Revenue in Period / # of Days in Period x 365)
Not even that. It's not based on actual sales in, for example, the past month. It's based on an expected continuous growth based on the growth of the past month (or whatever period you pick).
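To make the sensitivity concrete, here's a quick sketch of the formula above with made-up numbers: the same business produces very different "ARR" headlines depending on which window you annualize.

    # Made-up numbers: same business, three sampling windows.
    def run_rate(revenue_in_period, days_in_period):
        # Run Rate = Revenue in Period / # of Days in Period x 365
        return revenue_in_period / days_in_period * 365

    print(run_rate(2_000_000, 1))    # a spiky launch day -> $730M "ARR"
    print(run_rate(30_000_000, 30))  # that whole month   -> $365M "ARR"
    print(run_rate(60_000_000, 90))  # the whole quarter  -> ~$243M "ARR"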
I don't follow Anthropic closely enough to know what claims its CEO has made, but it is factual that Altman is a pathological liar. You can observe this for yourself by reading and listening to the things he says and then comparing them to reality. We have years of evidence to look back on. The chasm between Altman's reality and everyone else's is so large and so well-known that it was one of the chief factors cited by the board when he was fired.
(I would then argue that he was re-hired specifically because others involved with OpenAI understood that it is literally his job to lie and that OpenAI would not be where it is today as a corporate behemoth rather than a research non-profit without a world-class liar marketing it, but that is merely conjecture.)
I mean... kinda everything about Mythos, for example? Anthropic has a good product, but they also pretty consistently say some stupid-ass shit if you're being generous, and blatant lies if you aren't.
I agree about the core motivation behind these deals, however I'm skeptical as to how "suddenly" we'll see substantial improvements. Despite their size, I'd be surprised if Google or Amazon had uncommitted chunks of Anthropic-scale, top-tier AI compute sitting around waiting to be activated.
They're already over-subscribed and waiting for new data centers (and power plants) to come online. I suspect Anthropic will get a modest amount of new capacity right away with more added over coming quarters. These two deals don't change the total amount of AI compute available on planet Earth over the next 18 months. Anthropic parting with high-value equity has now made them the new highest bidder for an already over-bid resource. I suspect the net impact will be Amazon & Google pushing prices even higher on everyone else as they reallocate compute to their new top whale.
> Despite their size, I'd be surprised if Google or Amazon had uncommitted chunks of Anthropic-scale, top-tier AI compute sitting around waiting to be activated.
I doubt it was idle capacity. But for a chunk of equity in Anthropic I imagine they are willing to deprioritize other, possibly internal, uses. Certainly anything that's not contractually obligated could be on the chopping block.
Perhaps the adversity of the contracts cancels out with their sudden success and increase in valuation and it ends up a wash compared to the counterfactual scenario where they would have speculated on high growth early on.
Well to a certain extent it also blunts competition, Gemini is less of a threat if their main investor is also backing Anthropic. The issue is when the pyramid scheme collapses...
Both Amazon and Google provide the Claude models via their Kiro and Antigravity IDEs respectively. It could also be investing in their attempt to own the IDE space.
It feels like the market is full Wile E. Coyote on frontier model makers, and I like Anthropic's B2B business model.
But all progress points to a commodification of foundation models--Google first named it as "we have no moat, neither does anyone else." So there must be some secondary play driving this, right? Hardware sales? Hedging for search ad revenue?
Still feels mispriced. I think asset inflation leaves too much money desperate for the Next Big Thing.
Google does have a sort of temporary moat. They have a much better hardware supply line story than anyone else and the revenue to maintain that edge indefinitely.
This is the thing: Google is a real company with a well-established business, money of their own, hardware, server farms, etc. ChatGPT and Anthropic have none of that in the way Google does. They have an incentive to lie and 'fake it till you make it' so they can get out of the 'risk zone' of collapsing back in on themselves. Google can throw money at Gemini all day.
That may be true for OpenAI, less so for Anthropic, which has much better margins. Both companies' CEOs have said as much in public.
No doubt Google currently has the better business. But the same argument could have been made about Instagram or WhatsApp before Facebook (now Meta) acquired them.
Running AI at a loss long enough to kill the competition would run afoul of antitrust laws. Even more so since they’re bundling their AI products with their search monopoly.
Although I doubt this will stop them if they think it’s advantageous…
I thought that these type of antitrust laws are in no way enforced anymore in the tech industry. And that it's been that way for decades. I mean the sheer existence of Google shows that right? What about Maps, Mail, Books... basically everything apart from Search? Why would an AI Mode as one category of Search results be any different? They're not actively promoting Gemini in those search results. They're simply augmenting it with this new tool that exists now.
As long as it furthers American interests globally, monopoly is fine. Other countries need to take notice and start picking winners nationally in order to compete with the large American big tech firms.
Eh, I think this is actually not a specifically American thing. More of a neo-liberal mindset. Competition may be good in the long term. But a monopoly now may mean more money in your pocket now. The tech giants definitely give the US some geo-political power in some cases but in general the US would be better off with more competition.
ed: @er2d, can't reply to your comment for some reason, so doing it here:
I don't agree. In theory a monopoly decreases the necessity for R&D. Of course this becomes more complex if the R&D is funded or steered by the state. But look at the current state of LLMs. There is fierce competition between 3 US companies. But geopolitically it's the same as if there would be one monopoly. The US being the clear technological leader in an industry is not dependent on that industry being a domestic monopoly.
And for the Europe comment:
Also don't agree. Look at Boeing & Airbus. Both are companies where the US & EU have decided that they need to ensure the existence of a domestic airplane manufacturer. So in these cases they support these companies (often in violation of international trade laws). But it has nothing to do with monopolies. If a state decides to support a company to ensure its existence, a monopoly is the logical consequence and not the aim. Because if that industry would be profitable it wouldn't need to be supported in the first place.
But all these tech companies are not in industries that would move off-shore or stop existing because they're not profitable enough, so it's an entirely different setting.
> You know what's also really hard in a vacuum? Dissipating heat
Correct. The economics of space-based DCs comes down to permitting delays versus radiator mass.
At ISS-weight radiators (12 to 15 kg/kW), you need almost decade-long delays on the ground (or 10+ percent interest rates) to make lifting worthwhile. Get down to the current state of the art in the 5 to 10 kg/kW range, however, and you only need permitting delays of 2 to 3 years.
If there is a game-changing start-up waiting to be built, it's in someone commercialising a better vacuum-rated radiator.
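To make the shape of that trade-off concrete, here's a rough sketch. Only the kg/kW figures come from the comment above; the heat load, launch cost, ground capex, and rate are my own illustrative assumptions, and the real comparison surely has more terms (radiator unit cost, station-keeping, etc.).

    # Illustrative only: lifting radiator mass to orbit versus the carrying
    # cost of capital while a ground build waits on permits.
    def radiator_lift_cost(heat_kw, kg_per_kw, launch_usd_per_kg):
        # radiator mass needed to reject heat_kw, times launch cost per kg
        return heat_kw * kg_per_kw * launch_usd_per_kg

    def delay_cost(ground_capex_usd, annual_rate, delay_years):
        # opportunity cost of capital tied up during the delay
        return ground_capex_usd * ((1 + annual_rate) ** delay_years - 1)

    HEAT_KW = 100_000   # assume a 100 MW facility
    LAUNCH = 1_500      # assumed $/kg to orbit
    CAPEX = 1e9         # assumed $1B ground build
    RATE = 0.10         # the 10+ percent interest rate mentioned above

    for kg_per_kw in (15, 5):  # ISS-weight vs. state-of-the-art radiators
        lift = radiator_lift_cost(HEAT_KW, kg_per_kw, LAUNCH)
        print(f"{kg_per_kw} kg/kW: ${lift / 1e9:.2f}B to lift radiators")
    for years in (2, 3, 10):
        cost = delay_cost(CAPEX, RATE, years)
        print(f"{years}-year delay: ${cost / 1e9:.2f}B carrying cost")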
I really couldn't have been more obscure, could I? :P
In 1932, "the first oil field in the Persian Gulf outside of Iran" was discovered in Bahrain [1]. (The same year Saudi Arabia announced unification [2].)
In the end, Saudi Arabia had larger reserves and wound up geopolitically dominating its first-moving rival. In commodities, the game tends to be scale in part through land grabbing. Less who got where first.
To close the analogy, if AI does wind up commoditised, the layers at which that commodity is held are probably between power and compute [3]. So if AI commoditises (commodifies?), Google selling compute (and indirectly power) to Anthropic and OpenAI is the smarter play than trying to advantage Gemini. (If AI doesn't commoditise, the opposite may be true–Google is supercharging a competitor.)
I haven't thought about any secondary play, but if these companies converge on Google's TPUs, they would probably eagerly slice from NVIDIA's current market.
> In September 2025, Google is in talks with several "neoclouds," including Crusoe and CoreWeave, about deploying TPU in their datacenter. In November 2025, Meta is in talks with Google to deploy TPUs in its AI datacenters.
I keep getting notifications from my tooling that "Gemini models are overloaded, so we switched you to OpenAI." So I feel Google is not ready to sell TPUs just yet.
"We have no moat" could be a bad assessment. First, the models have personalities, and that matters. I like talking to Claude better. OpenAI is really different from Grok. The AI models are an extension of the main concern of the company they're in.
Also those personalities, quirks and choices accumulate. A lot of people talk about using Claude Code and Codex for different things. This is 100% my experience. Some people make better models, but on the top 3, there are often differences that are fixed only by switching between them. If I feel the need to switch between them, then there are significant enough differences and those differences will accumulate.
It is very difficult for me to see any amount of money being thrown at Anthropic as a bad idea.
The amount of new revenue that I am personally able to create for my clients, using Claude models for dev, and Claude models inside the insanely agile products delivered, is astounding.
If I was not currently experiencing this myself, and someone told me that this was possible, I would be calling them names.
You could say the same about Codex (and other tooling). Opus as a model is market leading (trading blows with the greatest that OpenAI is peddling), but there will be a reckoning when open-weight models are good enough, and I'd argue we are almost there with some of the latest releases. If you hook up the latest OpenAI models to something like OpenCode, it's a taste of what an open harness with a powerful model (outside of a provider's ecosystem) will be able to offer developers in the future.
Would you mind sharing what you can (and want to) about how the sausage is made? I would love to hear concrete cases where actual leverage is measurable. I'm asking in good faith, not to attack your standpoint.
You're paying the subsidized cost. Those margins will shrink once the real bill comes due. I really think everyone will look back at this time as the golden era of cheap AI. We are already seeing the costs (and restrictions/limits) creep up with the Western models.
I think the opposite. AI will get cheaper as models become more efficient and we solve the datacenter/energy problem. I bet 10 years from now AI, that is way better than what we have today, will be close to free.
100% agree. I have been trying to tell everyone to build their ideas, and exploit this environment where 100B of VC money into OpenAI/Anthropic = some percentage of money invested into your idea. This is the golden era of building! The music is gonna stop soon. Build now ffs!
To me it is more like software consultant speak than AI booster speak. And it is not exactly surprising that the people in a particular subculture all talk similarly.
Well, I hear it from people who are regular devs and not consultants, although it's more common with people who aren't really working in the trenches anymore.
Like ex-developer turned PM who is now vibe coding everything they can and thinks it's the greatest thing ever.
I believe that I am more of an AI realist. The agentic dev tools are really helping me out, but if I could wave a magic wand to make AI go away for a hundred years, I would do it.
I really hope that we can all laugh at how wrong I was.
However, I believe that the horrors will likely outweigh the benefits. Our global society/political systems are not ready for Stasi as a Service, mass unemployment, or any of this impending crap storm.
It sounds like insane hype-marketing speak because that is genuinely the difference from what it was like to develop software 6 months ago. You see many people using the same language, often in comments that are otherwise stylistically quite different, because many people are experiencing the same thing.
I get that it's tedious to sit on tech forums listening to an endless stream of people insisting that suchandsuch technology is world-changing. Many people and probably most people who say that are wrong. But sometimes the world really does change.
I've had an account for a while too, and I do think that that GP comment has a style typical of "AI boosters" -- breathless, big on hyperbole, and low on detail.
To the GP: I'd like some details of these "insanely agile products". Is this insane agility reflected by your customers saying that they have a better, faster, more reliable product? How are you measuring this?
It feels like Anthropic is everybody's insurance policy against someone else winning the AI race. So you have Amazon, Google, Microsoft basically every major tech company pushing their own tech hard but simultaneously ensuring they have a survival level stake in Anthropic if they can't build or acquire their way to stay at frontier level performance themselves.
If you added up all the major AI valuations, it's apparently worth more than the products Americans constantly buy and rely on in their daily lives. So either AI is going to be involved in every American's life to a large degree, and be paid real money for, or these valuations are insanely wrong.
there are plenty of people who basically believe this is the end of the human economy - there will be nothing left that isn't done by AI in the future. Even the bits left that humans do will be human facades on AI driven activity (like your hairdresser will be viewing you through AI powered glasses using AI powered scissors etc).
So from that point of view you can indeed look at it as the entire value of the economy should be invested into AI companies.
That is ultimately where it is headed and has been headed for over 100 years now.
The question is when will we get there.
If the answer is tomorrow, money means nothing and none of these investments matter. If the answer is 30 years, well lots of money to be made up until the inflection point of machines being able to design, build, and repair themselves.
Valuations are based on future expected earnings, not revenue. It cost Ford a lot of money to make that $60k car. The margins for AI companies are unknown but the market is pricing that they’ll be higher at one point. Not that they’ll attract more revenue from the average person.
> My neighbors just gave Ford $60k. It'll be a while until my neighbor gives Anthropic $60k.
How much of that 60K does Ford actually keep? And how much will it be once BYD is allowed in the US? The forecast for Ford is pretty much only downwards, the possible upside on AI is huge.
If every company in the F500 starts spending $2000+ on AI credits per employee, then every consumer product will indirectly be funding AI companies. I think it's already the case that companies small enough to avoid/skip getting O365 or Google Suite subscriptions will pay for AI first.
Ford probably made 3k profit on that car. Given the falling costs of inference, what are the chances your neighbor gives anthropic 3k in profit over the next few years? Not terribly bad.
> My neighbors just gave Ford $60k. It'll be a while until my neighbor gives Anthropic $60k.
AI company revenues aren't driven by consumer subscriptions.
The people doing $20 or even $200 per month plans for their side projects aren't driving the demand. It's going to be business customers spending $1000/month or more per developer and all of the companies feeding their business processes through the API like call centers, document processing, and everything else.
If you're thinking of AI companies as consumer plays you're only seeing the tip of the iceberg. We get cheap access to Claude because they want us playing with it so when it comes time for our employers to choose something we can all lobby for Anthropic.
I guess I’m not surprised that if one “added up all the major AI valuations,” it’s more than any single consumer purchase or even most single companies.
At a 20-year depreciation that's $250 a month, close to Anthropic's $200 plan. IMHO at this point a lot of developers would rather walk than code manually.
Cable TV begs to differ. I grew up working poor and plenty of people around me dumped a lot of money into cable TV subscriptions, and $120 back in the late 90s is $240 now.
Compute costs keep collapsing. Image and audio generation has turned out to be less compute-intensive than text (lol).
First company to launch 24/7 customized streaming AI slop wins!
I'm not sure exactly what kind of point you are making but the valuations are at least nominally based on the expected value of the business far into the future and aren't comparable to, say, purchases done over a year despite both being denoted in dollars.
Google has multiple businesses, and Gemini isn't the largest one.
Anthropic is the anchor external customer for TPUs, and NVIDIA is worth more than all of Google. If TPUs actually break out as a viable alternative for multiple clients over the next few years, the business could easily be worth as much as Search, maybe more.
You essentially have to run in Google's cloud to use them, and that probably limits their ability to break out. Anthropic might be doing this deal as a way to shore up their supply chain and the cost of both inference and training by leveraging Google's hardware and chip manufacturing expertise.
There are literally not enough TPUs on earth for them to break out: every TPU that's been made is in use, the spike in demand is recent, and Google has heavy competition for foundry space.
Possibly because they just haven't been able to manufacture enough of them yet to be a viable business to others? They're fighting everyone else for foundry space and time.
Of course this is well known. Everything Microsoft does is for selfish capitalist reasons and everything Apple does is for altruistic philanthropic reasons.
> Microsoft in 1997 investing $150 million in Apple, saving it from near bankruptcy.
If only Apple could pass the favor forward. But no, they can't be bothered to invest even a single million in Asahi Linux to benefit their own hardware.
It just keeps the lights on for the whole industry.
The tech is great but valuations are out of control. It's cheaper to keep valuations high through these circular financing deals, rather than to allow for any deflation.
That was precisely my thought on seeing the news. I did not know about Google's existing entanglements with Anthropic, but it seemed like a clear message: don't panic about the money, do the work.
What's the explanation behind this? I am sure they use AI in their ad network (matching web sites with ad offerings, maybe generating ads automatically), but is there more to it?
I know AI companies are selling ad training into the models so the models know about your product. I'm not sure if that is what they were referring to, but it could be related.
I feel the same until I’m reminded I’m paying Anthropic $100 every month for something that’s indispensable to me now and would probably pay a lot more. Very inelastic demand as long as competition is low at the frontier.
$100/month isn't much for developer tooling. If you add up how much I spend on hardware upgrades, other SaaS products like backup services, software licenses, and other things it's easy to justify $100/month for a powerful tool.
I pay for my own AI provider subscriptions because keeping work and personal strictly separated is important for me. I do know some people who secretly pay $200/month for Claude and use it at their job even though it's not approved. I do not recommend doing that, but it shows that some people value this for their work.
For developers earning more than $10K per month, spending less than 1% of salary on tooling to make the job easier is easy to justify.
It's an actual bubble specific to AI. This investment is just another example of the bubble. Pre-2008, all the investment would be coming from banks. Post-2008, all the investment came from VCs... but VCs got tapped out, so AI companies went to bigger private capital. They tapped out all the private capital. So now they're making the rounds, making deals with any corporations left with tens/hundreds of billions in cash, because they're the only possible investors left. When all of them are tapped out, and without a release of pressure from the hardware market, the only investor left will be the government. After that it's kaplooie.
You'll notice that all the really big deals have fallen through, because they're based on promises and meeting objectives that can't be met. So it's likely that there will be really big writeoffs but not a huge implosion like 2001/2008. The real losers will be the retail investors who put all their money in a handful of stocks at ridiculous valuations.
OpenAI was created to counter the threat of Google controlling a possible AGI. What if we still end up in the same state in the end? Both Anthropic and OpenAI have abandoned any pretense of altruism at this point and find themselves overwhelmed by the forces of capitalism.
Hopefully this money means more compute infrastructure to help Anthropic counter the efficiency changes that have created this perceived downtrend in Claude quality.
Google buys Anthropic.
Microsoft buys Open AI (or vice versa depending on how things go).
SpaceGrok buys Cursor, limps along in 3rd place.
Meta is the last man standing, gets stuck with Oracle, dies.
And then hopefully some open source models save us from this nightmare before China commoditises everything.
Edit: I forgot Amazon. Who knows what they will do. They're the wildcard anyway.
OpenAI buying Microsoft.. I honestly think I'd like to see that.
Anything to invigorate the desktop.
Microsoft buying OpenAI.. 10 minutes later it's rebranded Copilot.. and.. nothing much changes in the world. Oh, except all the AI improvements are around Enterprise governance.
> the efficiency changes that have created this perceived downtrend in Claude quality
Why the euphemism? What Anthropic did was an aggressive degradation of their model to save compute, and it's not just a "perceived downtrend": Anthropic themselves have acknowledged the quality-of-service degradation.
At this point if you have cash or compute credits laying around in the tens of billions, better to hedge your bets than to find out the winner that took all was not you.
Unless none of the current crop of AI companies is "the winner," either because a newcomer appears or the craze fizzles… in which case having $40B in the bank seems superior.
Weren't there reports of Anthropic's stock trading on secondary markets at $1T valuation recently? Now Google invests at a $350B valuation. I get valuations are often times just smoke and mirrors, but this seems like a pretty big disconnect. What's going on there?
There's always backroom negotiations going on with investments like these. Private valuations are normally hyped-up, and with the current batch of AI companies, 100x so.
I assume Anthropic said something like "We'll give you 3% of our company for $30B, since we're valued at $1T now! So cheap!", and Google immediately came back with "Hell no. We'll give you even more, $40B... but it's for 11% of the company. Take it or leave it." With all the issues they're having, what leverage does Anthropic have at that point?
Basically, Google made them an offer they couldn't refuse.
4% seems reasonable; it's pretty much standard across the board in Europe (the median sits around 6% if I recall correctly), and not many companies can pull 10% profit. For example in Spain, major conglomerates like INDITEX have an 11% margin and Iberdrola has 10%. We also don't use the same metrics and parameters as the US for profit, so the values are skewed.
That said, certain sectors like software (as in custom enterprise-grade software dev) pull margins that are much, much higher, sitting around 35%, but it's not that common.
In the last couple of weeks, seeing all the announcements of new models by OAI, Anthropic and Chinese companies I was thinking if Google has something up their sleeve, but this news suggests otherwise.
Urs used to talk (internally) about not publishing "industry-enabling papers" which is why most Google infrastructure papers were describing something that had already been turned off, or was already in the process of being replaced by the next system (GFS, Vitess, etc). The things that did get published were either considered not key advantages, that other companies simply cannot do, things that other companies wouldn't bother doing, or experiments that never worked at all. There were exceptions of course. But it led to a public perception of the Google stack involving mostly technologies that were long dead or were never adopted.
"Attention Is All You Need" was a very very different thing and I also wonder if they are glad they published it. But I imagine if they hadn't, the motivation for researchers to leave Google would have been even larger.
It makes every bit as much sense as investing in Snap while still operating their own social network product. Seems to have worked out fine (for Google, not Snap).
Google, Microsoft, Oracle, Meta, Nvidia: all their stock gains in the last 2 or so years were because of the AI hype. And who knows how much money they borrowed and what promises they made on the assumption that their stock would continue to rise at the same pace for years to come. When one domino falls, they will follow. So they have every incentive to keep the music going for one of their "friends".
Regardless of whether this is "vendor financing" or "circular financing", the history books are riddled with this sort of stuff ending very badly.
It’s concerning that the only thing that seems to be keeping the AI bubble inflated at this point is money from the folks selling things to AI companies. That’s very much not a good sign no matter how you spin it.
I’m a fan of AI and there’s clearly value to it… however that value seems completely out of whack with the money pumping into the ecosystem and at some point such irrational behaviors break.
It’s pretty wild how badly Altman siding with Hegseth has backfired. (And how competently Dario has played his hand.)
I don’t think that’s the ultimate cause of the turnaround in fortunes. But it strikes me, at least from the investor and potentially urban-consumer perspectives, as a pivotal moment in both companies’ fortunes.
Ant's recent rise has little to nothing to do with retail subscribers; it's Claude Code with Opus 4.5+, followed by their Mythos stunt.
I would say the flood of $20 Claude subscribers due to the news cycle backfired on them: now everyone is getting worse outputs, and it exposed their shortage of compute, which they can't fix anytime soon.
Pretty much everyone I know has both cc and codex now, just because of how unreliable cc has become.
> I would say the flood of $20 Claude subscribers due to the news cycle backfired
This is a good hypothesis. I suspect we are both correct.
The PR boost from Anthropic standing its ground drove signups. That, in turn, drove investors. But the users also drove utilization, which degraded quality across the board.
My hypothesis rests on Anthropic’s user mix having significantly shifted to consumers (versus enterprise) after the mix-up. Whenever we get public numbers it would be interesting to test that.
I think it was psychological to a degree. For many consumers OpenAI, or at least ChatGPT was AI. The controversy was enough for folks to be introduced to competitors in the AI space and suddenly OpenAI's success felt a lot less inevitable.
I agree with OP though that this won't actually be the cause of OpenAI's downfall, should it happen. But I still think it's an interesting inflection point.
Anecdotally a whole lot more people around me started using Anthropic models in the last few weeks and seem to like them more than OpenAI. For many of these people it was the second provider they ever used.
Of course this is part of what has led to the insane demand and outages they've experienced since then.
Sure. Neither OpenAI nor Anthropic does. Amazon and Google have followed institutional investors bidding up Anthropic over OpenAI in private markets, all of which—I suspect—followed user-pattern shifts following the fiasco. (Well, fiascos. Altman is a host unto himself.)
"(And how competently Dario has played his hand.)"
lol, he's barely done anything, but sometimes that is all that's necessary when a bozo opponent is hell-bent on screwing things up. He didn't get fired the first time for no reason.
> Is the simpler explanation that Alpha was already an investor
Individually, yes. But Anthropic surging in private markets the weekend after the supply-chain risk designation, and raising from not only Google but also Amazon at such a quick clip (following credible reports of it turning down cheques from financial investors at an $800+ billion valuation), all while OpenAI gets pilloried in the press and struggles to hold its $800bn valuation in private markets, collectively paints a bigger picture to me.
Can’t speak to citations, unfortunately, but if you have a banker or broker with secondary flow right now, ask them which they can get you more of and at what valuation: OpenAI or Anthropic.
Opposite of what you said. The "dig" was not retrenching to more use, but rather I evaluated what I saw them doing and have migrated our company to much better options.
"The Alphabet subsidiary is committing to invest $10 billion now, at a $350 billion valuation for Anthropic, with another $30 billion to follow if Anthropic hits certain performance targets, according to Anthropic."
this is insane. on the secondary market the valuation is 2-3x that. what gives?
Anthropic raised $30 billion at a $350 billion valuation (pre-money) in February.
Google's deal from prior rounds likely lets them buy in at the same valuation other investors get every round, so they're just getting the February valuation.
Amazon did almost the same thing last week, at the same valuation.
Google's giving them something that's a lot scarcer to them than dollars: large volumes of chips, quickly.
If you gave Anthropic $10B cash, they couldn't get chips at scale in the 0-6 month timeframe. Anthropic is suffering reputational damage due to the choices they have to make around capacity constraints.
Google, AWS, and Azure are the only people who can help them, so they hold the cards; thus the good terms.
The GOOG and AMZN deals announced earlier this week would be considered part of the same Feb '26 round, i.e. they would have the same seniority rights as that round.
It is not uncommon to keep a round open for a bit after the formal announcement so that a few investors who could not close for whatever reason can still be part of it. It can be hard to line up everyone at the same time, especially when they are public companies.
---
Specific to your point on why the valuation can be lower than the market price at the same time: goods (and stocks), while they feel homogeneous, divisible, and fungible, are not. Size has value of its own.
A block of 10% of the shares may be worth more (or less) than the unit share price implies, because the shares being available together is a property of its own: it makes the block more desirable when someone wants to acquire, or harder to sell because there is not enough demand if all of it gets dumped at the same time. [1]
In this deal's terms: just because a few tens of millions of dollars are trading at an $850B valuation, or some investors can put in, say, $1-2B, doesn't mean you can raise $40B at the same valuation.
There isn't enough depth in the market to raise $65B (including the AMZN deal) at an $850B valuation. There is always some demand at any price point on the demand-supply curve; you will probably find a few people who will buy a few shares at $10T, or $100T, or some other ridiculous number, but that doesn't mean you can raise a large round at that valuation.
Strictly speaking, it is not even $350B per se: Google and AWS benefit from this deal as vendors. It is very much like vendor financing with convertible debt, meaning it is worth that much to them, but not to you and me, because we are not getting some of the money back as sales that boost our own stock.
---
[1] In the same vein, price can also depend on what you are getting in return; hard, immediate dollars are worth the most. If you are getting shares in return, you can usually negotiate a premium depending on the risk of the shares you are getting.
The recent SpaceX-Cursor deal is a good example: any founder would likely take, say, a $10B all-cash offer over $60B in SpaceX shares, and the price would be closer to cash if it were GOOG, AMZN, or AAPL shares instead (proven, deeply liquid markets, etc.).
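To make the depth point concrete, here is a toy order-book sketch; every number in it is invented for illustration, not actual market data. Small secondary trades can print at a high valuation, while a $65B raise has to clear wherever the book has real size:

    # Each bid is (valuation the buyer will accept, dollars they'd invest),
    # sorted from highest valuation to lowest. All figures are made up.
    bids = [
        (850e9, 2e9),   # a few funds will buy small blocks at $850B
        (600e9, 8e9),
        (450e9, 20e9),
        (350e9, 60e9),  # strategic (vendor) money shows up in size here
    ]

    def clearing_valuation(bids, amount_to_raise):
        """Lowest valuation you must accept to fill the whole round."""
        raised = 0.0
        for valuation, depth in bids:
            raised += depth
            if raised >= amount_to_raise:
                return valuation
        raise ValueError("not enough demand at any price")

    print(clearing_valuation(bids, 2e9))   # 850e9: small trades print high
    print(clearing_valuation(bids, 65e9))  # 350e9: a $65B round clears far lower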
That's the last round they raised at. They had other offers from VCs at ~$850B that they rejected. Seems like this may have been in the works since that last round was being raised, and the paperwork just got finished?
> Google is committing $10 billion now in cash at a $350 billion valuation and will invest a further $30 billion if Anthropic meets performance targets, the report said.
How much of this goes back to Google as cloud spend?
Google investing $40bn in a company that competes directly with Gemini is one of those moves that only makes sense if you think of it as buying compute customers, not backing a competitor. Anthropic pays Google for TPUs and Cloud services, a big chunk of this investment surely has to flow right back to Google.
Cool. Will they pour all of this cash from their own balance sheets, or are they going to bring the banking system to its knees so we bail out everyone again?
I find it crazy that Google considers Anthropic to be worth almost 10% of Google itself (the $350B valuation mentioned in the article). Anthropic gets traction but has no moat, no infrastructure, and a relatively small team working for it. I feel that for $40B you could get a lot of very smart people and a lot of very good hardware to outcompete it.
$10B at their valuation from last November is an absolutely killer deal. If Anthropic had sufficient compute supply, they could easily raise at 2x if not 3x.
Anthropic, meanwhile, is spending hundreds of millions buying customer commitments from PE firms to inflate that DAU number. They now have a larger war chest to spend on artificial user acquisition to further inflate that value for future funding rounds.
https://archive.ph/u274V
Context: a few weeks ago, Anthropic signed a deal to buy "multiple gigawatts of next-generation TPU capacity" from Google and Broadcom [1]. There have been several previous deals, too.
Some people call this sort of thing a "circular deal", but perhaps a better way to think of it is as a very large-scale version of vendor financing? The simple version of vendor financing is when a vendor gives a retailer time to pay for goods they purchased for resale. This is effectively a loan that's backed by the retailer's ability to resell the goods. There's a possibility that the retailer goes broke and doesn't pay, but the vendor has insight into how well the retailer is doing, so they know if they're a good risk.
Similarly, Google likely knows quite a lot about Anthropic because Anthropic buys computing services from Google for resale. They're making an equity investment rather than a loan, but the money will be coming back to Google, assuming Anthropic's sales continue to rise as fast as they have been.
Also, if you own Google stock, some small part of that is an investment in Anthropic?
[1] https://www.anthropic.com/news/google-broadcom-partnership-c...
To be honest, I think "vendor financing" is still a very risky premise.
Vendors may be positioned to know how a customer is doing, but they're also incentivized to overestimate how well a customer is going to perform.
GE Capital (edit: and GMAC) is a great example of how seemingly reasonable vendor financing can cause the lender serious problems.
The risks are different, but there's no getting around that the value of any investment is based on future cash flows and that's speculating about the future.
To the extent that Google and Anthropic are competing for AI business, Google is somewhat hedged against Anthropic winning market share. They still get data center revenue and they own equity.
IIRC Google already outright owns 15% of Anthropic.
It could be legit, it could be a thickly veiled accounting fraud continuing the valuation inflation with fake deals that count money multiple times.
Maybe a little bit of both.
Lots and lots of vendor financing during the dotcom era, and it ended up being a material part of those vendors' own difficulties. Especially when service providers were concerned (e.g. the huge crash in optical in particular).
Obviously it's not a perfect comparison, but you have to wonder how much of NVIDIA's income (for instance) is ultimately funded by its own money.
I think the subtext of the last few weeks is that Anthropic was becoming severely capacity constrained (or approaching it). They seem to have had to sign two somewhat adverse contracts with Amazon and Google in short succession. Suddenly model quality is back up again.
That’s what’s needed when you go from $9B in ARR … to $30B in ARR literally just one quarter later.
That kind of insane growth & demand is unprecedented at that scale.
https://www.anthropic.com/news/google-broadcom-partnership-c...
What is all this AI doing? People are spending 10’s to 100’s of billions and no service or technology seems better or cheaper. Everything is more expensive and worse.
Where I work:
- Development velocity is very noticeably much higher across the board. Quality is not obviously worse, but it's LLM assisted, not vibe coding (except for experiments and internal tools).
- Things that would have been tactically built with TypeScript are now Rust apps.
- Things that would have been small Python scripts are full web apps and dashboards.
- Vibe coding (with Claude Desktop, nobody is using Replit or any of the others) is the new Excel for non tech people.
- Every time someone has any idea it's accompanied by a multi page "Clauded" memo explaining why it's a great idea and what exactly should be done (about 20% of which is useful).
- 80% of what were web searches now go to Claude instead (for at least a significant minority of people, could easily be over 50%).
- Nobody talks about ChatGPT any more. It's Claude or (sometimes) Gemini.
- My main job isn't writing code but I try to keep Claude Code (both my personal and corpo accounts) and OpenCode (also almost always Claude, via Copilot) busy and churning away on something as close to 100% of the time as I can without getting in the way of my other priorities.
We (~20 people) are probably using 2 orders of magnitude more inference than we were at the start of the year and it's consolidated away from cursor, ChatGPT and Claude to just be almost all Claude (plus a little Gemini as that's part of our Google Whateverspace plan and some people like it, mostly for non-engineering tasks).
No idea if any of this will make things better, exactly, but I think we'd be at a severe competitive disadvantage if we dropped it all and went back how things were.
> - Development velocity is very noticeably much higher across the board
It's an absolute tornado of PRs these days. Everyone making the most of these tools is effectively an engineering team lead.
Is your team measuring how much of your code is being written with claude and comparing amongst the team, like what works best in your codebase? How are you learning from each other?
I’m making a team version of my buildermark.dev open source project and trying to learn about how teams would like to use it.
Different teams are using it in very different ways so it can be tough to compare meaningfully.
Backends handling tens to hundreds of thousands of messages per second with extremely high correctness and resilience requirements are necessarily taking a different approach to less critical services that power various ancillary sites/pages or to front end web apps.
That said there's a lot of very open discussion around tooling, "skills", MCP, etc., harnesses, and approaches and plenty of sharing and cross-pollination of techniques.
It would be great to find ways to better quantify the actual value add from LLMs and from the various ways of using them, but our experience so far is that the landscape in terms of both model capability and tooling is shifting so fast that that's quite hard to do.
Thanks for the feedback. I agree that it’s changing very fast, which is why my thesis is that this tooling will be needed to help everyone on the team keep up.
I am a hobbyist playing around. I recently dropped CC (which gave me a sense of awe 2 months ago) because of the recent shenanigans, then GH Copilot because I couldn't understand their cost structure and ran out of quota half a month in; now I'm on Codex. I don't really see any difference for little stuff.
Have you shipped anything? It's all romantic, but except for layoffs, who's done anything with this? I am not a pessimist and it's too late for that anyway, but what's been done besides uncovering some 0-days?
It sounds very similar to my shop. I have QA people and Product Managers using Claude to develop better integration and reporting tools in Python. Business users are vibe coding all kinds of tools shared as Claude Artifacts, the more ambitious ones are building single page app prototypes. We ported one prototype to Next.js and hosted on Vercel in a couple of days and then handed it back to them with a Devcontainer and Claude Code so they can iterate on it themselves; and we also developed all the security infrastructure, scaffolding, agent instructions & policy required to do this for low stakes apps in a responsible way.
It hardly seems worth it to try to iterate on design when they can just build a completely functional prototype themselves in a few hours. We're building APIs for internal users in preference to UIs, because they can build the UIs themselves and get exactly what they need for their specific use cases and then share it with whoever wants it.
We replaced an expensive, proprietary vendor product in a couple of weeks.
I have no delusions about the scale or complexity limits of these projects. They can help with large, complex systems but mostly at the margins: help with impact analysis, production support, test cases, code review. We generate a lot of code too but we're not vibe coding a new system of record and review standards have actually increased because refactoring is so much cheaper.
The fact is that ordinary businesses have a LOT of unmet demand for low stakes custom software. The ones that lean into this will not develop superpowers but I do think they will out-compete slow adopters and those companies will be forced to catch up in the next few years.
I develop presentations now by dumping a bunch of context in a folder with a template and telling Claude Cowork what I want (it does much better than the web version because of its Python and shell tools, and it can iterate, render, review, and repeat until it's excellent). The copy is quite good, I rewrite less than a third of it, and the style and graphics are so much better than I could do myself in many hours.
No one likes reading a bunch of vibe coded slop, and the cultural norms about this are still evolving; but on balance it's worth it by far.
I'm burning an insane number of tokens 8-12 hours a day for the dramatic improvement of some internal tooling at a big tech company. Using it heavily for an unannounced future project as well.
I presume I'm not the only one.
We suddenly have a proliferation of new internal tools and resources, nearly all of which are barely functional and largely useless with no discernible impact on the overall business trajectory but sure do seem to help come promo time.
Barely an hour goes by without a new 4-page document about something that everyone is apparently meant to read, digest, and respond to, despite its 'author' having done none of those steps. It's starting to feel actively adversarial.
Without good management AI is just a new way to make terrible work in unprecedented quantities.
With good management you will get great work faster.
The distinguishing feature between organisations competing in the AI era is process. AI can automate a lot of the work, but the human side owns process. If the process is no good, everything falls apart: functional companies become hyper-functional while dysfunctional companies collapse.
Bad ideas used to be warded off by workers who, through some form of malicious compliance, would just slow down and redirect the work while advocating for better solutions.
That can’t happen as much anymore as your manager or CEO can vibe code stuff and throw it down the pipeline for the workers to fix.
If you have bad processes your company will die, or shrivel or stagnate at best. Companies with good process will beat you.
My main use of vibecoding is creating dozens of internal tools that have sped up tasks, or made tasks possible that were previously not. These tools would have taken weeks of time to build manually and would have been hard to justify, rather than just struggling with manual processes every now and again. AI has been life-changing in creating these kinda janky tools with janky UI that do everything they're supposed to perfectly, but are ugly as hell.
Are you able to describe any of those internal tools in more detail? How important are they on average? (For example, at a prior job I spent a bit of time creating a slackbot command "/wtf acronym" which would query our company's giant glossary of acronyms and return the definition. It wasn't very popular (read: not very useful/important), but it saved me some time looking things up (more time than it took to create, I'm sure). I'd expect modern LLMs to be able to recreate it within a few minutes as a one-shot task.)
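A bot like that really is only a handful of lines. Here's a minimal sketch using the slack_bolt library; the glossary contents and environment variable names are hypothetical stand-ins, not the original bot:

    import os
    from slack_bolt import App

    app = App(
        token=os.environ["SLACK_BOT_TOKEN"],
        signing_secret=os.environ["SLACK_SIGNING_SECRET"],
    )

    # Stand-in for the company's "giant glossary of acronyms".
    GLOSSARY = {
        "ARR": "Annual recurring revenue",
        "ROIC": "Return on invested capital",
    }

    @app.command("/wtf")
    def wtf(ack, respond, command):
        ack()  # Slack requires an acknowledgement within 3 seconds
        acronym = command.get("text", "").strip().upper()
        respond(GLOSSARY.get(acronym, f"No idea what {acronym} means either."))

    if __name__ == "__main__":
        app.start(port=3000)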
My team has also adopted this - it's much easier to add another layer than to refine or simplify what exists. We have AI skills to help us debug microservices that call microservices that have circular dependencies.
This was possible before but someone would maybe notice the insane spaghetti. Now it's just "we'll fix it with another layer of noodles".
Unfortunately I saw this pre-AI with microservices: while empowering developers with their beloved microservices, we created intense complexity and deployment headaches. AI will fix the slop with an obscuring layer of complexity on top.
I'm sorry to hear you have such poor leadership.
I'm sorry to hear that you have people abusing their new superpowers.
I run a team and am spending my time/tokens on serious pain points.
Such as?
I answered this in a different comment below, but a lot of the friction is around the amount of time it takes to test/review/submit etc., and a lot of this is centered on tooling that no one has had the time to improve, perf problems in clunky processes that have been around longer than any one individual, and other things of this nature. Addressing these issues is now approachable and doable in one's "spare time".
The point of that friction is to keep the human in the loop wrt code quality, it's not meant to be meaningless busywork. It's difficult to believe that you sustain the benefit of those systems. Anthropic and Microsoft publicly failed to keep up code quality. They would probably be in a better spot currently if they used neither, no friction, no AI. But that friction exists for a reason and AI doesn't have the "context length" to benefit from it.
This is the difference between intentional and incidental friction: if your CI/CD pipeline is bad, it should be improved, not sidestepped. The first step in large projects is paving over the lower layer so that all the incidental friction, the kind AI can help with, is removed. If you are constantly going outside that paved area, sure, AI will help, but not with the success of the project, which is more contingent on the fact that you've failed to lay the groundwork correctly.
Creating stakeholder value
Promoting synergy
>Such as?
it's crazy that the experiences are still so wildly varying that we get people that use this strategy as a 'valid' gotcha.
AI works for the vast majority of nowhere-near-the-edge CS work -- you know, all the stuff the majority of people have to do every day.
I don't touch any kind of SQL manually anymore. I don't touch iptables or UFW. I don't touch polkit, dbus, or any other human-hostile IPC anymore. I don't write cron jobs, or system unit files. I query for documentation rather than slogging through a stupid web wiki or equivalent. a decent LLM model does it all with fairly easy 5-10 word prompts.
ever do real work with a mic and speech-to-text? It's 50x'd by LLM support. Gone are the days of saying "H T T P COLON FORWARD SLASH FORWARD SLASH W W W".
this isn't some untested frontier land anymore. People that embrace it find it really empowering except on the edges, and even those state-of-the-art edge people are using it to do the crap work.
This whole "Yeah, well let me see the proof!" ostrich-head-in-the-sand thing works about as long as it takes for everyone to make you eat their dust.
None of that is concrete though; it's all alleged speed-ups with no discernable (though a lot of claimed) impact.
> This whole "Yeah, well let me see the proof!" ostrich-head-in-the-sand thing works about as long as it takes for everyone to make you eat their dust.
People will stop asking for the proof when the dust-eating commences.
That's all well and good, but what happens when the price to run these AIs goes up 10x or even 100x?
It's the same model as Uber, and I can't afford Uber most of the time anymore. It's become cost prohibitive just to take a short ride, but it used to cost like $7.
It's all fun and games until someone has to pay the bill, and these companies are losing many billions of dollars with no end in sight for the losses.
I doubt the tech and costs for the tech will improve fast enough to stop the flood of money going out, and I doubt people are going to want to pay what it really costs. That $200/month plan might not look so good when it's $2000/month, or more.
It's an important concern for those footing the bill, but I expect companies really facing that impact to do a cost-benefit calculation and use a mix of models. For the sorts of things GP described (iptables whatever, recalling how to scan open ports on the network, the sorts of things you usually could answer for yourself with 10-600 seconds in a manpage / help text / google search / stack overflow thread), local/open-weight models are already good enough and fast enough on a lot of commodity hardware to suffice. Right now companies might just offload such queries to the frontier $200/mo plan because why not: tokens are plentiful and it's already being paid for. If in the future it goes to $2000/mo with more limited tokens, you might save those for the actually important or latency-sensitive work and use lower-cost local models for the simpler stuff. That lower-cost option might involve a $2000 GPU to be really usable, but it pays for itself shortly by comparison. To use your Uber analogy, people might have used it to get downtown and to the airport, but now it's way more expensive, so they'll take a bus or walk or drive downtown instead; the airport trip, even though it's more expensive than it used to be, is still attractive in the face of competing alternatives like taxis and long-term parking.
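As a concrete version of that mix-of-models idea, here is a rough routing sketch. The prices and the difficulty heuristic are invented for illustration; a real router would use measured token costs and task metadata:

    LOCAL_COST_PER_MTOK = 0.05      # amortized local GPU cost, assumed
    FRONTIER_COST_PER_MTOK = 15.00  # hypothetical frontier API price

    def estimate_difficulty(prompt: str) -> float:
        """Crude stand-in: longer, design-heavy prompts count as harder."""
        score = min(len(prompt) / 2000, 1.0)
        if "refactor" in prompt or "design" in prompt:
            score = max(score, 0.8)
        return score

    def route(prompt: str, latency_sensitive: bool = False) -> str:
        """Send cheap manpage-style queries local; save frontier tokens."""
        if latency_sensitive or estimate_difficulty(prompt) > 0.6:
            return "frontier"
        return "local"

    print(route("how do I list open ports with ss?"))           # local
    print(route("refactor this service to use event sourcing"))  # frontier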
We’re seeing the exact same where I work. Our main Slack channels have become inundated with “new tool announcements!”, multiple per day, often solving duplicate problems or problems that don’t exist. We’ve had to stop using those channels for any real conversation because most people are muting them due to the slop noise.
And what’s worse is that when someone does build a decent tool, you can’t help but be skeptical because of all the absolute slop that has come out. And everyone thinks their slop doesn’t stink, so you can’t take them at their word when they say it doesn’t. Even in this thread, how are you to know who is talking about building something useful vs something they think is useful?
A lot of people that have always wanted to be developers but didn’t have the skills are now empowered to go and build… things. But AI hasn’t equipped them with the skill of understanding if it actually makes sense to build a thing, or how to maintain it, or how to evolve it, or how to integrate it with other tools. And then they get upset when you tell them their tool isn’t the best thing since sliced bread. It’s exhausting, and I think we’ve yet to see the true consequences of the slop firehose.
>Barely an hour goes by without a new 4-page document about something that everyone is apparently meant to read, digest, and respond to, despite its 'author' having done none of those steps. It's starting to feel actively adversarial.
Well, isn't that what AI can be used effectively for: to generate [auto]responses to the AI-generated content?
I'm convinced none of these people have any training in corporate finance. For if they did, they'd realise they were wasting money.
I guess you gotta look busy. But the stick will come when the shareholders look at the income statement and ask... "So I see an increase in operating expenses. Let me go calculate the ROIC. Hm, it's lower, what to do? Oh I know, let's fire the people who caused this" (it won't be the C-suite or management who takes the fall) lmao.
Do you really think companies have started spending millions on tokens and no one from finance has been involved?
You could argue that all the spending is wasted (doubtless some is), but insisting that the decision is being made in complete ignorance of financial concerns reeks of that “everyone’s dumb but me” energy.
> Do you really think companies have started spending millions on tokens and no one from finance has been involved?
Oh, they were involved all right. They ran their analyses and realized that the increase in Acme Corp's share price from becoming "AI-enabled" will pay for the tokens several times over. For today. They plan to be retired before tomorrow.
Sounds like they did train in corporate finance.
More that there is a poor incentive structure. Just like how PE can make money through leveraged buyouts and running businesses into the ground. Many of the financial instruments that make both that and the current AI bubble possible were legal, then made illegal, within the lifetimes of the last 16 presidents.
Round-tripping used to be regulated. SPVs used to be regulated. If you needed a loan you used to have to go to something called a bank; now it comes from who knows where: drug cartels, child traffickers, Blackstone, Russian & Chinese oligarchs. Even assuming it doesn't collapse tomorrow, why should they make double-digit returns on AI datacenters built on the backs of Americans?
My issue was not with criticism of the money being spent or how it’s being obtained. I was specifically commenting on this statement:
> “Im convinced none of these people have any training in corporate finance. For if they did they'd realise they were wasting money.”
This isn’t meaningful criticism. This is a vacuous “those guys are so dumb”.
AI is truly perfect for internal tooling. Security is less of a concern (or none at all), bugs are more acceptable, and performance/scalability is rarely a concern. It's the quickest way to get things done, and to speed up production development, MVP development, etc.
> Security is less or no concern
[waits for chickens to come home to roost]
> [waits for chickens to come home to roost]
"We are writing down X billions over 4 years, and have cancel several ambitious programs related to our AI experiments. We were following standard practice in the industry, so [shareholders] can't blame us for these chickens coming to roost. If everyone is guilty, is anyone really guilty?"
Doesn't take long until someone has the bright idea to pipe customer tickets directly into the poorly written internal tool
When attackers can move laterally through everything because every internal tool leaks credentials and data there will be issues.
No problems at all, except unauthorized access to a model they were claiming was a weapon that couldn't be released to the public, and having their CLI code leaked, all in the last two weeks. Everything's just fine.
Anthropic seems to be doing fine :)
This comment makes me want to scream.
I am, oddly, able to get really quite a lot of mileage out of the $20/mo OpenAI plan, and I have never hit a usage limit. I have gotten warnings that I was close a couple of times.
I wonder what I’m doing differently.
I did spend quite a bit of time, mostly manually, improving development processes so that the agent could effectively check its work. This made the difference between the agent mostly not working and mostly working. Maybe if I had instead spent gobs of money it would have worked without the tooling improvements?
I'd be interested to learn what kind of internal tooling you are improving?
I'm not them, but we have vastly improved our internal pipeline monitoring/triage/root-cause/etc. with a new system whose whole purpose is basically to hook into all of our other systems and consolidate them under a single view, with an emphasis on shortening the time it takes to triage and refine issues.
This would previously have been too ambitious to ever scope, but we've been able to build essentially all of it in just two months. Since it sits on top of our other systems and acts more as a window/pass-through control plane, the fact that it's vibe coded poses little risk, because we still have all the existing infrastructure under it if something goes awry.
Personally, a static analysis PR check to catch some types of preventable runtime production errors in application code
We've had a lot of complaints about our review processes, time to submit, etc, and a lot of that boils down to tools no one has time to improve.
It's now trivial to fix these problems while still doing our day jobs -- shipping a product.
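For illustration, a check of that general kind (not the poster's actual tool) can be a few dozen lines with Python's ast module; this one flags bare except: clauses, a classic source of swallowed runtime errors:

    import ast
    import sys

    def find_bare_excepts(path: str) -> list[int]:
        """Return line numbers of `except:` handlers with no exception type."""
        tree = ast.parse(open(path).read(), filename=path)
        return [
            node.lineno
            for node in ast.walk(tree)
            if isinstance(node, ast.ExceptHandler) and node.type is None
        ]

    if __name__ == "__main__":
        failed = False
        for path in sys.argv[1:]:  # e.g. the files changed in the PR
            for lineno in find_bare_excepts(path):
                print(f"{path}:{lineno}: bare 'except:' swallows errors")
                failed = True
        sys.exit(1 if failed else 0)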
Same, and it is working really well (I say this contra most of the individual reports).
I have a coworker who says something similar. He vibe coded tons of cryptic code, which indeed solves some problems, though it could be far more compact and better structured. Now it is hitting a complexity limit: the LLM can't comprehend it anymore, and a human can't comprehend it by an even larger margin.
honest recommendation: nuke and pave after analyzing (w/ AI of course) where it went horribly wrong.
it's trivial to reimplement a better solution.
It's a bit of workplace politics: I would need to call that guy out and say that he is not a hyper-performer, but just pushed lots of low-quality code that will have a lot of negative impact in the long term.
Also, I am not sure it is trivial to reimplement. The code is injected into many scenarios and workflows, so replacement will be painful and risky if the new solution breaks some edge case.
It sounds like you might have some larger process problems if someone can just inject a bunch of vibe-coded slop into critical workflows while more discerning eyes are dubious of the quality/reliability etc.
In some sense, sure. There’s a lot of processes that weren’t previously needed, because sloppy people who couldn’t or wouldn’t think things through were mostly incapable of producing PRs that passed all the existing tests.
It's partially (largely?) a management problem. One of the tier-1 productivity metrics in the group is the number of LoC created by engineers, so it creates a dynamic of people exchanging favors to push AI slop into the codebase, or else be labeled low performers.
The problem was definitely because they didn't use enough AI fast enough. They should just try again
Just wait a month, Opus 4.8 will comprehend it for sure.
it will comprehend it well enough to complicate it further into a rat's nest that only Opus 4.9 can comprehend, and so on. Good luck if you run into a bug before the N+1 version launches.
I guess that's one way to tout a technology as revolutionary without actually needing to provide any proof of it. Just say you're using it for "internal tooling" and "unannounced projects", that way nobody can look at them and notice they're indistinguishable from the slop that clogs up Show HN nowadays.
It's better than the "here's my code, it's a giant pile of spaghetti but only luddites care about code quality and maintainability anyway" method, at least.
Haven't you seen all the layoffs? I've been subscribed to r/layoffs for 5+ years, and since a couple of months ago it's been crazy noisy.
My hypothesis is that companies don't want to offer cheaper or better services. They only want to cut costs and keep the revenue for investors.
In other news, TQQQ is pretty high!
Subscribers will not enable these companies to make their money back. The only way is for them to eat the economy itself
I'm wondering whether the layoffs are partly targeting people who haven't adapted to using AI tools, particularly those who are openly dismissive of AI-assisted work.
That’s like firing someone because he uses vim instead of VSCode. Who cares about the tools someone uses if he still does his job well?
It's a great tool, and at 1/10th or 1/100th the cost of actual developers. In the context of YC, I guess watch out for getting re-disrupted by a smaller, faster team than before. But that's really been the trend for the past 40 years, so nothing is new. Well, maybe the velocity, combined with the US losing its footing at the same time.
But yeah, it's not gonna make Facebook 20% better tomorrow; it's just that you need 5 people instead of 40 to build the next Facebook.
It's not just code generation, either - more and more people in my own org are using Claude Code for infrastructure automation, devops, etc. Obviously some amount of code in there, but an absolute ton of tokens being consumed just dealing with Kubernetes work at scale.
I'm spending a ton of tokens because it insists on manually correcting code that fails the linter, despite the instructions in the AGENTS.md to run the linter with autocorrect.
And also because the Plan agent generates a huge plan, asks me a couple yes/no questions with an obvious answer, and then regenerates the entire plan again. Then the Build agent gets confused anyway and does something else, and I have to round-trip about 5 times with that full context each time.
I can say in one role in my job, I'm getting a lot of use and I know my colleagues are at least trying a lot of things. One use is a first-pass review of animal care and use protocols. The Claude project was given all of the relevant policies and guidelines as well as a fairly long prompt that explains the things we look for in protocol review. It's checking some things that the software we use makes very tedious to check and raising inconsistencies between sections. Some places have a full time "protocol reader" who does this kind of first check, but we've never had that, so it's helpful.
Another project I'm seeing in the same realm is taking an approved protocol and some study results and checking that the records of what was done match what they said they could do in the approved protocol. It can also make sure that surgical records have all the things they should have. This can help meet one of the requirements from the national accreditation organization to do "post approval monitoring".
Another way I've used it is to have it collate and compare a particular kind of policy across many institutions who transparently put their policies online. Seeing the commonality between the policies and where some excel helped me rewrite our policy.
This is work that just wasn't happening before or, more accurately, it was being spread over lots of people, and any improvement in efficiency or consistency is hard to measure.
For myself, it's a massive boost when solo developing. Perhaps this is a different use case than most. It can work across multiple programming languages and frameworks that I had zero experience in. I use my existing knowledge of programming to ensure the new code written is correct. It also really excels at translating from one language/framework to another: I can spend time getting something working well on a platform I know, then just ask it to convert to another platform. It gets it 90% right on the first prompt, and then it's just a matter of fine-tuning, reviewing, etc. That last 10% is where I supercharge my learning of those languages/frameworks. Learning all the new languages and frameworks would have taken me months before I was productive. Now, with a single prompt, we get 90% of the way there. That is incredible value for us.
>What is all this AI doing? People are spending 10’s to 100’s of billions and no service or technology seems better or cheaper. Everything is more expensive and worse.
That "more expensive" is someone's revenue. May be AI is the kind of technology that allows to make more and more revenue by making things more expensive and worse than by making them better and cheaper.
You seem to be under the impression that making services better or cheaper _for the consumer_ is the goal of any corporation. The goal is to make their own operations better and cheaper for them. They are laying off employees and adding features of questionable value as a pretext to raise prices. The playbook has not changed, it has only accelerated.
I keep seeing this take.
And yet.. building shit is no longer the sole domain of the software engineer.
That's the sea change.
I've literally had finance and GTM stand things up for themselves in the last few weeks. A few tweaks (obviously around security and access), and they are good to go.
They've gone from wrangling spreadsheets to smooth automated workflows that allow them to work at a higher level in a matter of months.
That's what all this AI is doing. The shit we could never get the time to get around to doing.
So... more 'busy work'.
The only thing that matters is the impact on the financials. The shareholders (the people who employ you) don't care about any of this if it does not enhance value.
Mind sharing what industry you’re seeing this in? I’ve never talked to finance or GTM as an engineer. I’m not sure GTM exists in my industry.
Run-rate revenue is not ARR. For all we know they could have a revenue of $100 and claim a run-rate revenue of $30B.
Given the fact that both Altman and Amodei are pathological liars, there's absolutely no reason to believe that Anthropic has $30B ARR.
> For all we know they could have a revenue of $100 and claim a run-rate revenue of $30B.
Can you explain how that’d work? What would the $30B figure be based on if they only have $100 in revenue?
They're pointing out that run-rate revenue is based on essentially sampling revenue over some limited time interval, then extrapolating from there assuming revenue always occurs at the same rate (or greater) over all similar intervals in the future. More specifically, they're pointing out that estimates of ARR derived from this kind of sampling are fundamentally prone to error and can be arbitrarily inflated based on how the time interval is sampled.
As far as I understand, run-rate revenue is just a fancy way of saying "last month we had this level of sales, and if that continues for a year we will have an ARR of $30B." Meaning it's not $30B yet, but the sales numbers indicate they'll get there by continuing to sell at the current pace. But to have revenue of $100 and get $30B in ARR, I guess the period looked at needs to be seconds...
(Run Rate = Revenue in Period / # of Days in Period x 365)
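In code, with invented sample figures rather than Anthropic's actual numbers:

    def run_rate(revenue_in_period: float, days_in_period: int) -> float:
        """Annualize revenue by extrapolating a sampled period to 365 days."""
        return revenue_in_period / days_in_period * 365

    # e.g. $2.5B booked in a 30-day month annualizes to ~$30.4B:
    print(run_rate(2.5e9, 30))  # 30416666666.67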
Not even that. It's not based on actual sales in, for example, the past month. It's based on an expected continuous growth based on the growth of the past month (or whatever period you pick).
It's a forecast.
There are about 31.5 million seconds in a year. If they made $100 over the last hundred milliseconds, that's on the order of $30B annualized.
(That said, their numbers are much realer than that.)
If you make a hundred dollars in 0.1 seconds, you could say your annualized revenue is $100 / 0.1 × 60 × 60 × 24 × 365 ≈ $31.5 billion.
That said, most people would use a monthly or quarterly period to estimate ARR. I'm not sure what Anthropic used. Probably monthly.
the fact!?
I don't follow Anthropic closely enough to know what claims its CEO has made, but it is factual that Altman is a pathological liar. You can observe this for yourself by reading and listening to the things he says and then comparing them to reality. We have years of evidence to look back on. The chasm between Altman's reality and everyone else's is so large and so well-known that it was one of the chief factors cited by the board when he was fired.
(I would then argue that he was re-hired specifically because others involved with OpenAI understood that it is literally his job to lie and that OpenAI would not be where it is today as a corporate behemoth rather than a research non-profit without a world-class liar marketing it, but that is merely conjecture.)
I mean.. kinda everything about Mythos for example? Anthropic has a good product, but they also pretty consistently say some stupid ass shit if you're being generous, and blatant lies if you aren't
> suddenly model quality is back up again.
I agree about the core motivation behind these deals, however I'm skeptical as to how "suddenly" we'll see substantial improvements. Despite their size, I'd be surprised if Google or Amazon had uncommitted chunks of Anthropic-scale, top-tier AI compute sitting around waiting to be activated.
They're already over-subscribed and waiting for new data centers (and power plants) to come online. I suspect Anthropic will get a modest amount of new capacity right away with more added over coming quarters. These two deals don't change the total amount of AI compute available on planet Earth over the next 18 months. Anthropic parting with high-value equity has now made them the new highest bidder for an already over-bid resource. I suspect the net impact will be Amazon & Google pushing prices even higher on everyone else as they reallocate compute to their new top whale.
> Despite their size, I'd be surprised if Google or Amazon had uncommitted chunks of Anthropic-scale, top-tier AI compute sitting around waiting to be activated.
I doubt it was idle capacity. But for a chunk of equity in Anthropic I imagine they are willing to deprioritize other, possibly internal, uses. Certainly anything that's not contractually obligated could be on the chopping block.
Perhaps the adversity of the contracts cancels out against their sudden success and increased valuation, and it ends up a wash compared to the counterfactual scenario where they had speculated on high growth early on.
It has taken me a minimum of 6 minutes to get a response over the last 3 days; I don't think model capacity is any better.
You really think that for companies of this size, signing a contract would immediately reflect in you as end user noticing improved model quality?
Well, to a certain extent it also blunts competition: Gemini is less of a threat if its main investor is also backing Anthropic. The issue is when the pyramid scheme collapses...
Both Amazon and Google provide the Claude models via their Kiro and Antigravity IDEs, respectively. It could also be an investment in their attempts to own the IDE space.
It feels like the market has gone full Wile E. Coyote on frontier model makers, and I like Anthropic's B2B business model.
But all progress points to a commodification of foundation models--Google first named it as "we have no moat, neither does anyone else." So there must be some secondary play driving this, right? Hardware sales? Hedging for search ad revenue?
Still feels mispriced. I think asset inflation leaves too much money desperate for the Next Big Thing.
Google does have a sort of temporary moat. They have a much better hardware supply line story than anyone else and the revenue to maintain that edge indefinitely.
This is the thing: Google is a real company with a well-established business, money of their own, hardware, server farms, etc. OpenAI and Anthropic have none of that in the way Google does. They have an incentive to lie and 'fake it till you make it' so they can get out of the 'risk zone' of collapsing back in on themselves. Google can throw money at Gemini all day.
That may be true for OpenAI, less so for Anthropic, which has much better margins. Both of these companies' CEOs have said as much in public.
No doubt as of currently Google has a better business. But the same argument could have been said about Instagram or Whatsapp before Facebook (now Meta) acquired them.
Running AI at a loss long enough to kill the competition would run afoul of antitrust laws. Even more so since they’re bundling their AI products with their search monopoly.
Although I doubt this will stop them if they think it’s advantageous…
Lower real operating costs aren't the same thing as below-cost pricing.
US law here is nuanced. Good quick primer https://www.ftc.gov/advice-guidance/competition-guidance/gui...
I thought that these types of antitrust laws are in no way enforced anymore in the tech industry, and that it's been that way for decades. I mean, the sheer existence of Google shows that, right? What about Maps, Mail, Books... basically everything apart from Search? Why would an AI Mode as one category of Search results be any different? They're not actively promoting Gemini in those search results. They're simply augmenting them with this new tool that exists now.
Yes anti-trust is very much theatre nowadays.
As long as it furthers American interests globally, monopoly is fine. Other countries need to take notice and start picking winners nationally in order to compete with the large American big tech firms.
Eh, I think this is actually not a specifically American thing. More of a neo-liberal mindset. Competition may be good in the long term. But a monopoly now may mean more money in your pocket now. The tech giants definitely give the US some geo-political power in some cases but in general the US would be better off with more competition.
ed: @er2d, can't reply to your comment for some reason, so doing it here: I don't agree. In theory a monopoly decreases the necessity for R&D. Of course this becomes more complex if the R&D is funded or steered by the state. But look at the current state of LLMs. There is fierce competition between 3 US companies. But geopolitically it's the same as if there would be one monopoly. The US being the clear technological leader in an industry is not dependent on that industry being a domestic monopoly.
And for the Europe comment: Also don't agree. Look at Boeing & Airbus. Both are companies where the US & EU have decided that they need to ensure the existence of a domestic airplane manufacturer. So in these cases they support these companies (often in violation of international trade laws). But it has nothing to do with monopolies. If a state decides to support a company to ensure its existence, a monopoly is the logical consequence and not the aim. Because if that industry would be profitable it wouldn't need to be supported in the first place.
But all these tech companies are not in industries that would move off-shore or stop existing because they're not profitable enough, so it's an entirely different setting.
Nope, the rationale for allowing a monopoly is the incentive for R&D and innovation.
The US understands that and allows it to happen as the former yields a compounding effect of power.
European states certainly don't get this.
TSMC ?
Airbus ?
Are you claiming they are tech firms in the manner of a Apple, Google etc?
lol
> run afoul of antitrust laws
Now, that’s a name I haven’t heard in a long time.
> antitrust laws. Even more so since they’re bundling their AI products with their search monopoly.
Couldn't this just be framed/spun as using search data for training? I don't see it as being bundled enough to run afoul of antitrust.
Who's going to enforce antitrust laws in this environment, pray tell?
> Running AI at a loss long enough to kill the competition would run afoul of antitrust laws.
Running at a loss long enough to kill the competition is basically the name of the game these days.
When Uber started, they were basically setting VC money on fire by selling rides at a loss to destroy the taxi market.
>would run afoul of antitrust laws
Buwahahahahahahahhahah
They drop a little cash on some shitcoin the president controls and those problems go away.
If AI is commoditising, who is Bahrain and who are the Saudis?
The company with the access to cheap and plentiful energy and the real estate to build data centers will be Saudi Arabia in your analogy.
This is why SpaceX could be a dark horse in this race. Putting compute in space is expensive but so is building a data center in the US.
> Putting compute in space is expensive but so is building a data center in the US.
You know what's also really hard in a vacuum? Dissipating heat.
> You know what's also really hard in a vacuum? Dissipating heat
Correct. The economics of space-based DCs comes down to permitting delays versus radiator mass.
At ISS-weight radiators (12 to 15 W/kg (EDIT: kg/kW)), you need almost decade-long delays on the ground (or 10+ percent interest rates) to make lifting worthwhile. Get down to current state-of-the-art in the 5 to 10 W/kg (EDIT: kg/kW) range, however, and you only need permitting delays of 2 to 3 years.
If there is a game-changing start-up waiting to be built, it's in someone commercialising a better vacuum-rated radiator.
Would you want more wattage per kg for a better radiator?
Yes! Thank you–fixed.
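As a back-of-envelope sketch of that break-even logic: every dollar figure below is a placeholder assumption, not sourced data, and it keeps only the radiator term, as the parent does:

    LAUNCH_COST_PER_KG = 1_500.0      # $/kg to orbit, assumed
    DELAY_COST_PER_KW_YEAR = 4_000.0  # $/kW per year of permitting delay, assumed

    def breakeven_delay_years(radiator_kg_per_kw: float) -> float:
        """Years of ground delay that pay for lifting the radiator mass."""
        orbital_premium = radiator_kg_per_kw * LAUNCH_COST_PER_KG  # $/kW extra
        return orbital_premium / DELAY_COST_PER_KW_YEAR

    print(breakeven_delay_years(13.5))  # ISS-class radiators: ~5.1 years
    print(breakeven_delay_years(7.5))   # state-of-the-art: ~2.8 years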
Putting it somewhere globally central makes a lot of sense, just like a connecting airport.
Saudi will host the biggest data centers in the world
What does that mean?
> What does that mean?
I really couldn't have been more obscure, could I? :P
In 1932, "the first oil field in the Persian Gulf outside of Iran" was discovered in Bahrain [1]. (The same year Saudi Arabia announced unification [2].)
In the end, Saudi Arabia had larger reserves and wound up geopolitically dominating its first-moving rival. In commodities, the game tends to be scale in part through land grabbing. Less who got where first.
To close the analogy, if AI does wind up commoditised, the layers at which that commodity is held are probably between power and compute [3]. So if AI commoditises (commodifies?), Google selling compute (and indirectly power) to Anthropic and OpenAI is the smarter play than trying to advantage Gemini. (If AI doesn't commoditise, the opposite may be true: Google is supercharging a competitor.)
[1] https://en.wikipedia.org/wiki/Bahrain_Petroleum_Company
[2] https://en.wikipedia.org/wiki/Proclamation_of_the_Kingdom_of...
[3] The alternate hypothesis is it's at distribution.
I believe they were drawing a parallel to oil commoditization, but that's as far as I got.
The app layer is Bahrain.
I haven't thought about any secondary play, but if these companies converge on Google's TPUs, they would probably eagerly carve a slice out of NVIDIA's current market.
> In September 2025, Google is in talks with several "neoclouds," including Crusoe and CoreWeave, about deploying TPU in their datacenter. In November 2025, Meta is in talks with Google to deploy TPUs in its AI datacenters.
https://en.wikipedia.org/wiki/Tensor_Processing_Unit
I keep getting notifications from my tooling that Gemini models are overloaded, "so we switched you to OpenAI." So I feel Google is not ready to sell TPUs just yet.
"We have no moat" could be a bad assessment. First, the models have personalities, and that matters. I like talking to Claude better. OpenAI is really different from Grok. The AI models are an extension of the main concerns of the company they're in.
Also, those personalities, quirks, and choices accumulate. A lot of people talk about using Claude Code and Codex for different things. This is 100% my experience. Some people make better models, but among the top 3 there are often differences that are fixed only by switching between them. If I feel the need to switch between them, then there are significant enough differences, and those differences will accumulate.
"we have no moat, neither does anyone else." is just an employee's personal work blog
YouTube is a kind of moat for Google.
Interesting. Wanna expand?
It is the biggest collection of video to train LLMs on.
It is very difficult for me to see any amount of money being thrown at Anthropic as a bad idea.
The amount of new revenue that I am personally able to create for my clients, using Claude models for dev, and Claude models inside the insanely agile products delivered, is astounding.
If I was not currently experiencing this myself, and someone told me that this was possible, I would be calling them names.
You could say the same about Codex (and other tooling). Opus as a model is market leading (trading blows with the greatest that OpenAI is peddling), but there will be a reckoning when open weight models are good enough, and I'd argue we are almost there with some of the latest releases. If you hook up the latest OpenAI models to something like OpenCode, it's a taste of what an open harness with a powerful model (outside of a provider's ecosystem) will be able to offer developers in the future.
I know there are multiple paths at this, thank the computing gods.
If we get to an end-state of monopoly/duopoly at this game, then we are truly screwed.
I was just stating my current use and revenue path. Anthropic has insane velocity, in April of 2026.
> when open weight models are good enough
I think Deepseek is already there.
Would you mind sharing what you can (and want to) about how the sausage is made? I would love to hear concrete cases where the actual leverage is measurable. I'm asking in good faith, not to attack your standpoint.
I would do so on a 1:1 level. See bio for contact.
You're paying the subsidized cost. Those margins will shrink once the real bill comes due. I really think everyone will look back at this time as the golden era of cheap AI. We are already seeing the costs (and restrictions/limits) creep up with the Western models.
I think the opposite. AI will get cheaper as models become more efficient and we solve the datacenter/energy problem. I bet 10 years from now AI, that is way better than what we have today, will be close to free.
> You’re paying the subsidized cost.
100% agree. I have been trying to tell everyone to build their ideas, and exploit this environment where 100B of VC money into OpenAI/Anthropic = some percentage of money invested into your idea. This is the golden era of building! The music is gonna stop soon. Build now ffs!
Why do AI boosters like yourself all have the same writing style? Was the comment AI generated?
It's like insane hype marketing speak. "insanely agile products delivered" like huh?
To me it is more like software consultant speak than AI booster speak. And it is not exactly surprising that the people in a particular subculture all talk similarly.
Well, I hear it from people who are regular devs and not consultants, although it's more common with people who aren't really working in the trenches anymore.
Like ex-developer turned PM who is now vibe coding everything they can and thinks it's the greatest thing ever.
> Why do AI boosters like yourself...
I believe that I am more of an AI realist. The agentic dev tools are really helping me out, but if I could wave a magic wand to make AI go away for a hundred years, I would do it.
I really hope that we can all laugh at how wrong I was.
However, I believe that the horrors will likely outweigh the benefits. Our global society/political systems are not ready for Stasi as a Service, mass unemployment, or any of this impending crap storm.
Getting in on the astounding action before the world turns to shit.
Who could call me a starry-eyed idealist? I have invested in bunkers.
It's like insane hype marketing speak because that is genuinely the difference from what it was like to develop software 6 months ago. You see many people using the same language, often in comments that are otherwise stylistically quite different, because many people are experiencing the same thing.
I get that it's tedious to sit on tech forums listening to an endless stream of people insisting that suchandsuch technology is world-changing. Many people and probably most people who say that are wrong. But sometimes the world really does change.
> I get that it's tedious to sit on tech forums listening to an endless stream of people insisting that suchandsuch technology is world-changing.
It's tedious because the insistence doesn't seem to be matched by much observable change.
It's "world changing" yet the world seems mostly the same other than the increasing enshittification of everything...
I'll trust someone who has an account since 2018 vs 71 days ago. Especially when your name already indicates you're biased.
I've had an account for a while too, and I do think that that GP comment has a style typical of "AI boosters" -- breathless, big on hyperbole, and low on detail.
To the GP: I'd like some details of these "insanely agile products". Is this insane agility reflected by your customers saying that they have a better, faster, more reliable product? How are you measuring this?
Wym "trust"? What is there to "trust" with my comment? Huh?
It feels like Anthropic is everybody's insurance policy against someone else winning the AI race. So you have Amazon, Google, Microsoft basically every major tech company pushing their own tech hard but simultaneously ensuring they have a survival level stake in Anthropic if they can't build or acquire their way to stay at frontier level performance themselves.
If you added up all the major AI valuations, they're apparently worth more than the products Americans constantly buy and rely on in their daily lives. So either AI is going to be involved in every American's life to a large degree, and paid for with real money, or these valuations are insanely wrong.
At some point in American history you probably could have said the same about railroads.
There are plenty of people who basically believe this is the end of the human economy - there will be nothing left that isn't done by AI in the future. Even the bits left that humans do will be human facades on AI-driven activity (like your hairdresser viewing you through AI-powered glasses, using AI-powered scissors, etc.).
So from that point of view you can indeed look at it as the entire value of the economy should be invested into AI companies.
That is ultimately where it is headed and has been headed for over 100 years now.
The question is when will we get there.
If the answer is tomorrow, money means nothing and none of these investments matter. If the answer is 30 years, well lots of money to be made up until the inflection point of machines being able to design, build, and repair themselves.
> they're apparently worth more than the products Americans constantly buy and rely on in their daily lives
What are you counting in this category?
There are countless examples, but let's say Ford. Worth $150 billion, $50 billion not counting debt.
My neighbors just gave Ford $60k. It'll be a while until my neighbor gives Anthropic $60k.
Valuations are based on future expected earnings, not revenue. It cost Ford a lot of money to make that $60k car. The margins for AI companies are unknown, but the market is pricing in that they'll be high at some point. Not that they'll attract more revenue from the average person.
> My neighbors just gave Ford $60k. It'll be a while until my neighbor gives Anthropic $60k.
How much of that $60k does Ford actually keep? And how much will it be once BYD is allowed in the US? The forecast for Ford is pretty much only downwards; the possible upside on AI is huge.
If every company in the F500 starts spending $2000+ on AI credits per employee, then every consumer product will indirectly be funding AI companies. I think it's already the case that companies small enough to avoid/skip getting O365 or Google Suite subscriptions will pay for AI first.
Ford probably made $3k profit on that car. Given the falling costs of inference, what are the chances your neighbor gives Anthropic $3k in profit over the next few years? Not terribly bad.
> My neighbors just gave Ford $60k. It'll be a while until my neighbor gives Anthropic $60k.
AI company revenues aren't driven by consumer subscriptions.
The people doing $20 or even $200 per month plans for their side projects aren't driving the demand. It's going to be business customers spending $1000/month or more per developer and all of the companies feeding their business processes through the API like call centers, document processing, and everything else.
If you're thinking of AI companies as consumer plays you're only seeing the tip of the iceberg. We get cheap access to Claude because they want us playing with it so when it comes time for our employers to choose something we can all lobby for Anthropic.
>when it comes time for our employers to choose something we can all lobby for Anthropic
They should stop messing with us then. Stealth model changes, threatening to take code away on the $20 plan, the list goes on.
I guess I’m not surprised that if one “added up all the major AI valuations,” it’s more than any single consumer purchase or even most single companies.
Did you add Google, Meta, Apple, and Amazon in that, because more people consume from these firms than from Ford?
His neighbour isn't spending $60,000 on all of those together
Count the Fords on the street.
Now count the Amazon deliveries in a year on said same street. And next year, and the year after, and.. however long one keeps a Ford these days..
It's quite a scary thought exercise.
At 20-year depreciation it's $250 a month - close to Anthropic's $200 plan. IMHO, at this point a lot of developers would rather walk than code manually.
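A back-of-envelope sketch of that comparison, assuming straight-line depreciation and ignoring fuel, insurance, and financing on the car side (all assumptions mine):

    # Rough comparison: car depreciation vs. an AI subscription.
    # Assumptions are illustrative: straight-line depreciation,
    # no fuel, insurance, or financing costs.
    car_price = 60_000                     # what the neighbor paid Ford
    years = 20                             # assumed ownership period
    car_monthly = car_price / (years * 12)
    claude_monthly = 200                   # the $200/month plan
    print(f"car: ${car_monthly:.0f}/mo vs Claude: ${claude_monthly}/mo")
    # -> car: $250/mo vs Claude: $200/mo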
Yeah, but $200 a month is not a sustainable price.
Cable TV begs to differ. I grew up working poor and plenty of people around me dumped a lot of money into cable TV subscriptions, and $120 back in the late 90s is $240 now.
Computer costs keep collapsing. Image and audio generation has turned out to be less compute-intensive than text (lol).
First company to launch 24/7 customized streaming AI slop wins!
Seems they are growing and the model is overloaded. I suspect they'll raise the prices.
$1k for a lot of developers here is totally worth it.
The valuations on AI companies are a bet on them capturing enough of the $60 trillion annual wages paid to people to have a good ROI.
I'm not sure exactly what kind of point you are making, but the valuations are at least nominally based on the expected value of the business far into the future, and aren't comparable to, say, purchases made over a year, despite both being denominated in dollars.
Stocks vs Flows! You can't compare (as in subtract and check sign) $ and $/s!
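To make the two commensurable you have to discount the flow into a stock. A minimal sketch using a growing-perpetuity present value, PV = CF / (r - g); every number below is an illustrative assumption, not an estimate for any actual company:

    # Convert a flow ($/yr) into a stock ($) so the comparison
    # is dimensionally valid. Illustrative numbers only.
    def perpetuity_pv(cash_flow: float, r: float, g: float) -> float:
        """Present value of a perpetuity growing at g, discounted at r."""
        assert r > g, "discount rate must exceed growth rate"
        return cash_flow / (r - g)

    # e.g. a hypothetical $5B/yr profit stream, 10% discount, 4% growth:
    print(f"${perpetuity_pv(5e9, 0.10, 0.04) / 1e9:.0f}B")  # -> $83B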
I consider them competitors… This reminds me of Microsoft in 1997 investing $150 million in Apple, saving it from near bankruptcy
Google has multiple businesses, and Gemini isn't the largest one.
Anthropic is the anchor external customer of TPUs, and Nvidia is worth more than all of Google. If TPUs actually break out as a viable alternative over the next few years for multiple clients, the business could easily be worth as much as search, maybe more.
Google Cloud also needs to be able to offer Anthropic models on Vertex, otherwise they just won't be competitive.
Microsoft is in the same boat with Azure.
> If tpu's actually breakout as a viable alternative over the next few years
Why haven't they broken out yet, I wonder, if they're more efficient for inference and LLM costs are now weighted towards inference over training?
You essentially have to run in Google Cloud to use them, and that probably limits their ability to break out. Anthropic might be doing this deal as a way to shore up its supply chain and the cost of both inference and training by leveraging Google's hardware and chip manufacturing expertise.
Several customers, like Citadel, run TPUs in their own datacenters (closer to exchanges).
Every TPU that's been made is in use and sold at a high margin; demand is not the issue.
There are literally not enough TPUs on earth for them to break out. Every TPU that's been made is in use, the spike in demand is recent, and Google has heavy competition for foundry space.
Possibly because they just haven't been able to manufacture enough of them yet to be a viable business to others? They're fighting everyone else for foundry space and time.
If I remember correctly, Microsoft allegedly did that for the very selfish reason of looking less like a monopoly.
Of course this is well known. Everything Microsoft does is for selfish capitalist reasons and everything Apple does is for altruistic philanthropic reasons.
They’re publicly traded for-profit companies, selfishness is literally the definition of both of them and it’s the farthest thing from a secret.
Rather than for the altruistic reason of saving a struggling fellow company?
> Microsoft in 1997 investing $150 million in Apple, saving it from near bankruptcy.
If only Apple could pass the favor forward. But no, they can't be bothered to invest even a single million in Asahi Linux to benefit their own hardware.
Google is right (I think) to invest in winning compute share from Nvidia over winning token share from other frontier model builders.
They already had a non trivial stake in Anthropic though?
They are, but Google Vertex has been one of the official ways to use Claude since forever.
It just keeps the lights on for the whole industry.
The tech is great but valuations are out of control. It's cheaper to keep valuations high through these circular financing deals, rather than to allow for any deflation.
Anthropic's erratic behavior is going to get Google regulated. This is "don't rock the boat" money. Google existentially needs AI for advertising.
That was precisely my thought on seeing the news. I did not know about Google's existing entanglements with Anthropic, but it seemed like a clear message - do not panic on the money, do the work.
"Do not panic on the money, do the work." - sorry what do you mean by that?
> Google existentially needs AI for advertising.
What's the explanation behind this? I am sure they use AI in their ad network (matching web sites with ad offerings, maybe generating ads automatically), but is there more to it?
I know AI companies are selling ad training into the models so the models know about your product. I'm not sure if that is what they were referring to, but it could be related.
Anyone else have an increasing feeling that all the AI hype is turning into a "Dot-Com Bubble x 2008 Credit Default Swaps" collab?
I feel the same until I’m reminded I’m paying Anthropic $100 every month for something that’s indispensable to me now and would probably pay a lot more. Very inelastic demand as long as competition is low at the frontier.
I pay TMobile $100 a month but they aren't worth a trillion dollars.
TMobile is effectively a monopolist in many US regions.
Are you paying that, or is your work paying for it?
If you’re using it for personal work, why is $100 worth it?
$100/month isn't much for developer tooling. If you add up how much I spend on hardware upgrades, other SaaS products like backup services, software licenses, and other things it's easy to justify $100/month for a powerful tool.
I pay for my own AI provider subscriptions because keeping work and personal strictly separated is important for me. I do know some people who secretly pay $200/month for Claude and use it at their job even though it's not approved. I do not recommend doing that, but it shows that some people value this for their work.
For developers earning more than $10K per month, spending less than 1% of salary on tooling to make the job easier is easy to justify.
It's an actual bubble specific to AI. This investment is just another example of the bubble. Pre-2008, all the investment would be coming from banks. Post-2008, all the investment came from VCs... but VCs got tapped out, so AI companies went to bigger private capital. They tapped out all the private capital. So now they're making the rounds, making deals with any corporations left with tens/hundreds of billions in cash, because they're the only possible investors left. When all of them are tapped out, and without a release of pressure from the hardware market, the only investor left will be the government. After that it's kaplooie.
You'll notice that all the really big deals have fallen through, because they're based on promises and meeting objectives that can't be met. So it's likely that there will be really big writeoffs but not a huge implosion like 2001/2008. The real losers will be the retail investors who put all their money in a handful of stocks at ridiculous valuations.
Which big deals have fallen through?
"Nvidia’s $100 billion OpenAI deal has seemingly vanished" https://arstechnica.com/information-technology/2026/02/five-...
"Disney cancels $1B deal with OpenAI after video platform Sora is shut down: 'The future is human'" https://finance.yahoo.com/sectors/technology/articles/disney...
And if I recall correctly, the AI datacenter deal isn't doing Oracle stock any favours.
I think a lot of people suspect that, but no one is able to help themselves. Manias are a feature/bug of humanity.
x oil shock (due to Hormuz).
My opinion about this is that Google sees it as a way to weaken OpenAI, with a few other side benefits, including the option to acquire Anthropic.
And it may very well be bad news for OpenAI.
OpenAI was created to counter the threat of Google controlling a possible AGI. What if we still end up in the same state in the end? Both Anthropic and OpenAI have abandoned any pretense of altruism at this point and find themselves overwhelmed by the forces of capitalism.
> including the option to acquire Anthropic.
I have a feeling that Dario is not the type of man who would want to be acquired and then have Google's CEO telling him what to do.
It'd be funny if Google offered 750m in stock + cash just to see what happened... :D
The drama on HN alone would last for days. Twitter would implode in on itself.
That boat has sailed. Not even Google has the cash to buy a company valued at almost a trillion dollars.
Maybe, I think there is a lot of uncertainty about valuations of AI labs in the near to medium future.
OpenAI crashing would be good news and bad news for Anthropic investors.
Valued at a trillion by basically, no one who would actually invest anywhere close to that
You don't have to buy companies with cash.
>> $10 billion now ... another $30 billion to follow if Anthropic hits certain performance targets...
Hopefully this money means more compute infrastructure to help Anthropic counter the efficiency changes that have created this perceived downtrend in Claude quality.
The puzzling thing is why Google would try to help with that. Aren't they competitors? Wouldn't they want their competitor to have problems?
It's more understandable for Amazon or Microsoft to make such an investment, because they're not as competitive in the model space.
There's always three:
And then hopefully some open source models save us from this nightmare before China commoditises everything. Edit: I forgot Amazon. Who knows what they will do. They're the wildcard anyway.
OpenAI buying Microsoft.. I honestly think I'd like to see that.
Anything to invigorate the desktop.
Microsoft buying OpenAI.. 10 minutes later it's rebranded Copilot.. and.. nothing much changes in the world. Oh, except all the AI improvements are around Enterprise governance.
Google owned 14ish percent of Anthropic before this investment, so presumably this could bring it up to as much as 25%?
DeepMind is heavily using Claude. This could help secure computing power.
I'm not up to date, I think. How so?
Google was already an investor in Anthropic but I don’t think they are truly competitors in this space.
What if Google can't compete? They don't want to be left behind and all this money being throw around is just nonsense anyway.
> the efficiency changes that have created this perceived downtrend in Claude quality
Why the euphemism? What Anthropic did was an aggressive degradation of their model to save compute, and it's not just “perceived downtrend”, Anthropic themselves have acknowledged the quality of service degradation.
At this point if you have cash or compute credits laying around in the tens of billions, better to hedge your bets than to find out the winner that took all was not you.
Unless none of the current crop of AI companies is "the winner," either because a newcomer appears or the craze fizzles… in which case having $40B in the bank seems superior.
Weren't there reports of Anthropic's stock trading on secondary markets at a $1T valuation recently? Now Google invests at a $350B valuation. I get that valuations are often just smoke and mirrors, but this seems like a pretty big disconnect. What's going on there?
Amazon and Google get discounts because they bring more than just cash and help solve a very immediate problem for Anthropic
Great position to be in if you're Amazon and Google
There's always backroom negotiations going on with investments like these. Private valuations are normally hyped-up, and with the current batch of AI companies, 100x so.
I assume Anthropic said something like "We'll give you 3% of our company for $30B, since we're valued at $1T now! So cheap!", and Google immediately came back with "Hell no. We'll give you even more, $40B... but it's for 11% of the company. Take it or leave it." With all the issues they're having, what leverage does Anthropic have at that point?
Basically, Google made them an offer they couldn't refuse.
A $10B insurance policy on Google's business sounds like a bargain?
And with cashback through GCP usage!
$40B. Numbers mean nothing anymore
Yup. You can actually buy several European airlines with that kind of money.
For example, you can buy Air France-KLM for less than $3B.
It is a profitable business that does $30B in sales and $1B in profit (and has been profitable for the past 4-5 years).
It has $40B in liabilities.
[PDF] https://www.airfranceklm.com/sites/default/files/2026-02/202...
Airlines are down there amongst cinema chains and video game retail stores in terms of being terrible businesses
"$30B in sales and $1B in profit."
This margin seems terrible.
4% seems reasonable; it's pretty much standard across the board in Europe (the median sits around 6% if I recall correctly), and not many companies can pull a 10% profit. For example, in Spain, major conglomerates like INDITEX have 11% and Iberdrola has 10%. We also don't use the same metrics and parameters as the US for profit, so the values are skewed.
That said, certain sectors like software (as in custom, enterprise-grade software dev) pull margins that are much higher, sitting around 35%, but it's not that common.
Yes, and it's incredibly wasteful.
yep, you know what's better than billions? trillions.
In the last couple of weeks, seeing all the announcements of new models by OAI, Anthropic, and Chinese companies, I was wondering if Google had something up their sleeve, but this news suggests otherwise.
They just announced their new chip, and they are the ones who created transformers, yet they're investing this amount in a competitor?
I don’t know what to make of it
I wonder if Google regrets publishing that article on transformers.
Urs used to talk (internally) about not publishing "industry-enabling papers" which is why most Google infrastructure papers were describing something that had already been turned off, or was already in the process of being replaced by the next system (GFS, Vitess, etc). The things that did get published were either considered not key advantages, that other companies simply cannot do, things that other companies wouldn't bother doing, or experiments that never worked at all. There were exceptions of course. But it led to a public perception of the Google stack involving mostly technologies that were long dead or were never adopted.
"Attention Is All You Need" was a very very different thing and I also wonder if they are glad they published it. But I imagine if they hadn't, the motivation for researchers to leave Google would have been even larger.
So Google allowed publishing the Attention paper because they didn't understand its value.
They patented it. When the dumb money stops sloshing around, we'll start to see the fallout from that.
Why do you think Google considers Anthropic a competitor?
Given that anthropic is probably paying it all back to them in compute bills, they may not be giving them anything.
It makes every bit as much sense as investing in Snap while still operating their own social network product. Seems to have worked out fine (for Google, not Snap).
So, $40B in Google Cloud credits in return for a % of equity.
Didn't Amazon AWS do the same recently?
Anthropic takes $5B from Amazon and pledges $100B in cloud spending in return
https://news.ycombinator.com/item?id=47848276
Google seems to own a bit of everyone.
you might even say they own the whole alphabet at this point
Is Anthropic really that good when you've got DeepSeek V4, which has a fraction of the cost and works just as well?
I think their CLI is still leading, for some reason.
Not sure if it’s going to be good enough to replace IDEs with neatly integrated superior models.
My take is Anthropic needs a large cash infusion since it's one of the popular model providers.
If it runs out of cash - then it's bad for the whole industry.
Same as OpenAI. So all players will provide cash & compute to keep them going.
They need compute
> if it runs out of cash - then it's bad for the whole industry
Why? I don’t think we would suffer if anthropic disappeared tomorrow
If Anthropic disappeared tomorrow due to running out of cash it would cause a great panic, no?
Google, Microsoft, Oracle, Meta, Nvidia: all their stock gains in the last two or so years were because of the AI hype. And who knows how much money they borrowed and what promises they made on the assumption that their stock will continue to rise at the same pace for years to come. When one domino falls, they will follow. So they have every incentive to keep the music going for one of their "friends".
Regardless of whether this is "vendor financing" or "circular financing", the history books are riddled with this sort of stuff ending very badly.
It’s concerning that the only thing that seems to be keeping the AI bubble inflated at this point is money from the folks selling things to AI companies. That’s very much not a good sign no matter how you spin it.
I’m a fan of AI and there’s clearly value to it… however that value seems completely out of whack with the money pumping into the ecosystem and at some point such irrational behaviors break.
It’s pretty wild how badly Altman siding with Hegseth has backfired. (And how competently Dario has played his hand.)
I don’t think that’s the ultimate cause of the turnaround in fortunes. But it strikes me, at least from the investor and potentially urban-consumer perspectives, as a pivotal moment in both companies’ fortunes.
What backfired?
Anthropic's recent rise has little to nothing to do with retail subscribers; it is Claude Code with Opus 4.5+, followed by their Mythos stunt.
I would say the flood of $20 Claude subscribers due to the news cycle backfired on them: now everyone is getting worse outputs, and it exposed their shortage of compute, which they can't fix anytime soon.
Pretty much everyone I know has both CC and Codex now, just because of how unreliable CC has become.
> would say the flood of $20 Claude subscribers due to the news cycle backfired
This is a good hypothesis. I suspect we are both correct.
The PR boost from Anthropic standing its ground drove signups. That, in turn, drove investors. But the users also drove utilization, which degraded quality across the board.
My hypothesis rests on Anthropic’s user mix having significantly shifted to consumers (versus enterprise) after the mix-up. Whenever we get public numbers it would be interesting to test that.
> What backfired?
I think it was psychological to a degree. For many consumers OpenAI, or at least ChatGPT was AI. The controversy was enough for folks to be introduced to competitors in the AI space and suddenly OpenAI's success felt a lot less inevitable.
I agree with OP though that this won't actually be the cause of OpenAI's downfall, should it happen. But I still think it's an interesting inflection point.
> introduced to competitors in the AI space and suddenly OpenAI's success felt a lot less inevitable.
This is true. OpenAI WAS the story of AI, now it is just 50% of it, at max. Losing the monopoly of imagination towards AGI is bad for them.
One thing I don't agree with, though: consumers aren't the important part of AI; they are a liability.
AI is too expensive, consumers can't pay for it. Instead they will compete with enterprise for the same tokens, with less money.
> controversy was enough for folks to be introduced to competitors
This is my suspicion. Consumers hadn’t previously heard of Anthropic and Claude. Now they had, particularly in cities.
> this won't actually be the cause of OpenAI's downfall, should it happen. But I still think it's an interesting inflection point
Also agree. Hence why I said “I don’t think” the fight is “the ultimate cause.”
Anecdotally a whole lot more people around me started using Anthropic models in the last few weeks and seem to like them more than OpenAI. For many of these people it was the second provider they ever used.
Of course this is part of what has led to the insane demand and outages they've experienced since then.
I use both CC and Codex because one is not enough and 5x for $100 is too much.
>> followed by their Mythos stunt
"Stunt", eh?
Alphabet makes $30 billion profit per quarter.
> Alphabet makes $30 billion profit per quarter
Sure. Neither OpenAI nor Anthropic does. Amazon and Google have followed institutional investors in bidding up Anthropic over OpenAI in private markets, all of which, I suspect, followed user-pattern shifts after the fiasco. (Well, fiascos. Altman is a host unto himself.)
Which means they can allow themselves to blast money left and right? It's still a big investment.
they can't allow themselves NOT to blast money left and right
Yes
No, they have a fiduciary duty to shareholders to not make obviously bad investments.
"(And how competently Dario has played his hand.)"
lol, he's barely done anything, but sometimes that's all that's necessary when a bozo opponent is hell-bent on screwing things up. He didn't get fired the first time for no reason.
Is the simpler explanation that Alphabet was already an investor and Anthropic has been making strides in its business model?
> Is the simpler explanation that Alphabet was already an investor
Individually, yes. But Anthropic surging in private markets the weekend after the supply-chain risk designation, raising from not only Google but also Amazon in such quick succession (following credible reports of it turning down $800+ billion valuation cheques from financial investors), all while OpenAI gets pilloried in the press and struggles to hold its $800bn valuation in private markets, collectively paints, to me, a bigger picture.
Please share how OpenAI is struggling in the private markets.
There is more supply than demand at valuations flat to OpenAI's recent raise. That's simply not the case for Anthropic, at its last raise or at comparable valuations.
Citation? Were you working on the deal?
Can’t speak to citations, unfortunately, but if you have a banker or broker with secondary flow right now, ask them which they can get you more of and at what valuation: OpenAI or Anthropic.
It was enough for me to dig much deeper into OpenAI, where before we almost exclusively used them for services with any form of SLA.
You're saying it was a turning point for you to get more embedded with them? Way to be killer robot positive, I guess...
Good call out because I was a little unclear.
Opposite of what you said. The "dig" was not retrenching to more use, but rather I evaluated what I saw them doing and have migrated our company to much better options.
"The Alphabet subsidiary is committing to invest $10 billion now, at a $350 billion valuation for Anthropic, with another $30 billion to follow if Anthropic hits certain performance targets, according to Anthropic."
this is insane. on the secondary market the valuation is 2-3x that. what gives?
Anthropic raised $30 billion at a $350 billion valuation (pre-money) in February.
Google's deal from prior rounds likely lets them buy in at the same valuation other investors get every round, so they're just getting the February valuation.
Amazon did almost the same thing last week, at the same valuation.
Google is giving them something that's a lot scarcer to them than dollars: large volumes of chips, quickly.
If you gave Anthropic $10B in cash, they couldn't get chips at scale in the 0-6 month timeframe. Anthropic is suffering reputational damage due to choices they have to make around capacity constraints.
Google, AWS, and Azure are the only people who can help them, so they hold the cards; thus the good terms.
Top of the book? Nobody on the secondary market is investing $30bn
> Nobody on the secondary market is investing $30bn
Correct. But I think $5 to $10bn are sitting ready at a $700 to $800bn valuation, which strongly implies Google is getting a solid deal on this.
The GOOG and AMZN deals announced earlier this week would be considered part of the same Feb '26 round, i.e. they would have the same seniority rights as that round.
It is not uncommon to keep a round open for a bit after the formal announcement, so that a few investors who could not close for whatever reason can still be part of it. It can be hard to line up everyone at the same time, especially when they are public companies.
---
Specific to your point on why the valuation can be lower than the market at the same time: goods (and stocks), while they feel homogeneous, divisible, and fungible, are not. Size has a value of its own.
A block of 10% of the shares may be worth more (or less) than the unit share price implies, because the shares being available together is a property of its own, making the block either more desirable when someone wants to acquire, or harder to sell because there is not enough demand if all of them get dumped at the same time [1].
In this deal's terms, just because a few tens of millions are trading at an $850B valuation, or some investors can put in, say, $1-2B, doesn't mean you can raise $40B at the same valuation.
There isn't depth in the market to raise $65B (including the AMZN deal) at an $850B valuation. There is always some demand at any price point on the demand-supply curve; you will probably find a few people who will buy a few shares at $10T, or $100T, or some ridiculous number, but that doesn't mean you can raise a large round at that.
Strictly speaking it is not even $350B per se, i.e. Google and AWS benefit from this as vendors. It is very much like vendor financing with convertible debt. Meaning it is worth that much to them, but not to you and me, because we are not getting some of the money back as sales that boost our own stock.
---
[1] In the same vein, the price can also depend on what you are getting in return; hard immediate dollars are the highest value. However, if you are getting shares in return, you can usually negotiate a premium depending on the risk of the shares you are getting.
The recent SpaceX-Cursor deal is a good example: any founder would likely take, say, a $10B all-cash offer over the $60B from SpaceX, or the price would be closer to cash if it were GOOG, AMZN, or AAPL shares instead - a proven, deeply liquid market, etc.
That's the last round they raised at. They had other offers from VCs at ~$850B that they rejected. Seems like this may have been in the works since that last round was being raised, and they just finished the paperwork?
I wonder what happens to "Gemini Enterprise". Will it do a Google Plus or a Google Wave?
Gemini seems more tailored towards information retrieval and product integration (including Android and even iOS via Apple's deal).
Google may reckon they can't (yet) reconcile their vision of Gemini with the raw coding performance of Claude and Codex.
It's a little weird. I work for Google, but I spend way more time helping get Anthropic serving and running than anything to do with Gemini.
That's b/c the people working on Gemini serving are in GDM.
This is a good strategy. Internal competition between Gemini and GCP.
> Google is committing $10 billion now in cash at a $350 billion valuation and will invest a further $30 billion if Anthropic meets performance targets, the report said.
How much of this goes back to Google as cloud spend?
Google investing $40bn in a company that competes directly with Gemini is one of those moves that only makes sense if you think of it as buying compute customers, not backing a competitor. Anthropic pays Google for TPUs and Cloud services, a big chunk of this investment surely has to flow right back to Google.
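A toy sketch of why the headline number can overstate Google's true cash exposure; every parameter below is my own assumption, not a deal term:

    # Toy model of the vendor-financing round trip.
    # All numbers are illustrative assumptions, not deal terms.
    investment = 10e9         # cash invested in Anthropic
    spend_back_ratio = 0.8    # assumed fraction returning as cloud spend
    gross_margin = 0.4        # assumed margin on that cloud revenue
    cloud_profit = investment * spend_back_ratio * gross_margin
    net_cash_out = investment - cloud_profit
    print(f"net cash outlay: ${net_cash_out / 1e9:.1f}B of ${investment / 1e9:.0f}B")
    # -> net cash outlay: $6.8B of $10B, and Google keeps the equity stake

Under these (made-up) parameters, the higher the spend-back ratio and margin, the closer the deal gets to buying a customer with its own future profits.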
Cool. Will they use their balance sheets to pour in all of this cash, or are they going to bring the banking system to its knees so that we bail out everyone again?
I find it crazy that Google considers Anthropic to be worth almost 10% of Google itself (the $350B valuation mentioned in the article). Anthropic gets traction but has no moat, no infrastructure, and a relatively small team working for it. I feel that for $40B you can get a lot of very smart people and a lot of very good hardware to outcompete it.
The moat is the tool itself. You understand this after you start using it.
> You understand this after you start using it.
It's just amazing that people talk about Anthropic and have never used it.
> I feel for 40B you can get a lot of very smart people and a lot of very good hardware to outcompete it
Nah, see Meta
25% ;)
No, Google's market cap is $4.1T, over 10 times $350B.
10B at their valuation from last November is an absolutely killer deal. If Anthropic had sufficient compute supply they could raise at 2x easily if not 3x.
They need it to fend off Crabby Rathbun from watching YouTube videos and commenting. The paperclip race is on, and we must win it!
Anthropic, meanwhile, is spending hundreds of millions buying customer commitments from PE firms to inflate that DAU number. They now have a larger war chest to spend on artificial user acquisition to further inflate that value for future funding rounds.