It's interesting that Amazon don't appear interested in acquiring Anthropic, which would have seemed like somewhat of a natural fit given that they are already partnered, Anthropic have apparently optimized (or at least adapted) for Trainium, and Amazon don't have their own frontier model.
It seems that Amazon are playing this much like Microsoft - seeing themselves as more of a cloud provider, happy to serve anyone's models, and perhaps only putting a moderate effort into building their own models (which they'll be happy to serve to those who want that capability/price point).
I don't see the pure "AI" plays like OpenAI and Anthropic able to survive as independent companies when they are competing against the likes of Google, and with Microsoft and Amazon happy to serve whatever future model comes along.
LOL of course they don't want to own Anthropic, else they themselves would be responsible for coming up with the $10s of billions in Monopoly money that Anthropic has committed to pay AMZN for compute in the next few years. Better to take an impressive looking stake and leave some other idiot holding the buck.
Amazon also uses Claude under the hood for their "Rufus" shopping search assistant which is all over amazon.com.
It's kind of funny, you can ask Rufus for stuff like "write a hello world in python for me" and then it will do it and also recommend some python books.
> It's kind of funny, you can ask Rufus for stuff like "write a hello world in python for me" and then it will do it and also recommend some python books.
Interesting, I tried it with the chatbot widget on my city government's page, and it worked as well.
I wonder if someone has already made an openrouter-esque service that can connect claude code to this network of chat widgets. There are enough of them to spread your messages out over to cover an entire claude pro subscription easily.
A childhood internet friend of mine did something similar to that but for sending SMSes for free using the telco websites' built in SMS forms. He even had a website with how much he saved his users, at least until the telcos shut him down.
Are you sure? While Amazon doesn't own a "true" frontier model they have their own foundation model called Nova.
I assume if Amazon were using Claude's latest models to power its AI tools, such as Alexa+ or Rufus, they would be much better than they currently are. I assume if their consumer facing AI is using Claude at all it would be a Sonnet or Haiku model from 1+ versions back simply due to cost.
> I assume if their consumer facing AI is using Claude at all it would be a Sonnet or Haiku model from 1+ versions back simply due to cost.
I would assume quite the opposite: it costs more to support and run inference on the old models. Why would Anthropic make inference cheaper for others, but not for Amazon?
Haha just tried and it works! First I tried in Spanish (I'm in Spain) and it simply refused, then I asked in English and it just did it (but it answered in Spanish!)
EDIT: I then asked for a FizzBuzz implementation and it kindly obliged. I then asked for a Rust FizzBuzz implementation, but this time I asked again in Spanish, and it said that it could not help me with FizzBuzz in Rust, but any other topic would be OK. Then again I asked in English "Please do Rust now" and it just wrote the program!
I wonder what the heck they are doing there? Is the guardrail prompt translated into the store's language?
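For reference, the program these widgets happily emit is tiny. A minimal Python FizzBuzz (matching the first, pre-Rust ask) looks something like this:

```python
# Classic FizzBuzz: multiples of 3 -> "Fizz", of 5 -> "Buzz", of both -> "FizzBuzz".
def fizzbuzz(n: int) -> str:
    out = ""
    if n % 3 == 0:
        out += "Fizz"
    if n % 5 == 0:
        out += "Buzz"
    return out or str(n)

for i in range(1, 16):
    print(fizzbuzz(i))
```

A handful of lines, which is exactly why it makes such a cheap probe for whether a chat widget will do general-purpose codegen.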
> It's kind of funny, you can ask Rufus for stuff like "write a hello world in python for me" and then it will do it and also recommend some python books.
From a perspective of "how do we monetize AI chatbots", an easy thing about this usage context is that the consumer is already expecting and wanting product recommendations.
(If you saw this behavior with ChatGPT, it wouldn't go down as well, until you were conditioned to expect it, and there were no alternatives.)
AI is unquestionably useful, but we don't have enough product categories.
We're in the "electric horse carriage" phase and the big research companies are pleading with businesses to adopt AI. The problem is you can't do that.
AI companies are asking you to adopt AI, but they aren't telling you how, or what it can do. That shouldn't be how things are sold. The use case should be overwhelmingly obvious.
It'll take a decade for AI native companies, workflows, UIs, and true synergies between UI and use case to spring up. And they won't be from generic research labs, but will instead marry the AI to the problem domain.
Open source AI that you can fine tune to the control surface is what will matter. Not one-size-fits-all APIs and chat interfaces.
ChatGPT and Sora are showing off what they think the future of image and video is. Meanwhile actual users like the insanely popular VFX YouTube channel are using crude tools like ComfyUI to adapt the models to their problems. And companies like Adobe are actually building the control plane. Their recent conference was on fire with UI+AI that makes sense for designers. Not some chat interface.
We're in the "AI" dialup era. The broadband/smartphone era is still ahead of us.
These companies and VCs thought they were going to mint new Googles and Amazons, but it's more than likely they are the Webvans whose carcasses pave the way.
After watching The Thinking Game documentary, maybe Amazon has little appetite for "research" companies that don't actually solve real world problems, like Deepseek did.
It's safe to assume that a company like Anthropic has been getting (and rejecting) a steady stream of acquisition offers, including from the likes of Amazon, from the moment they became prominent in the AI space.
I get the feeling Amazon wants to be the shovel seller for the AI rush rather than a frontier model lab.
There is no moat in being a frontier model developer. A week, a month, or a year later there will be an open-source alternative which is about 95% as good for most tasks people care about.
I don't know how much they are spending to be fair.
I am basing my observation on the noises they are making.
They did put out a model called Nova but they are not drumming it up at all.
The model page makes no claims of benchmarks or performance.
There are no signs of them poaching talent.
Their CEO has not been in the press singing the praises of AI, unlike every other big tech CEO.
Maybe they have a skunk-works team on it but something tells me they are waiting for the paint to dry.
They're likely just waiting out the eventual crash and waiting to buy at the resulting fire sale. Microsoft has done a very good job of investing in the space enough to see a potentially lucrative pay out while managing the risk enough to not be sunk if it doesn't pan out.
Why would you take on that burn rate when you can invest, get the investment back over time in cloud spend, and maybe make off like bandits when they IPO?
Sort of. You can do what Zuck did; give your shares more votes, so you stay in control. (He owns 13% of the shares, but more than 50% of the voting power.) That's less doable with an acquisition.
In one case your ownership is diluted by maybe 10%, and you keep full decision making power and everything else. In the other it is diluted by 100% and you are now an employee. They are very different outcomes.
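The dual-class arithmetic behind that is easy to sketch. The share counts below are made up to roughly reproduce the 13%-ownership / >50%-voting split; Meta's actual Class B shares do carry 10 votes each:

```python
# Toy dual-class structure: Class A = 1 vote/share, Class B = 10 votes/share.
class_a = 2_200  # hypothetical Class A shares (public)
class_b = 330    # hypothetical Class B shares (founder)

total_shares = class_a + class_b
founder_ownership = class_b / total_shares

total_votes = class_a * 1 + class_b * 10
founder_voting_power = (class_b * 10) / total_votes

print(f"ownership: {founder_ownership:.1%}, voting power: {founder_voting_power:.1%}")
# ownership: 13.0%, voting power: 60.0%
```

With a 10x vote multiplier, a ~13% economic stake is enough for outright voting control, which is exactly why an acquisition (where your shares convert or get bought out at one price per share) can't replicate the arrangement.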
I think Claude Code is the moat (though I definitely recognize it's a pretty shallow moat). I don't want to switch to Codex or whatever the Gemini CLI is, I like Claude Code and I've gotten used to how it works.
Again, I know that's a shallow moat - agents just aren't that complex from a pure code perspective, and there are already tools that you can use to proxy Claude Code's requests out to different models. But at least in my own experience there is a definite stickiness to Claude that I probably won't bother to overcome if your model is 1.1x better. I pay for Google Business or whatever it's called primarily to maintain my vanity email and I get some level of Gemini usage for free, and I barely touch it, even though I'm hearing good things about it.
(If anything I'm convincing myself to give Gemini a closer look, but I don't think that undermines my overarching (though slightly soft) point).
Google had PageRank, which gave them much better quality results (and they got users to stick with them by offering lots of free services (like gmail) that were better quality than existing paid services). The difference was night and day compared to the best other search engines at the time (WebCrawler was my go-to, then sometimes AltaVista). The quality difference between "foundation" models is nil. Even the huge models they run in datacenters are hardly better than local models you can run on a machine with 64GB+ of RAM (though faster of course). As Google grew it got better and better at giving you good results and fighting spam, while other search engines drowned in spam and were completely ruined by SEO.
Everything before PageRank was more like the Yellow Pages than a search engine as we know it today. Google also had a patent on it, so it's not like other people could simply copy it.
Google was also way more minimal (and therefore faster on slow connections) and it raised enough money to operate without ads for years (while its competitors were filled with them).
Not really comparable to today, when you have 3-4 products which are pretty much identical, all operating under a huge loss.
Is Claude Code even running at a marginal profit? (who knows)
Is the marginal profit large enough to pay for continued R&D to stay competitive (no)
Does Claude Code have a sustainable advantage over what Amazon, Microsoft and Google can do in this space using their incumbency advantage and actual profits and using their own infrastructure?
Assuming by "they" you mean current shareholders (who include Google, Amazon, and VCs), if they are selling at least in part, why would at least some of them not be willing to sell their entire stakes?
> They could make more money keeping control of the company and have control.
why exit now and become a stuffed AI driven animal when you can keep running this ship yourself, doing your dream job and getting all the woos and panties?
It is spending a lot of money to do the same thing (selling the shovels), and gaining maybe a bit bigger cut if the bubble doesn't burst too violently.
Anthropic is a $1T company in the making (by 2030), already raised their last round at ~$200B valuation. Do you really think Amazon can acquire them? They already invested a lot of money in them and probably own at least 20% of Anthropic, which was the smartest thing Jassy did in a while. Not to mention, if Adobe wasn't allowed to buy Figma, do you think Amazon will be allowed to buy Anthropic? No way it's going to be approved.
> I don't see the pure "AI" plays like OpenAI and Anthropic able to survive as independent companies when they are competing against the likes of Google, and with Microsoft and Amazon happy to serve whatever future model comes along.
One thing you're right about - Anthropic isn't surviving - it's thriving. Probably the fastest growing revenue in history.
> margins are either good or can soon become good.
Their margins are negative and every increase in usage results in more cost. They have a whole leaderboard of people who pay $20 a month and then use $60,000 of compute.
That site seems to date from the days before there were real usage limits on Claude Code. Note that none of the submissions are recent. As such, I think it's basically irrelevant - the general observation is that Claude Code will rate limit you long, long before you can pull off the usage depicted so it's unlikely you can be massively net-profit-negative on Claude Code.
Do you mind giving a bit more details in layman's terms about this assuming the $60k per subscriber isn't hyperbole? Is that the total cost of the latest training run amortized per existing subscriber plus the inference cost to serve that one subscriber?
If you tell me to click the link, I did, but backed out because I thought you'd actually be willing to break it down here instead. I could also ask Claude about it I guess.
It counted up the tokens that users on “unlimited” Max/Pro plans consumed through CC, and calculated what it would cost to buy that number of tokens through the API.
$60K in a month was unusual (and possibly exaggerated); amounts in the $Ks were not. For which people would pay $200 on their Max plan.
Since that bonanza period Anthropic seem to have reined things in, largely through (obnoxiously tight) weekly consumption limits for their subscription plans.
It’s a strange feeling to be talking about this as if it were ancient history, when it was only a few months ago… strange times.
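The way those leaderboard figures were computed is simple multiplication: tokens consumed times API list price. A sketch, with placeholder per-million-token prices (assumptions for illustration, not Anthropic's actual current rates):

```python
# API-equivalent cost of a month of heavy Claude Code usage.
# Prices are illustrative placeholders, USD per million tokens.
PRICE_PER_MTOK = {"input": 3.00, "output": 15.00}

def api_equivalent_cost(input_tokens: int, output_tokens: int) -> float:
    """What this token volume would cost if bought through the API."""
    return (input_tokens / 1e6 * PRICE_PER_MTOK["input"]
            + output_tokens / 1e6 * PRICE_PER_MTOK["output"])

# A hypothetical heavy agentic user: 2B input tokens, 100M output tokens/month.
cost = api_equivalent_cost(2_000_000_000, 100_000_000)
print(f"${cost:,.0f} of API-priced tokens vs a $200/month Max plan")
# $7,500 of API-priced tokens vs a $200/month Max plan
```

Agentic tools inflate the input side enormously (every turn re-sends large context), which is how flat-rate subscribers could rack up API-equivalent bills in the thousands.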
So they're now putting in aggressive caps, and the other two paths they have to close the gap are driving their cost per token way down and/or having the user pay many multiples of the current subscription price. It's not odd for a business to expect its costs to decrease substantially and its pricing power to increase, but even if the gap is "only" low thousands against $200, that's... significant. Thanks for the insight.
Yes, those using the tools use the tools, but I don't really see those developers absolutely outpacing the rest of developers who do it the old fashioned way still.
I think you're definitely right, for the moment. I've been forcing myself to use/learn the tools almost exclusively for the past 3-4 months and I was definitely not seeing any big wins early on, but improvement (of my skills and the tools) has been steady and positive, and right now I'd say I'm ahead of where I was the old-fashioned way, but on an uneven basis. Some things I'm probably still behind on, others I'm way ahead. My workflow is also evolving and my output is of higher quality (especially tests/docs). A year from now I'll be shocked if doing nearly anything without some kind of augmented tooling doesn't feel tremendously slow and/or low-quality.
I think inertia and determinism play roles here. If you invest months in learning an established programming language, it's not likely to change much during that time, nor in the months (and years) that follow. Your hard-earned knowledge is durable and easy to keep up to date.
In the AI coding and tooling space everything seems to be constantly changing: which models, what workflows, what tools are in favor are all in flux. My hesitancy to dive in and regularly include AI tooling in my own programming workflow is largely about that. I'd rather wait until the dust has settled some.
Of course they are. The two things aren’t contradictory at all, in fact one strongly implies the other. If AI is writing 90% of your code, that means the total contribution of a developer is 10× the code they would write without AI. This means you get way more value per developer, so why wouldn’t you keep hiring developers?
This idea that “AI writes 90% of our code” means you don’t need developers seems to spring from a belief that there is a fixed amount of software to produce, so if AI is doing 90% of it then you only need 10% of the developers. So far, the world’s appetite for software is insatiable and every time we get more productive, we use the same amount of effort to build more software than before.
The point at which Anthropic will stop hiring developers is when AI meets or exceeds the capabilities of the best human developers. Then they can just buy more servers instead of hiring developers. But nobody is claiming AI is capable of that so far, so of course they are going to capitalise on their productivity gains by hiring more developers.
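The 90%-to-10x arithmetic above can be made explicit. Assuming human effort per developer stays constant while AI writes a fraction f of the code:

```python
# If AI writes fraction f of the code while human effort stays constant,
# total output per developer scales by 1 / (1 - f).
def output_multiplier(ai_fraction: float) -> float:
    return 1.0 / (1.0 - ai_fraction)

print(round(output_multiplier(0.9), 2))   # 10.0: "AI writes 90%" -> 10x output
print(round(output_multiplier(0.5), 2))   # 2.0: 50% AI-written only doubles it
```

Note the multiplier is nonlinear in f: going from 90% to 99% AI-written is another 10x, which is why the "90% of code" claim and aggressive hiring are perfectly compatible.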
that’s not what he claimed, just to be clear. I’m too lazy to look up the full quote but not lazy enough to not comment this is A) out of context B) mis-phrased as to entirely misconstrue the already taken-out-of-context quote
>"I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code," Amodei said at a Council of Foreign Relations event on Monday.
>Amodei said software developers would still have a role to play in the near term. This is because humans will have to feed the AI models with design features and conditions, he said.
>"But on the other hand, I think that eventually all those little islands will get picked off by AI systems. And then, we will eventually reach the point where the AIs can do everything that humans can. And I think that will happen in every industry," Amodei said.
you’re once again cutting the quote short — after “all of the code” he has more to say that’s very important for understanding the context and avoiding this rage-bait BS we all love to engage in
edit: sorry you mostly included it paraphrased; it does a disservice (I understand it’s largely the media’s fault) to cut that full quote short though. I’m trying to specifically address someone claiming this person said 90% of developers would be replaced in a year over a year ago, which is beyond misleading
edit to put the full quote higher:
> "and in 12 months, we might be in a world where the ai is writing essentially all of the code. But the programmer still needs to specify what are the conditions of what you're doing. What is the overall design decision. How we collaborate with other code that has been written. How do we have some common sense with whether this is a secure design or an insecure design. So as long as there are these small pieces that a programmer has to do, then I think human productivity will actually be enhanced"
> "and in 12 months, we might be in a world where the ai is writing essentially all of the code. But the programmer still needs to specify what are the conditions of what you're doing. What is the overall design decision. How we collaborate with other code that has been written. How do we have some common sense with whether this is a secure design or an insecure design. So as long as there are these small pieces that a programmer has to do, then I think human productivity will actually be enhanced"
uh it proves the original comment I responded to is extremely misleading (which is my only point here); CEO did not say 90% of developers would be replaced, at all
I was thinking this is going to happen because last night I got an email about them fixing how they collect sales taxes. Having been part of a couple of IPO/acquisitions, I thought to myself: "Nobody cares about sales taxes until they need to IPO or sell."
Honestly these IPOs are likely to kill the market. Once the necessary disclosures are out, and the worst-case math people are assuming turns out to have been way more optimistic than the actual truth, the entire market is likely to crash, since the money is so spread out. So far there has been zero good news from an investment perspective out of LLM-centered companies, outside of what are ultimately just complex financially engineered investments.
If they get into the S&P 500 at a $300B market cap that puts them at #30, just behind Coca-Cola. They'll make up about half a percent of the index and then will have a ready supply of price-insensitive buyers in the form of everybody who puts their retirement fund into an index fund on autopilot.
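The half-a-percent figure follows from simple division. A sketch, with a ballpark assumption for the index's total float-adjusted cap (not live data):

```python
# Ballpark index-weight math; both figures are assumptions, not live data.
sp500_total_cap = 60e12   # assumed total float-adjusted S&P 500 cap, USD
anthropic_cap = 300e9     # hypothetical market cap at inclusion

weight = anthropic_cap / sp500_total_cap
print(f"index weight: {weight:.2%}")      # index weight: 0.50%

# Every indexed retirement dollar mechanically buys its slice:
monthly_contribution = 1_000.0
print(f"${monthly_contribution * weight:.2f} of each $1,000 contribution")
# $5.00 of each $1,000 contribution
```

That mechanical flow is what "price-insensitive buyers" means here: index funds buy the weight, whatever the price.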
Well, they'll hit the requirements for company size and country of domicile, but they aren't yet at the other requirements - profitability and a minimum of 12 months after an IPO - which they'd need before they have a chance of being added.
As to the size of the bump they'd get, there isn't a single rule of thumb, but larger-cap companies tend to get a smaller bump, which you'd expect. I've seen models estimate a 2-5% bump for large companies, 4-7% for mid-level, and 6-12% for "small" (under $20 billion market cap) companies.
The S&P 500 is a capitalization-weighted index, hence it is very price sensitive.
Everybody who puts their retirement fund into an index fund is buying the index fund without regard to the index fund's price (aka price insensitive). But the index fund itself is buying shares based on each company's relative performance, hence the index fund is price sensitive. That is evidenced by companies falling out of the S&P 500 and even failing.
>The goal of float adjustment is to adjust each company’s total shares outstanding for long-term, strategic shareholders, whose holdings are not considered to be available to the market.
The S&P 500 is inversely price sensitive, as a capitalization-weighted index. Normally you want to buy low and sell high. An S&P500 index fund buys more of high-priced stocks and sells the low-priced ones, by definition. The highest market caps are the stocks with the highest prices (adjusted for number of shares outstanding, of course).
For most ordinary investors, this doesn't really matter, because you put your money into your retirement fund every month and you only take it out at retirement. But if you're looking at the short term, it absolutely matters. I've heard S&P 500 indexing referred to as a momentum investment strategy: it buys stocks whose prices are going up, on the theory that they will go up more in the future. And there's an element of a self-fulfilling prophecy to that, since if everybody else is investing in the index fund, they also will be buying those same stocks, which will cause them to go up even more in the future.
If you want something that buys shares based on each company's relative performance, you want a fundamental-weighted index. I've looked into that and I found a few revenue-weighted index funds, but couldn't find a single earnings-weighted index fund, which is what I actually want. Recommendations wanted; IMHO the S&P 500 is way overvalued on fundamentals and heavily exposed to certain fairly bubbly stocks (the Mag-7 alone make up 35% of your index fund, and one of them is my employer, and all of them employ heavily in my geographic area and are pushing up my home value), so I've been looking for a way to diversify into companies that actually have solid earnings.
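The difference between cap weighting and the earnings weighting described above is easy to see on toy data (all company names and figures below are hypothetical):

```python
# Cap-weighted vs earnings-weighted portfolio weights on toy data.
companies = {
    # name: (market_cap_$B, annual_earnings_$B) -- hypothetical figures
    "HypeCo":   (3000, 20),   # huge cap, thin earnings
    "SteadyCo": (500, 40),
    "ValueCo":  (200, 25),
}

def weights(field: int) -> dict:
    """Normalize one tuple field (0 = cap, 1 = earnings) into portfolio weights."""
    total = sum(v[field] for v in companies.values())
    return {name: v[field] / total for name, v in companies.items()}

cap_w = weights(0)    # what a cap-weighted (S&P-style) fund holds
earn_w = weights(1)   # what an earnings-weighted fund would hold

for name in companies:
    print(f"{name}: cap {cap_w[name]:.1%}, earnings {earn_w[name]:.1%}")
```

The richly-priced HypeCo dominates the cap-weighted portfolio (~81%) but shrinks to ~24% under earnings weighting, which is precisely the exposure shift being asked for.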
While it is true that being added to the SP500 can lead to an increase in demand, and hence cause the index fund to pay more for the share, there are evidently opposing forces that modulate share prices for companies in the SP500.
>I've been looking for a way to diversify into companies that actually have solid earnings.
No one has more solid earnings than the top tech companies. Assuming you don't work for Tesla, you already are doing about the best you can in the US. Your options to diversify are to invest in other countries, develop your political connections, and possibly get into real estate development. Maybe have a bunch of kids.
I love Claude, but looking at Google it seems like it will just be a matter of time before Google/Gemini is a better product. Just look at how much Google has improved its AI game the last couple of months. I'm putting my money on Google; I assume the reason they are doing an IPO right now is to be able to cash in on the investment before Google surpasses them.
Opus 4.5 is good. At least in Cursor it’s much better than Gemini 3 Pro for writing a lot of code autonomously: faster and calls tools better.
That said Gemini is still very, very good at reviews, SQL, design and smaller (relatively) edits; but today it is not at all obvious that Google is going to win it all. They’re positioned very well, but execution needs to be top notch.
There have been multiple model generations now where Anthropic have proven that they're ahead of everyone with developing LLMs for coding - if anything the gap has broadened with Opus 4.5.
Just how much of the market do retail investors control? I thought they were a drop in the bucket.
Also, is there a way to know how much of the total volume of shares is being traded now? If I kept hyping my company (successfully) and drove the share price from $10 to $1000 thanks to retail hype, I could 100x the value of my company, let's say from $100m to $10B, while the amount of money actually changing hands would be minuscule in comparison.
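The mechanics in that hypothetical can be sketched with toy numbers (all hypothetical): market cap reprices every share at the marginal trade price, while only the traded sliver involves actual cash:

```python
# Market cap = shares outstanding x last trade price: the WHOLE float is
# repriced at the margin, even though only a sliver ever changes hands.
shares_outstanding = 10_000_000
price_before, price_after = 10.0, 1_000.0
daily_volume = 20_000                 # hypothetical shares traded per day

cap_before = shares_outstanding * price_before   # $100M
cap_after = shares_outstanding * price_after     # $10B

cash_traded_per_day = daily_volume * price_after # $20M actually changes hands
print(cap_after / cap_before)                    # 100.0 -> 100x "value" created
```

Here $20M/day of real trading supports $9.9B of paper gains, which is the gap the question is pointing at.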
When you add in money managed on behalf of retail investors it gets big fast - think index funds, pensions, etc. They are not immune, and ETFs by definition need to participate.
Retail has gotten a lot bigger lately (the last 10 years, and mostly since COVID) and a lot more "organized".
Goldman puts out their retail reports weekly, which show retail is 20% of trading in a lot of names, and higher in a lot of the meme-stock names.
They used to be tiny due to $50/trade fees, but with all the free money in the system since COVID, GenZ feeling like real estate won't be their path to freedom, options trading for retail, and zero-commission trading, retail has a real voice in the markets.
Retail is a big deal these days. Used to be sub 10%, now it’s in the 30-40% of daily volume range IIUC.
You can easily look up the numbers you are asking for, the TLDR is that the volume in most stocks is high enough that you can’t manipulate it much. If it’s even 2x overpriced then there’s 100m on the table for whoever spots this and shorts, ie enough money that plenty of smart people will be spending effort on modeling and valuation studies.
Index investors aren't exposed to IPOs, since the common indexes (SPX etc) don't include IPOs (and if you invest in a YOLO index that does, that's on you).
Also:
> The US led a sharp rebound, driven by a surge in IPO filings and strong post-listing returns following the Federal Reserve’s rate cut.
This isn't really true. IPOs provide access to much more money in a very short time frame. They also allow parties involved to make huge coin before, during and immediately after the process.
Whatever you think about AI, it is good that Anthropic is going public, and I'd argue it's consistent with their mission. It's better for the public to have a way to own a piece of the company.
In an interview Sam Altman said he preferred to stay away from an IPO, but the notion of the public having an interest in the company appealed to him. Actions speak louder than words, and so it is fitting from a mission standpoint that Anthropic may do it first.
I spend $0 on AI. My employer spends on it for me, but I have no idea how much nor how it compares to vast array of other SaaS my employer provides for me.
While I anecdotally know of many devs who do pay out of pocket for relatively expensive LLM services, they are a minority compared to folks like me who are happy to leech off free or employer-provided services.
I’m very excited to hopefully find out from public filings just how many individuals pay for Claude vs businesses.
Okay, let's see you guys get past the inference-cost disclosures. According to the WSJ, it's enough to kill the frontier-shop business model. It's one of the biggest things blocking OpenAI.
You did not parse that article properly. It regurgitates only what everyone else keeps saying: when you conflate R&D costs with operating costs, then you can say these companies are 'unprofitable'. I'd propose that with proper GAAP accounting they are profitable right now; by proper I mean that you amortize the costs of R&D against the useful life of the models as best you can.
I am not aware of any frontier inference disclosures that put margins at less than 60%. Inference is profitable across the industry, full stop.
Historically R&D has been profitable for the frontier labs -- this is obscured because the emphasis on scaling the last five years has meant they just keep 10xing their R&D compute budget. But for each cycle of R&D, the results have returned more in inference margin than they cost in training compute. This is one major reason we keep seeing more spend on R&D - so far it has paid, in the form of helping a number of companies hit > $1bn in annual revenue faster than almost any companies in history have done so.
All that said, be cautious shorting these stocks when they go public.
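The amortization framing above can be sketched with hypothetical numbers (training cost, model lifetime, revenue, and margin below are all assumptions for illustration, not any lab's actual figures):

```python
# Expensing a training run up front vs amortizing it over the model's
# useful life. All figures hypothetical.
training_cost = 1_000_000_000        # $1B training run
useful_life_months = 18              # assumed model lifetime
monthly_inference_revenue = 150_000_000
inference_gross_margin = 0.60        # in line with the >60% industry figure

monthly_amortization = training_cost / useful_life_months
monthly_inference_profit = monthly_inference_revenue * inference_gross_margin
monthly_operating_profit = monthly_inference_profit - monthly_amortization

print(f"amortization:      ${monthly_amortization:,.0f}/mo")
print(f"inference profit:  ${monthly_inference_profit:,.0f}/mo")
print(f"after amortization: ${monthly_operating_profit:,.0f}/mo")
```

Under these assumptions the model is operating-profitable once its training cost is spread over its life; expensing the $1B in the quarter it lands is what produces the headline losses. The whole argument, of course, turns on the assumed useful life.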
Yes to IPO you have to submit an S-1 form which requires the last 3 years of your full financials and much more. You can’t just IPO without disclosing how your business works and whether it makes or loses money and how much.
Inference costs aren't a problem; selling inference is almost certainly profitable. The problem is that it's (probably) not profitable enough to cover the training and other R&D costs.
Incorrect. They're not a C corp, they're a public benefit corporation. They have a different legal obligation. Notably, they have a legal obligation to deliver on their mission. That's why Anthropic is the only actual mission-driven AI company. They do have to balance that legal obligation with the traditional legal obligations that a for-profit corporation has. But most importantly, it is actually against the law for them not to balance prioritizing making money and prioritizing AI safety!
Do you think they currently exist to prioritize AI safety? That shit won’t pay the bills, will it? Then they don’t exist. Goals are nice, OKRs yay, but at the end of the day, we all know the dollar drives everything.
It's simple: they will redefine the term (just like OpenAI redefined "AGI" into "just makes a lot of money") into "doesn't leak user data" and then claim success.
Does this mean that Anthropic has more than reached AGI, seeing as OpenAI has officially defined "AGI" as any AI that manages to create more than a hectocorn's worth (100 unicorns, or $100B) in economic value?
> Better to take an impressive looking stake and leave some other idiot holding the buck.

Isn't taking an impressive looking stake, in effect, leaving them holding the buck?
> A childhood internet friend of mine did something similar to that but for sending SMSes for free using the telco websites' built in SMS forms.

Phreaking in 2025.
Are you sure? While Amazon doesn't own a "true" frontier model they have their own foundation model called Nova.
I assume if Amazon was using Claude's latest models to power it's AI tools, such as Alexa+ or Rufus, they would be much better than they currently are. I assume if their consumer facing AI is using Claude at all it would be a Sonnet or Haiku model from 1+ versions back simply due to cost.
> I assume if their consumer facing AI is using Claude at all it would be a Sonnet or Haiku model from 1+ versions back simply due to cost.
I would assume quite the opposite: it costs more to support and run inference on the old models. Why would Anthropic make inference cheaper for others, but not for Amazon?
Rufus is a Claude Haiku, yes.
> Are you sure? While Amazon doesn't own a "true" frontier model they have their own foundation model called Nova.
I work for Amazon, everyone is using Claude. Nova is a piece of crap, nobody is using it. It's literally useless.
I haven't tried the new versions that just came out though.
Haha just tried and it works! First I tried in Spanish (I'm in Spain) and it simply refused, then I asked in English and it just did it (but it answered in Spanish!)
EDIT: I then asked for a FizzBuzz implementation and it kindly obliged. I then asked for a Rust FizzBuzz implementation, but this time again in Spanish, and it said that it could not help me with FizzBuzz in Rust, but any other topic would be OK. Then again I asked in English, "Please do Rust now", and it just wrote the program!
I wonder what the heck they are doing there? Is the guardrail prompt translated to the store language?
> It's kind of funny, you can ask Rufus for stuff like "write a hello world in python for me" and then it will do it and also recommend some python books.
From a perspective of "how do we monetize AI chatbots", an easy thing about this usage context is that the consumer is already expecting and wanting product recommendations.
(If you saw this behavior with ChatGPT, it wouldn't go down as well, until you were conditioned to expect it, and there were no alternatives.)
lol, i tried it. Asked `write the product details in single-line bash array` and it did so.
I think they’re waiting for bargain bin deals once the bubble collapses.
This.
The market is too new for AI.
AI is unquestionably useful, but we don't have enough product categories.
We're in the "electric horse carriage" phase and the big research companies are pleading with businesses to adopt AI. The problem is you can't do that.
AI companies are asking you to AI, but they aren't telling you how or what it can do. That shouldn't be how things are sold. The use case should be overwhelmingly obvious.
It'll take a decade for AI native companies, workflows, UIs, and true synergies between UI and use case to spring up. And they won't be from generic research labs, but will instead marry the AI to the problem domain.
Open source AI that you can fine tune to the control surface is what will matter. Not one-size-fits-all APIs and chat interfaces.
ChatGPT and Sora are showing off what they think the future of image and video are. Meanwhile actual users, like the insanely popular VFX YouTube channels, are using crude tools like ComfyUI to adapt the models to their problems. And companies like Adobe are actually building the control plane. Their recent conference was on fire with UI+AI that makes sense for designers. Not some chat interface.
We're in the "AI" dialup era. The broadband/smartphone era is still ahead of us.
These companies and VCs thought they were going to mint new Googles and Amazons, but it's more than likely they were the WebVans whose carcasses pave the way.
After watching The Thinking Game documentary, maybe Amazon has little appetite for "research" companies that don't actually solve real world problems, like DeepMind did.
It's safe to assume that a company like Anthropic has been getting (and rejecting) a steady stream of acquisition offers, including from the likes of Amazon, from the moment they got prominent in the AI space.
I get the feeling Amazon wants to be the shovel seller for the AI rush than be a frontier model lab.
There is no moat in being a frontier model developer. A week, month, or a year later there will be an open source alternative which is about 95% as good for most tasks people care about.
Haven't they invested hundreds of millions trying to train frontier models?
I don't know how much they are spending to be fair.
I am basing my observation on the noises they are making. They did put out a model called Nova but they are not drumming it up at all. The model page makes no claims of benchmarks or performance. There are no signs of them poaching talent. Their CEO has not been in the press singing praises about AI unlike every big tech CEO.
Maybe they have a skunk-works team on it but something tells me they are waiting for the paint to dry.
They're likely just waiting out the eventual crash and waiting to buy at the resulting fire sale. Microsoft has done a very good job of investing in the space enough to see a potentially lucrative pay out while managing the risk enough to not be sunk if it doesn't pan out.
why would you take on that burn rate when you can invest, get the investment back over time in cloud spend, and maybe make off like bandits when they ipo
Something something selling shovels in a gold rush.
Maybe Anthropic simply don’t want to be acquired
You understand that doing an IPO is quite literally selling big chunks of yourself to the highest bidder, right?
Sort of. You can do what Zuck did; give your shares more votes, so you stay in control. (He owns 13% of the shares, but more than 50% of the voting power.) That's less doable with an acquisition.
In one case your ownership is diluted by maybe 10%, and you keep full decision making power and everything else. In the other it is diluted by 100% and you are now an employee. They are very different outcomes.
The current leadership retains power in an IPO. Is there a minimum size chunk one has to sell when IPO-ing? How do you know it will be big chunks?
uh... thats exactly why anthropic wouldnt want to be acquired? weird response to that comment IMO
Would have made a lot of sense a few years ago, but not now.
Why are you assuming Anthropic is for sale? They have a clear path to profitability, booming growth, and a massive and mission driven founding team.
They could make more money by keeping the company independent and retaining control.
> They have a clear path to profitability
I'd love to see evidence for such a thing, because it's not clear to me at all that this is the case.
I personally think they're the best of the model providers but not sure if any foundation model companies (pure play) have a path to profitability.
What do you mean by pure play? Claude code alone is 1B revenue. It's not just the API they make money on.
https://www.anthropic.com/news/anthropic-acquires-bun-as-cla...
But there's no moat around these models, they're all interchangeable and leapfrogging each other at a decent pace.
Gemini could get much better tomorrow and their entire customer base could switch without issue.
I think Claude Code is the moat (though I definitely recognize it's a pretty shallow moat). I don't want to switch to Codex or whatever the Gemini CLI is, I like Claude Code and I've gotten used to how it works.
Again, I know that's a shallow moat - agents just aren't that complex from a pure code perspective, and there are already tools that you can use to proxy Claude Code's requests out to different models. But at least in my own experience there is a definite stickiness to Claude that I probably won't bother to overcome if your model is 1.1x better. I pay for Google Business or whatever it's called primarily to maintain my vanity email and I get some level of Gemini usage for free, and I barely touch it, even though I'm hearing good things about it.
(If anything I'm convincing myself to give Gemini a closer look, but I don't think that undermines my overarching (though slightly soft) point).
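The "proxy Claude Code's requests out to different models" point can be made concrete. Below is a hypothetical sketch of the request translation such a routing proxy performs; the field names and model identifiers are illustrative placeholders, not any real provider's API.

```python
# Hypothetical sketch of model-routing: rewrite an agent's chat request
# so it targets a different backend. Field names are illustrative only.

def reroute(request: dict, target_model: str) -> dict:
    """Return a copy of a chat request pointed at another model."""
    out = dict(request)
    out["model"] = target_model
    # Providers disagree on where the system prompt lives; a real proxy
    # would translate such fields too. Here, hypothetically, backends
    # prefixed "other/" expect it as the first message instead.
    if "system" in out and target_model.startswith("other/"):
        msgs = [{"role": "system", "content": out.pop("system")}]
        out["messages"] = msgs + list(out.get("messages", []))
    return out

req = {"model": "claude-x", "system": "Be terse.",
       "messages": [{"role": "user", "content": "hi"}]}
print(reroute(req, "other/some-model")["model"])  # other/some-model
```

The translation layer is a few dozen lines, which is roughly why the moat is shallow: the agent loop and the prompt-format shims are cheap to replicate, and only the model behind them differs.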
And, if your revenue is $1B but your costs are $2B it only lasts until the music stops....
I don’t think they are losing money on inference.
Model training, sure. But that will slow down at some point.
What was the moat in search?
Google had PageRank, which gave them much better quality results (and they got users to stick with them by offering lots of free services (like gmail) that were better quality than existing paid services). The difference was night and day compared to the best other search engines at the time (WebCrawler was my goto, then sometimes AltaVista). The quality difference between "foundation" models is nil. Even the huge models they run in datacenters are hardly better than local models you can run on a machine 64gb+ ram (though faster of course). As Google grew it got better and better at giving you good results and fighting spam, while other search engines drowned in spam and were completely ruined by SEO.
PageRank, everything before PageRank was more like yellow pages than a search engine as we know it today. Google also had a patent on it, so it's not like other people could simply copy it.
Google was also way more minimal (and therefore faster on slow connections) and it raised enough money to operate without ads for years (while its competitors were filled with them).
Not really comparable to today, when you have 3-4 products which are pretty much identical, all operating under a huge loss.
The sheer amount of data and infrastructure Google has relative to their competitors.
Just having far more user search queries and click data gives them a huge advantage.
And the same question I always ask.
Are they profitable (no),
Is Claude Code even running at a marginal profit? (who knows)
Is the marginal profit large enough to pay for continued R&D to stay competitive (no)
Does Claude Code have a sustainable advantage over what Amazon, Microsoft and Google can do in this space using their incumbency advantage and actual profits and using their own infrastructure?
Which is not a lot at all compared to their cost and especially the valuation discussed here.
They are selling, to public equity investors, because they can get a better price that way than selling to another company!
Why are you assuming Anthropic is for sale?
They're preparing for IPO?
Assuming by "they" you mean current shareholders (who include Google and Amazon and VCs) if they are selling at least in part, why would at least some of them not be willing to sell their entire stakes?
> They could make more money keeping control of the company and have control.
It depends on how much they can sell for.
We’re not assuming anything, this whole post is about them doing an IPO…
signed D. Amodei lmao
why exit now and become a stuffed AI driven animal when you can keep running this ship yourself, doing your dream job and getting all the woos and panties?
Hence the need to cash out.
Amazon and Microsoft are protecting themselves from the bubble.
Yes, repackaging and reselling AI is a starkly better business than creating frontier models
I too would be sitting back and watching my competitors commit insane capital to this unlikely bet.
It is spending a lot of money to do the same thing (selling the shovels), and gaining maybe a bit bigger cut if the bubble doesn't burst too violently.
Anthropic is a $1T company in the making (by 2030), already raised their last round at ~$200B valuation. Do you really think Amazon can acquire them? They already invested a lot of money in them and probably own at least 20% of Anthropic, which was the smartest thing Jassy did in a while. Not to mention, if Adobe wasn't allowed to buy Figma, do you think Amazon will be allowed to buy Anthropic? No way it's going to be approved.
> I don't see the pure "AI" plays like OpenAI and Anthropic able to survive as independent companies when they are competing against the likes of Google, and with Microsoft and Amazon happy to serve whatever future model comes along.
One thing you're right about - Anthropic isn't surviving - it's thriving. Probably the fastest growing revenue in history.
Well, just to show you a microcosm of what happens when VCs find the bigger fool in the public market when they IPO money losing companies….
https://medium.com/@Arakunrin/the-post-ipo-performance-of-y-...
> One thing you're right about - Anthropic isn't surviving - it's thriving. Probably the fastest growing revenue in history.
Growing revenue and losing money is not “thriving”
Lol, no one would want to buy that trash.
Same w/ Perplexity.
That S1 is gonna make for a fun read. It'll make Adam Neumann blush.
Because of unprofitability? ARR and growth are very high, and margins are either good or can soon become good.
Is the claim that coding agents can't be profitable?
> margins are either good or can soon become good.
Their margins are negative and every increase in usage results in more cost. They have a whole leaderboard of people who pay $20 a month and then use $60,000 of compute.
https://www.viberank.app
That site seems to date from the days before there were real usage limits on Claude Code. Note that none of the submissions are recent. As such, I think it's basically irrelevant - the general observation is that Claude Code will rate limit you long, long before you can pull off the usage depicted so it's unlikely you can be massively net-profit-negative on Claude Code.
Do you mind giving a bit more details in layman's terms about this assuming the $60k per subscriber isn't hyperbole? Is that the total cost of the latest training run amortized per existing subscriber plus the inference cost to serve that one subscriber?
If you tell me to click the link, I did, but backed out because I thought you'd actually be willing to break it down here instead. I could also ask Claude about it I guess.
It counted up the tokens that users on “unlimited” Max/Pro plans consumed through CC, and calculated what it would cost to buy that number of tokens through the API.
$60K in a month was unusual (and possibly exaggerated); amounts in the $Ks were not. For which people would pay $200 on their Max plan.
Since that bonanza period Anthropic seem to have reined things in, largely through (obnoxiously tight) weekly consumption limits for their subscription plans.
It’s a strange feeling to be talking about this as if it were ancient history, when it was only a few months ago… strange times.
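The gap described above is easy to reproduce as a back-of-envelope calculation. The per-million-token prices below are hypothetical placeholders, not Anthropic's actual rates:

```python
# Back-of-envelope sketch of the subscription-vs-API gap.
# Prices are hypothetical placeholders, not real Anthropic rates.

def api_equivalent_cost(input_tokens, output_tokens,
                        usd_per_m_in=3.0, usd_per_m_out=15.0):
    """What this usage would cost if bought as API tokens."""
    return (input_tokens / 1e6) * usd_per_m_in \
         + (output_tokens / 1e6) * usd_per_m_out

# A heavy agentic-coding month: 1B input tokens, 50M output tokens.
cost = api_equivalent_cost(1_000_000_000, 50_000_000)
print(round(cost, 2))  # 3750.0 at these assumed rates, vs a $200 plan
```

Agentic tools inflate input tokens especially fast, since the growing conversation context is re-sent on every tool call, which is how a flat-rate subscriber can plausibly consume thousands of dollars of API-equivalent compute.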
So they're now putting in aggressive caps, and the other two paths they have to address the gap are to drive their cost of those tokens way down and/or have the user pay many multiples of their current subscription. That's not to say it's odd for any business to expect their costs to decrease substantially and their pricing power to increase, but even if the gap is "only" low thousands to $200, that's...significant. Thanks for the insight.
> margins are either good or can soon become good
This is always the pitch for money-losing IPOs. Occasionally, it is true.
let's see them then
that wework s1 was gold
Elevating the world's consciousness! https://www.wework.com/newsroom/wecompany
Dario Amodei gives off strong Adam Neumann vibes. He claimed "AI will replace 90% of developers within 6 months" about a year ago...
It was "writing 90% of the code", which seems to be pretty accurate, if not conservative, for those keeping up with the latest tools.
Yes, those using the tools use the tools, but I don't really see those developers absolutely outpacing the rest of developers who do it the old fashioned way still.
I think you're definitely right, for the moment. I've been forcing myself to use/learn the tools almost exclusively for the past 3-4 months and I was definitely not seeing any big wins early on, but improvement (of my skills and the tools) has been steady and positive, and right now I'd say I'm ahead of where I was the old-fashioned way, but on an uneven basis. Some things I'm probably still behind on, others I'm way ahead. My workflow is also evolving and my output is of higher quality (especially tests/docs). A year from now I'll be shocked if doing nearly anything without some kind of augmented tooling doesn't feel tremendously slow and/or low-quality.
it’s wild that engineers need months or years to properly learn programming languages but dismiss AI tooling after one bad interaction
I think inertia and determinism play roles here. If you invest months in learning an established programming language, it's not likely to change much during that time, nor in the months (and years) that follow. Your hard-earned knowledge is durable and easy to keep up to date.
In the AI coding and tooling space everything seems to be constantly changing: which models, what workflows, what tools are in favor are all in flux. My hesitancy to dive in and regularly include AI tooling in my own programming workflow is largely about that. I'd rather wait until the dust has settled some.
Motivated reasoning combined with incomplete truths is the perfect recipe for this.
I kind of get it, especially if you are stuck on some shitty enterprise AI offering from 2024.
But overall it’s rather silly and immature.
And 12 months later Anthropic is listing 200 open positions for humans: https://www.anthropic.com/jobs
Of course they are. The two things aren’t contradictory at all, in fact one strongly implies the other. If AI is writing 90% of your code, that means the total contribution of a developer is 10× the code they would write without AI. This means you get way more value per developer, so why wouldn’t you keep hiring developers?
This idea that “AI writes 90% of our code” means you don’t need developers seems to spring from a belief that there is a fixed amount of software to produce, so if AI is doing 90% of it then you only need 10% of the developers. So far, the world’s appetite for software is insatiable and every time we get more productive, we use the same amount of effort to build more software than before.
The point at which Anthropic will stop hiring developers is when AI meets or exceeds the capabilities of the best human developers. Then they can just buy more servers instead of hiring developers. But nobody is claiming AI is capable of that so far, so of course they are going to capitalise on their productivity gains by hiring more developers.
that’s not what he claimed, just to be clear. I’m too lazy to look up the full quote but not lazy enough to not comment this is A) out of context B) mis-phrased as to entirely misconstrue the already taken-out-of-context quote
I think it was also back in March, not a year ago
https://www.businessinsider.com/anthropic-ceo-ai-90-percent-... (March 2025):
>"I think we will be there in three to six months, where AI is writing 90% of the code. And then, in 12 months, we may be in a world where AI is writing essentially all of the code," Amodei said at a Council of Foreign Relations event on Monday.
>Amodei said software developers would still have a role to play in the near term. This is because humans will have to feed the AI models with design features and conditions, he said.
>"But on the other hand, I think that eventually all those little islands will get picked off by AI systems. And then, we will eventually reach the point where the AIs can do everything that humans can. And I think that will happen in every industry," Amodei said.
I think it's a silly and poorly defined claim.
you’re once again cutting the quote short — after “all of the code” he has more to say that’s very important for understanding the context and avoiding this rage-bait BS we all love to engage in
edit: sorry you mostly included it paraphrased; it does a disservice (I understand it’s largely the media’s fault) to cut that full quote short though. I’m trying to specifically address someone claiming this person said 90% of developers would be replaced in a year over a year ago, which is beyond misleading
edit to put the full quote higher:
> "and in 12 months, we might be in a world where the ai is writing essentially all of the code. But the programmer still needs to specify what are the conditions of what you're doing. What is the overall design decision. How we collaborate with other code that has been written. How do we have some common sense with whether this is a secure design or an insecure design. So as long as there are these small pieces that a programmer has to do, then I think human productivity will actually be enhanced"
can you post the full quote then? He has posted what the rest of us read
I believe:
> "and in 12 months, we might be in a world where the ai is writing essentially all of the code. But the programmer still needs to specify what are the conditions of what you're doing. What is the overall design decision. How we collaborate with other code that has been written. How do we have some common sense with whether this is a secure design or an insecure design. So as long as there are these small pieces that a programmer has to do, then I think human productivity will actually be enhanced"
from https://www.youtube.com/live/esCSpbDPJik?si=kYt9oSD5bZxNE-Mn
(sorry have been responding quickly on my phone between things; misquotes like this annoy the fuck out of me)
how does that quote disprove anything
uh it proves the original comment I responded to is extremely misleading (which is my only point here); CEO did not say 90% of developers would be replaced, at all
Is this the new 'next year is the year of the Linux desktop'?
SoftBank is just waiting to invest in this …
Meanwhile I tap bankruptcy lawyers as I race Anthropic and OpenAI to stay solvent.
If you spend your last capital on acetone and ethanol reserves you might be able to stay solvent for a lot longer.
How would this work, given that Anthropic is a public benefit corporation?
Etsy was a B-corp at the time of their IPO, so there is some precedent.
I was thinking this is going to happen because last night I got an email about them fixing how they collect sales taxes. Having been part of a couple of IPO/acquisitions, I thought to myself: "Nobody cares about sales taxes until they need to IPO or sell."
So would a $300B Anthropic get included in the SP500?
I think there are profitability requirements, right?
Profitability in both 3 month and 12 month spans. Also minimum 12 months of trading history after IPO.
See page ~9 of https://www.spglobal.com/spdji/en/documents/methodologies/me...
It could be smart for them to get in now with so much talk of a bubble or potential stock market correction.
"Be first, be smarter, or cheat" well. Being first might really be the best game theory move if the collapse will start from you.
But they aren't the first. Google is the first frontier model lab to go public.
I guess the clock is ticking. Probably OAI will try to IPO soon also.
...this -> those bags won't hold themselves now, will they?
Honestly these IPOs are likely to kill the market. Once the necessary disclosures are out, and the worst-case math people are assuming turns out to have been way more optimistic than the actual truth, the entire market is likely crashing since the money is so spread out. So far there has been zero good news from an investment perspective out of LLM-centered companies, outside of what are ultimately just complex financially engineered investments.
If they get into the S&P 500 at a $300B market cap that puts them at #30, just behind Coca-Cola. They'll make up about half a percent of the index and then will have a ready supply of price-insensitive buyers in the form of everybody who puts their retirement fund into an index fund on autopilot.
Well, they'd hit the requirements for company size and country of domicile, but not yet the other requirements: profitability, and a minimum of 12 months of trading after the IPO before they have a chance of being added.
As to the size of the bump they'll get there isn't a single rule of thumb but larger cap companies tend to get a smaller bump, which you'd expect. I've seen models estimate a 2-5% bump for large companies and a 4-7% bump for mid level and 6-12% for "small" under $20 Billion dollar market cap companies.
So if things go perfectly--it'll be good. Good to know.
SP500 is a capitalization* weighted index, hence it is very price sensitive.
Everybody who puts their retirement fund into an index fund is buying the index fund without relation to the index fund's price (aka price insensitive). But the index fund itself is buying shares based on each company's relative performance, hence the index fund is price sensitive. That is evidenced by companies falling out of the SP500 and even failing.
*specifically float-adjusted market capitalization
https://www.spglobal.com/spdji/en/documents/index-policies/m...
>The goal of float adjustment is to adjust each company’s total shares outstanding for long-term, strategic shareholders, whose holdings are not considered to be available to the market.
see also:
https://www.spglobal.com/spdji/en/methodology/article/sp-us-...
The S&P 500 is inversely price sensitive, as a capitalization-weighted index. Normally you want to buy low and sell high. An S&P500 index fund buys more of high-priced stocks and sells the low-priced ones, by definition. The highest market caps are the stocks with the highest prices (adjusted for number of shares outstanding, of course).
For most ordinary investors, this doesn't really matter, because you put your money into your retirement fund every month and you only take it out at retirement. But if you're looking at the short term, it absolutely matters. I've heard S&P 500 indexing referred to as a momentum investment strategy: it buys stocks whose prices are going up, on the theory that they will go up more in the future. And there's an element of a self-fulfilling prophecy to that, since if everybody else is investing in the index fund, they also will be buying those same stocks, which will cause them to go up even more in the future.
If you want something that buys shares based on each company's relative performance, you want a fundamental-weighted index. I've looked into that and I found a few revenue-weighted index funds, but couldn't find a single earnings-weighted index fund, which is what I actually want. Recommendations wanted; IMHO the S&P 500 is way overvalued on fundamentals and heavily exposed to certain fairly bubbly stocks (the Mag-7 alone make up 35% of your index fund, and one of them is my employer, and all of them employ heavily in my geographic area and are pushing up my home value), so I've been looking for a way to diversify into companies that actually have solid earnings.
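The difference between the two weighting schemes discussed above can be shown with a toy example (all numbers made up):

```python
# Toy illustration of cap-weighted vs earnings-weighted index weights.
# All figures are made-up numbers, not real market data.
caps     = {"A": 3000, "B": 1000, "C": 500}  # market cap, $B
earnings = {"A": 100,  "B": 80,   "C": 60}   # annual earnings, $B

def weights(metric):
    """Normalize a metric into index weights summing to 1."""
    total = sum(metric.values())
    return {k: v / total for k, v in metric.items()}

cap_w  = weights(caps)      # cap-weighted: A dominates at ~67%
earn_w = weights(earnings)  # earnings-weighted: A falls to ~42%
print(round(cap_w["A"], 2), round(earn_w["A"], 2))  # 0.67 0.42
```

When a stock's price (and hence cap) runs far ahead of its earnings, the cap-weighted index concentrates in it while the fundamentals-weighted one doesn't, which is the diversification effect being sought here.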
>inversely price sensitive
This isn't a term used in economics. The typical terms used are positive price sensitivity and negative price sensitivity.
https://www.investopedia.com/terms/p/price-sensitivity.asp
While it is true that being added to the SP500 can lead to an increase in demand, and hence cause the index fund to pay more for the share, there are evidently opposing forces that modulate share prices for companies in the SP500.
>I've been looking for a way to diversify into companies that actually have solid earnings.
No one has more solid earnings than the top tech companies. Assuming you don't work for Tesla, you already are doing about the best you can in the US. Your options to diversify is to invest in other countries, develop your political connections, and possibly get into real estate development. Maybe have a bunch of kids.
https://companiesmarketcap.com/most-profitable-companies/
I hope to see it because I want to see their real numbers. If I were into gambling, I'd take the opposite side of that bet.
I love claude, but looking at google it seems like it will just be a matter of time before Google/Gemini will be a better product. Just looking at how much Google have improved their AI game the last couple months. I'm putting my money on google, I assume the reason they are doing an IPO right now is to be able to cash in on the investment before google surpasses them.
It's a hot take, I know :D
Opus 4.5 is good. At least in Cursor it’s much better than Gemini 3 Pro for writing a lot of code autonomously: faster and calls tools better.
That said Gemini is still very, very good at reviews, SQL, design and smaller (relatively) edits; but today it is not at all obvious that Google is going to win it all. They’re positioned very well, but execution needs to be top notch.
There have been multiple model generations now where Anthropic have proven that they're ahead of everyone with developing LLMs for coding - if anything the gap has broadened with Opus 4.5.
Codex is and has been superior for some time (though it is slower)
What types of tasks do you find Codex superior?
Have you tried Opus 4.5?
It's an absolute workhorse.
It is so proactive in fixing blockers - 90% of the time for me, choosing the right path forward.
No company has come close to producing as reliable an agentic coding solution as Anthropic has.
Amodei is at the NYT Dealbook Summit today at 1:40 Eastern
Source: https://giftarticle.ft.com/giftarticle/actions/redeem/3ffefa...
Retail investors yoloing into AI at peak bubble vibes sounds about right
Just how much of the market do retail investors control? I thought they were a drop in the bucket.
Also, is there a way to know how much of the total volume of shares is being traded now? If I kept hyping my company (successfully) and drove the share price from $10 to $1000 thanks to retail hype, I could 100x the value of my company, let's say from $100m to $10B, while the amount of money actually changing hands would be minuscule in comparison.
When you add in money managed on behalf of retail investors it gets big fast, thinking indexed funds, pensions etc. they are not immune, and ETFs by definition need to participate
Is that not considered institutional? If i own a Vanguard ETF, the stock that comprises the ETF is classified as being owned by Vanguard, right?
Genuinely asking.
You are correct in the main thing you were trying to communicate, but I'll just correct this part:
> ETFs by definition need to participate
You meant to say "index funds". There are many different kinds of ETFs.
Retail has gotten a lot bigger lately (the last 10 years, and mostly since COVID) and a lot more "organized".
Goldman puts out their retail reports weekly that show retail is 20% of trading in a lot of names, and higher in a lot of the meme stock names.
They used to be so tiny due to $50/trade fees, but with all the free money in the system since COVID, GenZ feeling like real estate won't be their path to freedom, option trading for retail, and zero-commission trading, retail has a real voice in the markets.
Retail is a big deal these days. Used to be sub 10%, now it’s in the 30-40% of daily volume range IIUC.
You can easily look up the numbers you are asking for, the TLDR is that the volume in most stocks is high enough that you can’t manipulate it much. If it’s even 2x overpriced then there’s 100m on the table for whoever spots this and shorts, ie enough money that plenty of smart people will be spending effort on modeling and valuation studies.
> Retail is a big deal these days. Used to be sub 10%, now it’s in the 30-40% of daily volume range IIUC.
But that isn't relevant? If they trade a lot but own less than 10% of the shares they're still a small piece.
The institutional investors are likely not trading much, things like 401k are all long term investments
>Retail is a big deal these days. Used to be sub 10%, now it’s in the 30-40% of daily volume range IIUC.
This isn't going to end well is it.
This is the real note - if the company was truly valuable, they wouldn't IPO, they'd get slurped up by someone big.
Modern IPOs are mainly dumping on retail and index investors.
Index investors aren't exposed to IPOs, since the common indexes (SPX etc) don't include IPOs (and if you invest in a YOLO index that does, that's on you).
Also:
> The US led a sharp rebound, driven by a surge in IPO filings and strong post-listing returns following the Federal Reserve’s rate cut.
https://www.ey.com/en_us/insights/ipo/trends
VTI and VT, two of the largest index funds, DO invest in unprofitable companies.
And for the rest (SP 500 etc), these companies are going to fake profits using some sort of financial engineering to be included.
What index fund is buying into IPOs? The S&P 420?
This isn't really true. IPOs provide access to much more money in a very short time frame. They also allow parties involved to make huge coin before, during and immediately after the process.
I guess the bubble pops the day these IPO
I guess (hope) this means they don’t see a bailout happening soon enough
IPO makes sense for those who might want to cash out before the bubble bursts.
Whatever you think about AI, it is good that Anthropic is going public, and I'd argue it's consistent with their mission. It's better for the public to have a way to own a piece of the company.
In an interview Sam Altman said he preferred to stay away from an IPO, but the notion of the public having an interest in the company appealed to him. Actions speak louder than words, and so it is fitting from a mission standpoint that Anthropic may do it first.
> In a statement, an Anthropic spokesperson said: “We have not made any decisions about when, or even whether, to go public.”
They are going public.
Well, they have to. Every grift needs bagholders.
If they get to be a memestock, they might even keep the grift going for a good while. See Tesla as a good example of this.
Interesting that HN is so bearish, considering most of them spend more on AI daily than on any other SaaS category.
Even the best product in the world can come from a company whose own valuation is too high.
Anyone would be bearish on Nvidia today if the share price implied a $10T valuation.
Citation needed?
I spend $0 on AI. My employer spends on it for me, but I have no idea how much nor how it compares to vast array of other SaaS my employer provides for me.
While I anecdotally know of many devs who do pay out of pocket for relatively expensive LLM services, they are a minority compared to folks like me who are happy to leech off free or employer-provided services.
I’m very excited to hopefully find out from public filings just how many individuals pay for Claude vs businesses.
Let the enshittification of Claude commence!
Now that's the beginning of a bubble worth investing in
Okay, let's see you guys get past the inference cost disclosures. According to the WSJ, they are enough to kill the frontier-lab business model. It's one of the biggest things blocking OpenAI.
https://www.wsj.com/tech/ai/big-techs-soaring-profits-have-a...
You did not parse that article properly. It regurgitates only what everyone else keeps saying: if you conflate R&D costs with operating costs, then you can say these companies are 'unprofitable'. I'd propose that with proper GAAP accounting they are profitable right now; by proper I mean amortizing the cost of R&D over the useful life of the models as best you can.
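The accounting argument above can be sketched with toy numbers (all figures hypothetical, chosen only to show the mechanism): expensing the full training cost up front produces a reported loss, while amortizing it over an assumed useful life produces a reported profit, even though the underlying cash flows are identical.

```python
# Toy illustration (all numbers hypothetical): how amortizing training/R&D
# costs over a model's useful life changes reported profitability.

revenue_per_year = 4.0         # $B, inference revenue
inference_cost_per_year = 1.5  # $B, cost of serving that inference
training_cost = 6.0            # $B, one-time cost to train the model
useful_life_years = 3          # assumed useful life of the model

# Expensing the entire training cost in year one:
year1_expensed = revenue_per_year - inference_cost_per_year - training_cost

# Amortizing the training cost evenly over the useful life:
amortization = training_cost / useful_life_years
year1_amortized = revenue_per_year - inference_cost_per_year - amortization

print(f"Year-1 result, fully expensed: {year1_expensed:+.1f} $B")  # loss
print(f"Year-1 result, amortized:      {year1_amortized:+.1f} $B")  # profit
```

Same cash out the door either way; only the period the cost is recognized in changes, which is the whole dispute over whether these companies are "unprofitable."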
I am not aware of any frontier inference disclosures that put margins at less than 60%. Inference is profitable across the industry, full stop.
Historically R&D has been profitable for the frontier labs -- this is obscured because the emphasis on scaling the last five years has meant they just keep 10xing their R&D compute budget. But for each cycle of R&D, the results have returned more in inference margin than they cost in training compute. This is one major reason we keep seeing more spend on R&D - so far it has paid, in the form of helping a number of companies hit > $1bn in annual revenue faster than almost any companies in history have done so.
All that said, be cautious shorting these stocks when they go public.
Do you mean as part of going public they need to make public how much they spend on inference versus how much they make?
Yes to IPO you have to submit an S-1 form which requires the last 3 years of your full financials and much more. You can’t just IPO without disclosing how your business works and whether it makes or loses money and how much.
Inference costs aren't a problem; selling inference is almost certainly profitable. The problem is that it's (probably) not profitable enough to cover the training and other R&D costs.
Don't forget all the other costs of their business, like paying sales and solutions people (expensive, not going away any time soon).
AGI will become IPO and everyone will forget and move on.
This seems contrary to their stated goal to prioritize AI safety.
It is against the law to prioritize AI safety if you run a public company. You must prioritize profits for your shareholders.
Unless you're a benefit corp, this is true for private companies as well. Quick q - which of the AI companies are benefit corps?
Anthropic, the corporation we're talking about.
> It is against the law to prioritize AI safety if you run a public company. You must prioritize profits for your shareholders
This is nonsense. Public companies are just as free as private companies to maximise whatever their shareholders want them to.
Incorrect. They're not a C corp, they're a public benefit corporation. They have a different legal obligation. Notably, they have a legal obligation to deliver on their mission. That's why Anthropic is the only actual mission-driven AI company. They do have to balance that legal obligation with the traditional legal obligations that a for-profit corporation has. But most importantly, it is actually against the law for them not to balance prioritizing making money and prioritizing AI safety!
Yes, they will prioritize AI safety until their board of directors says that needs to change.
"We expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers."
-google cofounders Larry Page and Sergey Brin
then came the dot com bubble.
One of Google's stated goals was "don't be evil". This stuff shouldn't be trusted - it's pure marketing.
Do you think they currently exist to prioritize AI safety? That shit won’t pay the bills, will it? Then they don’t exist. Goals are nice, OKRs yay, but at the end of the day, we all know the dollar drives everything.
It's simple: they will redefine the term (just like OpenAI redefined "AGI" into "just makes a lot of money") into "doesn't leak user data" and then claim success.
No that's not what they think, that's why they used sarcasm.
Does this mean that Anthropic has more than reached AGI, seeing as OpenAI has officially defined "AGI" as any AI that manages to create more than a hectocorn's worth (100 unicorns, or $100B) in economic value?
If they have reached AGI (whatever the definition), we should be prioritizing looking for signs of misanthropy.
https://assets1.cbsnewsstatic.com/hub/i/2024/11/15/3ea53e31-...
Google Gemini
That was $100B in profits, not valuation.
Profits over what timeframe? Valuation is just the total sum of profit discounted for time and risk.
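The "valuation is discounted profit" claim above is just a discounted cash flow. A minimal sketch, with entirely made-up inputs, where the discount rate bundles both the time value of money and a risk premium:

```python
# Minimal discounted-cash-flow sketch (all inputs hypothetical):
# valuation ~= sum of future profits, discounted for time and risk.

def dcf_valuation(profits, discount_rate):
    """Present value of a stream of yearly profits.

    discount_rate bundles the time value of money plus a risk premium;
    profit t years out is divided by (1 + r)^t.
    """
    return sum(p / (1 + discount_rate) ** t
               for t, p in enumerate(profits, start=1))

# Hypothetical profit stream in $B for the next four years:
profits = [1.0, 1.5, 2.0, 2.5]
value = dcf_valuation(profits, discount_rate=0.10)
print(f"Implied valuation: ${value:.2f}B")  # ~ $5.36B
```

Note the nominal sum of those profits is $7B, but discounting at 10% shaves it down, which is why "profits over what timeframe, at what risk" is exactly the right question.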
Defined? You mean re-defined, to turn it into a goal that's achievable within a reasonable timeframe.
It kind of feels like Anthropic needs to IPO before OpenAI.
If OpenAI IPOs first, it'd be huge. Then Anthropic does, but the AI IPO hype has sailed.
If Anthropic IPOs first, they get the AI IPO hype. OpenAI's IPO is probably huge either way.