> There are certain bullsh*t jobs out there — some parts of management, consultancy, jobs where people don’t check if you’re getting it right or don’t know if you’ve got it right.
Market Analyst, perhaps?
I suggest AI is cover to rein these jobs in. All those people who had a nice-paying job but did about 2 hrs of work a day: AI is coming for them. In some respects, management previously looked the other way, but that is becoming less frequent, and it's easy to blame the reduction on AI.
"Earlier this month, Garran published a report claiming that we are in "the biggest and most dangerous bubble the world has ever seen.""
Here is the report
https://www.youtube.com/watch?v=uz2EqmqNNlE
What I want to know is whether people who believe in a bubble actually short AI/tech-related stocks.
Usually not - the people writing these comments have neither the understanding nor the courage of their conviction to bet based on their own analysis.
If they did, the articles would look less like “wow, numbers are really big,” and more like, “disclaimer: I am short. Here’s my reasoning”
They don’t even have to be short for me to respect it. Even being hedged or on the sidelines I would understand if you thought everything was massively overvalued.
It’s a bit like saying you think the rapture is coming, but you’re still investing in your 401k…
Edit: sorry to respond to this comment twice. You just touched on a real pet peeve of mine, and I feel a little like I’m the only one who thinks this way, so I got excited to see your comment
That sounds like a variation on: "If you're so smart, why aren't you rich?" which rests on some very shaky (yet comforting) set of assumptions in a "just world."
Heck, just look at yesterday: Myself and several million other people wouldn't have needed to march if smart people reliably ended up in charge.
I think it's more valuable to flip the lens around, and ask: "If you're so rich, why aren't you smart?"
Fair point - meaning, you can be right (and rich) but for the wrong reasons? Like… you can place your bet based on a coin flip and get it right without actually being smart?
> based on a coin flip
To simplify: Yes.
While it seems foolish to discount all effect from individual agency or merit, we do know that random chance is sufficient to lead to the trends we see. [0] Much like how an iceberg always has some ~10% portion above the water: The top water molecules probably aren't special snowflakes (heh) compared to the rest, we're mostly just seeing What Ice Does.
Combine that with how humans seem hardwired to dislike/ignore random chance, and it's reasonable to think we overestimate the importance of personal qualities in getting rich. Consider how basically anyone flipping a coin starts thinking of of causal stories like "hot streaks" or "cold streaks" or "now I'm overdue for a different outcome", even when they already know it's 50/50.
________________
A simple trading simulation of equally smart, equally lucky agents still produces oligarchic outcomes [0] (a minimal sketch follows after the quotes below). When you also add a redistributive effect (like taxing the rich to keep the poor alive), it generates outcomes that resemble real-world statistics for different countries.
> If you simulate this economy, a variant of the yard sale model, you will get a remarkable result: after a large number of transactions, one agent ends up as an “oligarch” holding practically all the wealth of the economy, and the other 999 end up with virtually nothing.
> It does not matter how much wealth people started with. It does not matter that all the coin flips were absolutely fair. It does not matter that the poorer agent's expected outcome was positive in each transaction, whereas that of the richer agent was negative. Any single agent in this economy could have become the oligarch—in fact, all had equal odds if they began with equal wealth.
[0] https://www.scientificamerican.com/article/is-inequality-ine...
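Here is a minimal sketch of that dynamic, for anyone who wants to poke at it. The 20%/17% win/loss fractions are my own illustrative choices (they make each flip favorable in expectation for the poorer agent, matching the quoted setup), not necessarily the article's exact parameters.

```python
import random

def yard_sale(n_agents=1000, n_rounds=2_000_000,
              win_frac=0.20, lose_frac=0.17, seed=0):
    """Toy yard-sale model: two random agents transact, a fair coin decides the
    outcome, and the stake is a fraction of the *poorer* agent's wealth. The
    poorer agent's expected gain per flip is positive (0.5*0.20 - 0.5*0.17 > 0),
    yet wealth still concentrates toward a single 'oligarch' over enough rounds."""
    rng = random.Random(seed)
    wealth = [1.0] * n_agents
    for _ in range(n_rounds):
        a, b = rng.sample(range(n_agents), 2)
        poor, rich = (a, b) if wealth[a] <= wealth[b] else (b, a)
        stake = wealth[poor]
        # Fair coin flip; the asymmetric fractions favor the poorer agent on average.
        delta = win_frac * stake if rng.random() < 0.5 else -lose_frac * stake
        wealth[poor] += delta
        wealth[rich] -= delta
    return max(wealth) / sum(wealth)   # share of all wealth held by the richest agent

if __name__ == "__main__":
    # More rounds sharpen the concentration; this keeps the runtime modest.
    print(f"Richest agent's share of total wealth: {yard_sale():.3f}")
```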
I like to think of becoming wealthy like catching a ball at an MLB game.
First, you have to show up at a game in person. No one watching the game on TV or ignoring it altogether is catching a ball.
Next, you have a greater chance at catching a ball if you bring a glove.
Then, it also helps your chances if you've practiced catching balls.
However, all of that preparation is for naught if a ball is never hit to you.
For every person who strikes it rich, there are hundreds if not thousands of people who were just as smart, worked just as hard, and did all the same right things, but they simply didn't make it.
Usually not, because shorting a broad chunk of market is very hard. "Markets can remain irrational longer than you can remain solvent".
or you could sell a single broad-market ETF lol. or buy a short ETF.. it hasn't been hard to selectively expose yourself to dang near any slice of equities since the ETF boom
Short ETFs are usually leveraged and make for a really good way to lose money.
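For anyone curious why, here is a tiny illustration of the daily-rebalancing drag on a leveraged inverse product; the -2x multiple and the zig-zag price path are invented for the example, not taken from any real fund.

```python
# Hypothetical illustration: a -2x daily-rebalanced inverse ETF in a market that
# is volatile but ends flat. The daily returns below are made up for the example.
daily_returns = [+0.05, -0.0476190476] * 20   # up 5%, then back down to flat

index = 100.0
inverse_2x = 100.0
for r in daily_returns:
    index *= 1 + r
    inverse_2x *= 1 - 2 * r   # the fund targets -2x of each *daily* return

print(f"Index after {len(daily_returns)} days:        {index:8.2f}")   # ~100: round trip
print(f"-2x inverse ETF after same period: {inverse_2x:8.2f}")         # well below 100
```

The underlying round-trips to its starting price, while the leveraged inverse fund bleeds value on every zig-zag. That is before borrowing costs and fees.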
Realistically, timing is the issue. "This is a bubble" is worth ~nothing. "This is a bubble and it will pop in late December" is worth a lot if you're correct.
It's a bit hard to short private companies, which most AI companies have chosen to remain in order to avoid scrutiny from shareholders.
A market bubble is essentially a gambling event gone wrong. Shorting stocks is widely recognized, by people smarter than me, as high-risk gambling, due to multiple factors. So now please tell me: why would people concerned about gambling gone wrong voluntarily engage in reverse gambling themselves? Imagine football and a spectator who is moderately in the know about the sport. He sees multiple people gambling large sums on the team he thinks will likely lose. Why would such a person go and bet unreasonable sums on the opposite team, even if it's a likely win? It's still gambling, and still not a reasonably defined event.
tl;dr - it is really tiring reading these "clever" quips about "why won't you short then?", mainly because they are neither clever nor in any way new. We heard the same thing for a decade: "why won't you short BTC then?". You are not original.
I moved my pension in to an index that doesn't include the big AI companies.
The whole market was propped up by AI stocks though. So realistically you'd have to move out of the markets to avoid exposure.
On a more degenerate forum, the policy you’re referring to would be “positions or ban”
This is also why all online stock pundits are full of shit. None of them will publicly disclose their P&L's from trading because they make most of their money from YouTube and peddling courses.
> What I want to know is whether people who believe in a bubble actually short AI/tech-related stocks.
Why? What does that tell you?
Stated preference vs. revealed preference
The common phrase is "putting one's money where their mouth is"
So, every single human opinion must be followed up with a real money gambling bet or it is meaningless?
I'm against sports betting; should I bet against it?
They do and the majority lose everything. The few winners who happen to time the top are praised for their genius.
The bubble referenced in the article is $1 Trillion, compared to Google's $3 trillion market cap. And OpenAI / Anthropic legitimately compete with Google Search. I feel weirdly like AI's detractors are somehow drinking too much of the AI Kool-Aid. All AI has to do to justify these valuations is capture 1/3rd of Google. Unless Google is wildly overvalued, which it may be, but that's not a phenomenon that has anything to do with AI hype.
And there are legitimately applications beyond search, I don't know how big those markets are, but it doesn't seem that odd to suggest they might be larger than the search market.
Most of Google's value is the moat they've built around the things that bring in money... their advertising market, google play store, vertical integration, etc. See also Doctorow's Chokepoint Capitalism.
Building even a tiny fraction of those moats is mind-bogglingly difficult. Building a third of that moat is insanely hard. To claim that the AI industry's "expected endgame moat size" is one-third of Google's current moat is a ludicrous prediction. You'd be better off playing the lottery than making that bet.
I would be happy to bet against this if I could do it without making a Keynes-wager (that I can remain solvent longer than markets remain irrational), but I see no way to do so. Put options expire, futures can be force-liquidated by margin calls, and short sales have unlimited downside risk.
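For concreteness, here is a tiny, hypothetical payoff comparison of two of those routes (a short sale versus a long put), with made-up prices and a made-up premium; it isn't advice, just the asymmetry in numbers.

```python
# Hypothetical numbers only: short one share at $100, or buy a $100-strike put for a $5 premium.
entry, strike, premium = 100.0, 100.0, 5.0

def short_sale_pnl(price_later: float) -> float:
    # Gain is capped at $100 (the stock can only fall to zero); loss is unbounded as price rises.
    return entry - price_later

def long_put_pnl(price_at_expiry: float) -> float:
    # Loss is capped at the premium, but it only pays off if the drop happens before expiry.
    return max(strike - price_at_expiry, 0.0) - premium

for price in (50.0, 150.0, 300.0):
    print(f"price {price:6.1f}: short P&L {short_sale_pnl(price):+8.1f}, put P&L {long_put_pnl(price):+7.1f}")
# If the bubble pops a month *after* the put expires, the put's P&L is -5.0 no matter how right you were.
```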
> All AI has to do to justify these valuations is capture 1/3rd of Google.
Is that all? It really is that easy huh.
> And OpenAI / Anthropic legitimately compete with Google Search
They compete legitimately with Google Search as I compete legitimately with Jay-Z over Beyonce :)
Just ask yourself this question though...
Is there a reason why AI cannot be far better than Google at providing results to queries?
Inherently, they are in the same business, though I'm not aware of any AI product aimed squarely at Google's core business... but it is completely logical that one would emerge.
Furthermore, it appears that Google just sells placement to the highest bidder, and these AIs could easily beat that by giving away free AI access in exchange for nibbling at the queries and adding a tab of 'sponsored results'.
“At the heart of the note is a golden rule I’ve developed, which is that if you use large language model AI to create an application or a service, it can never be commercial.
One of the reasons is the way they were built. The original large language model AI was built using vectors to try and understand the statistical likelihood that words follow each other in the sentence. And while they’re very clever, and it’s a very good bit of engineering required to do it, they’re also very limited.
The second thing is the way LLMs were applied to coding. What they’ve learned from — the coding that’s out there, both in and outside the public domain — means that they’re effectively showing you rote learned pieces of code. That’s, again, going to be limited if you want to start developing new applications.”
Frankly kind of amazing to be so wrong right out of the gate. LLMs do not predict the most likely next token. Base models do that, but the RLed chat models we actually use do not — RL optimizes for reward and the unit of being rewarded is larger than a single token. On the second point, approximately all commercial software consists of a big pile of chunks of code that are themselves rote and uninteresting on their own.
They may well end up at the right conclusion, but if you start out with false premises as the pillars of your analysis, the path that leads you to the right place can only be accidental.
Can you explain a bit more on the topic of what happens after the base model?
The base model is a pure next token predictor. It just continues whatever prompt you give it — if you ask it a question, it might just keep elaborating the question. To turn these models into something that can actually chat (and more recently, that can do things like tool calls) they do a second phase of training, including reinforcement learning, which teaches the model to maximize some kind of reward signal meant to represent good answers of various kinds. This reward signal applies at the level of the whole response (or possibly parts of the response) so it is not predicting the most likely next token. I don’t know in an absolute sense how much this ends up changing the base model weights, and it’s surprisingly hard to find discussions of this, I guess because the state of the art is quite secret. But it’s clear that RL is important for getting the models to become useful.
This is a reasonable explanation, though as a non-expert I can’t vouch for the formal parts: https://www.harysdalvi.com/blog/llms-dont-predict-next-word/
There are other posttraining techniques that are not strictly speaking RL (again, not an expert) but it sounds to me like they are still not teaching straightforward next token prediction in the way people mean when they say LLMs can’t do X because they’re merely predicting the most likely next token based on the training corpus.
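To make the contrast concrete, here is a very rough toy sketch (my own simplification: a throwaway model, a REINFORCE-style update, and an assumed reward_fn; no lab's real pipeline looks like this).

```python
import torch
import torch.nn.functional as F

vocab, dim = 100, 32
# A toy "language model": token embedding -> logits over the vocabulary.
model = torch.nn.Sequential(torch.nn.Embedding(vocab, dim), torch.nn.Linear(dim, vocab))

def pretraining_loss(tokens):
    """Base-model objective. tokens: 1-D LongTensor of token ids.
    Cross-entropy of each next token given the previous one, scored token by token."""
    logits = model(tokens[:-1])                   # (T-1, vocab)
    return F.cross_entropy(logits, tokens[1:])

def rl_loss(prompt_token, reward_fn, max_len=10):
    """REINFORCE-style posttraining sketch: sample a whole response, then scale the
    log-probability of every sampled token by one sequence-level reward."""
    tokens, log_probs = [prompt_token], []
    for _ in range(max_len):
        logits = model(torch.tensor(tokens[-1:]))[0]
        dist = torch.distributions.Categorical(logits=logits)
        tok = dist.sample()
        log_probs.append(dist.log_prob(tok))
        tokens.append(int(tok))
    reward = reward_fn(tokens)                     # e.g. a reward model's score for the response
    return -reward * torch.stack(log_probs).sum()  # one reward for the whole response, not per token
```

The specifics don't matter (real posttraining stacks SFT, reward models, PPO/GRPO-style variants, and so on); the point is just that the pretraining loss is scored per token, while the RL loss attaches a single reward to the entire sampled response.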
When everybody agrees about something in finance, it's typically the other way around.
Reminds me of the "everybody knows Tether doesn't have the dollars it claims and its collapse is imminent" line that was parroted here for years.
The argument about Tether wasn't that they didn't have any assets backing the coins. It was that the assets they held were riskier than the boring <1 mo maturity Treasuries they should have been holding. Just because Tether didn't implode doesn't mean implosion wasn't a very real possibility. It's not very different from "the market can stay irrational longer than you can stay solvent".
Every penny I made in the market over the last 30 years can in some way (or entirely) be attributed to exactly this. But it has to be backed by fundamentals, and fundamentals are weakening… This is a good read on the recent OpenAI shit, but it applies industry-wide: https://www.wheresyoured.at/openai400bn/
People here are still in denial that crypto will ever have a use case, meanwhile you have Larry Fink saying that he wants to tokenize the financial ecosystem.
Tokens do have use cases, obviously. We can see countless use cases with our own eyes. The argument was that tokens don't have any use case that is both legal and competitive. All of those castle-in-the-sky constructs about how property deeds would go on the blockchain (technically and legally impossible), how game assets would go on the blockchain (also technically impossible, plus no game studio would ever be interested), how ticket scalping would be solved on the blockchain (technically possible, but no ticket vendor is interested because they are the ones who benefit from scalpers), etc. And the list goes on. All of those legal use cases were a dud, because it is simply a shitty technology.
But to reiterate, there is a great and massive actual use case for the tokens, yes. No one would argue against that :). We just think it is a bad one.
Number go up isn’t a use case.
This seems to be the disconnect.
And they did not, in fact, have the dollars to back them up. They went without them continuously for a few years. The lesson is: never bet, even on a surefire stake, if there is market corruption involved. Or if mafia money is involved. In the case of Tether it was both.
It was a good lesson for me personally: always check the wider picture and consider unknown factors.
All I know is that I’m looking forward to picking up deep learning programmers for biomed applications in about nine months’ time.
I've quipped a lot here about s/AI/statistics/g, but the applications where that is most straightforwardly true are probably the most solid that are going to produce a lot of value over the long term.
Before computers came along, we really couldn't fit curves to data much beyond simple linear regression. Too much raw number crunching to make the task practical. Now that we have computers—powerful ones—we've developed ever more advanced statistical inference techniques and the payoff in terms of what that enables in research and development is potentially immense.
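As a small illustration of the kind of nonlinear curve fitting that used to be impractical by hand and is now a few lines of SciPy (the exponential model and the noisy data here are placeholders I made up):

```python
import numpy as np
from scipy.optimize import curve_fit

# Made-up noisy data following an exponential decay plus an offset.
rng = np.random.default_rng(0)
x = np.linspace(0, 10, 200)
y = 3.0 * np.exp(-0.7 * x) + 0.5 + rng.normal(0, 0.05, x.size)

def model(x, a, k, c):
    """Nonlinear model: a * exp(-k * x) + c."""
    return a * np.exp(-k * x) + c

# Nonlinear least squares: the iterative number crunching computers make trivial.
params, cov = curve_fit(model, x, y, p0=(1.0, 1.0, 0.0))
print("fitted (a, k, c):", np.round(params, 3))   # roughly (3.0, 0.7, 0.5)
```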
Yep. Right now it’s hard for biomed companies to compete on salary against the AI craze, but if the bubble bursts, salaries will come back down to earth. Deep/machine learning will, imo, prove to have large societal benefits over the next decade.
Even if this is true, a possible takeaway is that after the bubble bursts and the dust settles, AI's effect will be 17 times stronger than that of the Internet... Personally, I think it will end up being much higher, but that doesn't mean I'm going to invest in it any time soon
Previously:
https://news.ycombinator.com/item?id=45465969
Bubble or not, what does that really change? The economy will rise and fall one way or another; it really moves in cycles. If the bubble pops, it will be a sharper fall. Unless you own AI or tech stocks, it's probably not a big deal.
Almost everyone owns tech stocks, if just through indexes.
True, but I only own stocks of renewables.
through funds or individual stocks?
Ok but Amazon is how many times bigger than the dot-com bubble?
See also https://news.ycombinator.com/item?id=45493287 246 comments
and https://news.ycombinator.com/item?id=45465969 111 comments
both on "AI bubble is 17 times bigger"
By the way, the 17x figure refers to an interest rate model and is largely unrelated to AI. It's explained here: https://www.youtube.com/watch?v=uz2EqmqNNlE&t=306
People make this type of prediction every year. Useless. What if it becomes 20x bigger? There is nothing actionable contained in this observation.
Each time this happens, there's a new generation of people that think "surely the market will become rational again before I'm insolvent"
as long as a TSLA share is above $50, the market is not rational :)
Market is very rational, AI is exponential and everyone understands what that means for world GDP.
AI is exponential? What do you mean, like, inversely exponential?
> AI is exponential
Hasn't the performance been asymptotic?
It’s disingenuous because since the dotcom bubble there has been at least 2x inflation, and on top of that the tech market has expanded far beyond what it was in 1999, so of course this one will be bigger. This is nothing.
It's not a bubble yet. Many companies are already getting direct value out of AI. The dot-com bust happened because there were lots of unsustainable business models. I don't see them as equal.
> Many companies are already getting direct value out of AI.
There's always some "value" in a bubble, but how does one confirm that it's enough "direct value" that the investments are proportionate?
Enormous investments should go with enormous benefits, and by now a very-measurable portion of expected benefit should have arrived...
Customers also got value out of pets.com selling them products below cost.
> Many companies are already getting direct value out of AI.
Source that immediately refutes this claim: https://www.artificialintelligence-news.com/wp-content/uploa...
> Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return
> Many companies are already getting direct value out of AI
Name one
NVidia :).
Cursor?
Is cursor profitable?
getting direct value and being profitable are two different things… META and AMZN and … were all unprofitable early on
Meta was profitable long before it went public and never had any significant losses, and Amazon had profitable unit economics and was investing in real things like warehouses.
But still that is the ultimate survivorship bias. Is each new customer that Cursor has bringing in more money than they cost Cursor?
If we learned anything over the last decade or so, it is that profitability is absolutely irrelevant. Just look at UBER… value is the only thing that matters; you can be significantly unprofitable for a very, very long time.
Again survivorship bias. And all of the companies that failed? Let’s just look at how the YC companies that have gone public are doing
https://medium.com/@Arakunrin/the-post-ipo-performance-of-y-...
some companies prosper, some die. some AI companies will prosper, some will die
True. But it’s this whole idea that if they lose a lot of money now, they will definitely be successful. That is the thought process that lets these startups - especially a lot of the YC companies - underpay developers and give them equity that will statistically be worthless.
Because AI is indeed working.